
After more than twenty years building and managing compliance programmes across jurisdictions, I have learned one thing: compliance rarely shrinks. The regulatory perimeter continues to expand and compliance gets redistributed across more stakeholders, more systems, more evidence requests, and more reporting lines. That is why the rising cost of compliance feels paradoxical: even when headcount stays flat, the workload keeps expanding.
If you are a compliance officer, this is the practical problem to solve in 2026: your board wants agility and proof of effectiveness, your CFO wants cost discipline, regulators and auditors want traceability and evidence rather than just policies, and governments speak about competitiveness.
The question is not whether compliance costs are rising. The question is why, and how to redesign your operating model so you simplify execution while improving control.
Why compliance costs keep rising (and why it is structural)
1) Regulatory volume and velocity create “exposure windows”
Most teams can monitor new laws, guidelines, and enforcement trends. The cost explosion happens after that identification.
The hard part is compressing the time from:
- regulatory change identified
- risk assessment completed
- ownership assigned
- controls updated
- testing performed
- evidence stored and retrievable
75% of companies take more than a year to move from identifying a new regulation to full implementation, and 33% take up to two years.
Any delay creates an exposure window, meaning obligations exist while operations are not yet fully aligned. Exposure windows are not neutral: teams compensate with interim controls, emergency projects, escalations, and reactive audit preparation.
2) Implementation and embedment are the real cost engine
Interpretation is rarely the dominant cost. Organizational translation is.
The most expensive work typically sits in:
- stakeholder engagement, task allocation, and follow-up
- process redesign and workflow changes
- control design and control effectiveness testing
- training and communications (often with fatigue and low retention)
- evidence collection (especially quarter-end chases)
- governance minutes, decision logs, and board reporting
- systems changes and data access
This is why compliance cost is often structural, not simply “more rules.”
3) Evidence expectations keep rising (effectiveness over existence)
Across frameworks relevant in France and Spain, the direction of travel is consistent: programs must be effective.
- In France, Sapin II Article 17 sets out required anti-corruption measures (including risk mapping, third-party due diligence, training, internal controls, and evaluation). See the official law text on Légifrance.
- The AFA (Agence française anticorruption) has made expectations around risk-based design and auditable proof very explicit in its guidance. See the AFA’s recommendations page.
- ISO frameworks (for example ISO 37001 for anti-bribery) are built around documented systems, operational controls, monitoring, and continual improvement. See ISO 37001 overview.
The common consequence: evidence becomes a product you must continuously manufacture, store, and retrieve.
4) Data governance has become compliance infrastructure
Data governance is no longer “IT’s topic.” It is a regulatory infrastructure.
If you cannot reliably collect, process, store, reconcile, and retrieve data, you cannot:
- keep risk mapping current
- show control performance over time
- produce audit-ready evidence quickly
- trust AI outputs built on that data
5) Geopolitical and regulatory uncertainty
Uncertainty drives precautionary cost. When the future is unstable, organisations over-monitor and over-control to avoid exposure.
Regulatory fragmentation, sanctions, trade controls, AI governance, and ESG divergence are among the most pressing challenges.
Where compliance costs the most: 4 structural bottlenecks
These are the bottlenecks that inflate cost while degrading control quality.
Bottleneck 1: risk assessment delays
When regulatory risk assessment requires 2-8 weeks, organisations experience:
- unclear decision rights (who can conclude “no action needed”)
- overlapping interpretations (legal vs compliance vs business)
- duplicated review cycles
- deadline compression later in implementation
Compressed implementation increases error probability. Errors create remediation. Remediation is expensive.
Bottleneck 2: evidence and audit trail burden
Regulators and auditors do not only ask “did you implement?” They ask “can you prove you implemented, embedded, and tested?”
A defensible audit trail typically includes evidence:
- that regulatory developments were identified
- that risk assessment was performed
- that controls were updated and assigned
- that training occurred
- that effectiveness tests were performed
- that governance approved decisions and follow-up
When this is manual and scattered (emails, local drives, spreadsheets), evidence gathering & processing becomes the cost center.
Bottleneck 3: fragmented control frameworks
In many organizations, legal tracks developments, compliance tracks obligations, risk tracks controls, IT tracks system changes, internal audit tracks performance testing.
If these are not connected, you get multiple sources of truth, weak traceability, duplicated controls, and reactive audit prep.
Bottleneck 4: missing data lineage from requirement to test result
If you cannot trace:
Requirement → process → control → test → evidence → metric
you compensate with extra oversight and redundant controls, because you cannot trust the chain.

A practical redesign: 7 moves to simplify compliance while improving control
The goal is not “more automation.” The goal is a different cost structure: less duplication, shorter cycle times, and better proof.
Move 1: shorten the time to a first defensible regulatory risk assessment
A good operating model produces a fast, documented initial risk assessment, then deepens it only when needed.
Use a two-stage triage:
- stage 1 (48 hours to 5 days, depending on your risk): relevance and rough impact
- stage 2 (project): detailed implementation plan, control design, and testing
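As an illustration only (the stage names, impact labels, and SLA values below are assumptions for the sketch, not a prescribed method), the two-stage triage can be modelled as a simple routing rule:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    STAGE_1_TRIAGE = "relevance and rough impact"
    STAGE_2_PROJECT = "detailed plan, control design, testing"

@dataclass
class RegulatoryChange:
    reference: str
    relevant: bool      # stage-1 conclusion: does it touch us at all?
    rough_impact: str   # "low" | "medium" | "high" (stage-1 estimate)

def triage(change: RegulatoryChange) -> tuple[Stage, int]:
    """Return the next stage and its SLA in working days (illustrative values)."""
    if not change.relevant:
        return Stage.STAGE_1_TRIAGE, 2   # document "no action needed" within 48h
    if change.rough_impact == "high":
        return Stage.STAGE_2_PROJECT, 5  # escalate to a full implementation project
    return Stage.STAGE_1_TRIAGE, 5       # deepen only when needed

stage, sla_days = triage(
    RegulatoryChange("EU-2026-001", relevant=True, rough_impact="high")
)
print(stage.name, sla_days)  # STAGE_2_PROJECT 5
```

The point of the sketch is the decision, not the code: even a "not relevant" conclusion gets a documented outcome within a short SLA, which is what makes the triage defensible.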
This reduces “analysis drag” while improving defensibility.
Move 2: separate control design from control effectiveness testing
Many programs overspend because they confuse:
- control design: is the control logically capable of preventing/detecting the risk?
- control effectiveness: is it operating as intended, consistently, with evidence?
Treat them as two different workflows with different owners and outputs.
A minimum standard you can use across Sapin II, ISO 37001, and UNE-style programs:
- design review: once per year or when the process changes
- effectiveness testing: risk-based cadence (monthly/quarterly/annual)
This prevents the common trap: rewriting policies and controls every year while never measuring whether they work.
Move 3: rationalise your control library (reduce redundancy safely)
Redundant controls are rarely removed because teams fear “removing safety.” The way out is traceability.
Run a control rationalisation sprint:
- consolidate duplicate controls across countries into one standard, with local add-ons only when truly required
- define one control owner (operational), one challenger (compliance), one tester (could be compliance, risk, or internal audit)
- retire controls with no clear risk link, no test method, or no evidence path
Decision rule that works in practice: if a control cannot be tested, it is not a control, it is a statement.
Move 4: build “regulatory to control” mapping as a living system
This is the structural fix for fragmentation.
Create a single mapped chain:
- obligation (what is required)
- risk scenario (what could go wrong)
- control objective (what must be achieved)
- control (what is done)
- test (how you verify)
- evidence (what you store)
Table: what you gain by completing the chain
| Problem you feel | Typical root cause | What mapping changes | Cost impact |
|---|---|---|---|
| duplicated reviews | multiple sources of truth | one traceable chain | fewer cycles |
| audit panic | evidence scattered | evidence linked to controls | faster retrieval |
| too many controls | no rationalization basis | redundancy becomes visible | fewer controls |
| weak KPIs | activity-based metrics | outcome-linked metrics | better resourcing case |
Move 5: industrialise evidence collection (do not “chase” evidence)
Evidence work becomes manageable when you standardize what counts as acceptable proof.
Checklist: evidence pack index (copy and adapt)
- risk assessment and updates (method, inputs, approvals)
- control inventory (owners, frequency, test method)
- testing plan and results (including exceptions)
- remediation log (actions, owners, dates, closure proof)
- training matrix (role-based), completion, and effectiveness checks
- third-party due diligence files (tiered) and monitoring
- governance records (committee cadence, decisions, escalations)
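A minimal sketch of how the pack index above can be checked for completeness before an audit, assuming a flat index of named evidence categories (the category keys and file names are illustrative):

```python
# Illustrative only: category names mirror the checklist above.
REQUIRED_CATEGORIES = {
    "risk_assessment", "control_inventory", "testing_plan_and_results",
    "remediation_log", "training_matrix", "third_party_due_diligence",
    "governance_records",
}

def missing_evidence(pack: dict[str, list[str]]) -> set[str]:
    """Return required categories with no stored items."""
    return {cat for cat in REQUIRED_CATEGORIES if not pack.get(cat)}

pack = {
    "risk_assessment": ["ra-2026-v3.pdf"],
    "control_inventory": ["controls.xlsx"],
    "testing_plan_and_results": [],  # planned but nothing stored yet
    "governance_records": ["committee-2026-01-minutes.pdf"],
}
print(sorted(missing_evidence(pack)))
```

Running a check like this on a schedule, rather than at quarter-end, is what turns evidence from a chase into a manufactured product.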
If you cannot retrieve these within a reasonable timeframe, you do not have “too little documentation.” You have an evidence architecture problem.
Move 6: treat data governance as a compliance control, not an IT project
For compliance teams, the most useful data governance deliverables are simple and operational.
Start with two artifacts:
- a data inventory for compliance-critical datasets (third parties, payments, gifts and hospitality, intermediaries, approvals, training, speak-up)
- a data lineage note for each, answering: source, owner, quality checks, retention, access rules
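A lineage note can be as small as one structured record per dataset. The fields below follow the five questions in the text; the values are hypothetical examples:

```python
# Hypothetical lineage note for one compliance-critical dataset.
lineage_note = {
    "dataset": "gifts_and_hospitality",
    "source": "expense system export (monthly)",
    "owner": "Head of Compliance Operations",
    "quality_checks": ["duplicate detection", "threshold flagging"],
    "retention": "5 years after financial year close",
    "access_rules": "compliance team read/write; internal audit read-only",
}

def is_complete(note: dict) -> bool:
    """A lineage note answers all five questions: source, owner,
    quality checks, retention, access rules."""
    required = {"source", "owner", "quality_checks", "retention", "access_rules"}
    return required <= note.keys() and all(note[k] for k in required)

print(is_complete(lineage_note))  # True
```

A spreadsheet row per dataset achieves the same thing; what matters is that every compliance-critical dataset has an answer on record for each of the five questions.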
You are building the infrastructure that makes monitoring and reporting reliable.
For an overarching management system lens, ISO 37301 can be a helpful reference point.
Move 7: use AI where it changes structure (not where it adds a gadget)
AI tends to reduce cost structurally when it compresses cycle time and removes duplication, while strengthening traceability.
High-impact, defensible use cases include:
- faster triage and drafting of structured risk assessment summaries (with human sign-off)
- obligation extraction and regulatory-to-control mapping suggestions (with review)
- workflow automation for implementation tasks, escalations, and decision logs
- evidence classification and completeness checks
- continuous monitoring signals (exceptions, anomalies) routed to owners
Decision tree: where to automate first
- If your main pain is audit readiness, automate evidence collection and evidence indexing first.
- If your main pain is slow change implementation, automate intake, triage, ownership assignment, and workflow tracking first.
- If your main pain is “too many controls,” automate mapping and rationalization support first (but do not automate the decision to retire controls).
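The decision tree above reduces to a single lookup from dominant pain point to first automation target (the pain labels are assumptions made for this sketch):

```python
def first_automation(main_pain: str) -> str:
    """Map the dominant pain point to the first automation target."""
    routes = {
        "audit_readiness": "evidence collection and indexing",
        "slow_implementation": "intake, triage, ownership, workflow tracking",
        "too_many_controls": "mapping and rationalization support (decisions stay human)",
    }
    return routes.get(main_pain, "clarify the bottleneck before automating")

print(first_automation("audit_readiness"))  # evidence collection and indexing
```

Note the default branch: if no single pain dominates, the right first step is diagnosis, not automation.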
Guardrail to keep it defensible: if an AI output cannot be explained, traced to inputs, and approved by a named owner, it should not drive compliance decisions.

Frequently asked questions
Is simplifying compliance compatible with Sapin II and AFA expectations? Yes, if simplification reduces fragmentation and strengthens traceability, evidence, and operational ownership. Regulators and auditors typically reward clearer accountability and testable controls over large, generic policy sets.
What is the biggest hidden driver of compliance cost? In practice, it is often time, especially slow impact assessment and slow evidence retrieval. Time creates exposure windows, then forces expensive “catch-up” projects.
How do I prove effectiveness, not just activity? Link controls to tests and outcomes: exception rates, remediation closure time, re-occurrence of issues, and documented management decisions. Activity metrics (trainings delivered, policies published) are rarely enough alone.
Should we reduce the number of controls to save money? Only if you can show the remaining controls are better designed, better owned, and better tested. Control rationalization without traceability usually increases risk.
Where does AI fit without creating new risk? AI fits best inside governed workflows where humans approve conclusions, and where outputs are logged with sources, versions, and rationale. Treat AI as assistive, not autonomous.

