Friday, April 17, 2026

Loi Sapin II risk mapping that actually survives an AFA audit

Written by Iratxe Gurpegui
8 min read

The first time the AFA asks, "how did you score this risk?", the room goes quiet.

Not because the team did nothing. Because the risk map was built like a workshop deliverable, not like an auditable decision.

Good compliance teams get cornered by one simple gap: they can show the heatmap, but they cannot replay the reasoning.

That is the line between a Loi Sapin II risk map that looks fine — and one that actually survives an AFA audit.

Start here: what AFA auditors are actually testing

Under Sapin II, risk mapping is not a poster. It is the backbone that should justify the rest of the program — third parties, gifts and hospitality, accounting controls, training, monitoring.

Article 17 of the Sapin II law requires a risk map as part of the eight mandatory measures. And because every other measure must be risk-based, the map must be real, not decorative.

AFA auditors do not just look for "a methodology." They look for whether your methodology produces decisions that are consistent, explainable, and connected to controls. The AFA's own recommendations make this explicit: the map must offer "l'assurance raisonnable que les risques identifiés sont le fidèle reflet de ceux auxquels l'organisation est réellement exposée" — a reasonable assurance that identified risks genuinely reflect actual exposure.

If you cannot show the chain from inputs to score to action, you are running paper compliance — just in a spreadsheet.

The one criterion that determines survival: replayability.

If an auditor picks a high-risk scenario and you can reproduce — calmly and quickly — how you got there, who validated it, what data you used, what controls cover it, what gaps remain, and what you did about those gaps, you are in a strong position.

If you cannot, you are negotiating from weakness.

What the AFA actually asks on-site: the five-question test

Before building anything, it helps to know exactly what you are preparing for. Here is a realistic audit moment.

The AFA is on site. They choose a scenario that always exists in mid-size companies: sales intermediaries in a high-growth market. They ask five questions, in this order:

  1. Why is this scenario even in your universe, and why is it framed this way?
  2. What concrete data did you use — not opinions — to score likelihood and impact?
  3. Who participated, what roles were represented, and how did you challenge bias?
  4. What controls reduce the risk, and do you test them?
  5. What happened since the last update that proves your map is alive?

If your answers are "we did interviews" and "the business confirmed", you will spend the rest of the audit digging for artifacts.

If your answers are "here is the scenario library version, here is the data pack, here is the gross and net scoring rationale, here is the control link, here are the last two monitoring results and the remediation record" — the pressure drops immediately.

That is not perfection. That is basic defensibility.

Keep those five questions visible as you build. Each one maps directly to something you need to have ready.
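To make that mapping tangible, here is a minimal sketch of a per-scenario evidence checklist, in the spirit of a spreadsheet column set or a small database table. The question keys and artifact names are illustrative assumptions, not an AFA-prescribed format.

```python
# Illustrative only: the question keys and artifact names are assumptions,
# not an AFA-prescribed format.
AFA_QUESTIONS_TO_EVIDENCE = {
    "why is this scenario in scope":       ["scope_memo", "scenario_library_version"],
    "what data supported the score":       ["data_pack", "gross_rationale", "net_rationale"],
    "who participated and challenged":     ["workshop_notes", "attendee_roles", "validation_record"],
    "which controls, and are they tested": ["control_mapping", "test_results"],
    "what changed since the last update":  ["trigger_log", "remediation_record"],
}

def missing_evidence(scenario: dict) -> dict:
    """For each AFA question, list the artifacts this scenario record does not yet hold."""
    return {
        question: [a for a in artifacts if not scenario.get(a)]
        for question, artifacts in AFA_QUESTIONS_TO_EVIDENCE.items()
    }
```

Anything a check like this returns is a gap you would otherwise discover in the audit room.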

The methodology the AFA actually expects: six steps

The AFA recommendations describe a specific six-step process for building a defensible risk map. Most companies skip steps three and four — which is precisely where audits break down.

Step 1 — Roles and responsibilities. Who owns the map? The AFA expects a clear split: the compliance officer coordinates, process owners contribute their operational knowledge, and the governing body validates. If only one person built the map, the auditor will question whether it reflects actual business reality.

Step 2 — Risk identification through process analysis. Risks must be identified by analyzing real business processes — not by copying a generic risk list. The AFA is explicit: a pre-built risk library can inform workshops, but cannot predetermine their output. The scenarios that emerge must be documented in writing, with the exchanges that produced them.

Step 3 — Gross risk scoring. This is the inherent risk level before any controls are applied. The AFA expects you to evaluate each scenario on three dimensions: impact (reputational, financial, legal), frequency, and aggravating factors such as geography or counterparty type. Written rationale is required — consensus scores with no documentation are not defensible.
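As a rough illustration of what "written rationale is required" can look like in a tool, here is a sketch of a gross-scoring record. The 1-to-4 scale and the multiplicative convention are assumptions for the example, not something the AFA mandates.

```python
from dataclasses import dataclass

@dataclass
class GrossScore:
    impact: int               # reputational, financial, legal severity (1-4, assumed scale)
    frequency: int            # plausible frequency of the scenario (1-4, assumed scale)
    aggravating_factors: int  # geography, counterparty type, channel (0-2, assumed scale)
    rationale: str            # the written reasoning behind each rating

    def value(self) -> int:
        # One common convention, used here only as an example:
        # impact x frequency, worsened by aggravating factors.
        return self.impact * self.frequency + self.aggravating_factors

    def is_documented(self) -> bool:
        # A consensus score with an empty rationale is exactly what auditors probe.
        return bool(self.rationale.strip())
```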

Step 4 — Net or residual risk scoring. Once existing controls are taken into account, what risk remains? This step requires you to assess whether your controls actually work — drawing on audit results, control testing, and incident history. This is where most maps fail: controls are listed, not evaluated.
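Continuing the same illustrative sketch, the point of net scoring is that a control only reduces risk to the extent you can evidence that it works. The effectiveness scale and the reduction formula below are assumptions for the example, not a prescribed calculation.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    design_evidence: str   # e.g. the procedure or policy reference
    test_evidence: str     # audit result, control test, incident history ("" if untested)
    effectiveness: float   # assessed 0.0 (not working) to 1.0 (fully effective)

def net_score(gross_value: float, controls: list[Control]) -> float:
    """Residual risk after controls, crediting only controls backed by test evidence."""
    tested = [c for c in controls if c.test_evidence]
    if not tested:
        return gross_value  # listed-but-untested controls earn no reduction
    strongest = max(c.effectiveness for c in tested)
    return gross_value * (1.0 - strongest)
```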

Step 5 — Prioritization and action plan. Residual risks are ranked. For those above your acceptance threshold, a remediation or strengthening plan is built — with named owners, deadlines, and follow-up mechanisms. The AFA is clear that without this plan, the map is informational, not governing.
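A prioritization pass can be as simple as the sketch below: rank residual scores, compare them to a documented acceptance threshold, and make sure anything above it carries a named owner, a deadline, and a status. The threshold value, field names, and the example action are all hypothetical.

```python
from datetime import date

ACCEPTANCE_THRESHOLD = 6.0  # assumed value; yours should be documented and validated

def build_action_plan(scenarios: list[dict]) -> list[dict]:
    """Rank scenarios by residual risk and flag those that need a remediation plan."""
    ranked = sorted(scenarios, key=lambda s: s["net_score"], reverse=True)
    for s in ranked:
        s["needs_plan"] = s["net_score"] > ACCEPTANCE_THRESHOLD
    return ranked

# A tracked action, in the sense of "named owner, deadline, follow-up":
example_action = {
    "description": "Strengthen due diligence on sales intermediaries in the new market",
    "owner": "Head of Sales Operations",  # hypothetical named owner
    "deadline": date(2026, 9, 30),        # hypothetical deadline
    "status": "open",
}
```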

Step 6 — Formalization, update, and archiving. The map must be documented, dated, versioned, and archived — along with all supporting materials: workshop notes, scoring rationale, methodology, and the governing body's validation. The AFA recommends annual review as a minimum, plus event-driven updates when material changes occur (new market, new channel, acquisition, incident, audit finding).
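Formalization is mostly a record-keeping discipline. A version entry in the spirit of the AFA's archiving expectations might look like the sketch below; every field name and value is illustrative.

```python
# Illustrative version record; field names and values are assumptions.
map_version = {
    "version": "2026.1",
    "date": "2026-04-17",
    "validated_by": "governing body (reference to the validation minutes)",
    "methodology_annex": "methodology_annex_v3.pdf",
    "triggers_since_last_version": [
        {"date": "2025-11-02", "event": "new sales channel opened", "map_updated": True},
    ],
    "archived_artifacts": [
        "workshop_notes/",
        "gross_and_net_scoring_rationale.xlsx",
        "control_mapping.xlsx",
        "action_plan.xlsx",
    ],
}
```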

What to build: the minimum audit pack, question by question

The goal is not a perfect tool. The goal is a clean evidence trail — regardless of whether you work in a spreadsheet, a GRC platform, or a custom database.

Here is the structure, organized around the five AFA questions:

  1. What is your risk mapping scope?
     What to show in under 2 minutes: a scope memo covering entities, countries, processes, and third parties, plus exclusions and why.
     What usually breaks: the scope is implicit and changes every year.
     Fix that works: a one-page scope decision with versioning.

  2. How did you identify risks?
     What to show in under 2 minutes: a scenario library with sources (workshops, incidents, audits, spend data) plus written exchange summaries.
     What usually breaks: vague risk labels ("corruption risk").
     Fix that works: scenario-based risks tied to real activities, documented in workshop notes.

  3. How did you score risks?
     What to show in under 2 minutes: gross scoring rationale plus net scoring with a control effectiveness assessment per scenario.
     What usually breaks: scores are "consensus" with no rationale; gross and net scores are not distinguished.
     Fix that works: a mandatory written rationale field for both gross and net scores.

  4. How do controls reduce risk?
     What to show in under 2 minutes: control mapping per scenario (design) plus testing and monitoring evidence (effectiveness).
     What usually breaks: controls are listed, not tested.
     Fix that works: separate design evidence from test evidence.

  5. How do you update?
     What to show in under 2 minutes: a triggers log, the last update record, and the governing body's validation.
     What usually breaks: annual refresh theater.
     Fix that works: event-driven updates with a trigger register, with an annual review at minimum.

Three documentation elements matter more than most teams expect.

The methodology annex. The AFA explicitly requires that the map be accompanied by documentation describing how it was built — the identification, scoring, and prioritization methodology. Without it, the auditor cannot assess whether your approach was sound.

The decision trail. Who validated the final map? Who challenged a score? Who accepted residual risk? The AFA expects governing body sign-off, documented and dated, before the map is deployed and at each update. If you cannot identify those decision points, you are telling the auditor the map is informational — not governing.

The archiving requirements. The AFA specifies what must be retained: records of exchanges with staff, gross and net scoring methodology and definitions, risk identification and classification procedures, each version of the map presented to leadership, validations, and associated action plans. Versions must be dated, referenced, and archived.

⚠️ One practical rule to apply immediately: every high-risk scenario should have a named owner and at least one tracked remediation or monitoring action.
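Whatever tool you use, that rule is easy to check automatically. A minimal sketch, assuming the record structure from the earlier examples:

```python
HIGH_RISK_THRESHOLD = 6.0  # assumed value; align it with your acceptance threshold

def breaks_ownership_rule(scenario: dict) -> bool:
    """True when a high-risk scenario has no named owner or no tracked action."""
    if scenario.get("net_score", 0.0) <= HIGH_RISK_THRESHOLD:
        return False
    has_owner = bool(scenario.get("owner"))
    has_tracked_action = any(a.get("status") for a in scenario.get("actions", []))
    return not (has_owner and has_tracked_action)
```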

Where AI helps — and where it does not

Once your evidence structure is clear, AI can accelerate the operational work significantly. But the boundaries matter.

Where AI earns its place:

  • Extracting risk signals from existing documents (audit reports, incident logs, third-party files)
  • Standardizing scenario wording so your map is comparable across entities
  • Flagging inconsistencies (same risk, different score, no explanation); a basic version of this check is sketched just after this list
  • Pre-filling evidence requests and chasing artifacts through workflow automation, not email
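To make the inconsistency check concrete, here is a minimal sketch that runs over a flat export of your map, one row per scenario per entity. The column names are assumptions.

```python
from collections import defaultdict

def flag_scoring_inconsistencies(rows: list[dict]) -> list[str]:
    """Flag scenarios scored differently across entities without a written explanation."""
    by_scenario = defaultdict(list)
    for row in rows:
        by_scenario[row["scenario_id"]].append(row)
    findings = []
    for scenario_id, entries in by_scenario.items():
        scores = {e["net_score"] for e in entries}
        undocumented = [e for e in entries if not e.get("rationale", "").strip()]
        if len(scores) > 1 and undocumented:
            findings.append(
                f"{scenario_id}: {len(scores)} different scores across entities, "
                f"{len(undocumented)} of them with no written rationale"
            )
    return findings
```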

This is exactly the kind of work Naltilia is built for: regulatory risk assessment linked to remediation actions, with automated data collection, so the map stays connected to reality instead of drifting into slideware.

Where AI should not go: deciding the score in a black box. The AFA expects human judgment and documented rationale at each scoring step — and that accountability cannot be automated away. You can automate the grind. You cannot outsource the reasoning.

How to use this before your next AFA audit

Stop asking: "do we have a risk map?"

Start asking: "can we replay one risk end-to-end, with evidence, in under five minutes?"

If the answer is no, your priority is not redesigning the heatmap. Your priority is building the trail: process analysis, scenario documentation, gross and net scoring rationale, governing body validation, control testing, and update records.

A simple starting point: pick your top three high-risk scenarios. For each one, run through the five AFA questions above. Identify every gap. Fill them in order of audit exposure.

Do that, and the audit conversation changes. You are no longer defending a document. You are showing a system.

Frequently asked questions

How often should we update a Loi Sapin II risk map? The AFA recommends an annual review at a minimum. In practice, you should also update on triggers: new country, new sales channel, new third-party model, incident, audit finding, or M&A.

What is the biggest AFA audit mistake in risk mapping? Conflating gross and net risk — or skipping the net scoring step entirely. Without evidence that you assessed whether your controls actually work, the residual risk score is not defensible.

Do we need a complex methodology to satisfy the AFA? No. You need a consistent scale, documented thresholds, and the ability to show how you applied them at each step. Simple and repeatable beats complex and fragile.

Can we use AI to build the risk map? Yes — for document extraction, consistency checks, and workflow automation. No, if you expect it to replace judgment or if you cannot explain outputs to an auditor.

Want your risk mapping to be replayable, not just presentable?

If you are rebuilding your Sapin II risk mapping — or you are a year away from an AFA audit and already feel the spreadsheet wobble — Naltilia can help you turn the map into an evidence-linked workflow.

About the Author

Iratxe Gurpegui

I've spent 20 years as a compliance and competition lawyer across Europe and Latin America, and throughout my career I've seen firsthand how complex and costly regulations can hold companies back. But I've also learned that compliance doesn't have to be a burden; it can be a strategic advantage. My mission is to help companies harness the power of AI, transforming compliance into something faster, simpler, and, most importantly, a real driver of growth for businesses.