Friday, December 19, 2025

5 Compliance Blind Spots Regulators (and Auditors) Notice First

Regulators and external auditors do not start with your best slide. They look for the gaps that reveal whether your program works in real life. If you are aiming for Loi Sapin II, ISO 37001, UNE 19601 or UNE 19603 alignment, or preparing for AML and AI Act expectations, the fastest way to raise confidence is to eliminate the blind spots they notice first.

This practical guide highlights five common mistakes, why they happen, the risks they create, and how to fix them with actions you can start this quarter.

[Image: a "compliance radar" showing five blind spots, policy-only programs, unmonitored controls, third-party gap, no evidence of adoption and activity-only KPIs, around a central hub of risk, controls, owners and evidence.]

The five blind spots at a glance

Blind spot | What auditors expect instead
Policy library, not a risk-based system | Documented risk assessment that drives controls, owners and priorities
Controls exist on paper, no monitoring | Evidence that controls run on time, with exceptions tracked and remediated
Third parties ignored or treated once | Risk-tiered onboarding, renewals, clauses, and training for high-risk partners
Documents without adoption proof | Traceable approvals, communications, training, attestations, investigations, sanctions, remediation
Activity metrics only | Leading and lagging indicators linked to risk reduction and effectiveness

1) Treating compliance as a policy library instead of a risk-based system

What it looks like: a shelf of copied templates, broad statements about zero tolerance, and no traceable link between your top risks and what the company actually does. Policies look similar across entities and markets regardless of exposure.

Why it happens: speed and checkbox audits, a belief that policies equal compliance, and a promise to tailor later when there is time.

Why it is risky: you end up controlling the wrong things, or nothing at all. When an incident hits, you cannot justify why resources went to low-risk areas while higher exposures went unaddressed. Frameworks like Loi Sapin II, ISO 37001 and UNE 19601 expect documented risk mapping that drives proportionate controls.

Fix in practice: start with a structured risk assessment. Cover entities, processes, markets, third parties and red flags. Score inherent risk, then link each prioritized scenario to at least one preventive and one detective control, with an owner and evidence source. Revisit quarterly for dynamic areas like sales channels, intermediaries or high-risk AI deployments. A minimal data sketch of this structure follows the examples below.

  • Anti-corruption example (ISO 37001, Sapin II): if third-party sales agents in two markets drive most exposure, mandate enhanced due diligence, commission approvals with segregation of duties, and post-deal sampling of invoices and hospitality logs.
  • Antitrust example (UNE 19603): for trade association participation, require pre-clearance of agendas, counsel attendance for sensitive topics, and a post-meeting log review.
  • AI Act example: for high-risk AI systems, inventory use-cases, apply risk controls like human oversight, and log decisions for traceability.
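If you want to see the risk-to-control linkage as structured data, here is a minimal Python sketch. The class layout, scoring scale and owner emails are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    kind: str             # "preventive" or "detective"
    owner: str            # a named, accountable person
    evidence_source: str  # where proof of execution lives

@dataclass
class RiskScenario:
    description: str
    inherent_score: int   # e.g. likelihood x impact on a 1-25 scale
    controls: list[Control] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Return control types the scenario is still missing."""
        kinds = {c.kind for c in self.controls}
        return [k for k in ("preventive", "detective") if k not in kinds]

# Example: third-party sales agents as the top exposure
agents = RiskScenario(
    "Bribery via sales agents in markets A and B",
    inherent_score=20,
    controls=[
        Control("Enhanced due diligence at onboarding", "preventive",
                "compliance.lead@example.com", "DD questionnaire archive"),
        Control("Post-deal sampling of invoices and hospitality logs",
                "detective", "internal.audit@example.com", "sampling workpapers"),
    ],
)
print(agents.gaps())  # [] means both control types are covered
```

The point of the structure is that a change in a risk score immediately shows which controls, owners and evidence sources are affected.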

Naltilia angle: risk mapping that stays alive, not an annual slide. Naltilia links each risk to its control, owner and evidence so changes in risk automatically update monitoring scope and priorities. For a step-by-step method, see our guide on how to build a compliance risk map in 6 steps.

2) Confusing “having controls” with “monitoring controls”

What it looks like: a register lists approvals, four-eyes checks and logs, but nobody can show if they actually ran last month, by whom, and with what evidence. Exceptions are discovered during annual audits, not managed in real time.

Why it happens: monitoring feels like overhead, ownership is unclear, and evidence is scattered across emails, spreadsheets and local shared drives.

Why it is risky: regulators care about effectiveness, not intentions. In practice, a control that is not performed is a control that does not exist. UNE 19601, ISO 37301 and AML programs all require ongoing monitoring and timely remediation.

Fix in practice: for each control, define the owner, frequency, procedure reference, evidence type and storage location, plus an exception workflow. Use sampling for high-frequency controls and define what counts as a failure. Review exceptions in a monthly risk and controls meeting.
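As a rough illustration, those control attributes can live as structured records with a simple overdue check. Every field name and value below is an assumption made for the sketch:

```python
from datetime import date, timedelta

# Illustrative control records carrying the attributes described above.
controls = [
    {"id": "C-01", "name": "Four-eyes check on agent commissions",
     "owner": "sales.ops@example.com", "frequency_days": 30,
     "procedure_ref": "PROC-017", "evidence": "approval log",
     "last_run": date(2025, 10, 2)},
    {"id": "C-02", "name": "Hospitality register review",
     "owner": "compliance@example.com", "frequency_days": 90,
     "procedure_ref": "PROC-031", "evidence": "review memo",
     "last_run": date(2025, 12, 1)},
]

def overdue(controls, today):
    """Controls whose last run is older than their defined frequency."""
    return [c for c in controls
            if today - c["last_run"] > timedelta(days=c["frequency_days"])]

for c in overdue(controls, today=date(2025, 12, 19)):
    print(f"{c['id']} is overdue, escalate to {c['owner']}")  # flags C-01
```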

Naltilia angle: continuous controls monitoring, reminders and evidence collection in one place. Our case study shows how moving from quarterly spreadsheets to automated monitoring cuts manual work and surfaces exceptions in time to act. Read more in compliance control monitoring, a case study.

3) Building a program that ignores third parties (the supplier-shaped hole)

What it looks like: basic onboarding questionnaires sent by email, inconsistent checks by region, renewals forgotten after the first year, and high-risk intermediaries slipping through because procurement speed beats diligence. KYC is a one-time PDF.

Why it happens: no risk tiering, no workflow, manual chasing, and no renewal triggers. Vendor owners change and documents disappear in local folders.

Why it is risky: many enforcement actions begin with third parties rather than employees. ISO 37001, Sapin II and AML regimes all expect proportionate due diligence, ongoing monitoring and clear contract clauses for audit, termination and training.

Fix in practice: risk-tier all suppliers and partners. Apply proportionate due diligence by tier. Set renewal cycles and alerting, embed contract clauses for audit rights and anti-corruption commitments, and require training or attestations for high-risk partners. Centralize documentation and maintain a clean trail of who reviewed what and when.
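A minimal sketch of risk-tiered renewal triggers, using the annual, two-year and three-year cadences suggested in the FAQ below. The intervals and record shape are illustrative, not a regulatory requirement:

```python
from datetime import date, timedelta

# Renewal cadence by risk tier, in days (illustrative rule of thumb).
RENEWAL_DAYS = {"high": 365, "medium": 730, "low": 1095}

def due_for_refresh(partners, today):
    """Partners whose due-diligence refresh date has passed."""
    return [p for p in partners
            if p["last_review"] + timedelta(days=RENEWAL_DAYS[p["tier"]]) <= today]

partners = [
    {"name": "Agent SARL", "tier": "high", "last_review": date(2024, 11, 1)},
    {"name": "Logistics BV", "tier": "low", "last_review": date(2024, 6, 15)},
]
for p in due_for_refresh(partners, today=date(2025, 12, 19)):
    print(f"{p['name']}: {p['tier']}-risk refresh overdue, trigger the workflow")
```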

Operating across jurisdictions adds its own layer: local government interactions, licensing and document submissions all generate third-party records that belong in the same centralized trail.

4) Overproducing documents and under-producing “adopted and applied” proof

What it looks like: beautiful codes of conduct and policies, but thin evidence of board approval, workforce communication, completion of compliance risk training, policy attestations, investigation logs, sanctions or remediation. The story ends at publication instead of adoption.

Why it happens: writing is easier than operationalizing. Evidence requirements are not defined up front, so teams scramble later to reconstruct a past they did not capture.

Why it is risky: in an investigation, “we had a policy” is the weakest defense. Authorities look for traceability, timelines and outcomes. Guidance from agencies like AFA and the US Department of Justice stresses proof of adoption, effectiveness and accountability across the program lifecycle.

Fix in practice: define an evidence model before you roll out. Decide what must be retained, where it lives, for how long, and who validates completeness. Capture structured data, not only PDFs, so you can answer who, what, when questions in minutes. A sketch of such a record follows the artifact list below.

Examples of strong artifacts:

  • Approvals: board or committee minutes with dates and sign-offs tied to policy versions
  • Communications: distribution lists and open rates for key policy updates
  • Training: named rosters, completion timestamps and scenario scores for compliance risk training
  • Attestations: time-stamped, per policy and per role, with renewal cadences
  • Investigations: case logs with allegations, steps taken, outcomes and remediation
  • Sanctions: consistent application records mapped to HR systems, with oversight checks
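To make "structured data, not only PDFs" tangible, here is a minimal evidence record that answers a who, what, when question in one pass. Field names and retention periods are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class EvidenceRecord:
    artifact_type: str     # "approval", "attestation", "training", ...
    policy_id: str         # the policy version it ties to
    subject: str           # who acted: approver, attester, trainee
    occurred_at: datetime  # when it happened, not when it was filed
    location: str          # where the underlying document lives
    retention_years: int   # set per your evidence model

log = [
    EvidenceRecord("approval", "POL-ANTICORR-v3", "board.secretary@example.com",
                   datetime(2025, 3, 14, 10, 0), "minutes/2025-03-14.pdf", 10),
    EvidenceRecord("attestation", "POL-ANTICORR-v3", "j.doe@example.com",
                   datetime(2025, 4, 2, 9, 30), "attestations/2025/Q2", 6),
]

# "Who attested to POL-ANTICORR-v3, and when?" answered in one pass.
print([(r.subject, r.occurred_at) for r in log
       if r.artifact_type == "attestation" and r.policy_id == "POL-ANTICORR-v3"])
```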

Naltilia angle: an evidence library and automated document drafting that outputs audit-ready artifacts, not just text. Pre-filled attestations, event triggers and one-click approvals accelerate collection. If you want to reduce quarter-end chasing, this guide shows how to automate it: how to automate evidence collection for compliance controls.

5) Measuring activity instead of effectiveness (training hours are not reduced risk)

What it looks like: quarterly packs report the number of trainings delivered and policies updated, but omit incidents, control failures, overdue items, hotspots or repeat root causes. The headline looks good while risk remains unchanged.

Why it happens: outcome metrics are harder, data sits in silos, and nobody likes to showcase bad news. Without integration across risk, controls, issues and remediation, it is difficult to tell if things are getting better.

Why it is risky: you cannot improve what you do not measure. Early signals are missed, and resources do not shift to where they matter most. Frameworks like ISO 37301 and UNE 19601 are built around continual improvement, which implies effectiveness metrics.

Fix in practice: track leading and lagging indicators tied to your top risks and controls, as in the sketch after this list.

  • Leading: control completion rates by owner, exception counts and age, third-party refresh rates, time-to-approve high-risk transactions, time-to-remediate issues, training completion and pass rates for role-specific scenarios
  • Lagging: number and severity of incidents, confirmed breaches, sanctions applied, repeated root causes, litigation or regulatory inquiries
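For illustration, here are two of the leading indicators computed from simple run and exception records. The record shapes and figures are assumptions for the sketch:

```python
from datetime import date

runs = {"scheduled": 24, "completed_on_time": 21}
exceptions = [
    {"control": "C-01", "opened": date(2025, 9, 1), "closed": None},
    {"control": "C-02", "opened": date(2025, 11, 20), "closed": date(2025, 12, 5)},
]
today = date(2025, 12, 19)

# Leading indicator 1: control completion rate for the period.
completion_rate = runs["completed_on_time"] / runs["scheduled"]

# Leading indicator 2: open exceptions and their age in days.
open_age = [(e["control"], (today - e["opened"]).days)
            for e in exceptions if e["closed"] is None]

print(f"completion rate: {completion_rate:.0%}")  # 88%
print("open exceptions:", open_age)               # [('C-01', 109)]
```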

Evaluate compliance risk training by outcomes, for example post-training decision accuracy in risky scenarios, complaint rates, and a drop in repeat findings, not just hours logged.

Naltilia angle: dashboards that connect risks, controls, exceptions, issues and remediation. This gives executives a single view of where risk is trending and whether investment is working. For a pragmatic KPI set you can adopt quickly, see our playbook in compliance risk management, a practical guide for SMEs.

[Image: a five-stage pipeline, risk assessment → controls → monitoring → evidence → reporting, with notes above each stage reading owners, frequency, exceptions, retention and KPIs.]

Frequently asked questions

How do auditors quickly tell if a program is risk-based? They look for a current risk assessment, how it guided policy and control choices, and whether owners and evidence are tied to those choices. Vague enterprise-wide policies with no link to top exposures are a red flag.

How often should we update our risk map? Update at least annually, and quarterly for volatile areas like third-party channels, new markets, M&A integration, high-risk AI deployments or product launches. Material events should trigger an immediate review.

What is an acceptable cadence for third-party renewals? Risk-tiered. High-risk intermediaries or public-facing agents often require annual or even semi-annual refresh, medium risk every two years, and low risk every three years, alongside contract clause and training checks.

What evidence matters most for training? Named rosters, completion timestamps, scenario scores, and post-training effectiveness signals such as fewer repeat findings. Hours spent are rarely convincing on their own.

Move from paper to proof with Naltilia

If you recognized your organization in any of these blind spots, you are not alone. Mid-sized teams can close the gaps quickly with a living risk map, continuous monitoring and an evidence-first mindset.

Naltilia’s AI-powered platform streamlines risk assessment, automates control monitoring, centralizes third-party due diligence records, drafts tailored policies, and collects audit-ready evidence with reminders and workflows. That means less manual chasing and a clearer story when regulators or auditors ask, “Does it work in practice?”

Want to see how it fits your program, from ISO 37001 and Sapin II to UNE 19601, antitrust and AML, with AI Act expectations on the horizon? Book a short walkthrough.