
Internal control is the set of governance, processes, and activities an organization uses to provide reasonable assurance that it will achieve objectives such as compliance, reliable reporting, and operational effectiveness. In practice, it is how you prove (to leadership, auditors, and sometimes regulators) that controls are not just written: they are designed appropriately, performed consistently, and corrected when they fail.
A useful reference point is the COSO internal control framework, which auditors widely use to evaluate whether a control system is properly designed and operating effectively (COSO overview). In compliance programs, internal control is also embedded in standards and regulatory expectations, for example ISO 37001, France’s Sapin II (and the AFA’s guidance), and the UNE 19601 and UNE 19603 standards.
What internal control means in a compliance context
In a compliance program, internal control is not limited to finance. It includes the concrete checks and approvals that prevent, detect, and correct misconduct risks, for example:
- gifts and hospitality approvals and registers
- third-party due diligence before onboarding
- segregation of duties in procurement and vendor payments
- conflict-of-interest disclosures and mitigation
- antitrust safeguards for trade association participation
- whistleblowing intake, triage, investigation, and remediation
The audit-ready question is simple: can you show the control worked, for the period under review, and can you explain exceptions and remediation?
Types of controls (and why the category matters in audits)
Auditors and regulators will typically want you to articulate what kind of control you rely on, and why it is appropriate for the risk.
Controls by purpose
- Preventive controls: stop the issue before it happens (example: pre-approval for gifts above a threshold).
- Detective controls: identify issues after they occur (example: periodic review of gifts register for anomalies).
- Corrective controls: fix root causes and prevent recurrence (example: change approval thresholds after repeated breaches).
Controls by execution mode
- Manual controls: performed by people (example: compliance review of a third-party file).
- Automated controls: performed by systems (example: system blocks vendor creation without required fields).
- Hybrid controls: system-supported, human decision (example: automated screening plus compliance sign-off).
Controls by level
- Entity-level controls: apply across the organization (example: tone at the top, disciplinary framework, speak-up governance).
- Process-level controls: embedded in a process (example: procurement due diligence workflow).
The point is not to label for labeling’s sake. The point is that testing, evidence, and failure modes differ.
| Control type | What it does | Typical evidence an auditor accepts | Common failure mode to watch |
|---|---|---|---|
| Preventive | Reduces likelihood | Approval logs, configured thresholds, delegation matrix | Approvals bypassed “because urgent” |
| Detective | Finds issues | Review reports, sampling sheets, exception logs | Reviews done late or without follow-up |
| Corrective | Fixes recurrence | Root-cause analysis, remediation plan, completion evidence | Actions not owned, not tracked to closure |
| Entity-level | Sets environment | Governance minutes, training strategy, KPI reporting | Looks good on paper, no operational link |
| Process-level | Embeds compliance in work | Workflow records, tickets, system screenshots | Workarounds, shadow spreadsheets |
Elements of an effective internal control system (what “good” looks like)
Using COSO’s structure is often the fastest way to explain maturity to auditors, including internal audit and external reviewers.
| COSO component | What auditors look for in practice | Compliance examples |
|---|---|---|
| Control environment | Accountability, independence, culture signals, resourcing | Documented compliance governance, escalation paths, disciplinary measures |
| Risk assessment | Updated, usable risk map, clear methodology, ownership | Anti-bribery and antitrust risk mapping, third-party segmentation |
| Control activities | Controls are designed, mapped to risks, and performed | Approvals, due diligence, accounting checks, conflict-of-interest workflows |
| Information and communication | Policies are understandable, targeted, and actually used | Role-based training, policy attestations, practical playbooks |
| Monitoring activities | Ongoing testing and improvement, not annual panic | Control testing plan, KRIs, remediation tracking, management review |
A practical add-on for 2026 audit expectations is evidence discipline: retention rules, version control, traceability from risk to control to test to remediation.

6 internal control mistakes to avoid to stay audit-ready all year
This section is written for teams who are tired of “audit season” firefighting. The common thread in the six mistakes below is that they create gaps between:
- what the control is supposed to do (design)
- what actually happened (operation)
- what you can prove (evidence)
Mistake 1: treating internal controls as a yearly documentation exercise
What it looks like: controls are documented once a year (often right before an ISO or AFA-style review), then left untouched. Evidence is pulled from emails and screenshots in a rush.
Why it breaks audit readiness: auditors test a period. If you cannot show regular performance and monitoring, the program can look like “paper compliance”, even if people act in good faith.
How to fix it: run internal control as an operating rhythm, not a project.
A lightweight, audit-friendly cadence for mid-size companies:
- Monthly: capture evidence for key controls, log exceptions, update action tracker.
- Quarterly: sample-test priority controls, review KPIs with leadership, validate third-party changes.
- Semiannual (twice a year): refresh risk mapping assumptions for top risks (anti-bribery, antitrust, criminal exposure), review policy exceptions.
- Annual: full control library review, training effectiveness review, independent assessment planning.
If you need a simple rule: if a control is “key” in your risk map, it must have a defined frequency, an owner, and a repeatable evidence output.
Mistake 2: mixing up control design and control effectiveness testing
What it looks like: teams produce strong policies and procedures, then assume they are “effective” because they exist. Testing is confused with training completion rates or “we sent the policy” metrics.
Why it is dangerous: most standards and regulators are moving toward effectiveness. France’s AFA guidance is explicit that measures should be implemented and assessed, not merely formal (AFA resources). ISO standards also require monitoring and continual improvement.
How to fix it: separate three questions and document them distinctly.
| Question | What it means | Example in practice | Evidence to keep |
|---|---|---|---|
| Is the control designed appropriately? | The control, if followed, would mitigate the risk | Gifts above $X require pre-approval | Control description, threshold rationale, approval matrix |
| Is the control operating as designed? | People or systems actually performed it | Approvals were requested before gifts were given | Workflow logs, timestamps, register entries |
| Is the control effective? | It reduced risk, or detected issues early, with follow-up | Exceptions decreased; suspicious patterns were flagged and remediated | Test results, exceptions, root-cause notes, remediation closure |
A practical testing approach that works for lean teams is risk-based sampling:
- define what “compliance” means for the control (pass/fail criteria)
- sample a period (for example, one quarter)
- record exceptions and causes
- document remediation and retest if needed
Mistake 3: building a control library that is too big to run
What it looks like: dozens (or hundreds) of controls are listed because different frameworks were merged, or because every past incident generated a new control. The team cannot collect evidence consistently.
Why it breaks audit readiness: when everything is “key”, nothing is testable. Auditors often prefer a smaller set of clearly owned, well-evidenced controls over a large, inconsistently executed catalog.
How to fix it: rationalize controls using a simple decision tree.
| Decision point | If yes | If no |
|---|---|---|
| Does this control mitigate a top risk in the current risk map? | Keep and test it | Consider retiring or downgrading |
| Can the business execute it at the defined frequency? | Keep | Redesign (simplify, automate, change frequency) |
| Is there a reliable evidence output? | Keep | Redesign the evidence mechanism |
| Is it redundant with another control? | Consolidate | Keep as distinct only if it covers a different failure mode |
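The decision tree above can be expressed as a small triage function, which makes the rationalization repeatable across a large control library. The argument names and outcome strings are illustrative assumptions, not a required vocabulary.

```python
def triage_control(mitigates_top_risk: bool,
                   executable_at_frequency: bool,
                   reliable_evidence: bool,
                   redundant_with_other: bool) -> str:
    """Apply the rationalization decision tree to one control record."""
    if not mitigates_top_risk:
        return "retire or downgrade"
    if not executable_at_frequency:
        return "redesign (simplify, automate, or change frequency)"
    if not reliable_evidence:
        return "redesign the evidence mechanism"
    if redundant_with_other:
        return "consolidate"
    return "keep and test"

# A control tied to a top risk, runnable, evidenced, and not redundant
verdict = triage_control(True, True, True, False)
print(verdict)  # keep and test
```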
A practical target for many mid-size teams is:
- a short list of key controls for top risks (anti-bribery, third parties, accounting integrity, antitrust sensitive activities, whistleblowing)
- a broader list of supporting controls that are monitored more lightly
Mistake 4: unclear operational ownership (controls “owned by compliance”)
What it looks like: compliance is shown as the owner for procurement checks, HR disciplinary steps, finance controls, and business approvals. In reality, compliance can advise, but cannot operate all controls.
Why it is dangerous: it creates a control gap in real life, and a credibility gap in audits. Under Sapin II programs, AFA expectations emphasize implementation in operations, not just design by compliance. Under ISO and UNE standards, accountability and operational integration are core.
How to fix it: define ownership with a RACI-style control record, and make it visible.
A control record template you can reuse:
| Field | What to fill |
|---|---|
| Control name | Short, specific, action-based |
| Risk linked | Risk ID and scenario (not a vague category) |
| Control objective | What failure it prevents or detects |
| Control type | Preventive, detective, corrective, entity-level, process-level |
| Frequency | Event-based, monthly, quarterly, etc. |
| Performer (responsible) | The operational role that executes it |
| Approver (accountable) | The role accountable for outcomes |
| Compliance role | Oversight, challenge, or execution (be honest) |
| Evidence output | The file or log that proves performance |
| Test method | How you will test it and how often |
| Exceptions workflow | Where exceptions go, who decides, how remediation is tracked |
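Keeping the template as a structured record rather than free text makes it easier to maintain a consistent control library and export it for audits. The field names below mirror the table, and the example values are hypothetical; treat this as a sketch, not a required schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ControlRecord:
    control_name: str        # short, specific, action-based
    risk_linked: str         # risk ID and scenario
    control_objective: str   # what failure it prevents or detects
    control_type: str        # preventive, detective, corrective, ...
    frequency: str           # event-based, monthly, quarterly, ...
    performer: str           # operational role that executes it
    approver: str            # role accountable for outcomes
    compliance_role: str     # oversight, challenge, or execution
    evidence_output: str     # file or log that proves performance
    test_method: str         # how and how often it is tested
    exceptions_workflow: str # where exceptions go and who decides

record = ControlRecord(
    control_name="Pre-approve gifts above threshold",
    risk_linked="ABC-03: gifts used to influence a decision-maker",
    control_objective="Prevent unapproved gifts above the threshold",
    control_type="preventive, process-level",
    frequency="event-based",
    performer="Requesting manager",
    approver="Business unit head",
    compliance_role="oversight",
    evidence_output="Approval log entry with timestamp",
    test_method="Quarterly sample of gifts register vs approvals",
    exceptions_workflow="Exception register, reviewed monthly by compliance",
)
print(asdict(record)["performer"])  # Requesting manager
```

Note that the performer is a business role and compliance stays in oversight, matching the "second-line by default" rule.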
If you only change one thing: make “performer” a business role by default, and keep compliance as second-line oversight unless the control is inherently second-line.
Mistake 5: evidence that is not audit-grade (or not retrievable within 48 hours)
What it looks like: evidence scattered in inboxes, chat tools, local drives, or ad hoc spreadsheets. People rely on screenshots without context. No consistent naming convention, no retention rule, no link to control and period.
Why it breaks audit readiness: audit work is largely evidence work. Weak evidence increases sampling pain, escalations, and findings, even when the underlying practice is decent.
How to fix it: define a minimum evidence standard.
An “audit-grade evidence” checklist for controls:
- evidence is tied to a control ID and a risk
- evidence shows period coverage (date range and frequency)
- evidence shows who performed and who approved (where applicable)
- evidence is tamper-evident or comes from a system log when possible
- exceptions are recorded with rationale and remediation
- retention and access rules are defined (including cross-border considerations)
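One lightweight way to enforce the "tied to a control ID" and "period coverage" points is a naming convention for evidence files. The pattern below is a hypothetical example of such a convention, not a standard.

```python
from datetime import date

def evidence_name(control_id: str, period_start: date, period_end: date,
                  performer: str, kind: str) -> str:
    """Build an evidence file name carrying control, period, owner, and type."""
    return (f"{control_id}_{period_start:%Y%m%d}-{period_end:%Y%m%d}"
            f"_{performer}_{kind}")

# Hypothetical Q1 sampling sheet for a gifts control, performed by jdupont
name = evidence_name("GFT-01", date(2026, 1, 1), date(2026, 3, 31),
                     "jdupont", "sampling-sheet")
print(name)  # GFT-01_20260101-20260331_jdupont_sampling-sheet
```

A convention like this makes the time-to-evidence test below much easier to pass, because evidence can be retrieved by control ID and period instead of by memory.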
If your leadership asks “are we audit-ready?”, a good internal metric is time-to-evidence. If it takes longer than a day or two to retrieve key evidence, audit readiness is usually fragile.
Mistake 6: monitoring that stops at activity KPIs (and misses control failures)
What it looks like: dashboards report outputs like “number of trainings delivered” or “number of due diligences completed”, but do not report whether controls are working, failing, or being bypassed.
Why it is dangerous: activity metrics can look healthy while the risk increases. Standards like ISO 37001 and the UNE family are aligned with the idea of measuring performance and improving. Auditors also look for monitoring that can detect breakdowns.
How to fix it: add a small set of effectiveness signals for each top-risk control.
Examples that are practical and defensible:
- Exception rate for a control (percentage of sampled items failing criteria)
- Time to remediate exceptions (median days to closure)
- Repeat exceptions (same root cause recurring)
- Coverage (percentage of population in scope that went through the control)
- Override rate (how often the control was bypassed, with approvals)
Keep the model simple: for each key control, define 1 coverage metric, 1 quality metric, and 1 remediation metric.
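The coverage, quality, and remediation metrics above can be computed from an exception log. The record shape and the numbers below are illustrative assumptions; the point is that each metric is a small, reproducible calculation.

```python
from statistics import median
from datetime import date

# Hypothetical exception log for one key control over a quarter
exceptions = [
    {"opened": date(2026, 1, 10), "closed": date(2026, 1, 20), "root_cause": "urgency bypass"},
    {"opened": date(2026, 2, 3),  "closed": date(2026, 2, 7),  "root_cause": "urgency bypass"},
    {"opened": date(2026, 3, 1),  "closed": date(2026, 3, 15), "root_cause": "missing training"},
]
items_in_scope, items_through_control = 120, 108   # population vs processed
sampled, failed = 40, 3                            # quarterly sample results

coverage = items_through_control / items_in_scope  # coverage metric
exception_rate = failed / sampled                  # quality metric
days_to_close = [(e["closed"] - e["opened"]).days for e in exceptions]
median_remediation_days = median(days_to_close)    # remediation metric
repeat_causes = {e["root_cause"] for e in exceptions
                 if sum(x["root_cause"] == e["root_cause"] for x in exceptions) > 1}

print(round(coverage, 2), exception_rate, median_remediation_days, repeat_causes)
```

The `repeat_causes` set surfaces the "same root cause recurring" signal, which is usually the strongest indicator that a control needs redesign rather than more reminders.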
A year-round audit-ready internal control plan (one page)
If you need a simple playbook that survives real operations, use this as your baseline.
Step 1: define the “audit perimeter” (what you must be ready to show)
Anchor it to your actual exposures and frameworks, not to a generic list. For example:
- Sapin II perimeter (anticorruption risk mapping, third parties, accounting controls, training, whistleblowing, evaluation)
- UNE 19601 and UNE 19603 perimeter (criminal and antitrust controls, reporting channels, monitoring)
Step 2: select a small set of key controls
A practical starting point for many mid-size companies is 15 to 30 key controls across:
- third-party onboarding and renewals
- gifts and hospitality approvals and registers
- conflicts of interest disclosures and mitigation
- procurement and payment controls (segregation of duties, vendor master data)
- antitrust sensitive interactions (trade associations, competitor contact)
- speak-up intake to remediation loop
Step 3: standardize control records and evidence outputs
Use the control record template above and enforce consistent evidence outputs.
Step 4: run quarterly testing and remediation
Quarterly is often the sweet spot: frequent enough to avoid surprises, light enough to be sustainable.
Step 5: report to leadership with a “what changed” narrative
Avoid static dashboards. For executive reporting, the key is:
- trend (improving, stable, deteriorating)
- why it changed (root causes)
- what you are doing (remediation actions and owners)
This is also where you connect internal control to resourcing discussions without sounding alarmist.
How Naltilia can help
If your main bottleneck is execution and evidence, automation can turn internal control into a steady rhythm instead of quarter-end chasing. Naltilia supports compliance teams by streamlining risk mapping workflows, structuring control libraries, automating data collection for evidence, tracking remediation actions to closure, and producing audit-ready narratives and KPIs that leadership can understand. This is particularly useful when you operate across France and Spain and need consistent controls, owners, and proof across entities.
Contact Naltilia to discuss your internal control monitoring approach.
Frequently asked questions
What is the difference between an internal control and a compliance control? An internal control is a broader concept (financial, operational, compliance). A compliance control is an internal control specifically designed to prevent, detect, or correct compliance risks, for example third-party due diligence or gifts approvals.
How often should we test controls to stay audit-ready? Typically, key controls are tested at least quarterly or semiannually, depending on risk and transaction volume. The most important factor is consistency and documented remediation when exceptions occur.
What evidence do auditors usually accept for internal controls? System logs, workflow records, approvals with timestamps, registers, sampling sheets, exception logs, and remediation proof are commonly accepted. Evidence should show period coverage, ownership, and traceability to the control.
How do we avoid “paper compliance” in an AFA-aligned Sapin II program? Link your risk map to a small set of owned controls, define repeatable evidence outputs, test control operation, record exceptions, and track remediation to closure. Be ready to explain what changed during the year.
Do we need separate control libraries for ISO 37001, Sapin II, and UNE standards? Usually not. A single control library can work if it is mapped to each framework’s requirements, with clear owners, evidence, and testing. Duplicating controls often creates execution gaps.
This article is general information, not legal advice.

