Sunday, March 15, 2026

5 due diligence questionnaire blind spots to fix at every due diligence level

Written by Iratxe Gurpegui
10 min read

Due diligence questionnaires are supposed to reduce risk. In practice, they often create two outcomes that compliance teams hate:

  • As a buyer, you get long, vague answers you cannot defend in an AFA audit or an ISO 37001 certification review
  • As a supplier, you spend hours answering five different client questionnaires that ask the same thing in five different ways

The fix is not “a better questionnaire” in the abstract. It is designing questionnaires that work at every due diligence level, produce verifiable evidence, and turn responses into decisions, actions, and monitoring.

What a due diligence questionnaire is (and what it is not)

A due diligence questionnaire (DDQ) is a structured set of questions used to evaluate a counterparty’s risk profile and control environment. You see it in third-party onboarding (vendors, agents, distributors), client onboarding (KYC), partnerships, and sometimes M&A.

A DDQ is not a substitute for:

  • Independent screening (sanctions, adverse media, beneficial ownership checks)
  • Contract controls (flow-down clauses, audit rights)
  • Post-onboarding monitoring

It is one input into a broader due diligence workflow.

Why DDQs matter for compliance programs

DDQs often become “audit exhibits” because they are one of the few elements that show you assessed risk before engaging, and that you applied proportional measures. For example:

  • In France, third-party assessment is explicitly part of the anti-corruption compliance framework under loi Sapin II (article 17), and the AFA expects a risk-based approach that is operational and traceable. See the legal text on Légifrance.
  • Under ISO 37001, due diligence on business associates is a core expectation for an anti-bribery management system (see the standard’s scope on ISO).

The common theme: you will be asked to show not only that you collected information, but that you used it to manage risk.

A simple model: 3 due diligence levels and what “good” looks like

Most teams informally do tiering, but fail to encode it into the questionnaire design and the evidence they retain.

| Due diligence level | Typical trigger | Questionnaire goal | Expected output you can keep as evidence |
| --- | --- | --- | --- |
| Level 1, basic | Low inherent risk, standard supplier | Confirm identity, scope, and baseline controls | Completed DDQ (or documented decision that no DDQ is needed), screening results, acceptance decision |
| Level 2, enhanced | Medium risk, higher spend, sensitive services | Validate key controls, ownership, and red flags | Completed DDQ plus supporting documents, risk score, mitigation plan |
| Level 3, deep dive | High risk, public-official touchpoints, intermediaries, high-risk countries | Test credibility of controls and investigate inconsistencies | Completed DDQ supported by interviews, document pack, approvals, ongoing monitoring plan |

Your DDQ should look different at each level. Most blind spots come from treating DDQs as a one-size-fits-all document.

Diagram: a simple five-step flow showing a two-way due diligence questionnaire workflow (request, collect documents, review and score, decide and remediate, monitor and refresh).

Blind spot 1: the questionnaire is not tied to a decision

What it looks like

  • You ask 60 questions, but you cannot explain what “pass” means
  • Reviewers interpret answers differently across countries
  • You have policies, but no link to control design vs control effectiveness

Why it is dangerous

Auditors and regulators tend to test whether your process is risk-based and consistent. A DDQ that does not drive a decision is hard to defend as an effective control.

Fix when you send DDQs to third parties (buyer perspective)

Add a one-paragraph “decision statement” at the top of every DDQ, and build your review around it.

Decision statement template (copy/paste)

  • Purpose: “This questionnaire supports our decision to onboard/renew [entity] for [scope] under [policy/framework].”
  • Decision owner: “Final decision sits with [role], based on compliance review and business sponsorship.”
  • Due diligence level: “Level 1/2/3 based on [tiering rule].”
  • Decision outputs: “Accept, accept with mitigations, escalate for deep dive, reject.”
  • Minimum evidence: “Documents required at this level: [list].”

Then add a short scoring guide for reviewers (even if qualitative) and keep it stable.
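To show what a stable, qualitative scoring guide can look like in practice, here is a minimal sketch that encodes reviewer ratings as data and maps them to the four decision outputs from the template above. The topic names, point values, and thresholds are hypothetical examples, not a recommended standard.

```python
# Illustrative sketch only: a qualitative scoring guide encoded as data,
# so every reviewer applies the same rules. Ratings and thresholds are
# assumptions for demonstration, not a vetted methodology.

# Points per reviewer rating for each control topic.
RATING_POINTS = {"evidence_provided": 0, "stated_only": 1, "gap": 3}

def decide(ratings):
    """Map topic ratings to one of the four decision outputs."""
    score = sum(RATING_POINTS[r] for r in ratings.values())
    if any(r == "gap" for r in ratings.values()):
        # Any unevidenced control is at least a conditional accept.
        return "escalate for deep dive" if score >= 6 else "accept with mitigations"
    return "accept" if score <= 2 else "accept with mitigations"

ratings = {
    "anti_corruption_policy": "evidence_provided",
    "training": "stated_only",
    "speak_up": "evidence_provided",
}
print(decide(ratings))  # -> accept
```

The point is not the numbers; it is that the rule set is written down once and applied identically by every reviewer, which is what auditors test for.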

Fix when you receive DDQs from clients or partners (seller perspective)

Create your own internal “response position” before answering.

  • Define who signs off (legal, compliance, finance, IT security)
  • Define your risk posture (what you can state confidently, what needs qualification)
  • Define standard attachments (certifications, policies, training overview, whistleblowing process description)

This turns each incoming DDQ into a controlled publication process, not a scramble.

Blind spot 2: questions collect opinions, not evidence

What it looks like

  • “Do you have an anti-corruption program?” (yes/no)
  • “Do employees receive training?” (yes/no)
  • “Do you comply with antitrust law?” (yes/no)

Why it is dangerous

A “yes” is not evidence. Under AFA expectations and ISO-aligned audits, teams are asked to demonstrate that controls exist, are implemented, and are periodically evaluated.

Fix when you send DDQs

Convert broad questions into evidence-based prompts with a defined set of acceptable documents. Ask for the minimum necessary at each due diligence level.

| Topic | Weak DDQ question | Evidence-based DDQ prompt (better) | Typical acceptable evidence |
| --- | --- | --- | --- |
| Anti-corruption | “Do you have a policy?” | “Provide your anti-corruption policy and last approval date. Describe how it is communicated.” | Policy PDF, approval record, communication email, intranet screenshot (if available) |
| Training | “Do you train staff?” | “Describe your training cadence and scope for high-risk roles. Provide last cycle completion evidence.” | Training matrix, completion report, sample module outline |
| Speak-up | “Do you have a hotline?” | “Describe reporting channels, confidentiality safeguards, and triage ownership.” | Procedure, vendor contract, anonymized workflow screenshot |
| Third-party management | “Do you assess third parties?” | “Describe your third-party tiering and refresh triggers.” | Tiering criteria, sample risk assessment record (redacted) |

If you only remember one rule: ask for one element per control that proves it is real.

Fix when you receive DDQs

Maintain a controlled “evidence pack” (sometimes called a trust pack) that you can reuse across clients. It should be versioned and dated.

A practical minimum set for mid-size companies:

  • Code of conduct and anti-corruption policy (dated, approved)
  • Competition/antitrust guidance (especially if you operate in trade associations or pricing-sensitive markets)
  • Whistleblowing/speak-up procedure
  • Gifts and hospitality rules
  • Third-party due diligence process overview
  • Training overview for the last 12 months

This reduces rework and helps you answer consistently.

Blind spot 3: you cannot prove who answered what, and when

What it looks like

  • DDQs are completed in emails and spreadsheets
  • Supporting documents are attached in a thread, then lost
  • No clear operational ownership, so compliance becomes the “typing service”

Why it is dangerous

From an effectiveness standpoint, weak traceability undermines credibility. If you cannot show who provided information and who approved the decision, your DDQ is fragile in audits and internal investigations.

Fix when you send DDQs

Design DDQs as a controlled workflow, not a file.

  • Require named respondents by topic (finance, HR, sales, legal)
  • Capture timestamps and versions
  • Store responses and evidence in a retrievable repository tied to the third-party record
  • Document the review and approval step

Also, define “operational ownership” explicitly: the business owner is accountable for completeness; compliance is accountable for challenge and decision support.

Fix when you receive DDQs

Implement a simple internal RACI for inbound questionnaires.

RACI template (minimal and effective)

  • Responsible: compliance operations (coordinates, drafts)
  • Accountable: general counsel or compliance officer (final sign-off)
  • Consulted: HR (training), finance (payments), IT/security (data), sales (scope)
  • Informed: business sponsor

If you serve multiple large clients, this process matters as much as your answers.

Blind spot 4: “one questionnaire” ignores due diligence levels

What it looks like

  • Low-risk vendors get a 12-tab questionnaire and stop responding
  • High-risk intermediaries get the same form as a low-risk supplier
  • Teams use “enhanced due diligence” as an afterthought, not a designed path

Why it is dangerous

You create both coverage gaps and fatigue. In practice, fatigue pushes business teams to bypass compliance, and coverage gaps create real exposure.

Fix when you send DDQs

Build a tiered DDQ with branching logic.

A pragmatic decision tree you can apply consistently:

  • If the third party has public official interaction, success fees, or represents you to win business, start at level 2 and often escalate to level 3.
  • If the service is standard, low spend, and no sensitive touchpoints, stay at level 1.
  • If the third party operates in a high-risk geography, or you cannot identify beneficial ownership clearly, escalate.
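The decision tree above can be sketched as a single function, so tiering is applied the same way by everyone. The field names and the high-risk country list here are illustrative assumptions, not a vetted rule set.

```python
# Hypothetical sketch of the tiering decision tree. Field names and the
# high-risk country list are assumptions for illustration only.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder codes; maintain your own list

def due_diligence_level(third_party):
    """Return 1, 2, or 3 following the decision tree in the text."""
    intermediary_signals = (
        third_party.get("public_official_interaction")
        or third_party.get("success_fees")
        or third_party.get("represents_us_to_win_business")
    )
    escalation_signals = (
        third_party.get("country") in HIGH_RISK_COUNTRIES
        or not third_party.get("beneficial_ownership_clear", False)
    )
    if intermediary_signals:
        # Start at level 2, escalate to 3 when geography or ownership is unclear.
        return 3 if escalation_signals else 2
    if escalation_signals:
        return 2
    return 1

# A standard, low-spend supplier with clear ownership stays at level 1.
print(due_diligence_level({"country": "FR", "beneficial_ownership_clear": True}))  # -> 1
```

Encoding the rule this way also gives you the evidence trail: the inputs to the function are exactly what you retain to justify the assigned level.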

A concrete example: if procurement is buying employee sportswear from a specialty sportswear retailer like the Fabbrica Ski Sises online shop, a level 1 DDQ is typically enough, focused on identity, basic integrity confirmations, and standard terms. The same should not be true for a sales agent paid on commission in a public-tender market.

Fix when you receive DDQs

When clients send questionnaires that do not match your risk profile (for example, they treat you like an intermediary when you are a standard supplier), respond with a structured explanation:

  • Clarify your role and scope (what you do, what you do not do)
  • Provide your standard evidence pack
  • Propose a proportional approach (what you can answer quickly, what requires time)

This is not pushing back; it shows maturity and saves time.

Blind spot 5: no remediation loop, no refresh, no monitoring

What it looks like

  • Red flags are noted, then nothing happens
  • Mitigations are agreed by email, but not tracked
  • Renewals happen with no update, the DDQ becomes a static snapshot

Why it is dangerous

This is where “paper compliance” is born: you can prove you asked questions once, but you cannot prove you reduced risk.

Fix when you send DDQs

Turn DDQ findings into three controlled outputs:

  • A documented decision (accept, conditional accept, reject)
  • A remediation plan with owners and deadlines (for example, policy adoption, training, contract clause updates)
  • Refresh triggers (annual for level 2, more frequent for level 3, event-based triggers for all)

Typical refresh triggers that work in practice:

  • Change in beneficial ownership
  • New high-risk geography or new use case
  • Negative media or enforcement action
  • Contract renewal or spend increase
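The cadence and event-based triggers above can be combined into one check that a monitoring job runs per third party. The 365/180-day cadences and the event names are assumptions for illustration, not prescribed intervals.

```python
# Illustrative sketch: cadence-based and event-based refresh triggers.
# Cadences and event names are assumptions, not prescribed values.

from datetime import date, timedelta

CADENCE_DAYS = {2: 365, 3: 180}  # level 1 is event-based only in this sketch

EVENT_TRIGGERS = {
    "ownership_change",
    "new_high_risk_geography",
    "negative_media",
    "contract_renewal",
}

def needs_refresh(level, last_ddq_date, events, today=None):
    """True if the DDQ is stale per cadence, or any trigger event occurred."""
    today = today or date.today()
    if EVENT_TRIGGERS & set(events):
        return True
    cadence = CADENCE_DAYS.get(level)
    return cadence is not None and today - last_ddq_date > timedelta(days=cadence)

# Level 2 third party, DDQ from 14 months ago, no events: cadence forces refresh.
print(needs_refresh(2, date(2025, 1, 1), [], today=date(2026, 3, 15)))  # -> True
```

Running a check like this on a schedule is what turns the DDQ from a static snapshot into a monitored control.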

Fix when you receive DDQs

Track commitments you make in DDQs like you would track client contractual obligations.

If you state “training is annual,” be ready to show the next cycle. If you state “we perform third-party due diligence,” be ready to show the workflow, even if redacted.

This is uncomfortable at first, but it prevents inconsistent statements across clients.

Digitalisation and AI: where it helps on both sides (without creating new risk)

Done well, digitalisation and AI reduce manual work while improving consistency and auditability.

High-value, low-regret uses when you send DDQs

  • Dynamic questionnaires with branching by due diligence levels
  • Automated evidence collection and reminders
  • Structured scoring support (with human approval)
  • Central evidence library linked to the third-party record
  • Monitoring triggers (news alerts, ownership changes) feeding refresh workflows
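One way to picture the first item, branching by due diligence level, is a question bank where each question carries the minimum level at which it applies; the questionnaire is then just a filter. Question ids and prompts here are hypothetical.

```python
# Hypothetical sketch of level-based branching: a question bank tagged with
# the minimum due diligence level at which each question applies.

QUESTION_BANK = [
    {"id": "identity", "min_level": 1,
     "prompt": "Provide legal name and registration number."},
    {"id": "policy", "min_level": 1,
     "prompt": "Provide your anti-corruption policy and last approval date."},
    {"id": "ownership", "min_level": 2,
     "prompt": "Describe your beneficial ownership structure."},
    {"id": "agents", "min_level": 3,
     "prompt": "List intermediaries used in the last 24 months."},
]

def questionnaire_for(level):
    """Return only the questions applicable at this due diligence level."""
    return [q for q in QUESTION_BANK if q["min_level"] <= level]

print([q["id"] for q in questionnaire_for(2)])  # -> ['identity', 'policy', 'ownership']
```

Keeping the level tags in the question bank, rather than maintaining three separate forms, is what keeps the levels consistent as questions evolve.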

High-value, low-regret uses when you receive DDQs

  • A single “source of truth” evidence pack with version control
  • AI-assisted mapping of incoming questions to your standard answers and attachments
  • Gap detection (flag questions you cannot answer consistently, or that require remediation)
  • Response workflows with approvals, timestamps, and retention rules

Two guardrails you should keep

  • Keep a human in the loop for risk decisions and for any negative conclusion
  • Keep traceability: store the prompt, source documents, approver, and final response when AI is used to draft content

Quick checklists you can reuse

Checklist for sending a DDQ to third parties

  • Define the decision statement (purpose, owner, outputs)
  • Select the due diligence level using explicit tiering rules
  • Ask evidence-based questions, not yes/no assertions
  • State acceptable evidence for each control topic
  • Capture respondent identity, timestamps, and versions
  • Document the review outcome and approvals
  • Convert findings into tracked remediation actions
  • Define refresh cadence and event-based triggers

Checklist for receiving a DDQ from a client or partner

  • Assign RACI and internal deadlines, do not let sales own answers alone
  • Respond from a controlled evidence pack (versioned)
  • Keep answers consistent across clients, explain scope clearly
  • Log commitments you make (training cadence, monitoring, audits)
  • Retain the submitted DDQ, attachments, and approvals for audit readiness
  • Flag recurring gaps and turn them into an internal remediation backlog

How Naltilia can help

Naltilia supports DDQ work where teams usually get stuck: operational workflows, evidence, and audit readiness. You can use it to automate data collection, assign owners, and keep an evidence library linked to each third party or client request. If you want to reduce DDQ cycle time while improving the quality of evidence, you can contact Naltilia.

This article is general information, not legal advice.

About the Author

Iratxe Gurpegui


I've spent 20 years as a compliance and competition lawyer across Europe and Latin America, and throughout my career, I've seen firsthand how complex and costly regulations can hold companies back. But I've also learned that compliance doesn't have to be a burden, it can be a strategic advantage. My mission is to help companies harness the power of AI, transforming compliance into something faster, simpler, and most importantly, a real driver of growth for businesses.