Thursday, February 26, 2026

4 AI-pricing traps that can trigger antitrust scrutiny — and how to prove you still compete independently

Written by Iratxe Gurpegui
10 min read

When pricing, promotions, assortment, or inventory decisions start coming from algorithms, antitrust risk changes shape. The main shift is not that “AI is illegal”; it is that automation can reduce your company’s independent decision making, sometimes without anyone intending to coordinate with competitors.

For compliance officers and in-house legal teams, the hard part is practical: how do you let the business use AI for speed and consistency, while being able to demonstrate (to competition authorities, auditors, or internal reviewers) that you still compete independently?

This article focuses on emerging antitrust risks linked to AI-driven commercial decision making, plus executive-level safeguards and evidence you can build now.

What changes when AI influences commercial decisions

Using algorithms to recommend or execute pricing, offers, or replenishment is generally legitimate. Many systems optimize against internal signals like demand, stock levels, lead times, and marketing campaigns.

The antitrust risk appears when the system:

  • Uses competitor-sensitive inputs (directly or indirectly) that undermine independence.
  • Makes market behavior more predictable and easier to align.
  • Enables faster monitoring and “punishment” of deviations.
  • Embeds objectives that effectively discourage competition (even if phrased as “stability”).

Even when there is no explicit “chat” between competitors, authorities can investigate whether market players coordinated or exchanged sensitive information, including through intermediaries or shared tools.

Four emerging AI-related antitrust risk patterns (with concrete red flags)

Pattern 1: The shared algorithm or vendor becomes a coordination “hub”

Risk in plain terms: multiple competitors outsource key commercial decisions to the same provider (or the same algorithmic logic). If the provider receives non-public data from several competing clients, the tool can become a “hub” through which behaviors align.

This is often discussed as a “hub-and-spoke” theory in enforcement and litigation. A widely cited example in the US context is the RealPage litigation in the rental housing market, where plaintiffs allege landlords delegated pricing to a common system fed with competitors’ non-public data.

Red flags your team can test for (practical):

  • The provider serves several direct competitors in the same country or region.
  • The contract or product documentation is vague about whether “benchmark” or “market” inputs include client data.
  • The vendor proposes “industry model improvements” using pooled learnings.
  • Your internal teams cannot explain, in operational terms, what data goes into the recommendation.
[Diagram: hub-and-spoke risk. Several competing companies send non-public pricing and demand data to a central algorithm vendor, which outputs aligned pricing recommendations back to each company.]

Pattern 2: Pricing tools operationalize a collusive understanding (monitoring and discipline)

Risk in plain terms: sometimes there is a human agreement or at least contacts, and a pricing algorithm is used to implement, monitor, and enforce the alignment.

A well-known example is the UK Competition and Markets Authority (CMA) case involving Trod Ltd and GB Eye Ltd, where the CMA found that the parties unlawfully agreed not to undercut each other’s prices on Amazon UK for certain products, and that the automated repricing software helped execute the arrangement. The CMA decision is a useful training case because it shows how “configure the tool” can become “execute the agreement”.

Source: CMA decision on online sales of posters and frames (Trod/GB Eye).

Red flags (often visible in configurations, not policies):

  • “Ignore” lists, competitor-exclusion rules, or “follow seller X” logic.
  • Rules that match a specific competitor’s price rather than optimizing independently.
  • Exception handling that treats “price deviation” as an error to be corrected.
  • Very high-frequency repricing (minutes) with no human review thresholds.

Pattern 3: Sensitive information exchange through AI inputs, benchmarking, or data partnerships

Risk in plain terms: the problem is not AI; it is the data. If your system ingests non-public competitor information (even indirectly via a vendor, a joint data initiative, or overly granular benchmarks), you may create an information exchange risk.

This is particularly relevant where commercial teams often look for “market intelligence” feeds. Some are fine (public prices, public promotions). Others can be risky if they contain recent, granular, actionable signals.

Red flags:

  • “Anonymized” benchmark data that is still granular enough to infer competitor behavior.
  • Near real-time competitor signals beyond publicly observable prices.
  • Data-sharing arrangements with industry partners without a competition law review.
  • AI training datasets that include restricted customer or competitor information.

Reference point for assessing competitor data exchanges: the European Commission’s materials on horizontal cooperation agreements.

Pattern 4: Algorithmic “standardization” and tacit alignment (legal uncertainty, real compliance exposure)

Risk in plain terms: even without explicit agreements, algorithms that react to each other can create stable parallel behavior (for example, persistent price matching). Authorities have signaled concerns that algorithmic markets can make coordination easier.

Compliance should treat this as a risk to manage and evidence, not as a settled legal theory that always triggers liability. Your goal is to show that your company designed objectives and guardrails to compete independently.

Red flags:

  • Your price dispersion versus key competitors collapses and stays low.
  • The model repeatedly “tracks” a particular competitor.
  • Teams cannot articulate the pro-competitive rationale and constraints.
  • There are no review thresholds and no monitoring of competitive outcomes.
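One way to operationalize the first red flag is to track the dispersion of your price gap against each key competitor over a rolling window. The sketch below is illustrative only: the function name, window length, and threshold are hypothetical and would need calibration with legal and the business owner; a triggered alert means human review, not a legal conclusion.

```python
from statistics import mean, pstdev

def dispersion_alert(own_prices, competitor_prices, window=30, cv_floor=0.01):
    """Flag a collapse in price dispersion versus one competitor.

    Computes the coefficient of variation of the daily absolute gap
    between your price and a competitor's publicly observed price.
    A gap that is identical every day (zero or constant) suggests
    persistent tracking and should be escalated for human review.
    Thresholds are illustrative only.
    """
    gaps = [abs(o - c) for o, c in zip(own_prices[-window:], competitor_prices[-window:])]
    m = mean(gaps)
    if m == 0:
        return True  # identical prices across the whole window: review immediately
    cv = pstdev(gaps) / m
    return cv < cv_floor
```

In practice this would run per competitor and per product segment, feeding the monitoring dashboard described later in this article.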

Executive safeguards to demand (and how to prove they exist)

These safeguards are written for an executive committee and compliance steering committee. They map well to general compliance system expectations (governance, documentation, monitoring, continuous improvement).

1) Input governance: document every data source the model uses

In most AI antitrust failures, the root cause is not “the math”; it is what went into the model.

A practical input governance checklist:

  • List all input sources (internal systems, public data, purchased data, vendor-provided benchmarks).
  • Classify each source: internal, public, third-party, competitor-sensitive.
  • Decide and document prohibitions (for example, no non-public competitor data).
  • Record who approved the allowed data sources (legal, compliance, business owner).
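The checklist above can be made testable in code. This is a minimal sketch under stated assumptions: the `DataSource` schema, the classification labels, and the `review_inputs` helper are all hypothetical illustrations of a register check, not a prescribed standard.

```python
from dataclasses import dataclass

# Illustrative classification labels; adapt them to your own register.
ALLOWED = {"internal", "public", "third_party_reviewed"}
PROHIBITED = {"competitor_nonpublic"}

@dataclass
class DataSource:
    name: str
    classification: str   # e.g. "internal", "public", "competitor_nonpublic"
    approved_by: str      # documented legal / compliance / business owner sign-off

def review_inputs(sources):
    """Return the sources that block deployment: prohibited, unclassified, or unapproved."""
    blockers = []
    for s in sources:
        if s.classification in PROHIBITED or s.classification not in ALLOWED:
            blockers.append((s.name, "prohibited or unclassified input"))
        elif not s.approved_by:
            blockers.append((s.name, "no documented approval"))
    return blockers

register = [
    DataSource("ERP demand history", "internal", "compliance"),
    DataSource("Scraped public shelf prices", "public", "legal"),
    DataSource("Vendor 'market benchmark' feed", "competitor_nonpublic", ""),
]
```

Running `review_inputs(register)` would flag only the vendor benchmark feed, which is exactly the kind of source that needs a documented legal assessment before use.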

Evidence to retain:

  • Data source register with approvals.
  • Vendor documentation on data provenance.
  • Periodic re-certification that inputs have not changed.

2) Real segregation between clients of the same vendor

If multiple competitors use the same vendor, you need more than “trust us”. You need technical and contractual segregation.

Control expectations you can request from vendors:

  • Separate environments and access controls.
  • No cross-client pooling of sensitive or recent granular data.
  • Restrictions on model training using your data.
  • Audit rights or at least meaningful assurance reports where appropriate.

Important nuance for compliance teams: “anonymized” does not always mean safe. If data is recent, granular, and segmented, it can remain actionable.

3) Objective and rule governance: who decided what the algorithm should optimize

Algorithms are not neutral. They optimize what you tell them to optimize.

What to require from algorithm design:

  • A written objective statement (for example, maximize margin within compliance constraints, avoid stockouts, respect competition law constraints).
  • A list of prohibited objectives (for example, “avoid competing on price”, “match competitor X”).
  • Named accountable owners (business owner, second-line reviewer, legal sign-off when needed).

Evidence to retain:

  • Objective approval memo.
  • Parameter approval workflow.
  • Records of exceptions and escalations.

4) Audit trail and change control: be able to reconstruct “what changed, when, and why”

If a regulator, internal audit, or an external reviewer asks “why did your system recommend this price?”, you need a reconstructable trail.

Minimum change control expectations:

  • Versioning for models and rules.
  • Logs of parameter changes, who changed them, and who approved.
  • Testing records before deployment (including competitive outcome checks).
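As an illustration of what "reconstructable" can mean technically, a parameter change log can be made tamper-evident by chaining a hash over each entry. The `log_change` helper and its field names below are hypothetical, a sketch of the idea rather than a required implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_change(log, *, system, parameter, old, new, changed_by, approved_by, reason):
    """Append a change entry, chaining a SHA-256 hash over the previous entry
    so that any later edit to the history is detectable."""
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system, "parameter": parameter,
        "old": old, "new": new,
        "changed_by": changed_by, "approved_by": approved_by,
        "reason": reason, "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True, default=str)).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log = []
log_change(audit_log, system="repricer-v2", parameter="min_margin_pct",
           old=8, new=10, changed_by="pricing.analyst",
           approved_by="compliance.reviewer", reason="Q3 margin policy update")
```

The point is not the hashing itself but the discipline: every entry records who changed what, who approved it, and why, in a form a reviewer can verify end to end.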

This is also where operational tooling helps. For recurring reviews (for example, quarterly algorithm governance reviews, annual vendor renewals, model recertifications), consider using a dedicated reminder and proof-capture workflow so deadlines do not slip and evidence is not scattered. Some teams use an expiration reminder software approach to centralize renewals, recurring compliance checklists, and audit trails.

5) Operational guardrails: limits, alerts, and effective human oversight

Human oversight must be real, not symbolic. People need the ability to intervene.

Controls that work well in practice:

  • Hard constraints (price floors/ceilings, discount limits, approval thresholds).
  • Alerts on suspicious patterns (persistent matching, sudden drop in dispersion, abnormal reactions to one competitor).
  • Enhanced review for strategic changes (new market entry, new competitor, crisis pricing).
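A hedged sketch of how the first two controls could combine in code: `check_recommendation`, its thresholds, and its inputs are illustrative assumptions, and a real deployment would calibrate the streak limit and escalation path with legal and the business owner.

```python
def check_recommendation(price, floor, ceiling, competitor_prices, match_history,
                         match_streak_limit=20):
    """Enforce hard price bounds and flag persistent matching of one competitor.

    Returns (approved_price, alerts). `match_history` is a mutable dict that
    counts consecutive exact matches per publicly observed competitor price.
    All thresholds here are illustrative, not recommended values.
    """
    alerts = []
    approved = min(max(price, floor), ceiling)  # hard constraint: clip to bounds
    if approved != price:
        alerts.append(f"price {price} clipped to [{floor}, {ceiling}]")
    for name, comp_price in competitor_prices.items():
        if abs(approved - comp_price) < 0.01:
            match_history[name] = match_history.get(name, 0) + 1
            if match_history[name] >= match_streak_limit:
                alerts.append(f"persistent matching of {name}: escalate for human review")
        else:
            match_history[name] = 0  # streak broken
    return approved, alerts
```

Note that the alert does not block the price on its own; it routes the pattern to a human with authority to pause the system, which is what makes the oversight real rather than symbolic.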

A decision tree for approving AI-driven pricing or offer engines

This is designed for a lightweight “go/no-go” review by legal and compliance before a pilot.

  • If the system uses any non-public competitor data (directly, via vendor, or via partnerships), then stop and redesign inputs, or obtain a documented legal assessment with mitigation.
  • If the same vendor provides pricing optimization to several direct competitors, then require segregation measures and clear contractual restrictions before rollout.
  • If the tool allows competitor-specific rules (ignore lists, match-to-seller), then restrict features, add monitoring, and require approvals for exceptions.
  • If you cannot explain the objective function and constraints in operational terms, then treat it as not ready for production.
  • If you cannot produce logs and change history, then do not deploy without basic audit trail.
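To keep reviews consistent across pilots, the decision tree above can be encoded directly. This `pilot_go_no_go` function is a hypothetical sketch of that encoding, not a legal standard; the answers to its questions still come from legal and compliance judgment.

```python
def pilot_go_no_go(*, uses_nonpublic_competitor_data, shared_vendor_with_competitors,
                   vendor_segregation_in_place, has_competitor_specific_rules,
                   objective_documented, audit_trail_available):
    """Encode the go/no-go review questions; returns (decision, required actions)."""
    actions = []
    if uses_nonpublic_competitor_data:
        return "STOP", ["redesign inputs or obtain a documented legal assessment"]
    if shared_vendor_with_competitors and not vendor_segregation_in_place:
        actions.append("require segregation and contractual restrictions before rollout")
    if has_competitor_specific_rules:
        actions.append("restrict features, add monitoring, require exception approvals")
    if not objective_documented:
        return "NOT READY", ["document the objective function and constraints"]
    if not audit_trail_available:
        return "NOT READY", ["deploy basic logging and change history first"]
    return ("CONDITIONAL GO" if actions else "GO"), actions
```

The value of encoding the tree is that every pilot produces the same structured output, which doubles as evidence that the review actually happened.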

A template: algorithm governance one-pager (what auditors and executives expect)

You can copy this into your internal documentation system.

  • System name and scope: what decisions it recommends or executes (price, promo, inventory).
  • Business owner: person accountable for outcomes.
  • Compliance and legal reviewers: named second-line and legal contacts.
  • Approved objective: what is optimized, with prohibited objectives listed.
  • Input sources: internal, public, third-party, with prohibited inputs clearly stated.
  • Competitor interaction: how competitor prices are used (public only, frequency, safeguards).
  • Vendor model: provider, segregation commitments, data use clauses.
  • Change control: versioning, approvals, testing requirements.
  • Monitoring and alerts: what metrics you watch (dispersion, matching, exceptions).
  • Human oversight: who can pause, who can override, and the escalation path.

Implementation plan (30, 60, 90 days) for mid-size compliance teams

First 30 days: Build an inventory and stop the blind spots

Focus on visibility.

  • Inventory AI and algorithmic tools used in pricing, promotions, procurement, sales enablement, and revenue management.
  • Identify shared vendors and data sources.
  • Freeze high-risk features until reviewed (for example, competitor ignore lists).

Days 31 to 60: Formalize controls and ownership

Focus on governance that can be tested.

  • Publish the one-pager template and require it for each system.
  • Add input governance and change control requirements to procurement and vendor management.
  • Define monitoring metrics and escalation thresholds.

Days 61 to 90: Start effectiveness testing (not just design)

Focus on proof.

  • Run configuration reviews and sampling of logs.
  • Test whether alerts trigger, and whether humans can intervene in time.
  • Produce a short executive report: what tools exist, what risks were found, what actions were taken.

How Naltilia can help

When AI and algorithms touch pricing and commercial decisions, compliance quickly becomes an evidence problem: who approved objectives, what data went in, what changed, what was tested, and what monitoring shows over time. Naltilia can help teams operationalize this with structured risk mapping workflows, remediation tracking, automated evidence collection, and control monitoring that produces audit-ready trails and board-friendly KPIs.

If you want to explore an automation-first approach, contact Naltilia.

Frequently asked questions

Is using AI for pricing illegal under antitrust law? Generally, no. The risk is how the system is designed and governed, especially the inputs (competitor-sensitive data), objectives, and the ability to coordinate or monitor competitors.

What is the biggest practical antitrust risk with AI in commercial decisions? Typically, it is uncontrolled inputs and vendor arrangements that reduce independent decision making, combined with weak change control and lack of monitoring.

Do we need to ban competitor price tracking entirely? Not necessarily. Tracking publicly available prices is common, but you should document data sources, frequency, safeguards, and ensure the system is not configured to follow a specific competitor or enforce alignment.

What should we show in an audit or dawn raid scenario? A clear inventory of tools, approved objectives and constraints, data provenance documentation, change logs, monitoring dashboards, and records of interventions (alerts, overrides, remediation).

This article is general information, not legal advice.

About the Author

Iratxe Gurpegui

I've spent 20 years as a compliance and competition lawyer across Europe and Latin America, and throughout my career, I've seen firsthand how complex and costly regulations can hold companies back. But I've also learned that compliance doesn't have to be a burden, it can be a strategic advantage. My mission is to help companies harness the power of AI, transforming compliance into something faster, simpler, and most importantly, a real driver of growth for businesses.