Monday, November 10, 2025

AI Act readiness: compliance management that scales

In the summer of 2025, Elena, the sole compliance officer of a 600-employee manufacturing group, received the board’s question of the year: “Are we ready for the EU AI Act?” She opened a spreadsheet, counted more than twenty AI systems embedded in connected factories, HR screening tools, and customer-facing chatbots—and realised the list was already outdated.

Elena’s story mirrors that of dozens of mid-market organisations racing to understand a regulation that re-defines the way technology is built, bought, and monitored in Europe. This article distils what the AI Act demands and outlines a pragmatic, scalable approach to regulatory compliance management that any intermediate-sized enterprise can adopt.

Why the AI Act changes the compliance playbook

Unlike sector-specific rules, the AI Act takes a horizontal, risk-based approach. Every organisation that develops, distributes, integrates, or merely uses AI systems within the EU market falls somewhere on its matrix of roles and obligations. The final text was adopted in 2024 and its provisions apply in stages between 2025 and 2027, giving companies precious little time to organise.

The core requirements at a glance

| AI Act category | Examples | Key obligations |
| --- | --- | --- |
| Prohibited uses | Social scoring by governments, untargeted facial recognition in public spaces | Must not be placed on the market or put into service |
| High-risk systems | Recruitment algorithms, credit scoring, critical infrastructure autonomy | Conformity assessment, quality & risk management, data governance, incident logging, post-market monitoring |
| Residual (limited) risk systems | Chatbots, deepfake generators, product recommendation engines | Transparency disclosures, human oversight, end-user awareness & training |
| Minimal risk systems | Spam filters, simple analytics dashboards | Voluntary codes of conduct |

Most SMEs and intermediate enterprises will discover that their AI footprint sits largely in the residual risk tier. The obligations are less onerous than for high-risk systems but still far from trivial.

A three-step roadmap for AI Act readiness

1 Map every AI system in the organisation

Start by building an exhaustive inventory. Go beyond officially procured software and include:

  • Embedded machine-learning components in IoT devices or production lines.
  • SaaS features marketed as “smart”, “predictive”, or “assistant”.
  • Open-source models used by developers or data scientists.

Practical tip: run a short survey in engineering, IT procurement, and business units asking one simple question: “Does this product or process make automated decisions based on data patterns?” The answers will surface hidden models faster than technical audits alone.
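
To keep the survey answers usable, capture each response as a structured record rather than free text. Below is a minimal Python sketch of such an inventory record; the field names and example systems are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI system inventory built during the mapping step."""
    name: str
    business_owner: str
    vendor: str | None = None                 # None for internally built systems
    makes_automated_decisions: bool = False   # the survey question's answer
    data_sources: list[str] = field(default_factory=list)
    notes: str = ""

# Hypothetical entries surfaced by the survey
inventory = [
    AISystemRecord("CV screening add-on", "HR", vendor="SaaS vendor",
                   makes_automated_decisions=True, data_sources=["applicant CVs"]),
    AISystemRecord("Predictive maintenance model", "Operations",
                   makes_automated_decisions=True, data_sources=["sensor telemetry"]),
]

# Anything flagged as decision-making moves on to the role and risk steps
to_assess = [r for r in inventory if r.makes_automated_decisions]
print(f"{len(to_assess)} systems need role and risk classification")
```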

2 Categorise by role and responsibility

Under Article 3 of the AI Act you can act as:

  • Provider (developer): you create or substantially modify an AI system.
  • Deployer (user): you use the system under your own authority in the course of your activities.
  • Importer / distributor: you place third-party AI on the EU market.

Mapping roles matters because the same system may trigger different duties depending on who touches it.
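
Because a single system can make you both deployer and provider (for instance, a purchased model that you substantially fine-tune in-house), it helps to record roles explicitly and derive the union of the resulting duties. The Python sketch below is an illustrative, simplified mapping, not an exhaustive legal list.

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"          # develops or substantially modifies the system
    DEPLOYER = "deployer"          # uses the system under its own authority
    IMPORTER = "importer"          # places third-party AI on the EU market
    DISTRIBUTOR = "distributor"    # makes it available further down the chain

# Illustrative, non-exhaustive duty buckets per role; always confirm with legal counsel
DUTIES = {
    Role.PROVIDER: ["technical documentation", "conformity assessment", "post-market monitoring"],
    Role.DEPLOYER: ["human oversight", "user transparency", "staff training"],
    Role.IMPORTER: ["verify provider conformity", "keep documentation available"],
    Role.DISTRIBUTOR: ["verify marking and documentation before resale"],
}

def duties_for(roles: set[Role]) -> set[str]:
    """A system can trigger several roles at once; take the union of their duties."""
    return {duty for role in roles for duty in DUTIES[role]}

# A purchased chatbot that you fine-tune in-house may make you both deployer and provider
print(sorted(duties_for({Role.DEPLOYER, Role.PROVIDER})))
```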

3 Assess the risk level of each system

Annex III of the AI Act lists the high-risk use cases (a list the Commission can update over time); anything not falling there, or under the product-safety route of Annex I, defaults to residual or minimal risk. When in doubt, apply the “risk test”:

  1. Could the AI system, if it errs, harm a person’s fundamental rights or safety?
  2. Is the system used in a domain explicitly flagged by the Act (employment, credit, critical services)?

A “yes” to the second question points towards high risk and a full conformity review. A “yes” to the first question but “no” to the second usually lands you in the residual risk bucket, where transparency, oversight, and training become your new baseline.
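
For triage at scale, the two questions can be encoded as a simple screening function. Treat the output as a first pass to be confirmed by legal review, not a legal determination; the function below is a minimal sketch of that logic.

```python
def classify_risk(harms_rights_or_safety: bool, annex_iii_domain: bool) -> str:
    """First-pass triage of the two-question risk test (screening aid, not legal advice)."""
    if annex_iii_domain:
        return "high-risk: confirm against Annex III with legal counsel"
    if harms_rights_or_safety:
        return "residual risk: transparency, oversight, training"
    return "minimal risk: voluntary codes of conduct"

# A customer-facing chatbot: could mislead a person (yes) but is not in a flagged domain (no)
print(classify_risk(harms_rights_or_safety=True, annex_iii_domain=False))
```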

[Illustration: a compliance officer in front of an interactive screen showing an AI system inventory colour-coded by risk level, spanning factory equipment, chatbots, and HR applications]

Meeting residual risk obligations without drowning in paperwork

Transparent communication

Article 50 of the AI Act requires that users are clearly informed they are interacting with an AI system. For internal tools, this can be a banner or splash screen. For customer-facing chatbots, it should be embedded at the start of each conversation and in privacy notices.

Checklist for compliant disclosures (a minimal template sketch follows the list):

  • State that an AI system is in use.
  • Describe the system’s purpose in plain language.
  • Provide a point of contact for questions or complaints.
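
One way to make the checklist operational is to render every chatbot's opening message from a shared template, so none of the three elements is ever omitted. The wording and contact details below are illustrative, not statutory language.

```python
# Hypothetical wording: adapt to your brand voice and have legal review the final text
DISCLOSURE_TEMPLATE = (
    "You are chatting with an AI assistant operated by {company}. "
    "It helps with {purpose} and may make mistakes. "
    "Questions or complaints: {contact}."
)

def build_disclosure(company: str, purpose: str, contact: str) -> str:
    """Render the three checklist items (AI in use, purpose, contact) as one opening message."""
    return DISCLOSURE_TEMPLATE.format(company=company, purpose=purpose, contact=contact)

print(build_disclosure(
    company="Example Manufacturing Group",
    purpose="order tracking and product questions",
    contact="ai-governance@example.com",
))
```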

Human oversight mechanisms

Residual risk systems must remain subject to meaningful human control. That means:

  • Defining in advance what decisions humans can override and how.
  • Logging overrides for auditability.
  • Setting thresholds that automatically escalate to human review when confidence scores drop.

In practice, establishing a human-in-the-loop policy for every residual risk model avoids ambiguities when incidents occur.
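
In code, such a policy often reduces to a confidence threshold that routes low-confidence outputs to a reviewer, plus an audit log of every override. The sketch below shows the idea; the threshold value and log format are assumptions to be fixed per system in the SOP.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

CONFIDENCE_THRESHOLD = 0.80  # illustrative value; set per system in the SOP

def route_decision(system: str, decision: str, confidence: float) -> str:
    """Escalate low-confidence outputs to a human reviewer and log the escalation."""
    if confidence < CONFIDENCE_THRESHOLD:
        log.info("%s | escalated to human review | confidence=%.2f | %s",
                 system, confidence, datetime.now(timezone.utc).isoformat())
        return "pending_human_review"
    return decision

def record_override(system: str, original: str, corrected: str, reviewer: str) -> None:
    """Keep an audit trail of every human override, as the SOP requires."""
    log.info("%s | override by %s | %s -> %s", system, reviewer, original, corrected)

print(route_decision("invoice-triage-model", "auto_approve", confidence=0.62))
record_override("invoice-triage-model", "auto_approve", "manual_check", reviewer="e.martin")
```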

Awareness and training

Employees who interact with AI outputs must understand:

  • The limitations and potential biases of the model or system.
  • Their responsibility to challenge or override erroneous outputs.

A short, role-based e-learning module updated annually often satisfies the requirement—provided completion rates are monitored and linked to access controls.
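
Linking completion to access can be as simple as checking the age of the training record before granting use of the tool. The validity window, user fields, and dates below are illustrative assumptions.

```python
from datetime import date, timedelta

TRAINING_VALIDITY = timedelta(days=365)  # module refreshed annually

def has_valid_training(completed_on: date | None, today: date) -> bool:
    """True if the role-based e-learning module is completed and not yet expired."""
    return completed_on is not None and today - completed_on <= TRAINING_VALIDITY

def may_use_ai_tool(user: dict, today: date) -> bool:
    """Gate access to the AI tool on a valid training record (illustrative policy)."""
    return has_valid_training(user.get("training_completed_on"), today)

audit_date = date(2025, 11, 10)  # hypothetical check date
print(may_use_ai_tool({"name": "E. Martin", "training_completed_on": date(2025, 3, 1)}, audit_date))  # True
print(may_use_ai_tool({"name": "New hire", "training_completed_on": None}, audit_date))               # False
```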

Scaling compliance with automation

Manually tracking each disclosure and training certificate becomes unsustainable once the number of models exceeds single digits. Platforms such as Naltilia support compliance officers with:

  • Automated data collection: connectors pull model metadata from development tools and SaaS dashboards into a single inventory.
  • Regulatory risk assessment workflows: predefined templates for AI Act roles and risk categories reduce interpretation errors.
  • Remediation playbooks: generate tailored oversight procedures and disclosure language aligned with residual risk obligations.
[Diagram: AI governance workflow, where the AI system inventory feeds risk assessment, which drives remediation actions and automated policy generation, all managed on a single platform]
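
As a rough, generic sketch (not Naltilia's actual API), the workflow in the diagram reduces to classifying each inventoried system and attaching the matching remediation playbook; the systems and playbook contents below are illustrative.

```python
# Generic workflow sketch, not Naltilia's actual API: inventory -> risk tier -> playbook
SYSTEMS = [
    {"name": "recruitment screener", "annex_iii_domain": True,  "harms_rights": True},
    {"name": "support chatbot",      "annex_iii_domain": False, "harms_rights": True},
    {"name": "spam filter",          "annex_iii_domain": False, "harms_rights": False},
]

PLAYBOOKS = {
    "high": "conformity assessment, quality & risk management, post-market monitoring",
    "residual": "disclosure banner, human-in-the-loop SOP, staff training",
    "minimal": "voluntary code of conduct",
}

def risk_tier(system: dict) -> str:
    """Reuse the two-question triage to pick a remediation playbook."""
    if system["annex_iii_domain"]:
        return "high"
    return "residual" if system["harms_rights"] else "minimal"

for s in SYSTEMS:
    tier = risk_tier(s)
    print(f"{s['name']}: {tier} risk -> {PLAYBOOKS[tier]}")
```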

Putting it all together: a 90-day action plan

| Week | Milestone | Output |
| --- | --- | --- |
| 1–2 | Kick-off & scoping | Cross-functional task force formed, glossary agreed |
| 3–5 | System mapping | Centralised inventory with role attribution |
| 6–7 | Risk classification | List of high vs residual risk systems validated by legal |
| 8–10 | Oversight & disclosure design | Draft human-in-the-loop SOPs, user disclosure templates |
| 11–12 | Training rollout & tooling | E-learning launched, Naltilia instance configured |
| 13 | Board update | Compliance dashboard with readiness score |

Following this structured sprint, most medium-sized organisations can reach an AI-Act-ready posture for residual risk systems well before the relevant obligations become enforceable.

The cost of waiting vs the value of readiness

Penalties for breaching transparency and other operational obligations can reach €15 million or 3% of global annual turnover, below the ceilings reserved for prohibited practices but still material for intermediate enterprises. More importantly, customers and investors are already requesting AI governance evidence in 2025 due-diligence questionnaires. Early movers that demonstrate robust oversight win trust and tenders.

Elena’s company completed its 90-day plan that autumn. When a prospective automotive client asked for AI Act documentation, she exported Naltilia’s dashboard. The deal closed two weeks later.

Next steps

  1. Audit your current compliance stack against the AI Act roadmap above.
  2. Explore how Naltilia’s regulatory risk assessment and compliance workflow automation could cut preparatory time by half.
  3. Subscribe to Naltilia’s Regulatory Updates & Insights newsletter for real-time guidance on the AI Act and other emerging frameworks.

Staying compliant is no longer a legal side quest. With the right structure and tooling, it becomes a competitive advantage that scales.