The EU AI Act is not a white paper. It is law.

Prohibited-use rules have been in force since February 2, 2025. General-purpose AI (GPAI) provider obligations became applicable August 2, 2025. The high-risk system requirements -- the ones that affect most production AI -- arrive in stages across 2026 and 2027.

If you are reading this article, you are likely in one of two positions. Either you are building AI systems that will be classified as high-risk under the Act, and you need to understand what that means for your engineering. Or you are building GPAI-powered systems and need to understand the obligations that flow from your model providers to you.

This is not a legal brief. It is an engineering playbook. We are architects, not lawyers. But we know that compliance is ultimately an engineering problem -- the legal team defines what must be true; the engineering team makes it true. And the gap between "we should be compliant" and "we can prove compliance" is exactly the gap this article addresses.


I. The Timeline: What Is Already in Force

The AI Act entered into force on August 1, 2024. Its provisions phase in over a staggered timeline:

  Aug 1, 2024    Act enters into force
  Feb 2, 2025    Prohibited AI practices (Art. 5) apply
  Feb 2, 2025    AI literacy obligations (Art. 4) apply
  Aug 2, 2025    GPAI provider obligations (Chapter V) apply
  Aug 2, 2025    Governance structure established
  Aug 2, 2026    High-risk requirements (Chapter III) apply
  Aug 2, 2027    Extended deadline for high-risk AI in
                 Annex I regulated products

Two things are already live that many teams have not internalized:

Prohibited practices (since Feb 2025). Social scoring. Untargeted scraping of facial images from the internet or CCTV footage to build recognition databases. Emotion inference in workplaces and education (with narrow exceptions). Subliminal manipulation. Exploitation of vulnerabilities. If your system does any of these, you are already non-compliant.

AI literacy (since Feb 2025). Organizations deploying AI must ensure staff have "sufficient AI literacy" for their roles. This is vague enough to be annoying and specific enough to be enforceable. Document your training programs.

GPAI obligations (since Aug 2025). If you provide a general-purpose AI model -- or build on top of one -- you have transparency obligations. Technical documentation. Energy consumption reporting. Copyright compliance summaries. If you use a model from Anthropic, OpenAI, Google, or Meta, your provider has obligations; you have downstream responsibilities.


II. What Makes a System "High-Risk"

The Act defines high-risk AI in two categories:

Annex I: AI embedded in regulated products. Medical devices, vehicles, aviation, toys, elevators, pressure equipment, machinery. If your AI is a safety component of a product already regulated under EU harmonization legislation, it is high-risk. The extended 2027 deadline applies here.

Annex III: Standalone high-risk AI systems. This is the broader category:

  • Biometric identification and categorization of natural persons.
  • Critical infrastructure management and operation.
  • Education and vocational training -- access, assessment, monitoring.
  • Employment -- recruitment, task allocation, performance monitoring, termination.
  • Essential services -- credit scoring, insurance, social benefits.
  • Law enforcement -- risk assessment, evidence evaluation, profiling.
  • Migration and border control -- risk assessment, document verification.
  • Justice and democratic processes -- legal interpretation, election influence.

If your AI system makes decisions or assists decisions in these domains, it is likely high-risk. The classification is based on the system's purpose and impact, not on the underlying technology.

The practical test

Ask three questions:

  1. Does my AI system make or meaningfully assist decisions about people?
  2. Are those decisions in an Annex III domain?
  3. Could those decisions significantly affect someone's access to services, employment, education, or rights?

If the answer to all three is yes, assume high-risk until legal analysis says otherwise. It is cheaper to over-comply and adjust than to under-comply and discover.


III. What High-Risk Systems Must Have

Chapter III, Section 2 of the Act specifies the requirements. Translated into engineering deliverables:

Risk management system (Art. 9)

A continuous, iterative process for identifying, analyzing, estimating, and mitigating risks. Not a one-time risk assessment. A living system that:

  • Identifies known and reasonably foreseeable risks.
  • Estimates risks based on intended use and foreseeable misuse.
  • Adopts risk mitigation measures.
  • Tests the system to ensure mitigation works.
  • Documents everything.

Engineering translation: Build a risk register. Link risks to system components. Link mitigations to code (tests, guardrails, constraints). Review quarterly. Your CI pipeline should include risk-related test suites.
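One way to make the register machine-checkable is to link each risk to the CI tests that exercise its mitigation. A minimal sketch, assuming a list-based register; all names and risk entries are illustrative, not taken from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in the risk register (Art. 9), linked to code."""
    risk_id: str
    description: str
    component: str              # system component the risk attaches to
    mitigation: str             # what was done about it
    test_ids: list = field(default_factory=list)  # CI tests proving the mitigation

def unmitigated(register, passing_tests):
    """Risks whose mitigation tests are missing or not all passing."""
    return [r.risk_id for r in register
            if not r.test_ids or not set(r.test_ids) <= passing_tests]

register = [
    Risk("R-001", "Model scores incomplete applications",
         "scoring-service", "Input validation rejects incomplete records",
         test_ids=["test_rejects_incomplete_input"]),
    Risk("R-002", "Score drift after model update",
         "scoring-service", "Baseline eval gate in CI"),  # no tests linked yet
]

# R-002 has no linked tests, so the quarterly review flags it
print(unmitigated(register, passing_tests={"test_rejects_incomplete_input"}))
```

Running `unmitigated` in CI turns "review quarterly" into a build failure rather than a calendar reminder.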

Data governance (Art. 10)

Training, validation, and testing datasets must be relevant, sufficiently representative, free of errors to the extent possible, and appropriate for the intended purpose. You must document:

  • Data collection processes.
  • Data preparation operations (cleaning, labeling, enrichment).
  • Assumptions about what the data represents.
  • Assessment of availability, quantity, and suitability.
  • Measures to detect and address biases.

Engineering translation: Data lineage tracking. Schema documentation. Bias detection pipelines. Version control for datasets. If you fine-tune models, document the training data provenance with the same rigor you document code provenance.
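A minimal sketch of dataset provenance tracking using only the standard library; the file names, fields, and bias notes are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_manifest(path, description, known_biases):
    """Record provenance for a training/eval dataset (Art. 10)."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "dataset": Path(path).name,
        "sha256": digest,                 # pins the exact bytes a model saw
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "known_biases": known_biases,     # mitigations go in the risk register
    }

# Usage: write the manifest next to the data and commit both.
Path("train.csv").write_text("age,income,label\n34,52000,1\n")
manifest = dataset_manifest(
    "train.csv",
    "Synthetic demo rows for a credit model",
    ["no geographic coverage outside one region"],
)
Path("train.manifest.json").write_text(json.dumps(manifest, indent=2))
```

The hash is what lets you answer "which exact data trained this model version" two years later.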

Technical documentation (Art. 11)

Before the system is placed on the market, prepare technical documentation demonstrating compliance. The documentation must include:

  • General description of the system (purpose, intended use, limitations).
  • System architecture and computational resources.
  • Training methodology and techniques.
  • Data governance measures.
  • Performance metrics and evaluation results.
  • Risk management documentation.
  • Description of human oversight measures.
  • Expected lifetime and maintenance plan.

Engineering translation: Architecture Decision Records (ADRs). Model cards. Eval results with methodology. Infrastructure diagrams. This is the artifact that auditors will read. If it does not exist, you are not compliant, regardless of how good your system is.

Logging (Art. 12)

High-risk AI systems must support automatic logging of events relevant to identifying risks, monitoring operation, and facilitating post-market monitoring. Logs must:

  • Be traceable to specific decisions or outputs.
  • Cover the system's operational lifetime.
  • Be accessible to the deployer and to market surveillance authorities.

Engineering translation: Structured, queryable audit logs. Not application logs. Decision logs -- what input was received, what the model produced, what action was taken, what outcome resulted. Think of it as a ledger of AI decisions.

Decision Log Schema (conceptual):

{
  "decision_id": "uuid",
  "timestamp": "iso8601",
  "system_version": "v2.3.1",
  "input_hash": "sha256",
  "model_id": "claude-opus-4-20250514",
  "model_output_hash": "sha256",
  "action_taken": "approved | rejected | escalated",
  "human_override": true | false,
  "outcome_recorded_at": "iso8601 | null",
  "outcome": "correct | incorrect | disputed | null"
}
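A sketch of an append-only writer for that schema, hashing inputs and outputs rather than storing them raw; the helper name, model ID, and store are illustrative:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def log_decision(store, *, system_version, model_id, input_text,
                 model_output, action_taken, human_override=False):
    """Append one decision record (Art. 12). `store` is any list-like sink."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,
        "input_hash": hashlib.sha256(input_text.encode()).hexdigest(),
        "model_id": model_id,
        "model_output_hash": hashlib.sha256(model_output.encode()).hexdigest(),
        "action_taken": action_taken,
        "human_override": human_override,
        "outcome_recorded_at": None,   # filled in when ground truth arrives
        "outcome": None,
    }
    store.append(json.dumps(record))
    return record

audit_log = []
log_decision(audit_log, system_version="v2.3.1", model_id="example-model",
             input_text="applicant-1234", model_output="score=0.81",
             action_taken="approved")
```

Hashing keeps the ledger queryable and tamper-evident without turning it into a second copy of your personal-data store.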

Human oversight (Art. 14)

High-risk systems must be designed to allow effective human oversight. This includes:

  • The ability for humans to understand the system's capabilities and limitations.
  • The ability to correctly interpret outputs.
  • The ability to override or disregard the system's output.
  • The ability to interrupt or halt the system (the "kill switch").

Engineering translation: Dashboard showing system reasoning. Override mechanisms in the UI. Kill switch accessible to authorized operators. Training materials for operators. Not aspirational. Deployed and tested.

Accuracy, robustness, cybersecurity (Art. 15)

Systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. They must be resilient to errors, faults, and inconsistencies. They must resist attempts to exploit vulnerabilities.

Engineering translation: Eval suites with accuracy baselines. Adversarial testing (prompt injection, data poisoning, evasion). Security audits. Incident response plans. Continuous monitoring with alerting.
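The accuracy-baseline idea can be sketched as a deployment gate that compares current eval results against a recorded baseline; the threshold and result format are illustrative:

```python
def accuracy(results):
    """results: list of (predicted, expected) pairs."""
    return sum(p == e for p, e in results) / len(results)

def deployment_gate(eval_results, baseline, max_regression=0.02):
    """Block deployment if accuracy regresses past the allowed margin (Art. 15)."""
    current = accuracy(eval_results)
    if current < baseline - max_regression:
        raise RuntimeError(
            f"Accuracy {current:.3f} below baseline {baseline:.3f}; blocking deploy")
    return current

results = [(1, 1), (0, 0), (1, 0), (1, 1)]      # 3 of 4 correct
print(deployment_gate(results, baseline=0.70))  # within margin, deploy proceeds
```

The same gate shape works for robustness metrics: swap the accuracy function for an adversarial pass rate and keep the block-on-regression logic.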


IV. The Harmonised Standards Gap

The Act relies on harmonised standards -- technical specifications developed by CEN/CENELEC that provide a "presumption of conformity." If you comply with a harmonised standard, you are presumed to comply with the corresponding Act requirement.

The problem: these standards do not exist yet. CEN/CENELEC requested them in May 2024. The estimated timeline for development and publication in the Official Journal is 2-3 years. Some fast-track procedures may accelerate this, but even optimistic estimates place widespread availability in late 2026 or 2027.

This creates a compliance vacuum. The obligations apply in August 2026. The standards that tell you exactly how to demonstrate compliance may not be finalized until later.

The practical response: Build your own evidence trail now. Use existing frameworks as scaffolding:

  • NIST AI Risk Management Framework (AI RMF 1.0) -- the most mature general-purpose AI risk framework available.
  • ISO/IEC 42001:2023 -- AI management system standard. Certifiable.
  • ISO/IEC 23894:2023 -- AI risk management guidance.
  • ALTAI (Assessment List for Trustworthy AI) -- the European Commission's self-assessment checklist.

None of these are EU AI Act harmonised standards. But they demonstrate good-faith effort and structured compliance thinking. When the harmonised standards arrive, map your existing documentation to them. You will be ahead of every team that waited.


V. Architecture Patterns for Compliance

Compliance is expensive when bolted on. It is cheap when built in. Here are the patterns that make compliance a natural byproduct of good architecture.

Pattern 1: Decision audit trail

Every AI-assisted decision flows through an audit service that logs the full context: input, model version, model output, human review status, final action, and outcome.

Request --> [AI Service] --> [Audit Service] --> [Action]
                                    |
                                    v
                             [Decision Store]
                                    |
                                    v
                             [Compliance Dashboard]

The audit service is the technical documentation generator. It produces the logs required by Art. 12, the monitoring data required by Art. 9, and the oversight visibility required by Art. 14.
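The pattern amounts to routing every model call through one chokepoint. A sketch, assuming any callable model and a list-like decision store; all names are illustrative:

```python
from datetime import datetime, timezone

def audited(model_fn, model_id, decision_store):
    """Wrap a model call so every decision passes through the audit chokepoint."""
    def wrapper(request):
        output = model_fn(request)
        decision_store.append({        # feeds the Decision Store in the diagram
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "request": request,
            "output": output,
        })
        return output
    return wrapper

store = []
score = audited(lambda req: "approved", "example-model-v1", store)
result = score({"applicant": "1234"})
```

Because the wrapper is the only path to the model, "did we log it" stops being a per-callsite question.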

Pattern 2: Model versioning and reproducibility

Every deployed model has a version identifier, a model card, and archived eval results. When a model version is replaced, the previous version and its documentation are retained for the system's operational lifetime.

Model Registry
  +------------------------------------------+
  | model_id: credit-scoring-v2.3            |
  | base_model: claude-opus-4-20250514       |
  | fine_tune_data: dataset-v7 (sha256)      |
  | eval_results: eval-2026-03-15.json       |
  | risk_assessment: risk-v2.3.md            |
  | deployed_at: 2026-03-20T14:00:00Z        |
  | retired_at: null                         |
  +------------------------------------------+

This is not aspirational infrastructure. This is Art. 11 (technical documentation) and Art. 17 (quality management) in code.
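The retention rule can be sketched as a registry that never deletes, only marks versions retired; field names follow the box above and the entries are illustrative:

```python
class ModelRegistry:
    """Append-only registry: retired versions are kept, not deleted (Art. 11, 17)."""
    def __init__(self):
        self._entries = {}

    def register(self, model_id, **metadata):
        if model_id in self._entries:
            raise ValueError(f"{model_id} already registered; versions are immutable")
        self._entries[model_id] = {"retired_at": None, **metadata}

    def retire(self, model_id, when):
        self._entries[model_id]["retired_at"] = when   # mark, never remove

    def history(self):
        return dict(self._entries)                     # full lifetime record

reg = ModelRegistry()
reg.register("credit-scoring-v2.2", eval_results="eval-2025-11-02.json")
reg.register("credit-scoring-v2.3", eval_results="eval-2026-03-15.json")
reg.retire("credit-scoring-v2.2", "2026-03-20T14:00:00Z")
```

The immutability check matters as much as the retention: a version whose metadata can be silently overwritten is not documentation an auditor can trust.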

Pattern 3: Bias detection pipeline

For systems that affect people (employment, credit, education), run bias detection as part of the eval pipeline. Check demographic parity, equalized odds, or calibration across protected groups -- whichever metrics are appropriate for your domain.

# Bias eval (simplified)
baseline_rate = positive_rate(eval_results)
for group in protected_groups:
    group_results = [r for r in eval_results if r.group == group]
    group_rate = positive_rate(group_results)
    disparity = abs(group_rate - baseline_rate) / baseline_rate
    if disparity > threshold:
        alert(f"Disparity for {group}: {disparity:.2%}")
        block_deployment()

This is Art. 10 (data governance) and Art. 9 (risk management) operationalized. The code is the compliance artifact.

Pattern 4: Human oversight dashboard

Build a dashboard that shows operators:

  • Current system status and confidence levels.
  • Recent decisions with reasoning summaries.
  • Override history (who overrode, when, why).
  • Escalation queue for uncertain cases.
  • Kill switch with confirmation.

This is Art. 14 in a browser tab. If your oversight mechanism is "someone can SSH into the server and kill the process," you are not compliant.


VI. GPAI: Your Model Provider's Obligations and Your Responsibilities

If you use models from Anthropic, OpenAI, Google, or other GPAI providers, those providers have obligations under Chapter V of the Act (applicable since August 2, 2025):

  • Maintain and make available technical documentation.
  • Provide information and documentation to downstream providers.
  • Comply with EU copyright law.
  • Publish a sufficiently detailed summary of training data content.

Your responsibility as a deployer: Ensure that the model you integrate has adequate documentation from its provider. If you build a high-risk system on top of a GPAI model, you inherit the obligation to demonstrate that the overall system meets high-risk requirements -- even if the model itself is the provider's responsibility.

Practically: request model cards, eval results, and compliance documentation from your model provider. If they cannot provide it, that is a supply chain risk. Document the gap and mitigate (with your own evals, guardrails, and oversight).
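The supply-chain review can itself be scripted as a gap check against the artifacts you expect from a provider; the artifact names are illustrative, not a list mandated by the Act:

```python
REQUIRED_ARTIFACTS = [
    "model_card",
    "eval_results",
    "training_data_summary",   # Chapter V transparency item
    "copyright_policy",
]

def supply_chain_gaps(provider_docs):
    """Artifacts the provider has not supplied: document each gap and mitigate."""
    return [a for a in REQUIRED_ARTIFACTS if not provider_docs.get(a)]

docs = {"model_card": "card-v3.pdf", "eval_results": "evals.json"}
print(supply_chain_gaps(docs))   # each missing item becomes a documented risk
```

Run it whenever a provider ships a new model version; the output feeds directly into the risk register from Art. 9.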


VII. The Enforcement Reality

The AI Act provides for significant penalties:

  • Prohibited practices: Up to 35M EUR or 7% of global annual turnover.
  • High-risk non-compliance: Up to 15M EUR or 3% of global annual turnover.
  • Incorrect information to authorities: Up to 7.5M EUR or 1.5% of global annual turnover.

National competent authorities (at least one per member state) will be designated for market surveillance. The EU AI Office oversees GPAI models directly.

Enforcement will initially focus on prohibited practices and high-profile cases. But the documentation and logging requirements create a paper trail that makes future enforcement straightforward. If you do not have the documentation when asked, the penalty assessment is simple.


VIII. The 90-Day Compliance Sprint

If you are reading this in spring 2026 and your high-risk system is not yet compliant, here is the minimum viable path:

Days 1-30: Classify and document

  • Classify your AI systems against Annex III. Get legal sign-off on classification.
  • Create model cards for every AI component. Document base models, fine-tuning data, intended use, known limitations.
  • Produce architecture documentation: system diagrams, data flow diagrams, component descriptions.
  • Begin the risk register. Identify the top 10 risks for each high-risk system.

Days 31-60: Build the infrastructure

  • Deploy the decision audit trail. Every AI-assisted decision is logged with full context.
  • Deploy the human oversight dashboard. Operators can see decisions, override, and halt.
  • Implement bias detection in the eval pipeline. Run it against production data.
  • Set up model versioning. Every model change is tracked, documented, and reversible.

Days 61-90: Test and prove

  • Run a mock audit. Pretend a market surveillance authority has requested your technical documentation. Can you produce it within 48 hours?
  • Conduct adversarial testing. Prompt injection, data poisoning, evasion. Document results and mitigations.
  • Review your GPAI supply chain. Request documentation from model providers. Document gaps.
  • Brief your executive team. Present the compliance status, the remaining gaps, and the timeline to close them.

IX. Compliance as Architecture

The EU AI Act is the first comprehensive AI regulation to become law. It will not be the last. Canada's AIDA, Brazil's AI framework, and various US state-level proposals are in progress. The specific requirements will differ. The structural requirements -- documentation, logging, human oversight, bias testing, risk management -- will not.

Teams that build these structures now are not just complying with one regulation. They are building the infrastructure for operating AI responsibly in a world where regulation is the norm, not the exception.

Good architecture makes compliance a byproduct. Decision audit trails, model registries, eval pipelines, bias detection, human oversight dashboards -- these are not compliance costs. They are engineering tools that make your system better, safer, and more maintainable.

The EU AI Act does not ask you to stop building AI. It asks you to prove that your AI works as intended, fails gracefully, and treats people fairly. If you are a competent engineering team, you should want that anyway. The Act just makes it non-optional.

Build the infrastructure. Write the documentation. Run the evals. The deadline is August 2, 2026. The architecture should have been in place yesterday.


References

  1. European Parliament & Council. Regulation (EU) 2024/1689 (AI Act). eur-lex.europa.eu
  2. EU AI Act Explorer. Full Text and Analysis. artificialintelligenceact.eu
  3. European Commission. AI Act Timeline and Implementation. digital-strategy.ec.europa.eu
  4. NIST. AI Risk Management Framework (AI RMF 1.0). nist.gov
  5. ISO/IEC 42001:2023. Artificial Intelligence -- Management System. iso.org
  6. ISO/IEC 23894:2023. AI Risk Management Guidance. iso.org
  7. European Commission. ALTAI: Assessment List for Trustworthy AI. digital-strategy.ec.europa.eu
  8. CEN-CENELEC. Standardisation Request for AI Act. cencenelec.eu
  9. Anthropic. Claude Model Card. docs.anthropic.com
  10. EU AI Office. General-Purpose AI Code of Practice. digital-strategy.ec.europa.eu
  11. Veale, M. & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International.
  12. Smuha, N.A. (2021). From a 'Race to AI' to a 'Race to AI Regulation'. Law, Innovation and Technology.