Thing Event System by Pentatonic
Blog / EU AI Act Compliance with Event Logs
Compliance · March 28, 2026 · 12 min read

How to comply with the EU AI Act using immutable event logs

The EU AI Act's high-risk enforcement deadline is August 2026 — just over four months away. If your AI agents make consequential decisions, you need governance infrastructure now. Here's how immutable event logs satisfy the key requirements, with code examples.

Enforcement timeline

  • Aug 2024: AI Act enters into force
  • Feb 2025: Prohibited practices apply
  • Aug 2026: High-risk AI system requirements apply
  • Aug 2027: Full enforcement for all AI systems

What counts as "high-risk"?

The EU AI Act classifies AI systems by risk level. High-risk systems include those used in:

  • Employment and worker management — recruitment, task allocation, performance evaluation
  • Credit and financial services — creditworthiness assessment, pricing, risk scoring
  • Essential services — insurance, healthcare access, social welfare
  • Law enforcement and migration — border control, evidence evaluation
  • Critical infrastructure — energy, transport, water, digital systems

AI agents operating in agentic commerce — making purchases, processing returns, managing inventory, or scoring creditworthiness — fall squarely into the high-risk category if they make or influence consequential decisions.

The six articles that matter

Six articles in the EU AI Act create specific obligations for high-risk AI systems. Each one maps directly to capabilities that an immutable event store provides natively.

  • Art. 9, Risk management system: Event-level risk classification. Every agent action is typed and categorised; query risk distribution across any time window.
  • Art. 12, Record-keeping: Immutable event log. Every state change is stored as an append-only event with timestamp, source, client ID, and full payload.
  • Art. 13, Transparency: Full audit trail export. tes.audit() generates structured compliance reports by jurisdiction, regulation, and time period.
  • Art. 14, Human oversight: Session-level oversight events. Human approval, rejection, and escalation are recorded as first-class events with correlation IDs.
  • Art. 15, Accuracy & robustness: AI enrichment validation. Vision, pricing, and embedding quality are tracked per event. Bias Evolution detects drift in agent behaviour.
  • Art. 17, Quality management: Event statistics and analytics. Real-time dashboards for event volume, error rates, and processing latency.
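The Art. 9 row assumes each event type maps to a risk class, so a risk distribution can be computed over any time window. A minimal sketch of that query over plain event records (the `RISK_CLASS` map, event shapes, and `riskDistribution` function are illustrative, not the TES API):

```javascript
// Hypothetical mapping: event type → Art. 9 risk class.
const RISK_CLASS = {
  "agent_session.action_executed": "high",
  "agent_session.human_oversight": "low",
  "thing.price_updated": "medium",
};

// Count events per risk class within a half-open time window [from, to).
function riskDistribution(events, from, to) {
  const dist = {};
  for (const ev of events) {
    const t = new Date(ev.timestamp);
    if (t < from || t >= to) continue;
    const risk = RISK_CLASS[ev.type] ?? "unclassified";
    dist[risk] = (dist[risk] ?? 0) + 1;
  }
  return dist;
}

const events = [
  { type: "agent_session.action_executed", timestamp: "2026-05-01T10:00:00Z" },
  { type: "agent_session.human_oversight", timestamp: "2026-05-01T10:05:00Z" },
  { type: "agent_session.action_executed", timestamp: "2026-07-15T09:00:00Z" },
];

// Q2 2026: the July event falls outside the window.
const q2 = riskDistribution(
  events,
  new Date("2026-04-01T00:00:00Z"),
  new Date("2026-07-01T00:00:00Z"),
);
console.log(q2); // { high: 1, low: 1 }
```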

Article 12: Record-keeping in practice

Article 12 requires that high-risk AI systems have "logging capabilities" that enable "the recording of events relevant to identifying situations that may result in the AI system posing a risk." In practice, this means every consequential agent action must be logged with enough context to reconstruct the decision.

In an event-sourced system, this is the default behaviour. Every mutation emits an immutable event. Events cannot be updated or deleted. The log is the system of record.

Every agent action is an immutable event
// Agent processes a return — event is stored immutably
await tes.emit("agent_session.action_executed", {
  session_id: "ses_eu01",
  action: "process_return",
  thing_id: "thing_456",
  decision: "approve_full_refund",
  amount: 89.99,
  currency: "EUR",
  reasoning: "Item condition grade A, within 30-day policy",
  human_override: false,
});

// This event cannot be updated or deleted.
// It is the compliance record.
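The append-only guarantee can be approximated even in plain JavaScript. A minimal sketch, assuming nothing about TES internals (the `EventLog` class here is illustrative): events are frozen on write, and reads return copies so callers cannot mutate the log.

```javascript
// Minimal append-only event log: events are frozen at emit time and the
// backing array is private, so stored records cannot be edited or removed.
class EventLog {
  #events = [];

  emit(type, payload) {
    const event = Object.freeze({
      type,
      payload: Object.freeze({ ...payload }),
      timestamp: new Date().toISOString(),
    });
    this.#events.push(event);
    return event;
  }

  // Reads return a shallow copy, so callers cannot splice the log.
  all() {
    return [...this.#events];
  }
}

const log = new EventLog();
const ev = log.emit("agent_session.action_executed", {
  session_id: "ses_eu01",
  decision: "approve_full_refund",
});

// Attempted tampering fails: frozen objects reject writes
// (silently in sloppy mode, with a TypeError in strict mode).
try {
  ev.payload.decision = "tampered";
} catch (_) {}

console.log(ev.payload.decision); // "approve_full_refund"
console.log(log.all().length);    // 1
```

In production the same property comes from the storage layer (append-only tables, WORM object storage), not from language-level freezing, but the contract is identical: emit is the only write path.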

Article 13: Transparency exports

Article 13 requires that high-risk AI systems be "designed and developed in such a way as to ensure that their operation is sufficiently transparent." When a regulator asks what your AI agent did and why, you need a structured, exportable answer.

Export compliance report for EU AI Act
const report = await tes.audit({
  jurisdiction: "EU",
  regulation: "AI_ACT",
  period: "2026-Q2",
  include: [
    "risk_classification",
    "human_oversight_events",
    "conformity_checks",
    "transparency_logs",
  ],
});

// report.compliant → true
// report.total_events → 47,283
// report.human_oversight_events → 1,204
// report.artifacts → exportable structured records
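The headline numbers in such a report are simple aggregations over the event log. A sketch of how they could be derived from raw events (the `summarise` helper and prefix-based period filter are hypothetical stand-ins for what tes.audit() computes server-side):

```javascript
// Derive a report's headline counts from raw events.
function summarise(events, period) {
  const inPeriod = events.filter((ev) => ev.timestamp.startsWith(period));
  return {
    total_events: inPeriod.length,
    human_oversight_events: inPeriod.filter(
      (ev) => ev.type === "agent_session.human_oversight",
    ).length,
  };
}

const events = [
  { type: "agent_session.action_executed", timestamp: "2026-04-02T09:00:00Z" },
  { type: "agent_session.human_oversight", timestamp: "2026-04-02T09:05:00Z" },
  { type: "agent_session.action_executed", timestamp: "2026-01-15T12:00:00Z" },
];

// A single month prefix stands in for a full quarterly window here.
const summary = summarise(events, "2026-04");
console.log(summary); // { total_events: 2, human_oversight_events: 1 }
```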

Article 14: Proving human oversight

Article 14 requires that high-risk AI systems "be designed and developed in such a way that they can be effectively overseen by natural persons." This doesn't just mean humans can intervene — it means you must prove they did where required.

In TES, human oversight actions are first-class events. When a human approves, rejects, or escalates an agent's decision, that action is recorded with the same immutability guarantees as any other event. Correlation IDs link oversight events to the agent actions they govern.

Human oversight as an immutable event
// Human reviews and approves agent's refund decision
await tes.emit("agent_session.human_oversight", {
  session_id: "ses_eu01",
  oversight_type: "approval",
  agent_action: "process_return",
  reviewer: "ops_team_lead",
  decision: "approved",
  notes: "Verified item condition matches agent assessment",
});

// Query: show me all human oversight for this session
const oversight = await tes.query(`{
  eventsByEntity(entityId: "ses_eu01") {
    eventType timestamp payload
  }
}`);
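That correlation can be reconstructed from the flat log. A sketch of pairing oversight events with the agent actions they govern, assuming both event kinds carry a shared `correlation_id` field (illustrative shapes, not the TES query API):

```javascript
// Index agent actions by correlation ID, then attach every
// human-oversight event that references the same ID.
function linkOversight(events) {
  const actions = new Map();
  for (const ev of events) {
    if (ev.type === "agent_session.action_executed") {
      actions.set(ev.correlation_id, { action: ev, oversight: [] });
    }
  }
  for (const ev of events) {
    if (ev.type === "agent_session.human_oversight") {
      actions.get(ev.correlation_id)?.oversight.push(ev);
    }
  }
  return [...actions.values()];
}

const events = [
  { type: "agent_session.action_executed", correlation_id: "cor_1", decision: "approve_full_refund" },
  { type: "agent_session.human_oversight", correlation_id: "cor_1", decision: "approved" },
  { type: "agent_session.action_executed", correlation_id: "cor_2", decision: "deny_refund" },
];

const linked = linkOversight(events);
console.log(linked[0].oversight.length); // 1 (cor_1 was human-reviewed)
console.log(linked[1].oversight.length); // 0 (cor_2 has no oversight record)
```

An audit can then flag every high-risk action whose `oversight` array is empty — exactly the gap Article 14 asks you to close.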

The 10-point compliance checklist

Before August 2026, every organisation deploying high-risk AI agents should verify:

  1. Every agent action is logged as an immutable event with timestamp and source
  2. Events cannot be updated or deleted after creation
  3. Correlation IDs link related events across systems and sessions
  4. Human oversight actions are recorded as first-class events
  5. Risk classification is applied to each event type
  6. Compliance reports can be exported by jurisdiction and time period
  7. The audit trail covers the complete decision chain, not just the final action
  8. AI enrichment quality is tracked (vision, pricing, embedding confidence)
  9. Bias detection runs continuously on event streams
  10. Data residency requirements are met (EU-only processing if required)
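Several of these items can be checked mechanically against the log itself. A sketch of an automated lint for items 1 and 3 over raw event records (the field names and `checkEvents` helper are illustrative assumptions, not a TES feature):

```javascript
// Item 1: every event carries a timestamp and source.
// Item 3: every event carries a correlation ID linking related events.
function checkEvents(events) {
  const failures = [];
  events.forEach((ev, index) => {
    if (!ev.timestamp || !ev.source) failures.push({ index, rule: 1 });
    if (!ev.correlation_id) failures.push({ index, rule: 3 });
  });
  return { compliant: failures.length === 0, failures };
}

const result = checkEvents([
  { timestamp: "2026-05-01T10:00:00Z", source: "agent", correlation_id: "cor_1" },
  { timestamp: "2026-05-01T10:05:00Z", source: "agent" }, // missing correlation_id
]);

console.log(result.compliant); // false
console.log(result.failures);  // [ { index: 1, rule: 3 } ]
```

Running a check like this in CI turns the checklist from a one-off audit into a continuously enforced invariant.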

Penalties are real

Non-compliance with high-risk requirements carries penalties of up to 15 million euros or 3% of global annual turnover, whichever is higher. For violations of prohibited AI practices, penalties reach 35 million euros or 7%. These aren't theoretical — the EU has shown with GDPR that it enforces at scale.

Start now, not in July

Building compliance infrastructure after the deadline means building it too late. The EU AI Act timeline is already in motion. If your AI agents make consequential decisions, the time to instrument them with immutable event logs is now.

The Thing Event System provides this infrastructure out of the box: append-only events, correlation IDs, compliance exports, and continuous bias detection. Start with the free tier — 10,000 events per month, no credit card required.

Pentatonic Engineering

London, UK
