Thing Event System by Pentatonic

Use Case

AI agent governance that emerges from the architecture, not as an afterthought

Guardrails prevent bad actions. But regulators, auditors, and your own risk team need proof of what actually happened. TES makes governance a natural output of the event store — not a bolt-on logging system.

EU AI Act maximum penalty: €35M

Or percentage of global turnover: 7%

High-risk enforcement deadline: Aug 2026

Governance pillars

Six pillars of agent governance

Each pillar is a natural output of the event-sourced architecture. No separate governance layer required.

01

Immutable decision records

Every agent action is an append-only event — what it did, when, for whom, and why. No updates, no deletes. The event stream is the single source of truth.
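As a minimal sketch of what append-only means in practice (field names here are illustrative, not the actual TES schema), a decision record is created once, frozen, and only ever appended:

```javascript
// Illustrative append-only decision log.
// Field names are assumptions, not the TES event schema.
const events = [];

function appendEvent(eventType, payload) {
  const event = Object.freeze({
    eventType,                            // what the agent did
    timestamp: new Date().toISOString(),  // when
    payload,                              // for whom, and why
  });
  events.push(event); // append-only: no updates, no deletes
  return event;
}

const decision = appendEvent("loan.decision", {
  applicant: "ent_42",
  reason: "score_above_threshold",
});

// Frozen records cannot be rewritten in place; the stream only grows.
```

Because each record is frozen at write time, the only way to "change" history is to append a new event that supersedes the old one, which is itself part of the audit trail.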

02

Correlation across systems

Correlation IDs link events across agents, sessions, and services. One query reconstructs the full decision chain — even when five agents contributed to a single outcome.
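A sketch of how that reconstruction works (the event shapes and IDs below are invented for illustration; TES would return these from the store):

```javascript
// Illustrative event stream spanning three agents and one shared correlation ID.
const events = [
  { correlationId: "corr_9b1", agent: "intake",   eventType: "request.received" },
  { correlationId: "corr_9b1", agent: "risk",     eventType: "risk.scored" },
  { correlationId: "corr_x22", agent: "intake",   eventType: "request.received" },
  { correlationId: "corr_9b1", agent: "decision", eventType: "outcome.issued" },
];

// One filter over the stream yields the full decision chain, across agents.
const chain = events.filter((e) => e.correlationId === "corr_9b1");
const agentsInvolved = [...new Set(chain.map((e) => e.agent))];
```

The key design point: correlation is stamped at write time, so reconstruction is a lookup, not a forensic join across scattered log files.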

03

Human oversight tracking

Approval, rejection, and escalation events are first-class citizens. Prove that humans were in the loop where required — with timestamps, reviewer IDs, and reasoning.
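Because oversight events live in the same stream as agent actions, "prove a human was in the loop" reduces to a scan. A sketch, with illustrative event names and IDs:

```javascript
// Illustrative stream: an agent proposal followed by a human approval.
// Event type names and field names are assumptions, not the TES schema.
const stream = [
  { eventType: "agent.proposed_action", sessionId: "ses_7f2a",
    timestamp: "2026-02-01T09:00:00Z" },
  { eventType: "human.approved", sessionId: "ses_7f2a",
    reviewerId: "rev_118", timestamp: "2026-02-01T09:04:12Z",
    reasoning: "Within policy limits" },
];

// Oversight proof = the first approval/rejection/escalation event for a session.
function humanInLoop(events, sessionId) {
  const oversightTypes = ["human.approved", "human.rejected", "human.escalated"];
  return events.find(
    (e) => e.sessionId === sessionId && oversightTypes.includes(e.eventType)
  );
}

const proof = humanInLoop(stream, "ses_7f2a");
```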

04

Continuous bias detection

Bias Evolution runs evolutionary loops over your event streams, detecting patterns and drift in real time. Not static audits — living detection that evolves with your data.

05

Compliance exports

Generate structured reports by jurisdiction (EU AI Act, Colorado AI Act), time period, entity type, or agent session. Machine-readable, regulator-ready.

06

Time travel debugging

Replay events to any point. Reconstruct what an agent knew, what it decided, and what state the world was in at that moment. Debug after the fact without guesswork.
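Time travel falls out of event sourcing directly: state at time T is a left fold over every event with a timestamp at or before T. A sketch with invented events:

```javascript
// Illustrative replay: reconstruct state as of a point in time.
const events = [
  { timestamp: "2026-03-01T10:00:00Z", eventType: "limit.set", payload: { limit: 100 } },
  { timestamp: "2026-03-01T11:00:00Z", eventType: "limit.set", payload: { limit: 250 } },
  { timestamp: "2026-03-01T12:00:00Z", eventType: "limit.set", payload: { limit: 50 } },
];

// Fold all events up to pointInTime into a single state object.
// ISO-8601 strings compare correctly as plain strings.
function replayUntil(events, pointInTime) {
  return events
    .filter((e) => e.timestamp <= pointInTime)
    .reduce((state, e) => ({ ...state, ...e.payload }), {});
}

// What did the agent believe the limit was at 11:30?
const stateAt1130 = replayUntil(events, "2026-03-01T11:30:00Z");
// → { limit: 250 }
```

The 12:00 event never enters the fold, so the reconstructed view is exactly what the agent could have known at 11:30.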

In practice

From agent action to compliance report in one query

Every agent decision is an immutable event. When a regulator, auditor, or your own risk team asks "what did this agent do?", the answer is a single query — not a reconstruction from scattered logs.

Read the full EU AI Act compliance guide or see why audit trails matter more than guardrails.

Query the audit trail

// Reconstruct an agent's full decision chain
const trail = await tes.query(`{
  eventsByEntity(entityId: "ses_7f2a") {
    eventType timestamp payload source
  }
}`);

Export compliance report

const report = await tes.audit({
  jurisdiction: "EU",
  regulation: "AI_ACT",
  period: "2026-Q2",
  include: [
    "risk_classification",
    "human_oversight_events",
    "transparency_logs",
  ],
});
// → Structured, exportable, regulator-ready

Regulatory landscape

One event store, multiple jurisdictions

EU AI Act

Enacted

High-risk AI system requirements: transparency, human oversight, risk management, record-keeping.

Deadline: Aug 2026

Colorado AI Act

Enacted

Impact assessments, disclosure obligations, risk management programmes for consequential AI decisions.

Deadline: Feb 2026

NIST AI RMF

Framework

Risk management framework for trustworthy AI: govern, map, measure, manage.

Deadline: Ongoing

EU DPP Regulation

Phasing in

Digital Product Passports for product lifecycle transparency and circular economy compliance.

Deadline: 2026-2027

Start governing

Build governance into your AI agents from day one

Free tier includes 10,000 events per month with full audit trail, compliance exports, and bias detection.