Use Case
AI agent governance that comes from the architecture, not as an afterthought
Guardrails prevent bad actions. But regulators, auditors, and your own risk team need proof of what actually happened. TES makes governance a natural output of the event store — not a bolt-on logging system.
EU AI Act maximum penalty
€35M
Or percentage of global turnover
7%
High-risk enforcement deadline
Aug 2026
Governance pillars
Six pillars of agent governance
Each pillar is a natural output of the event-sourced architecture. No separate governance layer required.
Immutable decision records
Every agent action is an append-only event — what it did, when, for whom, and why. No updates, no deletes. The event stream is the single source of truth.
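An illustrative sketch of that append-only property — not the TES API, just the guarantee the pillar describes: records are appended and frozen on write, never updated or deleted.

```javascript
// Minimal append-only event log. Field names (eventType, actor, subject,
// reason) are illustrative, not the TES schema.
function createEventLog() {
  const events = [];
  return {
    append(event) {
      // Each record is frozen on write: no in-place edits later.
      events.push(Object.freeze({ ...event }));
      return events.length - 1; // position in the stream
    },
    all() {
      return [...events]; // a copy, so callers cannot splice the log
    },
  };
}

const log = createEventLog();
log.append({
  eventType: "agent.decision",
  actor: "agent-1",
  subject: "cust_88",
  reason: "refund within policy", // what it did, when, for whom, and why
});
log.append({ eventType: "agent.notification", actor: "agent-1", subject: "cust_88" });
```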
Correlation across systems
Correlation IDs link events across agents, sessions, and services. One query reconstructs the full decision chain — even when five agents contributed to a single outcome.
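A hedged sketch of what that reconstruction looks like, assuming hypothetical events from three agents that share one correlation ID — one filter over the stream recovers the whole chain in order.

```javascript
// Illustrative events; correlationId ties work from separate agents
// and services back to a single outcome.
const events = [
  { correlationId: "cor_42", agent: "planner",  eventType: "plan.created",  ts: 1 },
  { correlationId: "cor_99", agent: "planner",  eventType: "plan.created",  ts: 1 },
  { correlationId: "cor_42", agent: "executor", eventType: "action.taken",  ts: 2 },
  { correlationId: "cor_42", agent: "reviewer", eventType: "review.passed", ts: 3 },
];

// One query: filter by correlation ID, order by time.
function decisionChain(events, correlationId) {
  return events
    .filter((e) => e.correlationId === correlationId)
    .sort((a, b) => a.ts - b.ts);
}

const chain = decisionChain(events, "cor_42");
// chain: plan.created → action.taken → review.passed
```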
Human oversight tracking
Approval, rejection, and escalation events are first-class citizens. Prove that humans were in the loop where required — with timestamps, reviewer IDs, and reasoning.
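Because oversight events live in the same stream as decisions, proving a human was in the loop is a join over event types. A sketch, with illustrative field names (`decisionId`, `risk`) rather than the TES schema:

```javascript
const events = [
  { eventType: "agent.decision", decisionId: "d1", risk: "high", ts: "2026-05-01T09:00Z" },
  { eventType: "human.approval", decisionId: "d1", reviewerId: "rev_12",
    reasoning: "within policy", ts: "2026-05-01T09:04Z" },
  { eventType: "agent.decision", decisionId: "d2", risk: "high", ts: "2026-05-01T10:00Z" },
  { eventType: "agent.decision", decisionId: "d3", risk: "low",  ts: "2026-05-01T11:00Z" },
];

// High-risk decisions with no matching approval, rejection, or escalation.
function missingOversight(events) {
  const reviewed = new Set(
    events
      .filter((e) =>
        ["human.approval", "human.rejection", "human.escalation"].includes(e.eventType))
      .map((e) => e.decisionId)
  );
  return events.filter(
    (e) => e.eventType === "agent.decision" && e.risk === "high" && !reviewed.has(e.decisionId)
  );
}

const gaps = missingOversight(events);
// gaps: the one high-risk decision (d2) no human reviewed
```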
Continuous bias detection
Bias Evolution runs evolutionary loops over your event streams, detecting patterns and drift in real time. Not static audits — living detection that evolves with your data.
Compliance exports
Generate structured reports by jurisdiction (EU AI Act, Colorado AI Act), time period, entity type, or agent session. Machine-readable, regulator-ready.
Time travel debugging
Replay events to any point. Reconstruct what an agent knew, what it decided, and what state the world was in at that moment. Debug after the fact without guesswork.
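The replay idea can be sketched as a fold over the stream up to a chosen timestamp — the event shapes below are illustrative, but the mechanism is the standard event-sourcing one: state is derived, so any past state can be rebuilt.

```javascript
// Illustrative key/value state-change events.
const events = [
  { ts: 1, key: "credit_limit", value: 500 },
  { ts: 2, key: "risk_flag",    value: "none" },
  { ts: 3, key: "credit_limit", value: 1500 },
  { ts: 4, key: "risk_flag",    value: "elevated" },
];

// Replay everything up to `timestamp` to reconstruct the state the
// agent saw at that moment.
function stateAt(events, timestamp) {
  return events
    .filter((e) => e.ts <= timestamp)
    .reduce((state, e) => ({ ...state, [e.key]: e.value }), {});
}

const before = stateAt(events, 2); // the world at t=2
const after  = stateAt(events, 4); // ...and at t=4
```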
In practice
From agent action to compliance report in one query
Every agent decision is an immutable event. When a regulator, auditor, or your own risk team asks "what did this agent do?", the answer is a single query — not a reconstruction from scattered logs.
Read the full EU AI Act compliance guide or see why audit trails matter more than guardrails.
// Reconstruct an agent's full decision chain
const trail = await tes.query(`{
  eventsByEntity(entityId: "ses_7f2a") {
    eventType timestamp payload source
  }
}`);

// Generate a jurisdiction-specific compliance report
const report = await tes.audit({
  jurisdiction: "EU",
  regulation: "AI_ACT",
  period: "2026-Q2",
  include: [
    "risk_classification",
    "human_oversight_events",
    "transparency_logs",
  ],
});
// → Structured, exportable, regulator-ready
Regulatory landscape
One event store, multiple jurisdictions
EU AI Act
Enacted
High-risk AI system requirements: transparency, human oversight, risk management, record-keeping.
Deadline: Aug 2026
Colorado AI Act
Enacted
Impact assessments, disclosure obligations, risk management programmes for consequential AI decisions.
Deadline: Feb 2026
NIST AI RMF
Framework
Risk management framework for trustworthy AI: govern, map, measure, manage.
Deadline: Ongoing
EU DPP Regulation
Phasing in
Digital Product Passports for product lifecycle transparency and circular economy compliance.
Deadline: 2026-2027
Start governing
Build governance into your AI agents from day one
Free tier includes 10,000 events per month with full audit trail, compliance exports, and bias detection.