Event sourcing is the backbone of agentic AI
AI agents are making real decisions — purchasing, returning, scheduling, negotiating. But most agent architectures have no record of what actually happened or why. Event sourcing fixes this at the infrastructure level.
The problem with stateful AI agents
Most AI agent frameworks store state in a mutable database. When an agent updates an entity, the previous value is overwritten. The agent's decision is lost. If something goes wrong — a bad purchase, a misrouted return, a compliance violation — there's no way to reconstruct what the agent knew, what it decided, or why.
This isn't a theoretical concern. Agents operating in commerce are already processing real transactions. Google, Salesforce, and Visa are building agentic commerce infrastructure where AI agents act on behalf of consumers. When an agent spends real money, "the database says it's fine" isn't an acceptable audit trail.
What event sourcing gives you
Event sourcing inverts the relationship between state and history. Instead of storing current state and discarding history, you store every state change as an immutable event. Current state is derived by replaying events — a projection.
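The pattern can be sketched in a few lines of TypeScript. This is an illustrative in-memory store to show the append-and-replay mechanics, not the TES API:

```typescript
type Event = { type: string; entityId: string; payload: Record<string, unknown>; ts: number };

// Append-only log: events are only ever pushed, never updated or deleted.
const log: Event[] = [];

function emit(type: string, entityId: string, payload: Record<string, unknown>): void {
  log.push({ type, entityId, payload, ts: Date.now() });
}

// A projection: current state derived by replaying every event for an entity.
function project(entityId: string): Record<string, unknown> {
  return log
    .filter((e) => e.entityId === entityId)
    .reduce((state, e) => ({ ...state, ...e.payload }), {} as Record<string, unknown>);
}

emit("thing.listed", "thing_123", { price: 199.99 });
emit("thing.repriced", "thing_123", { price: 149.99 });

console.log(project("thing_123")); // { price: 149.99 }; the 199.99 event is still in the log
```

The projection is disposable: delete it and replay the log, and you get the same state back. The log is the system of record.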
For AI agents, this means:
- Complete history. Every decision, every tool call, every state change is recorded. You can reconstruct what the agent knew at any point in time.
- Audit trails by construction. You don't add audit logging as an afterthought — the event stream is the audit trail.
- Time travel. Replay events to any point. Debug agent behaviour by stepping through its exact sequence of actions.
- AI enrichment. Every event is a trigger point. When an event arrives, you can run embedding, classification, pricing, or anomaly detection — automatically.
In TES, that looks like:

```typescript
import { TESClient } from "@pentatonic-ai/agent-events";

const tes = new TESClient({ apiKey: process.env.TES_API_KEY });

// Agent makes a purchase decision
await tes.emit("agent_session.action_executed", {
  session_id: "ses_abc",
  action: "purchase",
  thing_id: "thing_123",
  amount: 149.99,
  currency: "GBP",
  reasoning: "Price below threshold, condition grade A",
});

// Later: reconstruct the agent's decision trail
const trail = await tes.query(`{
  eventsByEntity(entityId: "ses_abc", limit: 50) {
    eventType timestamp payload source
  }
}`);
```

Why traditional databases fail here
A relational database with UPDATE and DELETE operations destroys history by design. You can add triggers, change-data-capture, or audit tables — but these are bolted-on approximations of what event sourcing gives you natively.
Kafka and event streaming platforms solve part of the problem — they capture events in transit. But they're transport layers, not systems of record. Events flow through Kafka; they live in an event store. You need both durability and queryability to build agent governance.
The agentic commerce case
The term "agentic commerce" describes AI agents that transact on behalf of humans — comparing products, negotiating prices, processing returns, managing subscriptions. This isn't speculative. Salesforce's Agentforce, Google's agent APIs, and Mastercard's agent payment protocols are already in production or preview.
In this world, every agent action is a potential liability. When an agent returns a product, who authorised it? When it spends money, was the price within the human's constraints? When it routes a shipment, did it comply with jurisdiction-specific regulations?
Event sourcing answers these questions because the answers are in the events. Correlation IDs link related actions across systems. Timestamps establish causal ordering. Immutability guarantees the record hasn't been altered after the fact.
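A hedged sketch of the correlation mechanism, using a plain in-memory list rather than the TES API (the `correlationId` field name and event types here are assumptions for illustration):

```typescript
type AgentEvent = {
  type: string;
  correlationId: string; // links related actions across systems
  ts: number;
  payload: Record<string, unknown>;
};

const events: AgentEvent[] = [
  { type: "return.requested",  correlationId: "corr_42", ts: 1, payload: { by: "agent_7" } },
  { type: "return.authorised", correlationId: "corr_42", ts: 2, payload: { by: "holder_1" } },
  { type: "shipment.routed",   correlationId: "corr_42", ts: 3, payload: { to: "loc_uk" } },
  { type: "purchase.executed", correlationId: "corr_99", ts: 1, payload: { amount: 20 } },
];

// Reconstruct the causal chain for one correlated action, in timestamp order.
function chain(correlationId: string): string[] {
  return events
    .filter((e) => e.correlationId === correlationId)
    .sort((a, b) => a.ts - b.ts)
    .map((e) => e.type);
}

console.log(chain("corr_42")); // ["return.requested", "return.authorised", "shipment.routed"]
```

Answering "who authorised this return?" is then a filter over an immutable list, not a forensic reconstruction.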
How TES implements this
The Thing Event System (TES) is an immutable event store purpose-built for this problem. Every entity — things, holders, locations, products — is tracked through up to 26 lifecycle stages. Every state change emits an event. Events trigger an AI enrichment pipeline that generates embeddings, classifications, and market pricing automatically.
The architecture is:
- Append-only event spine — no updates, no deletes, full history
- Derived projections — current state computed from events, always consistent
- AI enrichment consumers — vision, embedding, pricing, taxonomy run on every event
- Vector search — 1024-dim BGE-M3 embeddings for semantic search across entities
- Edge-native deployment — Cloudflare Workers, sub-50ms globally, 300+ locations
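The enrichment-consumer idea in the list above can be sketched as a fan-out on append. The consumer names below are illustrative stand-ins, not the TES pipeline:

```typescript
type Event = { type: string; payload: Record<string, unknown> };
type Consumer = (e: Event) => void;

const log: Event[] = [];
const consumers: Consumer[] = [];

// Consumers subscribe once; every appended event fans out to all of them.
function subscribe(c: Consumer): void {
  consumers.push(c);
}

function append(e: Event): void {
  log.push(e); // the spine stays append-only
  for (const c of consumers) c(e); // enrichment runs on every event
}

const enriched: string[] = [];
subscribe((e) => enriched.push(`embedding:${e.type}`)); // stand-in for an embedding consumer
subscribe((e) => enriched.push(`pricing:${e.type}`));   // stand-in for a pricing consumer

append({ type: "thing.created", payload: {} });
console.log(enriched); // ["embedding:thing.created", "pricing:thing.created"]
```

Because consumers only ever read appended events, adding a new enrichment step never touches the spine: subscribe it and replay.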
The result: your AI agents have a shared, auditable record of every entity they touch. Governance isn't a separate system — it's the natural output of the event store.
The regulatory tailwind
The EU AI Act's requirements for high-risk AI systems take effect in August 2026. Article 13 requires transparency. Article 14 requires human oversight mechanisms. Article 9 requires ongoing risk management. Article 12 requires automatic recording of events (logs) over the system's lifetime. All of these assume you can answer the question: "what did the AI agent do, and was it authorised?"
If your agent infrastructure is built on mutable state, answering that question requires reconstructing history from scattered logs. If it's built on event sourcing, the answer is a single query.
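A minimal sketch of that single query over an in-memory trail (the event shape and the `authorisedBy` field are assumptions for illustration, not the TES schema):

```typescript
type TrailEvent = { type: string; sessionId: string; authorisedBy?: string };

const trail: TrailEvent[] = [
  { type: "agent_session.started", sessionId: "ses_abc" },
  { type: "agent_session.action_executed", sessionId: "ses_abc", authorisedBy: "holder_1" },
  { type: "agent_session.action_executed", sessionId: "ses_abc" }, // no authorisation recorded
];

// "What did the agent do, and was it authorised?" as one pass over the events.
function unauthorisedActions(sessionId: string): TrailEvent[] {
  return trail.filter(
    (e) =>
      e.sessionId === sessionId &&
      e.type === "agent_session.action_executed" &&
      e.authorisedBy === undefined
  );
}

console.log(unauthorisedActions("ses_abc").length); // 1
```

With mutable state, that same question means stitching together application logs, database snapshots, and hope.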
The bet
Event sourcing is not a new pattern — it's been used in financial systems, gaming, and distributed systems for decades. What's new is applying it to AI agent infrastructure, where the combination of immutability, enrichment, and search creates something greater than the sum of its parts.
The agents are coming. The regulations are coming. The infrastructure to govern both needs to be in place before either arrives in full force. Event sourcing is that infrastructure.
Pentatonic Engineering
London, UK