Thing Event System by Pentatonic
Blog / Kafka to Event Stores
Architecture · March 28, 2026 · 8 min read

From Kafka to event stores: when to make the switch

Kafka is brilliant at what it does — high-throughput event streaming between services. But when you need to query event history, track entity lifecycles, or generate compliance reports, you're using a transport layer as a database. That's when you need an event store.

Different tools, different jobs

Kafka is an event transport layer. Events flow through it from producers to consumers. It can retain events, but it's not designed to be queried — you read by offset, not by entity or time range.

An event store is a system of record. Events live in it permanently. You query them by entity, by type, by time range, by semantic similarity. Current state is derived from events. The history is the truth.
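"Current state is derived from events" is the core idea: state is a fold over an entity's immutable history. A minimal sketch in Python, with hypothetical event shapes (not the TES schema):

```python
# Illustrative event-sourcing projection: replay an order's event
# history to derive its current state. Event names and fields here
# are invented for the example.
def project_order(events):
    state = {}
    for event in events:
        if event["type"] == "OrderCreated":
            state = {"id": event["order_id"], "status": "created",
                     "items": list(event["items"])}
        elif event["type"] == "ItemAdded":
            state["items"].append(event["item"])
        elif event["type"] == "OrderShipped":
            state["status"] = "shipped"
    return state

history = [
    {"type": "OrderCreated", "order_id": 42, "items": ["book"]},
    {"type": "ItemAdded", "item": "pen"},
    {"type": "OrderShipped"},
]
print(project_order(history))
# → {'id': 42, 'status': 'shipped', 'items': ['book', 'pen']}
```

Because the events are the truth, a projection can be rebuilt from scratch at any time, or a new one added retroactively over the full history.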

These aren't competing tools — they're complementary. Kafka moves events between services. An event store holds the permanent record. Many architectures use both.

Side-by-side comparison

| Dimension | Kafka | TES |
| --- | --- | --- |
| Primary role | Event transport (pub/sub) | Event storage (system of record) |
| Data retention | Configurable (default 7 days) | Immutable, forever |
| Query model | Consumer offsets, sequential read | GraphQL API — query by entity, type, time range |
| State management | Kafka Streams / external | Built-in projections derived from events |
| AI enrichment | Build your own pipeline | Automatic: vision, pricing, embeddings, taxonomy |
| Vector search | External (Pinecone, Weaviate) | Built-in, 1024-dim BGE-M3, cosine similarity |
| Audit trail | Not designed for this | Native — immutable events are the audit trail |
| Entity model | Schemaless topics | Things, Holders, Locations, Products, Shipments, Payments |
| Infrastructure | Cluster management (or Confluent Cloud) | Managed, edge-native, zero ops |
| Compliance exports | Build your own | Built-in — by jurisdiction, regulation, time period |
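The vector-search row refers to cosine similarity over embeddings. A toy illustration with 3-dimensional vectors (TES uses 1024-dim BGE-M3 embeddings; the principle is identical):

```python
import math

# Cosine similarity: dot product of two vectors divided by the
# product of their magnitudes. 1.0 means identical direction,
# 0.0 means orthogonal (unrelated).
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(round(cosine([1.0, 0.0, 1.0], [1.0, 0.0, 0.0]), 4))
# → 0.7071
```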

Signs you need an event store

You probably need to add an event store when:

  1. You're building audit queries against Kafka. If you're writing consumers that reconstruct entity history from topic offsets, you're building an event store — poorly.
  2. You need entity-level queries. "Show me everything that happened to order #42" requires scanning entire topics in Kafka. An event store indexes by entity ID.
  3. Retention is a problem. Kafka's retention is configurable but finite by default. Event stores retain everything — that's the point.
  4. Compliance requires event history. The EU AI Act requires transparency logs. Kafka wasn't designed for compliance reporting.
  5. You want AI enrichment. If every event should trigger embedding generation, classification, or pricing — that's a consumer pipeline on Kafka, but built-in on TES.
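Point 2 is the difference between a sequential scan and an index. A minimal sketch, with hypothetical event shapes: a log-only system answers "everything about order #42" by reading every event, while an event store maintains a per-entity index.

```python
from collections import defaultdict

# Hypothetical event log spanning multiple entities.
log = [
    {"entity_id": "order-41", "type": "Created"},
    {"entity_id": "order-42", "type": "Created"},
    {"entity_id": "order-42", "type": "Shipped"},
]

# Sequential scan (Kafka-style): cost grows with the whole topic.
scan = [e for e in log if e["entity_id"] == "order-42"]

# Per-entity index (event-store-style): build once on write,
# then lookups touch only that entity's events.
index = defaultdict(list)
for e in log:
    index[e["entity_id"]].append(e)

assert scan == index["order-42"]
print(len(index["order-42"]))
# → 2
```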

Using both together

In the most common pattern, Kafka handles inter-service communication and TES holds the permanent record. Services produce events to Kafka. A consumer writes them to TES. The event store becomes the queryable, enriched, compliance-ready system of record.
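The bridge consumer has one subtlety worth showing: Kafka delivers at-least-once, so writes to the store must be idempotent. A sketch with the Kafka client and TES write call stubbed as plain callables (the real code would use a Kafka consumer and the TES SDK; the shapes here are assumptions):

```python
def bridge(messages, write_event, seen=None):
    """Forward Kafka messages into the event store, skipping
    duplicates so at-least-once delivery doesn't double-write."""
    seen = set() if seen is None else seen
    stored = 0
    for msg in messages:
        if msg["event_id"] in seen:
            continue  # already recorded; safe to drop the redelivery
        write_event(msg)  # append to the permanent record
        seen.add(msg["event_id"])
        stored += 1
    return stored

written = []
count = bridge(
    [{"event_id": "a"}, {"event_id": "b"}, {"event_id": "a"}],
    written.append,
)
print(count)
# → 2
```

In production the deduplication set would live in the store itself (e.g. a unique event ID constraint), not in consumer memory, so replays survive restarts.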

Alternatively, services can write directly to TES via the GraphQL API or SDK, and TES delivers events to downstream consumers via webhooks. This eliminates Kafka entirely for use cases where high-throughput pub/sub isn't needed.
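On the receiving end of the webhook path, a subscriber typically routes each delivered event by type. A minimal dispatch sketch; the payload shape and handler names are hypothetical, not the TES webhook contract:

```python
# Route an incoming webhook payload to a handler keyed by event type.
# Unknown types are acknowledged but ignored, so new event types can
# ship without breaking existing subscribers.
def dispatch(payload, handlers):
    handler = handlers.get(payload["type"])
    if handler is None:
        return "ignored"
    return handler(payload)

handlers = {"ThingCreated": lambda p: f"indexed {p['thing_id']}"}
print(dispatch({"type": "ThingCreated", "thing_id": "t-1"}, handlers))
# → indexed t-1
```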

What about EventStoreDB (Kurrent)?

EventStoreDB (rebranded as Kurrent) is the most established dedicated event store. It provides event sourcing primitives — streams, projections, subscriptions. TES adds AI enrichment (vision, pricing, embeddings), vector search, entity lifecycle tracking, and compliance exports on top of the event sourcing foundation.

If you need a pure event store for microservice CQRS, Kurrent is excellent. If you need an event store that also identifies items from photos, searches by semantic similarity, and exports EU AI Act compliance reports, TES covers both the foundation and those layers.

Pentatonic Engineering

London, UK


Events deserve a permanent home

Stop building audit trails on transport layers. TES stores, enriches, and makes your events queryable forever.