1.5M Events/sec on SQLite: What Happens When You Stop Fighting Shared State


I spent years building applications with Postgres. It works. But every time I modeled a domain with aggregates and events, I felt like I was fighting the database instead of working with it. Squeezing domain events into relational tables. Writing migration scripts when the domain evolved. Managing FK cascades when a user wanted their data deleted.

And then there was the scaling question. I never hit the limits of what a single Postgres instance could handle. But I kept looking at what would happen if I did. CockroachDB with Raft consensus adding 2ms to every write. ScyllaDB with ring management and nodetool cleanup runs. It scared me. Not because I needed it, but because I knew the complexity would eat me alive if the day came.

So I asked a different question: what if the entity that owns the data is the unit of everything?

The idea

In a traditional database, you have one big shared store and many writers competing for it. The entire field of database engineering (MVCC, row locks, optimistic concurrency, two-phase commit) exists to manage that competition. What if you eliminated the competition entirely?

Warp makes each entity (a user, an account, an order, a device) its own actor. An OTP process on the BEAM with its own mailbox. Each actor has its own partition in a SQLite shard file and its own append-only event log. One writer per entity. No locks, no MVCC, no conflicts. Not because of clever algorithms, but because the architecture makes them impossible.

Two requests hit the same entity at the same time? They queue in the actor mailbox. The second one waits microseconds, not milliseconds. Different entities run in parallel on different actors. A million users checking out their carts simultaneously never block each other.

```typescript
const db = new Warp({ host: "localhost", port: 9090 })
const alice = db.entity("user/alice")

// Two requests to the same entity? They queue.
// Different entities? Fully parallel.
await alice.append("Credited", { amount: 5000 }, { aggregate: "Account" })
await alice.append("Debited", { amount: 1000 }, { aggregate: "Account" })

const balance = await alice.get("Account")
// => { balance: 4000 }
```
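The queueing behavior can be sketched in plain TypeScript. This is an illustrative model, not Warp's actual implementation: give each entity key its own promise chain, so operations on the same key run strictly one at a time while different keys proceed in parallel.

```typescript
// Illustrative sketch of per-entity serialization (not Warp's code):
// each entity key owns a promise chain, so operations on the same key
// queue up while operations on different keys run concurrently.
class EntityQueue {
  private tails = new Map<string, Promise<unknown>>()

  run<T>(key: string, task: () => Promise<T>): Promise<T> {
    const tail = this.tails.get(key) ?? Promise.resolve()
    const next = tail.then(task, task) // start only after the previous op settles
    this.tails.set(key, next.catch(() => undefined)) // keep the chain alive on errors
    return next
  }
}
```

A BEAM actor mailbox gives you the same property with better ergonomics and fault isolation; the point is that per-key serialization replaces locking.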

Why SQLite?

Because it is the one database I fully understand. No server, no config, no mystery. I wanted to see what happens when you shard SQLite by entity and let the BEAM handle concurrency. Each shard file has its own writer actor that batches writes into transactions. Reads for a single entity come from actor memory (nanoseconds). Cross-entity queries go through rqlite projections.
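To make "its own partition in a SQLite shard file" concrete, here is one plausible entity-to-shard mapping. The hash scheme and file naming are my assumptions for illustration, not Warp's documented behavior: the invariant that matters is that a given entity ID always maps to the same file, and therefore to the same single writer actor.

```typescript
// Assumed entity-to-shard mapping (illustrative, not Warp's actual scheme):
// FNV-1a hash of the entity id, modulo the shard count, so every event for
// "user/alice" lands in the same SQLite file behind the same writer actor.
function shardFor(entityId: string, shardCount: number): string {
  let h = 2166136261 // FNV-1a 32-bit offset basis
  for (let i = 0; i < entityId.length; i++) {
    h ^= entityId.charCodeAt(i)
    h = Math.imul(h, 16777619) // FNV-1a 32-bit prime
  }
  return `shard_${(h >>> 0) % shardCount}.db`
}
```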

I wrote more about using SQLite in production in a separate post. Warp takes that idea further by sharding SQLite per entity and letting the BEAM manage concurrency across shards.

The results surprised me. macOS performs noticeably worse than Linux in Docker on the same machine, due to I/O scheduling and Rosetta overhead, so I benchmarked both:

| Setup | Warp | Competitor | Advantage |
| --- | --- | --- | --- |
| Mac native, single event, 200 writers | 63K ev/s | Cassandra: 30K, CockroachDB: 18K | 2-3.5x |
| Docker Linux (5 cores), single event | 139K ev/s | ScyllaDB: 10K | 13.9x |
| Docker Linux (5 cores), batch 500/call | 1,553K ev/s | ScyllaDB: 49K | 31.7x |

Same hardware (M1), same workload, same event schema. The batch path packs 500 events into a single NIF call, so it is 2 Erlang messages per 500 events vs 1,000 for individual writes. Docker uses the native C writer backend.

Events, not rows

Warp stores an append-only log of events per entity. You do not UPDATE a row. You append a fact about what happened. "Alice was credited 5000." "Alice was debited 1000." The aggregate function folds all events into current state. Balance = 0 + 5000 - 1000 = 4000.
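The fold itself is just a reduce over the log. Here is a simplified model of an Account aggregate; the event and state shapes are illustrative, not Warp's actual types:

```typescript
// Simplified model of an aggregate fold: replay the append-only event log
// into current state. Event/state shapes are assumptions for illustration.
type AccountEvent =
  | { type: "Credited"; amount: number }
  | { type: "Debited"; amount: number }

function foldAccount(events: AccountEvent[]): { balance: number } {
  return events.reduce(
    (state, ev) => ({
      balance: ev.type === "Credited" ? state.balance + ev.amount : state.balance - ev.amount,
    }),
    { balance: 0 }
  )
}

foldAccount([
  { type: "Credited", amount: 5000 },
  { type: "Debited", amount: 1000 },
])
// => { balance: 4000 }
```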

This gives you a complete audit trail for free. Every event has an ID, a sequence number, and a timestamp.

```typescript
const history = await alice.history(100)
// => [{ type: "Credited", amount: 5000, seq: 1 },
//     { type: "Debited", amount: 1000, seq: 2 }]
```

The GDPR moment

This is where entity-native storage really shines. When Alice closes her account and wants all her data deleted, here is what that looks like in Postgres: delete from 7 tables in FK dependency order, purge replicas, advance replication slots, rotate the WAL, log the deletion for audit. Dozens of lines of code across multiple database connections.

In Warp:

```typescript
// Data portability (GDPR Article 20)
const allEvents = await alice.export()

// Right to erasure (GDPR Article 17)
await alice.delete()
// Actor stopped. Events purged. Projections cleaned. Done.
```

Two lines. One entity, one call, gone. No FK cascade. No orphan check. No "did we miss a table?"

Cross-entity operations

Within one entity, everything is strongly consistent. But what about transferring money from Alice to Bob? You cannot lock two actors. That is where sagas come in.

```typescript
const result = await db.saga("transfer-001")
  .step("user/alice",
    { type: "Debited", payload: { amount: 500 } },
    { type: "Credited", payload: { amount: 500 } },
    { aggregate: "Account", schemaVsn: 1 }
  )
  .step("user/bob",
    { type: "Credited", payload: { amount: 500 } },
    { type: "Debited", payload: { amount: 500 } },
    { aggregate: "Account", schemaVsn: 1 }
  )
  .commit()

// result.resultType: "COMMITTED" | "COMPENSATED" | "STUCK"
```

Deterministic event IDs make retries idempotent. If the credit to Bob fails, the saga automatically reverses the debit from Alice.
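One way such deterministic IDs can be derived; this is an assumed scheme for illustration, not necessarily Warp's. Hash the saga ID, step index, and event type, so a retried step produces the identical event ID and the store can recognize and skip the duplicate.

```typescript
import { createHash } from "node:crypto"

// Assumed sketch of deterministic event-ID derivation (not Warp's actual
// scheme): hashing stable inputs means a retry of the same saga step yields
// the same ID, making the append idempotent from the store's point of view.
function eventId(sagaId: string, stepIndex: number, eventType: string): string {
  return createHash("sha256")
    .update(`${sagaId}:${stepIndex}:${eventType}`)
    .digest("hex")
    .slice(0, 16)
}
```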

What about queries?

Getting a single entity state is instant (from actor memory). For cross-entity queries, Warp uses projections that build query-friendly tables in rqlite from your events.

```typescript
// Cross-entity query via rqlite projections
const accounts = await db.query(
  "SELECT id, balance FROM accounts WHERE balance > ?",
  ["100000"]
)

// Real-time streaming
for await (const event of alice.subscribe("Account")) {
  console.log(event.eventType, event.payload)
}
```

The tradeoffs

Warp is not a general-purpose database. There are no ad-hoc JOINs. No window functions. Cross-entity reads are eventually consistent through projections. If you need OLAP, export to a data warehouse. If your primary access pattern is "create, read, update, and delete data belonging to a specific entity," Warp is faster, simpler, and gives you more mental clarity than any shared-writer database.

The honest position

I did not come from distributed systems. I came from domain-driven design. I think in entities, aggregates, and events. Warp is what happens when you stop translating that mental model into SQL and instead build storage that speaks it natively. The benchmarks surprised me. The simplicity is what I was after.

The source is at gitlab.com/dwighson/warp. The landing page, interactive tutorial, and TypeScript docs are live. It is v0.1, and I would love feedback.

Source: dev.to
