# 17 Days Later: Axiomify v5 Is Live
On April 21 I published a post about why I built Axiomify — a Node.js framework where your Zod schema is your validation, your TypeScript types, and your OpenAPI documentation. One definition. No drift.
That was 17 days ago. Here's what I've shipped since.
## What changed between v4 and v5
The first post described the architecture and the problem it solves. v5 is about making that architecture faster, more correct, and production-ready.
### v5 adds multi-core clustering — and it's built to actually work
Most Node.js clustering tutorials will send you in the wrong direction. The standard approach uses Node's default `SCHED_RR` mode, where the primary process accepts every TCP connection and forwards it to a worker via IPC. One thread in the hot path. Add more workers, add more IPC overhead — the primary is still the bottleneck.
Axiomify's `listenClustered()` skips that entirely. Each worker binds its own socket directly:
```js
cluster.schedulingPolicy = cluster.SCHED_NONE; // before any fork
server.listen({ port, reusePort: true });      // in each worker
```
`SCHED_NONE` stops Node from intercepting worker `listen` calls. `reusePort` lets each worker own its socket. The Linux kernel distributes connections without any user-space coordination — zero IPC in the request hot path.
Results on a real 8-core machine (load generator co-located on the same box; a dedicated loadgen machine would show higher numbers):
| Adapter | 1 worker | 2 workers | Gain |
|---|---|---|---|
| @axiomify/http | 35,800 req/s | 57,200 req/s | +60% |
| @axiomify/fastify | 21,300 req/s | 35,200 req/s | +65% |
Every adapter — Express, Fastify, Hapi, HTTP, and the new native uWS adapter — supports `listenClustered()` out of the box. Crash recovery with exponential backoff, graceful SIGTERM drain, and zero-downtime rolling restart via `kill -USR2 <primary-pid>` are included.
### The validator was doing double the work
Every validated request ran two full passes: AJV for structural validation, then an unconditional `schema.parse()` on top. The second pass is only needed for schemas with `.transform()`, `.default()`, or `.coerce` — but for a plain object schema with no transforms, it was pure waste on every single request.
v5 walks the schema once at startup to detect whether transforms exist. If not, the second pass is skipped entirely. The result is a 15–25% throughput improvement on validated routes — which is most routes in a typical API.
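As a sketch of the idea, the startup walk only has to answer one question per schema: does anything in the tree transform its input? The node shapes below are illustrative stand-ins, not Axiomify's real representation — the framework inspects Zod's internal definition tree.

```javascript
// Illustrative schema-node shapes, not Axiomify's actual code: the real
// startup check walks Zod's internal _def tree. The recursion is the point.
function needsSecondPass(node) {
  if (!node) return false;
  switch (node.type) {
    case 'transform': // .transform()
    case 'default':   // .default()
    case 'coerce':    // .coerce
      return true;
    case 'object':
      return Object.values(node.shape).some(needsSecondPass);
    case 'array':
      return needsSecondPass(node.element);
    case 'optional':
    case 'nullable':
      return needsSecondPass(node.inner);
    default:
      return false; // plain leaf: string, number, boolean, ...
  }
}
```

A plain object of leaf fields returns `false` and takes the fast path; wrap any nested field in a transform and the whole schema opts back into the second pass.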
## What's new
**@axiomify/native** — a uWebSockets.js adapter. The first post listed Express, Fastify, Hapi, and Node.js `http` as adapter options. Native uWS is now a fifth option for when you need maximum throughput from a single process:
| Route | Req/s | p99 |
|---|---|---|
| GET /users/:id/posts/:postId | 83,947 | 20ms |
| GET /ping | 73,511 | 26ms |
| POST /echo (JSON body) | 54,720 | 30ms |
Same API. Same Zod schemas. Same `useOpenAPI()` call. Swap `HttpAdapter` for `NativeAdapter` and only the performance changes; everything else stays identical.
**@axiomify/security** — XSS protection, HTTP Parameter Pollution normalisation, SQL injection heuristics, and prototype pollution blocking. Enabled with `useSecurity(app, options)`.
**@axiomify/fingerprint** — server-side request fingerprinting with confidence scoring. Bot detection and fraud signals without client-side JavaScript.
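For a flavour of what one of the security checks involves, here is a minimal, hypothetical sketch of HTTP Parameter Pollution normalisation. The function name and the keep-first policy are my assumptions for illustration, not @axiomify/security's actual API or behaviour:

```javascript
// Hypothetical sketch, not @axiomify/security's code. HPP: a request like
// ?id=1&id=2 yields an array where handlers expect a scalar, which can
// bypass naive checks downstream. Normalising collapses the duplicates.
function normalizeQuery(search) {
  const params = new URLSearchParams(search);
  const out = {};
  for (const key of new Set(params.keys())) {
    out[key] = params.get(key); // keep the first occurrence, drop the rest
  }
  return out;
}
```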
That takes the monorepo from 17 packages to 20.
## The things that don't show up in benchmarks
The first post focused on developer experience and architecture. What I spent the last 17 days on — alongside the new features — is the kind of work that makes a framework trustworthy rather than just interesting.
**Test coverage: ~70% → 91.6%.** 462 tests across 51 test files. Cross-adapter parity tests run the same test suite against all five adapters, guaranteeing identical behaviour regardless of which one you use.
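The parity idea is simple enough to sketch: run one spec against every adapter and demand identical observable results. The shape below is illustrative, not Axiomify's actual test harness:

```javascript
// Illustrative only: the real parity suite runs full test files against each
// adapter. The invariant is the same: one spec, byte-identical outcomes.
function checkParity(adapters, spec) {
  const results = adapters.map((adapter) => JSON.stringify(spec(adapter)));
  return results.every((r) => r === results[0]);
}
```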
**CodeQL analysis on every push.** Security vulnerabilities caught before they reach `main`.
**OpenSSF Scorecard compliance.** Branch protection requiring reviews before merge, least-privilege GitHub Actions token permissions, Dependabot for automatic dependency updates.
**Crash circuit breaker.** If 5 workers crash within 30 seconds, the primary aborts with a clear error message instead of spinning in a respawn loop, burning CPU on a broken config.
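That policy is easy to state as code. A minimal sketch assuming a sliding window — the names and exact bookkeeping are mine, not the framework's:

```javascript
// Hypothetical sketch of the crash circuit breaker described above:
// trip (and stop respawning) if maxCrashes crashes land within windowMs.
function makeCrashBreaker({ maxCrashes = 5, windowMs = 30_000 } = {}) {
  const crashes = [];
  return function recordCrash(now = Date.now()) {
    crashes.push(now);
    // Evict crash timestamps that have fallen out of the sliding window.
    while (crashes.length && now - crashes[0] > windowMs) crashes.shift();
    return crashes.length >= maxCrashes; // true => abort instead of respawn
  };
}
```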
**Zero-downtime rolling restart.** `kill -USR2 <primary-pid>` restarts workers one at a time. Each replacement comes fully up before the next worker is terminated.
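The ordering constraint — replacement fully up before the old worker dies — is the whole trick, and it reduces to a small loop. A hedged sketch, with `spawn` and `stop` as stand-ins for `cluster.fork()` plus its `listening` event and a graceful kill:

```javascript
// Illustrative sketch, not Axiomify's implementation. spawn() resolves once
// a replacement worker is accepting traffic; stop() drains and kills one.
async function rollingRestart(workers, spawn, stop) {
  const replacements = [];
  for (const oldWorker of workers) {
    const fresh = await spawn(); // new worker is fully up first...
    await stop(oldWorker);       // ...only then retire the old one
    replacements.push(fresh);
  }
  return replacements; // at no point were zero workers listening
}
```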
None of this is glamorous. All of it matters if you're running something real.
## Where this is going
The current dispatcher overhead versus a bare adapter is around 25% — hook iteration, compiled-state lookup, and pipeline execution on every request. That's acceptable for most workloads. The next major milestone closes that gap by generating a compiled handler function per route at startup, inlining the validation, params, and serialization directly.
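In spirit — and this is my sketch of the direction, not the milestone's actual code — per-route compilation replaces a generic dispatch pipeline with one closure specialised at startup:

```javascript
// Hypothetical sketch of per-route handler compilation. At registration time
// the route's schema and handler are both known, so the generic per-request
// work (hook iteration, state lookup) can be baked into a single closure.
function compileRoute({ validate, handler }) {
  if (!validate) {
    return (req) => handler(req); // no schema: validation vanishes entirely
  }
  return (req) => {
    const result = validate(req.body); // inlined, no pipeline walk
    if (!result.ok) return { status: 400, body: result.errors };
    return handler({ ...req, body: result.value });
  };
}
```

Routes without schemas pay nothing at all; routes with schemas pay exactly one validation call and no dispatch machinery around it.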
This is only possible because of the schema-first architecture described in the first post — the framework has a complete Intermediate Representation of every route at registration time. It's the natural endgame of the "one schema, everything derived" idea.
Axiomify is on npm. The clustering works. The validation is faster. The test suite is thorough.
```sh
npm install @axiomify/core @axiomify/openapi
```
GitHub: github.com/OTopman/axiomify
What would make you actually switch frameworks? Curious what the real friction is — drop it in the comments.