Add HTTP/3 to Your Node.js App


Let's start with a question: when was the last time you thought about what protocol your Node.js server actually speaks?

If you're like most developers, the answer is probably "never." You set up Express, called app.listen(), and moved on to building features. And that's totally fine — that's how it should work.

But here's the thing: the internet has moved on, and your server probably hasn't.

Every major browser — Chrome, Firefox, Safari, Edge — already speaks HTTP/3. Google has been serving search results over it since 2020. Cloudflare routes over 30% of its traffic through it. Meta, YouTube, Instagram — all HTTP/3. The biggest companies in the world decided that HTTP/2 over TCP wasn't fast enough, and they built something better.

Meanwhile, most Node.js apps are still serving responses over HTTP/1.1. Some have upgraded to HTTP/2. But almost none speak HTTP/3 — because until now, there was no simple way to do it in Node.js.

That's what this article is about. We'll look at why HTTP/3 exists, what real problems it solves (spoiler: if your users are on WiFi or mobile, this matters a lot), and how to add it to your existing Node.js app without rewriting a single route.


The Problem Nobody Talks About: TCP Is Holding You Back

TCP has been the backbone of the internet for over 40 years. It's reliable, it's battle-tested, and it's everywhere. So what's wrong with it?

Nothing — if you're on a perfect network. But let's be honest: nobody is on a perfect network.

Head-of-Line Blocking: The Silent Performance Killer

Imagine you're loading a web page. Your browser opens an HTTP/2 connection and starts downloading 15 resources simultaneously — CSS, JavaScript, images, fonts — all multiplexed over a single TCP connection. Sounds efficient, right?

Now imagine one single packet gets lost. Maybe the user is on WiFi and walked a bit too far from the router. Maybe they're on a train and the signal hiccupped for a millisecond.

Here's what happens: TCP freezes everything.

Not just the resource that lost a packet — all 15 resources stop. TCP guarantees in-order delivery, so it holds every subsequent packet hostage until that one lost packet is retransmitted and arrives. Your CSS file is ready, your JavaScript is ready, six images are ready — but nothing can be delivered to the browser until that one missing packet shows up.

This is called head-of-line blocking, and it's the dirty secret of HTTP/2. The multiplexing that was supposed to make everything faster actually made this problem worse — because now all your streams share the same TCP bottleneck.

On a clean fiber connection, you'll barely notice. But on WiFi — especially busy coffee shop WiFi, conference WiFi, home WiFi with 12 devices — packet loss happens constantly. On mobile networks — 4G/5G with signal fluctuations — it happens even more.

The result? Your page loads feel janky. Your API calls sometimes take 200ms and sometimes take 2 seconds, and you can't figure out why. Your WebSocket connections drop and reconnect. It's not your code — it's TCP.

The Handshake Tax

Every time a user opens a new connection to your server over HTTPS, they pay a "handshake tax":

  1. TCP handshake — 1 round-trip (SYN → SYN-ACK → ACK)
  2. TLS handshake — 1 to 2 more round-trips (ClientHello → ServerHello → keys)

That's 2–3 round-trips before a single byte of real data flows.

On a fast connection with 20ms latency, that's 40–60ms. Barely noticeable. But on a mobile connection with 150ms latency? That's 300–450ms of pure waiting. Almost half a second, just for the privilege of starting to talk to your server.

And it gets worse. Users don't just connect once — they open new connections to different subdomains, to your CDN, to your API. Each one pays the same tax.
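The arithmetic above is easy to make concrete. Here's a toy calculator — illustrative only, since real handshake cost also depends on TLS version and session resumption — using the round-trip counts from the list above:

```javascript
// Round-trips needed before the first byte of application data,
// per the handshake sequences described above.
const SETUP_ROUND_TRIPS = {
  'tcp+tls': 2,    // 1 RTT for TCP + 1 RTT for TLS 1.3 (2 for older TLS)
  'quic': 1,       // combined transport + TLS handshake
  'quic-0rtt': 0   // resumed connection: data rides in the first packet
};

// Connection-setup delay in milliseconds for a given round-trip time.
function setupDelayMs(transport, rttMs) {
  return SETUP_ROUND_TRIPS[transport] * rttMs;
}

console.log(setupDelayMs('tcp+tls', 150));   // 300 — the mobile "handshake tax"
console.log(setupDelayMs('quic', 150));      // 150
console.log(setupDelayMs('quic-0rtt', 150)); // 0
```

The QUIC rows are where the next section is headed.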

The WiFi-to-Cellular Problem

Here's one that every mobile user has experienced: you're loading a page on WiFi, then you walk out of range. Your phone switches to cellular. What happens?

Every single TCP connection dies.

TCP connections are identified by four things: source IP, source port, destination IP, destination port. When your phone switches networks, your IP changes — and TCP has no mechanism to migrate connections. They all drop, the browser reopens them, pays the handshake tax again, and resends the requests.

If your user was in the middle of uploading a file or streaming data, it's gone. Start over.


Enter QUIC: TCP's Replacement

QUIC isn't an incremental improvement over TCP. It's a complete replacement — a new transport protocol built from scratch by Google, then standardized by the IETF (RFC 9000). And it was specifically designed to solve every problem we just described.

Independent Streams: No More Head-of-Line Blocking

QUIC runs over UDP and implements its own reliability layer. But unlike TCP, QUIC knows about individual streams. If a packet belonging to stream #3 gets lost, only stream #3 waits for the retransmission. Streams #1, #2, #4–15 keep flowing without interruption.

Think about what this means for your users on WiFi. That single lost packet that used to freeze all 15 resources? Now it only affects the one resource it belongs to. The rest of the page continues loading normally.

This is the single biggest performance improvement HTTP/3 brings, and it's most dramatic exactly where your users need it most — on unreliable connections.
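A toy model makes the difference visible — this is not real TCP or QUIC code, just the two delivery rules side by side. One packet is lost in transit; TCP-style delivery stops at the gap, QUIC-style delivery only stalls the stream the gap belongs to:

```javascript
// Five packets were sent; packet #2 (the CSS stream) was lost in transit.
const sent = [
  { seq: 1, stream: 'js' },
  { seq: 2, stream: 'css' }, // lost
  { seq: 3, stream: 'js' },
  { seq: 4, stream: 'img' },
  { seq: 5, stream: 'img' }
];
const received = sent.filter(p => p.seq !== 2);

// TCP: one shared sequence space — delivery halts at the first gap.
function tcpDeliver(packets) {
  const out = [];
  let expect = 1;
  for (const p of [...packets].sort((a, b) => a.seq - b.seq)) {
    if (p.seq !== expect) break; // gap: everything behind it waits
    out.push(p);
    expect++;
  }
  return out;
}

// QUIC: a lost packet only stalls the stream it belongs to.
function quicDeliver(packets, stalledStreams) {
  return packets.filter(p => !stalledStreams.has(p.stream));
}

console.log(tcpDeliver(received).length);                    // 1 — only seq 1 gets through
console.log(quicDeliver(received, new Set(['css'])).length); // 4 — js and img flow freely
```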

Faster Handshakes: 1 Round-Trip (or Zero)

QUIC merges the transport and TLS handshake into a single operation. First connection to a server? One round-trip and you're sending data. That 300–450ms tax on mobile drops to 150ms.

Reconnecting to a server you've visited before? Zero round-trips. QUIC supports 0-RTT resumption — the client can start sending data in the very first packet. For APIs and real-time apps, this is transformative.

Connection Migration: Seamless Network Switches

QUIC connections aren't identified by IP addresses. Instead, each connection has a Connection ID — a random token that both sides remember. When your user walks out of WiFi range and their phone switches to cellular, the Connection ID stays the same. The QUIC connection survives.

No dropped connections. No re-handshakes. No lost uploads. The user doesn't even notice the switch happened.
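To make the identity difference concrete, here's a toy lookup sketch (not real protocol internals): a TCP-style table keyed by the 4-tuple loses the connection when the client's IP changes, while a Connection-ID-keyed table still finds it.

```javascript
// Toy connection tables — illustrative only.
const tcpKey = (c) => `${c.srcIp}:${c.srcPort}->${c.dstIp}:${c.dstPort}`;

const tcpTable = new Map();
const quicTable = new Map(); // keyed by Connection ID

const conn = {
  srcIp: '192.168.1.7', srcPort: 51234,   // on WiFi
  dstIp: '203.0.113.5', dstPort: 443,
  connectionId: 'c0ffee42'                // random token agreed at handshake
};

tcpTable.set(tcpKey(conn), conn);
quicTable.set(conn.connectionId, conn);

// The phone leaves WiFi range; the carrier assigns a new source address.
const afterSwitch = { ...conn, srcIp: '10.40.0.9', srcPort: 40112 };

console.log(tcpTable.has(tcpKey(afterSwitch)));       // false — TCP lookup fails, connection dies
console.log(quicTable.has(afterSwitch.connectionId)); // true  — QUIC connection survives
```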

Built-in Encryption

Every QUIC connection is encrypted by default using TLS 1.3 — the latest and most secure version. There's no option to run unencrypted QUIC. Security isn't an afterthought or an optional layer — it's fundamental to the protocol.


So Why Isn't Everyone Using HTTP/3 in Node.js?

Good question. The browsers are ready. The protocol is standardized. Major servers support it. So what's the holdup?

The problem is that Node.js doesn't have a built-in QUIC implementation.

Unlike HTTP/1.1 and HTTP/2, which have node:http and node:http2, there is no node:http3. It has been discussed for years, and there have been experimental branches, but nothing has shipped. The reasons are real: QUIC requires deep integration with TLS internals that Node's existing APIs don't expose, and the QUIC state machine is genuinely complex — packet scheduling, congestion control, key rotation, connection migration.

This left Node.js developers with two options: either proxy through a server like nginx that supports HTTP/3 (adding infrastructure complexity), or use native C++ bindings (adding build complexity and platform-specific pain).

Until now.


QUICO: HTTP/3 for Node.js, Pure JavaScript

QUICO is an open-source library that brings full QUIC and HTTP/3 support to Node.js. And it does it with one key design decision: everything is pure JavaScript. No C++ bindings, no OpenSSL dependency, no node-gyp, no platform-specific builds.

npm install quico

That's all it takes. It works on Linux, macOS, Windows, ARM, Docker, edge devices — anywhere Node.js runs.

But the really nice part is the API. QUICO was designed to be a drop-in replacement for node:https. If you already know how to use https.createServer(), you already know how to use QUICO.

Your First HTTP/3 Server

import quico from 'quico';
import fs from 'node:fs';

const server = quico.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
}, (req, res) => {
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end('Hello from HTTP/3!');
});

server.listen(4433, () => {
  console.log('HTTP/3 server on https://localhost:4433');
});

Look familiar? It should — it's the same pattern as https.createServer(). Same req, same res, same writeHead(), same end(). The only difference is what's happening underneath: instead of TCP + TLS, you're now running QUIC + HTTP/3.

It Works with Express (and Fastify, and Koa)

Here's where it gets interesting. Because QUICO mirrors the node:https API, your existing frameworks just work:

import express from 'express';
import quico from 'quico';
import fs from 'node:fs';

const app = express();

app.get('/', (req, res) => {
  res.json({
    message: 'Hello!',
    protocol: req.httpVersion  // "3.0" ← this is new
  });
});

app.get('/users', (req, res) => {
  // Your existing routes — nothing changes
  res.json(users);
});

quico.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
}, app).listen(4433);

All your routes, all your middleware, all your error handlers — everything stays exactly the same. You're just replacing https.createServer with quico.createServer. That's the whole migration.

And your users get: faster connections, no head-of-line blocking, resilient connections on flaky WiFi, and seamless network switches on mobile. Without changing a single line of your application logic.

Making HTTP/3 Requests from Node.js

QUICO also works as a client. Want to make requests to HTTP/3-enabled servers?

import quico from 'quico';

quico.request('https://www.google.com/', (res) => {
  console.log('Status:', res.statusCode);    // 200
  console.log('Protocol:', res.httpVersion); // "3.0"

  res.on('data', (chunk) => process.stdout.write(chunk));
});

The client is smart about protocol negotiation. It uses Alt-Svc headers to discover HTTP/3 support, and if the server doesn't support it, it falls back gracefully to HTTP/2 and then HTTP/1.1. No code changes, no error handling — it just works.
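The Alt-Svc mechanism (RFC 7838) works because servers that support HTTP/3 advertise it on their HTTP/1.1 and HTTP/2 responses, e.g. `alt-svc: h3=":443"; ma=86400`. Here's a minimal sketch of what a client does with that header — QUICO's actual parser may differ:

```javascript
// Parse an Alt-Svc header value like: h3=":443"; ma=86400
// Returns the advertised HTTP/3 port, or null if final "h3" isn't offered.
function h3PortFromAltSvc(headerValue) {
  for (const entry of headerValue.split(',')) {
    // Each entry is protocol="[host]:port" followed by optional parameters.
    const match = entry.trim().match(/^h3="(?:[^"]*?):(\d+)"/);
    if (match) return Number(match[1]);
  }
  return null;
}

console.log(h3PortFromAltSvc('h3=":443"; ma=86400')); // 443 — try QUIC on port 443
console.log(h3PortFromAltSvc('h2=":8443"'));          // null — no HTTP/3, fall back
```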

Multi-Domain Hosting with SNI

Running multiple domains on one server? QUICO supports SNICallback, just like node:https:

import quico from 'quico';
import fs from 'node:fs';
import tls from 'lemon-tls';

const server = quico.createServer({
  SNICallback(servername, cb) {
    cb(null, tls.createSecureContext({
      key: fs.readFileSync(`certs/${servername}.key`),
      cert: fs.readFileSync(`certs/${servername}.crt`)
    }));
  }
}, handler);

server.listen(4433);

Same pattern, same mental model. One server, many domains, now over HTTP/3.
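One practical note: the callback above re-reads the certificate files on every handshake, and SNICallback is hot-path code. A small cache fixes that — here's a sketch, where `factory` is a stand-in for however you build a secure context for a given name:

```javascript
// Wrap a context factory so certificate files are read once per servername,
// not once per handshake.
function cachedSniCallback(factory) {
  const contexts = new Map();
  return function SNICallback(servername, cb) {
    if (!contexts.has(servername)) {
      contexts.set(servername, factory(servername));
    }
    cb(null, contexts.get(servername));
  };
}
```

You'd pass `cachedSniCallback(name => tls.createSecureContext({ key, cert }))` as the SNICallback option, reading `key` and `cert` inside the factory.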


WebTransport: The Real Upgrade from WebSocket

If you've ever built a real-time application — chat, gaming, live dashboards, collaborative editing — you've probably used WebSocket. And you've probably struggled with it.

WebSocket has fundamental limitations that no amount of library code can fix:

It runs on TCP, which means it suffers from the same head-of-line blocking we talked about. If one message stalls, every message behind it stalls too. For a chat app, maybe that's acceptable. For a game where you need 60 updates per second, it's painful.

Everything is reliable and ordered, even when you don't want it to be. In a game, you don't care about the player's position from 200ms ago — you want the latest position. But WebSocket will dutifully deliver every single update in order, even the stale ones.

No multiplexing. Want to send different types of data — chat messages on one channel, game state on another, voice data on a third? With WebSocket, it's all one stream. You end up building your own multiplexing layer on top.

WebTransport solves all of this. It's built on top of QUIC and gives you:

  • Bidirectional streams — like WebSocket, but multiplexed. Open as many independent channels as you need.
  • Unidirectional streams — when you only need to send data one way.
  • Unreliable datagrams — fire-and-forget messages with no head-of-line blocking. Perfect for game state, sensor data, or anything where the latest value is all that matters.

And because it runs over QUIC, you get all the benefits: no head-of-line blocking between streams, encrypted by default, and resilient to network changes.

Here's what a WebTransport server looks like with QUICO:

import quico from 'quico';

quico.createServer({ key: KEY, cert: CERT }, (req, res) => {
  if (req.headers[':protocol'] === 'webtransport') {
    // Accept the WebTransport session
    res.writeHead(200);

    // Handle bidirectional streams from the client
    req.on('stream', (stream) => {
      stream.on('data', (chunk) => {
        console.log('Received:', chunk.toString());
        stream.write('Acknowledged');
      });
      stream.on('end', () => stream.end());
    });

    // Handle unreliable datagrams
    req.on('datagram', (data) => {
      // Process real-time data (game state, sensor data, etc.)
      res.sendDatagram(data); // echo back
    });

    // The server can also open streams toward the client
    const push = res.createBidirectionalStream();
    push.write('Welcome! Server-initiated stream.');
    push.end();
  } else {
    // Regular HTTP/3 request
    res.writeHead(200, { 'content-type': 'text/plain' });
    res.end('Hello HTTP/3!');
  }
}).listen(4433);

And on the client side — in the browser — you use the standard WebTransport API:

const wt = new WebTransport('https://yourserver.com:4433/live');
await wt.ready;

// Reliable bidirectional stream (like WebSocket, but multiplexed)
const stream = await wt.createBidirectionalStream();
const writer = stream.writable.getWriter();
await writer.write(new TextEncoder().encode('Hello'));

// Unreliable datagrams (like UDP — latest value wins)
const dgramWriter = wt.datagrams.writable.getWriter();
await dgramWriter.write(new TextEncoder().encode(JSON.stringify({
  x: player.x,
  y: player.y,
  timestamp: Date.now()
})));

Notice that the browser-side code uses the standard WebTransport API — no special client library needed. QUICO on the server, native browser API on the client. That's it.

For testing without a browser, QUICO also includes a Node.js WebTransport client with the same API, so you can write integration tests entirely in Node.


What's Under the Hood

You might be wondering: how does a pure JavaScript library implement a protocol as complex as QUIC?

QUICO implements the full stack from the ground up:

  • UDP layer — uses Node's built-in dgram module for raw UDP sockets
  • QUIC transport (RFC 9000) — packet parsing, frame encoding, connection IDs, ACK processing, flow control, and key rotation
  • TLS 1.3 — handled by LemonTLS, a pure JavaScript TLS implementation that provides the cryptographic layer QUIC requires (AES-128-GCM, AES-256-GCM, X25519, P-256, P-384)
  • HTTP/3 (RFC 9114) — request/response mapping onto QUIC streams
  • QPACK (RFC 9204) — header compression with static and dynamic tables
  • WebTransport (RFC 9297) — bidirectional streams, unidirectional streams, and datagrams

No OpenSSL. No native code. Every packet, every frame, every byte is JavaScript you can step through in a debugger.

The fact that this is all JavaScript means something important for you as a developer: you can actually understand what's happening. If something goes wrong, you can set a breakpoint and trace it. You can't do that with a C++ QUIC library.

Tested Against the Real World

This isn't a spec-only implementation. QUICO has been tested for interoperability against the QUIC implementations powering real production traffic:

  • Google — the creators of QUIC
  • Cloudflare — one of the largest HTTP/3 deployments in the world
  • Facebook / Meta
  • Microsoft
  • nginx — the QUIC-enabled branch

When you use QUICO to connect to these servers, it works. When these servers connect to your QUICO server, it works. That level of interoperability matters.


How to Test Locally

QUIC mandates TLS 1.3, and QUICO's TLS stack expects ECDSA certificates — regular self-signed RSA certificates won't work with it. The easiest way to set up local development is with mkcert:

# Install mkcert (one-time setup)
brew install mkcert   # macOS
# or: choco install mkcert   # Windows
# or: apt install mkcert   # Linux

# Create a local CA and generate certificates
mkcert -install
mkcert -ecdsa localhost 127.0.0.1 ::1

This creates localhost+2.pem and localhost+2-key.pem files that QUICO can use directly.

To test with curl:

curl --http3 https://localhost:4433 --insecure

To test with Chrome:

chrome --enable-quic --quic-version=h3 \
  --origin-to-force-quic-on=localhost:4433 \
  https://localhost:4433

What You Get Today

QUICO gives you a fully working HTTP/3 server and client with QPACK header compression, WebTransport (bidirectional streams, unidirectional streams, and datagrams), automatic H3 → H2 → H1 fallback, and drop-in compatibility with Express, Fastify, and Koa. TLS 1.3 is handled internally with support for multiple cipher suites and key exchange algorithms.

It's been tested for interoperability against Google, Cloudflare, Facebook, Microsoft, and nginx — the same servers handling real production traffic today.

The library is in active development and improving fast. Check the GitHub repo for the latest status and to follow along.


Getting Started

npm install quico

If you have an existing HTTPS server, the migration is one line:

- import https from 'node:https';
+ import quico from 'quico';
- https.createServer({ key, cert }, app).listen(443);
+ quico.createServer({ key, cert }, app).listen(4433);

Everything else stays the same.



Why This Matters

We're at an interesting moment in web infrastructure. HTTP/3 and QUIC aren't future technology — they're current technology that most servers haven't adopted yet. The browsers are ready. The protocols are standardized. The big players are already there.

The gap has been tooling. For Node.js developers specifically, there hasn't been a simple path to HTTP/3. QUICO fills that gap.

Your users are already on networks where QUIC makes a real difference — WiFi with interference, cellular with handoffs, VPNs with added latency. The improvements aren't theoretical: faster handshakes, no head-of-line blocking, connections that survive network switches.

And the best part? You don't have to rewrite anything to get there.

Source: dev.to
