Building a TCP Server in Rust With tokio: From Accept Loop to Graceful Shutdown


Handling concurrent network connections means managing each connection's lifecycle without blocking the event loop, a requirement for both high throughput and fault isolation.

What We're Building

We are constructing a production-grade TCP server using the Tokio runtime. The scope includes binding a socket to a local interface, establishing a non-blocking accept loop, spawning isolated tasks for each connection, and implementing a graceful shutdown mechanism that listens for SIGINT or SIGTERM. The resulting architecture ensures that a single failing client does not crash the entire server and allows the system to drain pending requests before exiting.

Step 1 — Initialize the Runtime and Listener

Before accepting connections, you must boot the asynchronous runtime and configure the TCP socket. You bind the listener to a specific address to restrict traffic to the intended interface.

use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    println!("Listening on port 8080...");
    loop {
        // Accept loop logic follows
    }
}

Using tokio::net::TcpListener ensures the accept operation does not block the current thread, allowing the runtime to handle I/O multiplexing efficiently.

Step 2 — The Non-Blocking Accept Loop

The core logic polls the listener for incoming connections: you loop indefinitely, pulling one stream at a time off the accept queue and handling transient accept errors (such as the process running out of file descriptors) gracefully.

// ... previous setup
while let Ok((stream, addr)) = listener.accept().await {
    println!("New connection from: {}", addr);
    // Spawn a handler task for this connection
    tokio::spawn(async move {
        // Handle `stream`...
        let _ = stream; // the socket moves into the task
    });
}

The while let pattern separates acceptance from handling, and awaiting accept yields control to the runtime instead of stalling the thread while waiting for a socket. Note that while let Ok(..) exits the loop on the first accept error; a production server would typically match on the Result and log transient errors instead of stopping.

Step 3 — Isolate Connection Tasks

Each incoming connection must be handled by its own task. Tokio tasks are cheap to spawn, and isolating each connection lets one fail, or even panic, without affecting the others.

// Inside the accept loop, `stream` comes from `listener.accept()`.
let handle = tokio::spawn(async move {
    // Process the stream's buffer here...
    // Dropping `stream` closes the socket and releases its file descriptor.
    drop(stream);
});

Spawning a task per connection ensures resource independence, which is critical for high-concurrency backends where a malicious or misbehaving client must not starve legitimate users.

Step 4 — Implement Graceful Shutdown

Stopping the server abruptly drops active sockets, potentially causing timeouts for clients. Graceful shutdown requires draining incoming requests and waiting for active handlers to finish.

use tokio::signal;
use tokio::sync::mpsc;

// Create a shutdown channel: the sender fires once a signal arrives
let (shutdown_tx, mut shutdown_rx) = mpsc::channel::<()>(1);

let server = tokio::spawn(async move {
    loop {
        tokio::select! {
            Ok((_stream, addr)) = listener.accept() => {
                println!("New connection from: {}", addr);
                // Spawn a handler task here
            },
            _ = shutdown_rx.recv() => {
                println!("Shutting down...");
                break; // stop accepting new connections
            }
        }
    }
});

// Wait for Ctrl-C (SIGINT), then signal the accept loop to stop
let _ = signal::ctrl_c().await;
let _ = shutdown_tx.send(()).await;
let _ = server.await; // wait for the accept loop to exit before returning

This architecture uses tokio::select! to race between accepting new connections and receiving the shutdown signal; once the signal wins, the loop breaks and the runtime can finish cleanup before the process terminates.

Key Takeaways

  • Async I/O: Blocking inside the accept loop kills throughput, so the non-blocking tokio::net::TcpListener is essential.
  • Task Isolation: Spawning one task per connection prevents cascading failures across connections.
  • Signal Handling: tokio::signal lets the application catch OS-level interrupts and trigger cleanup logic instead of exiting hard.
  • Resource Cleanup: Dropping a TcpStream closes the socket, so the operating system reclaims its file descriptor once the server stops accepting new connections.
  • Channel Synchronization: Signaling shutdown over a channel lets the main loop exit cleanly without relying on external processes.
  • Error Propagation: Handling Result values ensures that binding failures and accept errors surface instead of being silently ignored.

What's Next?

To harden this implementation, you should implement TLS encryption using tokio-rustls to secure data in transit. Adding metrics via Prometheus or OpenTelemetry allows you to monitor active connection counts and response times in real time. Next, consider migrating to a framework like Actix-web to integrate this handler into a full API surface with routing support. Finally, implement circuit breaker patterns to prevent cascading failures if upstream dependencies become unstable.




Source: dev.to
