This is part of my HNG DevOps internship series. Follow along as I document every stage.
Previous article: How I Secured a Linux Server from Scratch: HNG DevOps Stage 0
A Quick Recap
In Stage 0, I provisioned a Linux server on Oracle Cloud, hardened SSH access, configured UFW, set up Nginx, and secured everything with a Let's Encrypt SSL certificate. If you missed that, the link is above.
Stage 1 builds directly on that foundation. This time, the task was to write an actual API, deploy it on a server (I reused the same server from Stage 0), and configure Nginx to act as a reverse proxy in front of it.
The Task
Here is a summary of what needed to be done:
Build the API
Three endpoints, all returning Content-Type: application/json, HTTP status 200, and responding within 500ms:
- `GET /` returns:
  `{"message":"API is running"}`
- `GET /health` returns:
  `{"message":"healthy"}`
- `GET /me` returns:
  `{"name":"Your Full Name","email":"<your email>","github":"https://github.com/<yourusername>"}`
Deploy It
- Run the app on a non-public port
- Configure Nginx to reverse proxy public traffic to the app
- Keep the service running persistently using `systemd`, `pm2`, or equivalent
Document It
- Push the code to a public GitHub repository with a README covering how to run it locally, the endpoints, and the live URL
Why Rust?
The task suggested Node.js, Python, PHP, or Go as expected options. I went with Rust using the Axum framework instead, and here is why.
One of the evaluation criteria was that all endpoints must respond within 500ms. Rust is compiled to native machine code, has no garbage collector, and starts up in milliseconds. The memory footprint of the running service on the server was 1.2MB. For comparison, a Node.js Express app doing the same thing would typically sit around 40-60MB.
For a task where performance is explicitly measured, Rust made sense. The API logic itself is also simple enough that the extra strictness of the type system doesn't slow you down. It was a good fit.
Writing the API
I used Axum, which is the most popular async web framework in the Rust ecosystem. Here is the full `main.rs`:
```rust
use axum::{Json, Router, routing::get};
use serde::Serialize;
use std::net::SocketAddr;

#[derive(Serialize)]
struct MessageResponse {
    message: &'static str,
}

#[derive(Serialize)]
struct MeResponse {
    name: &'static str,
    email: &'static str,
    github: &'static str,
}

fn app() -> Router {
    Router::new()
        .route("/", get(root))
        .route("/health", get(health))
        .route("/me", get(me))
}

async fn root() -> Json<MessageResponse> {
    Json(MessageResponse {
        message: "API is running",
    })
}

async fn health() -> Json<MessageResponse> {
    Json(MessageResponse { message: "healthy" })
}

async fn me() -> Json<MeResponse> {
    Json(MeResponse {
        name: "Gideon Bature",
        email: "infoaboutgideon@gmail.com",
        github: "https://github.com/GideonBature",
    })
}

#[tokio::main]
async fn main() {
    let port = std::env::var("PORT")
        .ok()
        .and_then(|value| value.parse::<u16>().ok())
        .unwrap_or(3000);

    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    let listener = tokio::net::TcpListener::bind(addr)
        .await
        .expect("failed to bind TCP listener");

    axum::serve(listener, app())
        .await
        .expect("server exited unexpectedly");
}
```
A few things worth noting here. The app binds to `127.0.0.1` (localhost) on port 3000, not `0.0.0.0`, which means it is only reachable from within the server itself. The outside world cannot hit port 3000 directly; Nginx will be the one receiving public traffic and forwarding it internally. This is intentional and exactly what the task requires.

The `PORT` environment variable is read at startup, with 3000 as a fallback, which keeps the binary flexible across different environments.
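The same parse-or-default chain is easy to mirror in plain shell when writing wrapper or healthcheck scripts around the binary. This is only an illustrative sketch: the `resolve_port` helper is my own name, and it skips the u16 range check that Rust's `parse::<u16>()` enforces for free.

```shell
#!/bin/sh
# Mirror of the Rust fallback: use the argument if it is a non-empty
# number, otherwise default to 3000.
resolve_port() {
  case "$1" in
    ''|*[!0-9]*) echo 3000 ;;  # empty or non-numeric -> default
    *)           echo "$1" ;;  # valid numeric override
  esac
}

resolve_port ""     # -> 3000 (unset)
resolve_port 8080   # -> 8080 (valid override)
resolve_port abc    # -> 3000 (garbage falls back to default)
```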
And the `Cargo.toml`:
```toml
[package]
name = "hng-devops-api-deployment"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
```
The Cross-Compilation Challenge
My server runs on Oracle Cloud's free tier Ampere A1 chip, which is aarch64 (ARM 64-bit). My local machine is a Mac with Apple Silicon, which is also ARM but a different target entirely. I needed to produce a Linux ARM binary from macOS.
The reason I chose to build locally and copy the binary to the server rather than build directly on the server is practical. Oracle's free tier has limited CPU and RAM. Rust compilation is memory-hungry and would have been painfully slow, possibly even crashing on a 1GB RAM instance.
First, confirm your server's architecture:
```bash
ssh <user>@<server-ip> "uname -m"
# aarch64
```
Add the cross-compilation target locally:
```bash
rustup target add aarch64-unknown-linux-gnu
```
I first tried using cross, a popular Rust cross-compilation tool:
```bash
cargo install cross
cross build --release --target aarch64-unknown-linux-gnu
```
This failed with:
```
error: toolchain 'stable-x86_64-unknown-linux-gnu' may not be able to run on this system
```
The issue is that cross on Apple Silicon tries to install an x86_64 Linux toolchain, which doesn't make sense on an ARM Mac. This is a known compatibility problem with ARM Macs.
The fix was to bypass cross entirely and use Docker directly. This approach pulls a pre-built cross-compilation image and runs the build inside a container, with the project folder mounted in:
```bash
docker run --rm \
  -v "$(pwd)":/project \
  -v "$HOME/.cargo/registry":/usr/local/cargo/registry \
  -w /project \
  rust:latest \
  bash -c "apt-get update && apt-get install -y gcc-aarch64-linux-gnu && rustup target add aarch64-unknown-linux-gnu && CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc cargo build --release --target aarch64-unknown-linux-gnu"
```
The `$HOME/.cargo/registry` mount is important: it caches your downloaded crates so subsequent builds don't re-download everything from scratch.
After a few minutes, the binary was ready at:
```
target/aarch64-unknown-linux-gnu/release/hng-devops-api-deployment
```
Copy it to the server:
```bash
scp target/aarch64-unknown-linux-gnu/release/hng-devops-api-deployment <user>@<server-ip>:~/
```
Verify the architecture is correct:
```bash
ssh <user>@<server-ip> "file ~/hng-devops-api-deployment"
# hng-devops-api-deployment: ELF 64-bit LSB pie executable, ARM aarch64
```
Quick smoke test on the server:
```bash
ssh <user>@<server-ip>
PORT=3000 ./hng-devops-api-deployment &
curl http://localhost:3000/
curl http://localhost:3000/health
curl http://localhost:3000/me
```
All three returned the expected JSON. Now on to making it run permanently.
Setting Up the systemd Service
Running the binary manually works, but it stops the moment you close the terminal or the server reboots. systemd is the Linux service manager that keeps processes alive automatically. Think of it as telling the operating system: "this process should always be running."
First, move the binary to a proper system location:
```bash
sudo mv ~/hng-devops-api-deployment /usr/local/bin/
sudo chmod +x /usr/local/bin/hng-devops-api-deployment
```
Create the service file:
```bash
sudo nano /etc/systemd/system/hng-api.service
```
```ini
[Unit]
Description=HNG DevOps API
After=network.target

[Service]
Type=simple
User=ubuntu
Environment=PORT=3000
ExecStart=/usr/local/bin/hng-devops-api-deployment
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
A few things to understand in this file:
`After=network.target` tells systemd to start this service only after the network is available. Since this is an API that binds to a network port, that ordering matters.

`Restart=always` means that if the process ever crashes, for any reason, systemd will automatically restart it after `RestartSec=5` seconds.

`WantedBy=multi-user.target` means the service starts automatically on every normal system boot.
Now enable and start it:
```bash
sudo systemctl daemon-reload
sudo systemctl enable hng-api
sudo systemctl start hng-api
sudo systemctl status hng-api
```
The output should show `Active: active (running)`. Notice the memory usage:

```
Memory: 1.2M
```
That is 1.2 megabytes. The entire running API server. That is Rust.
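If you want to verify that figure yourself, systemd exposes it as a raw byte count via `systemctl show hng-api -p MemoryCurrent`. Converting that line to the megabyte figure `systemctl status` displays is a one-liner; the sample byte count below is made up for illustration:

```shell
# systemd reports memory in raw bytes, e.g. "MemoryCurrent=1258291".
# Converting such a line to megabytes (1M = 1024 * 1024 bytes):
echo 'MemoryCurrent=1258291' | awk -F= '{ printf "%.1fM\n", $2 / (1024 * 1024) }'
# -> 1.2M
```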
Configuring Nginx as a Reverse Proxy
With the API running on port 3000 internally, the last step is to tell Nginx to forward public requests to it. This is called a reverse proxy and it is how virtually every production web application is deployed. Nginx becomes the single public entry point, handling SSL, routing, and security, while the app just focuses on responding to requests.
Open the Nginx config from Stage 0:
```bash
sudo nano /etc/nginx/sites-available/hng
```
Replace the entire content with this updated version that adds the three new proxy locations:
```nginx
server {
    listen 443 ssl;
    server_name <your-domain>;

    ssl_certificate /etc/letsencrypt/live/<your-domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your-domain>/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Stage 1 - proxy root to Rust API
    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Stage 1 - proxy /health to Rust API
    location /health {
        proxy_pass http://127.0.0.1:3000/health;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Stage 1 - proxy /me to Rust API
    location /me {
        proxy_pass http://127.0.0.1:3000/me;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Stage 0 - static JSON (kept from previous stage)
    location = /api {
        add_header Content-Type application/json;
        return 200 '{"message":"HNGI14 Stage 0","track":"DevOps","username":"Gideon Bature"}';
    }
}

server {
    listen 80;
    server_name <your-domain>;

    if ($host = <your-domain>) {
        return 301 https://$host$request_uri;
    }
    return 404;
}
```
The `proxy_set_header` lines are worth understanding. `Host $host` passes the original domain name to the app. `X-Real-IP $remote_addr` passes the real client IP address. Without these, your app would see all requests as coming from `127.0.0.1`, which makes logging and debugging harder.
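If the app ever needs to know whether the original request arrived over HTTPS, or it ends up behind more than one proxy, two more standard headers are commonly forwarded as well. A hedged fragment showing the pattern for the root location (the same lines would go in the other locations too; this is optional, not part of the task):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Optional, widely used extras:
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # full client/proxy chain
    proxy_set_header X-Forwarded-Proto $scheme;                   # http vs https
}
```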
Test and reload:
```bash
sudo nginx -t
sudo systemctl reload nginx
```
Final Verification
```bash
curl https://<your-domain>/
curl https://<your-domain>/health
curl https://<your-domain>/me
```
All three should return clean JSON with no HTML, no errors, and no delay.
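The 500ms requirement can also be checked mechanically rather than by eye: `curl -w '%{time_total}'` reports the total request time in seconds, and comparing it against the budget is a one-liner. The sketch below demonstrates the check itself with hard-coded sample timings; the `within_budget` helper is my own name, not part of the task.

```shell
#!/bin/sh
# A real measurement would look like:
#   t=$(curl -s -o /dev/null -w '%{time_total}' https://<your-domain>/health)
# The budget comparison itself (500ms = 0.5s):
within_budget() {
  awk -v t="$1" 'BEGIN { if (t <= 0.5) print "ok"; else print "too slow" }'
}

within_budget 0.042   # -> ok
within_budget 0.750   # -> too slow
```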
The Big Picture
Stage 1 introduced a pattern that shows up in almost every real production deployment:
```
Internet -> Nginx (port 443) -> App (port 3000, internal only)
```
The app never talks to the internet directly. Nginx sits in front of it and controls everything that comes in and goes out. This separation means you can add rate limiting, authentication, caching, or swap out the app entirely, all without changing what the outside world sees.
The systemd service pattern is equally important. In production, you never want to manually restart a service after a crash or reboot. systemd handles that automatically, and if something does go wrong, `journalctl -u hng-api` gives you the full logs to debug with.
| What we did | Why it matters |
|---|---|
| Rust with Axum | Native performance, tiny memory footprint |
| Cross-compiled binary | No build toolchain needed on the server |
| Bound to 127.0.0.1 | App port never exposed to the internet |
| systemd service | Automatic restarts and boot persistence |
| Nginx reverse proxy | Single public entry point for all traffic |
Stage 2 is next. Follow along as I keep documenting the journey.