I built Talome over the last couple of weeks. It's an open-source home server platform (AGPL-3.0) where AI is the primary interface. You tell it what you want in plain English, and it does the work — installs apps, wires services together, monitors containers, creates new apps, and when I'm really annoyed at the UI, rewrites its own source code.
No YAML. No wiki tabs. No spreadsheet of port assignments.
The one-message media stack
If you've ever set up a media server, you know the drill. Install Jellyfin. Install Sonarr. Install Radarr. Install Prowlarr. Install qBittorrent. Now wire them together — Sonarr needs to know about qBittorrent, Radarr needs the same, Prowlarr needs to push indexers to both. Each service has its own web UI, its own settings page, its own API key format.
It's an afternoon of work, every single time.
In Talome, it's one message:
User: Set up a media server with Jellyfin, Sonarr, Radarr, Prowlarr, and qBittorrent. My media is at /mnt/media.
Behind the scenes, the assistant chains through 10+ tool calls — search_apps, install_app (x5), wire_apps, arr_add_root_folder, arr_add_download_client, arr_sync_indexers_from_prowlarr, check_service_health — and reports back with URLs. About 60 seconds later, everything is running, connected, and ready.
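That chain of tool calls can be sketched as a plan the assistant assembles and executes in order. The tool names below are the ones from the post; the argument shapes and the exact sequence are my illustration, not Talome's actual internals:

```typescript
// Illustrative sketch of the tool chain behind the one-message setup.
// Tool names come from the post; argument shapes here are assumptions.
type ToolCall = { tool: string; args: Record<string, unknown> };

const mediaRoot = "/mnt/media";
const apps = ["jellyfin", "sonarr", "radarr", "prowlarr", "qbittorrent"];

const plan: ToolCall[] = [
  { tool: "search_apps", args: { query: apps.join(" ") } },
  // install_app runs once per service
  ...apps.map((app) => ({ tool: "install_app", args: { app } })),
  { tool: "wire_apps", args: { apps: ["sonarr", "radarr", "prowlarr", "qbittorrent"] } },
  { tool: "arr_add_root_folder", args: { app: "sonarr", path: `${mediaRoot}/tv` } },
  { tool: "arr_add_root_folder", args: { app: "radarr", path: `${mediaRoot}/movies` } },
  { tool: "arr_add_download_client", args: { app: "sonarr", client: "qbittorrent" } },
  { tool: "arr_add_download_client", args: { app: "radarr", client: "qbittorrent" } },
  { tool: "arr_sync_indexers_from_prowlarr", args: {} },
  { tool: "check_service_health", args: { apps } },
];
```

Every step after the installs is pure API plumbing, which is exactly the part that eats an afternoon when done by hand.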
What's inside
Installs and wires services together — download clients, API keys, root folders, quality profiles. One message.
Suggests what to put on tonight — learns what you watch, pulls from Plex/Jellyfin on-deck and your Radarr library, asks if you're in the mood for the same thing or something new.
Organizes your audiobook library — Audiobookshelf integration, cover art, metadata, series grouping, ID3 cleanup. No more Volume 1 — track 03.mp3.
Background monitoring — rule-based detectors every 60s, LLM triage, attempted remediation. You wake up to a summary instead of a page at 3am.
Self-improvement — reads its own TypeScript source, writes fixes, runs the type checker, rolls back via git stash if anything breaks. Yes, it can edit its own UI. No, it hasn't escaped yet.
Creates new apps from a description — "a dashboard that shames me for how many Docker containers I have running" becomes a working app with its own web UI, built against Talome's design system.
Uses Claude Code under the hood for heavy lifting — app scaffolding and self-improvement run through headless Claude Code, which means the interactive chat stays cheap and fast while the expensive work happens out of band.
MCP server + skills — drive every Talome tool from Claude Code or Cursor. Skills like /create-app, /self-improve, /add-domain are part of the repo.
Dashboard with drag-and-drop widgets, cinema mode, dark mode, 20+ ready widgets.
230+ tools that actually execute across Docker, media, networking, backups, and monitoring.
AGPL-3.0. No feature gates. No license switch. No "community edition".
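The background-monitoring item above can be sketched as a small loop: cheap rule-based detectors run on a timer, and only findings a rule actually flags get escalated to the (more expensive) LLM triage step. The `Finding`/`Detector` shapes and function names are my assumptions, not Talome's actual API:

```typescript
// Minimal sketch of the monitoring pipeline: detectors are cheap rules
// (restart loops, disk pressure, dead health checks); triage is the LLM
// step that decides severity and attempts remediation. Identity here.
type Finding = { container: string; rule: string; detail: string };
type Detector = () => Promise<Finding[]>;

async function runDetectors(
  detectors: Detector[],
  triage: (f: Finding) => Promise<Finding>,
): Promise<Finding[]> {
  const escalated: Finding[] = [];
  for (const detect of detectors) {
    for (const finding of await detect()) {
      // Only rule hits reach the LLM, so quiet nights cost nothing.
      escalated.push(await triage(finding));
    }
  }
  return escalated;
}

// In Talome this would run on the 60s interval from the post:
// setInterval(() => runDetectors(detectors, llmTriage), 60_000);
```

The design point is the ordering: deterministic rules gate the LLM, not the other way around, so the model is only spent on events worth reasoning about.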
The part I think is actually novel
Talome can read and modify its own TypeScript source code.
Tell it "the dashboard sidebar feels cluttered" and it:
- Reads the relevant component with read_file
- Drafts a diff with plan_change
- Applies it with apply_change
- Runs tsc --noEmit in the affected workspace
- If the compiler fails → automatic rollback via git stash
- If it passes → commit with a full audit trail
Every change is reversible with rollback_change. There's a full evolution log in the dashboard showing every attempt, the diff, which files changed, and how long it took.
The heavy lifting here runs through headless Claude Code — not the Vercel AI SDK — because Claude Code has its own prompt caching and session management, which keeps token costs down for long-running tasks like "refactor this component" or "implement this feature."
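The apply → type-check → rollback loop can be sketched with the shell commands injected, so the control flow is visible. `applyWithRollback`, the `Run` type, and the exact git/tsc invocations are my reading of what the post describes, not Talome's code:

```typescript
// Sketch of the self-improvement safety net. `run` executes a shell
// command and throws on non-zero exit (e.g. a thin wrapper over
// child_process.execSync). All names here are illustrative.
type Run = (cmd: string) => void;

function applyWithRollback(run: Run, applyDiff: () => void): boolean {
  applyDiff(); // write the drafted change to the working tree
  try {
    run("tsc --noEmit"); // the compiler is the gate
  } catch {
    // Type check failed → shelve the broken edit, restoring the last good tree.
    run("git stash push --include-untracked");
    return false;
  }
  // Passed → commit so the evolution log has a full audit trail.
  run('git commit -am "self-improve: apply_change"');
  return true;
}
```

Injecting `run` also makes the loop trivially testable with a fake executor, which matters when the code under test is the thing editing your codebase.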
The stack
- TypeScript strict everywhere, Zod for all runtime validation
- Hono backend, Next.js 16 frontend (App Router, React 19)
- SQLite + Drizzle ORM — single-file database, no external services
- Vercel AI SDK + Anthropic Claude, OpenAI, or local Ollama
- Docker for managed apps, multi-arch image (amd64 + arm64)
- MCP server that auto-syncs from the tool registry — zero extra config
The tool registry is worth a word. Tools are organized into domains that activate dynamically based on which apps you have configured. Install Sonarr, and 27 arr tools appear. Configure Home Assistant, and 5 smart-home tools unlock. The LLM only sees tools it can actually use, which keeps it fast and accurate.
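The dynamic-activation idea can be sketched in a few lines: tools carry a domain, each domain declares the app that unlocks it, and the LLM only ever sees the filtered view. The domain names match the post; the data shapes and tool names marked hypothetical are mine:

```typescript
// Sketch of domain-gated tool visibility. Shapes are illustrative.
type Tool = { name: string; domain: string };

const registry: Tool[] = [
  { name: "arr_add_root_folder", domain: "arr" },
  { name: "arr_sync_indexers_from_prowlarr", domain: "arr" },
  { name: "ha_toggle_light", domain: "smart_home" }, // hypothetical name
  { name: "docker_list_containers", domain: "docker" }, // hypothetical name
];

// Which configured app unlocks each domain; null = always available.
const domainRequires: Record<string, string | null> = {
  arr: "sonarr",
  smart_home: "homeassistant",
  docker: null,
};

function visibleTools(configuredApps: Set<string>): Tool[] {
  return registry.filter((t) => {
    const req = domainRequires[t.domain];
    return req === null || configuredApps.has(req);
  });
}
```

With 230+ tools, this filtering is less about token cost than accuracy: a model choosing among 30 relevant tools misfires far less than one choosing among 230.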
Where I'm at
So far Talome is playing well with Claude and Claude Code on my Mac mini — that's the setup I've been daily-driving. OpenAI support is in, Ollama is experimental. I'd love to see how it behaves with:
- Other providers — OpenAI in production, Ollama on a real local model, anything else I should try
- Other platforms — Linux homelabs, Synology / UGREEN NAS boxes, Raspberry Pi, whatever you're running
- Other use cases — I built it around media + monitoring because that's my pain, but the tool system is generic. Curious what people bend it toward: smart home, dev environments, family photo backups, whatever
Try it
curl -fsSL https://get.talome.dev | bash
Requirements: macOS or Linux, 2GB RAM, 5GB disk, Docker (for managed apps), and an Anthropic API key.
- Website: talome.dev
- Repo: github.com/tomastruben/Talome
- Docs: talome.dev/docs
- Discord: discord.gg/HK7gFaVRJ
I'm looking for people to break it before a public launch:
- Does the installer work on your hardware?
- What breaks?
- What's missing?
- What would you want it to do that it doesn't?
First 50 testers get credited in the README + an @Early Adopter role in Discord. I'm there daily, and bugs get triaged fast.
Thanks for reading. Honest feedback welcome.