This is a post I've been wanting to write for a while. Not because I think I've figured something out, but because I keep getting the same reaction when I mention my stack: "wait, you're doing agent stuff in Go?" So here's the long answer.
The agentic AI ecosystem runs on Python. LangChain, CrewAI, AutoGen, Semantic Kernel, LlamaIndex, all Python. The SDKs are Python-first. The tutorials assume Python. If you want to build something for AI agents, the path of least resistance is pip install.
I went the other way. Over the past few weeks I've built five agent-related projects in Go: a governance proxy, a memory server, an MCP bridge for Ollama, an autonomous research agent, and a management dashboard backend. This wasn't a carefully planned decision. I started with one project, it worked (thanks in no small part to Claude), and I kept going. This is a field report on that choice, for better or worse.
The projects
| Project | What it does | Go lines | Binary | Direct deps |
|---|---|---|---|---|
| agent-mesh | Governance proxy for agent tool calls | 11,340 | 7.2 MB | 1 (yaml.v3) |
| mem7 | Shared memory MCP server | 2,938 | — | 1 (sqlite) |
| scout7 | Autonomous web research agent | 1,112 | 9.3 MB | 1 (yaml.v3) |
| ollama-mcp-go | MCP server for Ollama | 929 | 8.4 MB | 0 |
| agent7 | Mesh management dashboard | — | — | — |
The pattern that jumps out: agent-mesh has one external dependency. Eleven thousand lines of Go covering the MCP client and server, a policy engine, rate limiter, approval workflow, trace store, OTEL exporter, HTTP proxy, and CLI. All built on the standard library plus a YAML parser. ollama-mcp-go has zero external dependencies. I didn't plan this. It just kept working without pulling in more packages, so I never did.
Why Go for this layer
AI agents need two things to work: an LLM that reasons, and infrastructure that moves data between the LLM and the outside world. The first part is model weights and tensor math, and Python owns that space for good reason. The second part is networking, concurrency, serialization, and process management. That's plumbing. Go was designed for plumbing.
MCP is a concurrency problem. An MCP proxy manages multiple stdio subprocesses (one per upstream server), each with its own stdin/stdout pair, plus incoming requests from multiple agents over HTTP or stdio. agent-mesh starts all MCP servers in parallel on boot, and goroutines with channels make this natural. The equivalent Python code would be asyncio tasks reading subprocess pipes, which works but is easy to get subtly wrong around cancellation, pipe buffering, and orphaned child processes.
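Here's roughly what that parallel boot looks like. This is a sketch, not agent-mesh's actual code: the `upstream` struct and `startAll` helper are invented for illustration, with `cat` standing in for a real MCP server binary.

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// upstream describes one MCP server to launch.
// (Illustrative type, not agent-mesh's real API.)
type upstream struct {
	name string
	cmd  string
	args []string
}

// startAll launches every upstream server in its own goroutine, so one
// slow binary doesn't serialize boot, then collects per-server errors.
func startAll(servers []upstream) map[string]error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs = make(map[string]error, len(servers))
	)
	for _, s := range servers {
		wg.Add(1)
		go func(s upstream) {
			defer wg.Done()
			c := exec.Command(s.cmd, s.args...)
			// A real proxy would grab StdinPipe/StdoutPipe here and
			// speak JSON-RPC over them; Start is enough for the sketch.
			err := c.Start()
			if err == nil {
				_ = c.Process.Kill() // don't leak the sketch's child
				_ = c.Wait()
			}
			mu.Lock()
			errs[s.name] = err
			mu.Unlock()
		}(s)
	}
	wg.Wait()
	return errs
}

func main() {
	for name, err := range startAll([]upstream{
		{name: "fs", cmd: "cat"}, // stand-in for a real MCP server
	}) {
		fmt.Printf("%s: started=%v\n", name, err == nil)
	}
}
```

`sync.WaitGroup` gives you the "all servers up" barrier for free; matching this in asyncio means juggling task groups and pipe lifecycles by hand.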
Single-binary distribution matters. `curl | tar xz` and it runs. No runtime, no virtualenv, no node_modules. When your user is a developer setting up Claude Code, "download one file" beats "install Python 3.11+, create a venv, pip install six packages, hope nothing conflicts with your system Python." Cross-compilation is free: `GOOS=darwin GOARCH=arm64 go build` produces a macOS ARM binary from a Linux machine.
Startup time is user-visible latency. agent-mesh launches as a subprocess of Claude Code (via MCP stdio), so every millisecond of startup is latency the user feels on the first tool call. Go binaries start in single-digit milliseconds. A Python process typically spends ~200–500 ms on imports before it executes a single line of application code.
Memory footprint is predictable. agent-mesh idles at ~14 MB RSS. A comparable Python process with FastAPI, uvicorn, and a few imports sits at 80–120 MB before handling a request. When you run five MCP servers plus a proxy on a developer laptop, that adds up.
The cost
Choosing Go for agent infrastructure has real costs. I'd be lying if I said it was all smooth, and I think being honest about that is more useful than a sales pitch.
The ecosystem doesn't exist. There's no LangChain for Go. No official MCP SDK (I wrote my own client and server, which was educational but not exactly fast). No Ollama Go client that speaks MCP (I wrote ollama-mcp-go). Every integration point is hand-built. In Python, these are one-line imports. In Go, they're weekends.
LLM interaction is verbose. Calling Ollama's chat API in Python with the official SDK is three lines. In Go, it's an HTTP request, JSON marshaling, response parsing, and explicit error handling: about 30 lines for the same result. And that's before mentioning the headache of understanding how interfaces work in Go...
Prototyping is slower. Python lets you sketch an idea in 20 lines and iterate. Go's type system and explicit error handling force structure earlier. For scout7 (the web research agent), the first working version took longer than it would have in Python. But the second version, with proper error handling, timeouts, and graceful shutdown, came almost for free because Go had already made me handle those cases.
Hiring and contributions. If these projects grow, the contributor pool for Go agent tooling is smaller than Python's. Most people building in this space think in Python. That's a real constraint.
I'm still learning Go. I should probably mention that. I don't come from a Go background. My primary language is Python (even if I'm not an expert), and I've been writing Go for weeks, not years. I don't always understand what the compiler is angry about. I occasionally discover that a pattern I was proud of is actually an antipattern in Go.
Half of these projects were pair-programmed with Claude, which means an AI agent helped me build governance tooling for AI agents. There's a joke in there somewhere. But the point is: you don't need to master a language to ship useful things in it. You need a good feedback loop. The compiler catches the stupid mistakes. The AI catches the architectural ones, and you keep asking questions about things you don't understand. I catch the rest... eventually.
Where Python wins
I still use Python where it makes sense. event7 (schema registry governance) has a FastAPI backend because it's a web app with a database, not a systems proxy. The LangChain demo for agent-mesh is Python, because it demonstrates governance on a Python agent. If I were building a RAG pipeline or fine-tuning a model, I'd use Python without hesitation.
The split is simple: if it touches model weights or needs rapid ML prototyping, Python. If it's infrastructure that moves data, enforces policy, or manages processes, Go.
What the ecosystem is missing
The Go ecosystem for agentic AI has real gaps:
- No official MCP SDK for Go. Anthropic ships Python and TypeScript. The community Go implementations are young and incomplete. I ended up writing my own MCP client and server inside agent-mesh because none of the existing options handled both stdio and SSE transports reliably.
- No structured output libraries. Python has Instructor, Outlines, and Marvin for forcing LLM output into typed schemas. In Go, you parse JSON and hope. scout7, for example, falls back to regex extraction on LLM responses. It works, but it's not elegant.
- No agent framework. Not necessarily a bad thing, since frameworks often add more abstraction than value in my opinion. But it means every Go agent starts from scratch.
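The "parse JSON and hope" step doesn't have to be pure hope. A small brace-matching extractor tolerates prose and markdown fences around a JSON object. This is an illustrative sketch, not scout7's actual fallback, and it deliberately ignores braces inside JSON strings, which a real parser-based approach would handle.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// extractJSON pulls the first top-level JSON object out of an LLM reply,
// tolerating surrounding prose or markdown fences, and unmarshals it into v.
// Caveat: the brace scan doesn't account for braces inside string values.
func extractJSON(reply string, v any) error {
	start := strings.IndexByte(reply, '{')
	if start < 0 {
		return fmt.Errorf("no JSON object found")
	}
	depth := 0
	for i := start; i < len(reply); i++ {
		switch reply[i] {
		case '{':
			depth++
		case '}':
			depth--
			if depth == 0 {
				return json.Unmarshal([]byte(reply[start:i+1]), v)
			}
		}
	}
	return fmt.Errorf("unbalanced JSON object")
}

func main() {
	var out struct {
		Title string `json:"title"`
	}
	reply := "Sure! Here is the result:\n```json\n{\"title\": \"Go for agents\"}\n```"
	if err := extractJSON(reply, &out); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(out.Title) // prints "Go for agents"
}
```

Unmarshaling into a typed struct is the poor man's Instructor: the schema lives in Go types, and a decode error tells you the model drifted.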
These gaps are also opportunities. The first solid Go MCP SDK will get adoption, because Go developers building agent tooling have no good option today.
The real argument
The case for Go in agent infrastructure isn't "Go is better than Python." It's that agents are not monoliths. An agent system has layers: the reasoning layer (LLM), the orchestration layer (framework), and the infrastructure layer (transport, policy, memory, tracing). Python dominates the first two. The third layer is systems programming, and Go is a systems language.
Service meshes didn't get written in Java because microservices were Java. Envoy is C++. Linkerd is Rust. They're infrastructure. They sit below the application and need to be fast, small, and reliable. Agent governance, memory servers, and MCP proxies are the same kind of thing. They deserve a systems language.
Five projects in, I haven't regretted the choice. The binaries are small, the deploys are trivial, the concurrency model fits the problem perfectly, and the dependency count stays close to zero. The ecosystem cost is real but manageable. You end up writing more code, but you understand all of it.
If you're considering Go for this kind of work and you're not sure, just try one small project. A single MCP server, maybe. The worst that happens is you learn something. That's more or less how all of this started for me.
All projects mentioned are open-source: agent-mesh, mem7, scout7, ollama-mcp-go.