After working on several Rust projects with AI assistance, I want to share a division of responsibility that has worked well in practice, and ask whether others have found the same — or a better approach in other languages.
The core model:
The programmer owns the world-view of the project — module boundaries, crate structure, system architecture, and design trade-offs. The AI fills in the flesh: concrete trait implementations, error handling, boilerplate, and module-level logic.
The programmer owns the architecture; the AI owns the implementation. And the Rust compiler makes sure the two stay honest with each other.
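A minimal sketch of what this division can look like in practice. The trait and type names here (`KeyValueStore`, `InMemoryStore`) are hypothetical, chosen only to illustrate the boundary: the programmer writes the trait, the AI writes the impl, and the signatures constrain what the impl may do.

```rust
use std::collections::HashMap;

// Architecture layer (programmer-owned): the trait pins down ownership,
// borrowing, and the module boundary. The AI never edits this part.
trait KeyValueStore {
    fn put(&mut self, key: String, value: String);
    fn get(&self, key: &str) -> Option<&String>;
}

// Implementation layer (AI-owned): any impl must satisfy the signatures
// above, including the borrow returned by `get`, or it will not compile.
struct InMemoryStore {
    map: HashMap<String, String>,
}

impl KeyValueStore for InMemoryStore {
    fn put(&mut self, key: String, value: String) {
        self.map.insert(key, value);
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.map.get(key)
    }
}

fn main() {
    let mut store = InMemoryStore { map: HashMap::new() };
    store.put("lang".to_string(), "rust".to_string());
    assert_eq!(store.get("lang").map(String::as_str), Some("rust"));
}
```

The point is not the example's triviality but the direction of the dependency: the impl answers to the trait, never the other way around.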
In practice, I have found that AI can already generate crate-level Rust code that compiles, runs correctly, and is architecturally sound — requiring only minor adjustments. Occasionally it produces solutions I would not have thought of myself.
Why Rust works particularly well for this model:
The borrow checker and type system act as a verification layer between the programmer's intent and the AI's output. If the AI misunderstands the architecture, the compiler rejects it. This means the programmer does not need to read every line of generated code for safety or type correctness: the compiler enforces that part of the contract, leaving only the logic itself to review.
This is not just about safety. It also forces the AI to produce code that is structurally honest. There is no way to paper over a bad design with runtime workarounds.
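One way to make "structurally honest" concrete is the typestate pattern. In this sketch (the `Connection` API is invented for illustration), the rule "you may only send on an open connection" is encoded in the types, so an AI implementation cannot substitute a runtime check: out-of-order calls simply do not compile.

```rust
use std::marker::PhantomData;

// Zero-sized marker types encoding the connection's state.
struct Closed;
struct Open;

struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Closed> {
    fn new() -> Self {
        Connection { _state: PhantomData }
    }

    // Opening consumes the closed connection and yields an open one,
    // so no code path can hold both states at once.
    fn open(self) -> Connection<Open> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Open> {
    // `send` exists only on Connection<Open>.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
}

fn main() {
    let conn = Connection::new().open();
    assert_eq!(conn.send("ping"), 4);
    // Connection::new().send("ping"); // rejected: no `send` on Connection<Closed>
}
```

A design like this is exactly the kind of contract the programmer can own while leaving the bodies of `open` and `send` to the AI.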
Comparison with other languages:
Python — The iteration cycle is fastest, and AI generates idiomatic Python reliably. But there is no compiler to catch architectural mismatches. The programmer must read and verify far more of the generated code manually. The division of responsibility is blurry.
TypeScript — Static types help, and strict mode catches a meaningful class of AI mistakes. A reasonable middle ground, though AI frequently reaches for `any` to silence errors, weakening the contract.
Go — Simple syntax reduces the surface area for AI mistakes. But nil pointer panics and verbose error handling mean AI output still requires careful review. The compiler helps less than Rust.
Java — Mature static analysis tooling provides a review pipeline, but AI tends to generate verbose, over-engineered boilerplate. The null problem remains. Architecturally, the programmer ends up doing more correction work.
C++ — AI can generate C++ that compiles but contains subtle memory bugs the compiler does not catch. The division of responsibility breaks down — the programmer must verify deeply, which defeats the purpose.
Personal note:
I came to Rust from a Java background, after also working with Go, JavaScript, Python, and C++. It took three attempts and roughly six months of serious effort before Rust clicked.
What surprised me was that Rust is, in a deeper sense, a remarkably clean language. The complexity is front-loaded — concentrated at compile time — so the resulting code is predictable and self-consistent in a way that Java or C++ rarely achieves at scale.
In the AGI era, that front-loaded strictness turns out to be exactly what you want when working with AI. The compiler becomes a shared contract between the programmer's world-view and the AI's implementation. Neither side can cheat.
The programmer owns the architecture; the AI owns the implementation. And the Rust compiler makes sure the two stay honest with each other.
Question:
Is this division of responsibility — programmer as architect, AI as implementer — viable in your experience? And does the language choice fundamentally change how well this model works?
If you could only choose one language for AI-assisted development going forward, which would it be — and why?