Most working engineers have spent ninety percent of their concurrent-programming life in one model: shared memory protected by locks. Threads that all see the same variables. Mutexes around the critical sections. Hope and care. It's the model every OS textbook teaches, every mainstream language supports, and every senior engineer has a horror story about.
It's also not the only option. Or even the best one, for many of the problems it gets used for. Three other models — CSP, actors, and software transactional memory — have been around for decades, are mature enough for production, and each solves a class of problems that lock-based designs handle poorly.
This is a map of all four, from a working backend engineer who uses each of them for different jobs, and a take on when each is the right answer.
tl;dr — Concurrency has four viable pillars: shared memory + locks (threads, mutexes), CSP (channels, Go), actors (mailboxes, Erlang), and STM (transactional memory, Clojure). None is universally better. Each solves a different problem and has a different failure mode. Senior designs often mix three of them in one system. Mutex-for-everything works until it doesn't — usually at exactly the scale you promised you'd never reach.
Pillar 1: Shared Memory + Locks
The default. Threads, mutexes, atomics, condition variables. Every mainstream language has them.
How it works: multiple threads of execution share the same address space. They read and write the same data. Mutexes make sure only one thread touches a critical section at a time. Atomics do the same for single-word operations without a full lock.
Where it shines:
- Simple shared counters and caches. `atomic.AddInt64`, `sync.Map`, LRU caches. The right tool.
- Tight single-process coordination where the code is small enough for one person to hold in their head.
- Performance-critical paths where the overhead of channel sends or actor dispatches is too much.
Failure modes:
- Deadlocks. Two threads acquire locks in opposite order. Happens.
- Priority inversion. Low-priority thread holds the lock, high-priority thread waits, work piles up.
- Lock ordering bugs at scale. When N components each take M locks, the reasoning gets exponential.
- Memory-model weirdness. What one thread writes, another may not immediately see. You start caring about happens-before, acquire/release semantics, and why `volatile` in Java is not what you thought.
- Invisible races. The worst kind. Tests pass; production fails weirdly twice a month.
Use mutexes for small, localized shared state. Once the shared state has three collaborators or more, or a nontrivial invariant across fields, reach for one of the other models.
Pillar 2: CSP (Communicating Sequential Processes)
Tony Hoare's 1978 paper, popularized by Occam and now Go. The model Rob Pike and Ken Thompson picked for Go's concurrency.
How it works: processes don't share memory; they send messages on named channels. Senders and receivers rendezvous on the channel. Ownership of data moves with the message. "Do not communicate by sharing memory; share memory by communicating."
Where it shines:
- Pipelines. Data flows through stages, each a goroutine, connected by channels. Clean to read.
- Fan-out / fan-in. One producer, many workers, one aggregator. The channel topology is the architecture.
- Backpressure. A bounded channel blocks the producer when full. No extra flow control needed.
- Cancellation coordination. `select` with `<-ctx.Done()` is a clean primitive.
- Lifecycle control. Closing a channel is a broadcast to every listener.
Failure modes:
- Deadlocks remain possible. Two goroutines each waiting on the other's channel. Cycles in the channel graph are lethal.
- Memory leaks via unclosed channels. A goroutine blocked on a send that will never be received lives forever.
- Awkward request/reply. You end up passing a reply channel with each request, which works but feels verbose.
- Order isn't free. Channel ordering is only per-channel. If you fan out and fan in, the aggregation is unordered unless you sort.
Use CSP for coordination-heavy designs. When the structure of "who's alive, who sends to whom, when do things stop" is the architecture, channels make that visible in the code.
Go is the obvious exemplar, but CSP-style is also available in Rust (crossbeam-channel, tokio::sync::mpsc), Kotlin (coroutines with channels), Python (asyncio.Queue), and C# (System.Threading.Channels).
Pillar 3: Actors
Carl Hewitt's 1973 paper. Made practical by Erlang (1986) and later Akka (Scala/Java). The model behind WhatsApp, a decade of telecom, and most fault-tolerant messaging infrastructure.
How it works: an actor is a named entity with private state and a mailbox. Other actors send messages to its address. Messages are processed one at a time from the mailbox. No shared memory. Parent actors supervise children; when a child crashes, the parent decides to restart, escalate, or ignore. Crashes are normal.
Where it shines:
- Fault isolation at scale. One actor crashing is expected; it doesn't take down the system. Supervision hierarchies make "let it crash" a sensible engineering strategy.
- Stateful services. Each actor holds its own state. Conceptually clean: no shared global state, no locks around it.
- Location transparency. An actor can live in the same process, another process, or another machine. The sender doesn't know. This is where actors shine in distributed systems — the model scales across the network boundary natively.
- Massive concurrency with stateful semantics. Erlang routinely runs millions of actors per node. Each is cheap.
Failure modes:
- Mailbox unboundedness. If a producer sends faster than the actor can process, the mailbox grows without bound. Bounded mailboxes exist; use them.
- Message-ordering assumptions break across the network. Within one node, delivery order is preserved per sender. Across nodes, all bets are off without explicit sequencing.
- Testing is harder. Actors make their own state opaque; you test behavior through message exchange. Good frameworks help, but the habits needed are different from testing normal code.
- Conceptual mismatch in CRUD-style backends. If your business logic is "select some rows, transform them, insert result," actors are overkill. They shine on long-lived stateful entities (a game character, a connected device, a user session), not on stateless request handlers.
Erlang and Elixir are the canonical runtimes. Akka brings actors to the JVM. Pony is a rare actor-first typed language. In Go, you can simulate actors with a goroutine + channel-as-mailbox pattern, but you lose Erlang's supervision and "let it crash" semantics unless you build them yourself.
Use actors when you have long-lived stateful entities with fault requirements. Telecom, messaging, multiplayer game servers, IoT device shadows, any system where "this particular entity has its own state machine, and we really care when it crashes" is the shape.
Pillar 4: Software Transactional Memory (STM)
Imagine database transactions, but for in-memory data. That's STM.
How it works: critical sections are wrapped in transactions. The runtime tracks reads and writes optimistically. On commit, if any data touched was modified by another transaction, the current one rolls back and retries. No explicit locks. Composability — two transactions can be combined into a larger one without redesigning the locking order.
Where it shines:
- Composable concurrent code. Combining operations that were individually correct usually stays correct under STM. Lock-based code famously does not.
- Read-mostly workloads. STM with multi-version concurrency control scales reads without blocking.
- Avoiding the lock-ordering bug class. No locks, no deadlocks. The failure mode is retry storms, which are easier to reason about.
Failure modes:
- I/O inside transactions is awful. Transactions may retry. If you did I/O, you may have done it multiple times. Either separate I/O from transactional state, or the runtime has to forbid I/O inside transactions (Haskell's STM monad does this at the type level).
- Retry storms under contention. Heavy write contention on the same data means constant retries. In the worst case, throughput can be worse than locks.
- Limited language support. Clojure (built-in), Haskell (`STM`), Scala (`scala-stm`), Rust (experimental `stm` crates). Not a mainstream feature of Go/Java/C#.
Clojure is the canonical "STM as a first-class citizen" language — its refs and transactions are idiomatic. Haskell's STM monad is arguably the cleanest realization. In other ecosystems, STM exists as libraries but hasn't displaced mutexes.
Use STM when the concurrent state is small-to-medium, the access pattern is read-heavy with occasional writes, and you want the composability. For the rare problems that fit, STM is strictly simpler to reason about than locks. For problems that don't fit (I/O-heavy, write-contention-heavy), STM is worse.
How Real Systems Mix Them
The surprise for engineers who've only used one model: mature systems mix three of them in one codebase.
A typical backend service I'd build today:
- Mutexes / atomics for the inner loops — counters, caches, rate-limiter state, anything performance-critical with one clear owner.
- Channels (CSP) for coordination — worker pools, pipelines, cancellation, shutdown signaling, bounded queues.
- Actors (in a sense) for long-lived stateful entities — each connected client session, each in-flight request, each background job. In Go I'd model this as "one goroutine per entity, communicating via channels," which isn't formal actors but inherits the useful semantics: isolated state, message-passing, crash-isolation.
And I wouldn't use STM in that stack. Not because it's bad, but because the language runtime doesn't make it first-class. If I were writing Clojure, STM would be a natural fit for the in-memory state machines that would otherwise be locked maps.
The old "pick one concurrency model" debate was always a false choice. The real decision is per-problem: what shape is the concurrent work, what's the state-sharing pattern, and what failure semantics do I want.
Decision Guide
Quick map:
- I have a counter that multiple goroutines read and update. → atomic or mutex.
- I have a pipeline of work that flows through stages. → channels (CSP).
- I have a fleet of long-lived sessions, each with its own state and lifetime. → actor pattern (goroutine + mailbox channel, or real actor framework).
- I have a fleet of connected devices each with a state machine that must survive crashes. → actor framework with supervision (Erlang, Akka, or Go with explicit crash/restart logic).
- I have complex shared state with nontrivial invariants across fields, and updates are occasional but important to compose. → STM if your language supports it; otherwise, lots of careful mutex discipline.
- I have a request/response flow with fan-out to downstreams. → CSP with `errgroup.WithContext`.
- I have no idea what I have. → Start with mutexes, switch when it hurts. Don't over-engineer the first version.
The Real Lesson
Most people who get bitten by concurrency bugs got bitten because they used the wrong model, not because they used it wrong. A mutex-heavy design for a workload that's really a pipeline is fragile. A channels-for-everything design when there's a shared counter underneath ends up with awkward rendezvous. An actors-everywhere design when the business is CRUD requests reads like over-engineering.
The four pillars aren't competing theories of concurrency. They're four tools, each good at specific jobs. Senior engineers know all four and reach for the right one. Junior engineers reach for the only one they know and force-fit it.
If your career so far has been mostly mutexes, spend a weekend reading the other three. Write a toy pipeline in Go channels. Read Erlang's supervision documentation. Play with Clojure refs. The investment pays back every time you sit in a design review and someone proposes locking their way out of a structural problem.
Related
- Go's Concurrency Is About Structure, Not Speed — CSP applied concretely in Go.
- Why Go Handles Millions of Connections — the runtime characteristics that make CSP cheap in Go.
- Scale-Up vs Scale-Out: Why Every Language Wins Somewhere — the language-level view of the same question.