For years, software engineering has optimized for:
- writing code faster;
- abstracting infrastructure;
- reducing boilerplate;
- generating APIs;
- simplifying CRUD.
AI accelerated this dramatically.
But I believe we are entering a completely different era.
An era where the bottleneck is no longer:
- typing speed;
- framework knowledge;
- remembering syntax.
The new bottleneck is:
How well can you architect reusable semantic systems?
That realization led me to create what I call:
VibeCoding State-of-the-Art-Driven Development
And the emotional force behind it was:
BRIO-Driven Development
Because if something can be dramatically better, why settle for the basic version?
What Is VibeCoding State-of-the-Art-Driven Development?
Most people think “VibeCoding” means:
- letting AI generate random code quickly;
- prototyping faster;
- replacing junior developers;
- automating boilerplate.
That is not what I’m doing.
For me, VibeCoding means using AI as:
- a runtime architecture researcher;
- a distributed systems theorist;
- a semantic compiler collaborator;
- a language design partner;
- a convergence and security advisor.
I became completely dependent on AI for one reason:
No human can keep up with every state-of-the-art technique across every domain anymore.
While the AI generates code, I debate with it:
- the best execution semantics;
- the best replay guarantees;
- the best convergence model;
- the best distributed runtime topology;
- the best entity declaration syntax;
- the best type-system strategy;
- the best security guarantees;
- the best orchestration patterns.
I learned more in two weeks discussing architecture with AI than I did in the previous ten years writing traditional software.
Not because the AI replaced me.
But because it multiplied:
- curiosity;
- iteration speed;
- architectural exploration;
- systems thinking.
The Goal: Extreme Developer Experience
The goal is not to create “another framework”.
The goal is:
To create the easiest and most powerful framework ever built.
A framework where, after version 1.0, I never need to manually code another system again.
New systems should be created only through:
.be2e.json.yml
configuration and semantic declaration files.
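To make that concrete, here is a purely illustrative sketch of what such a declaration file might look like (every key and value below is hypothetical; the real schema is still being designed):

```yaml
# Hypothetical .be2e.json.yml — illustrative only, not the final schema.
entity: User
behaviors:
  - name: Login
    expects:
      - Session.created
storage:
  write: postgres      # transactional writes
  read: mongodb        # derived read projections
  cache: redis         # cache-first access
observability: enabled
```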
The orchestration complexity should exist:
- once;
- globally;
- permanently.
Everything else becomes:
- specification;
- semantic declaration;
- runtime derivation.
BE2E: One Language To Generate Everything
One of the biggest problems with AI-generated systems today is this:
LLMs need to generate code for many different languages, frameworks and runtimes.
Frontend.
Backend.
Queue systems.
ORMs.
Vector databases.
Caching.
Event systems.
Observability.
Security.
This creates:
- inconsistency;
- architectural drift;
- duplicated logic;
- hallucinated integrations;
- fragile systems.
So I asked:
Why should the AI generate N implementations when it could generate a single semantic language?
That language became:
Behavior E2E (BE2E)
A semantic DSL where an entity behavior is declared once:
```
behavior User.Login {
  opens LoginPage
  -> fill email
  -> fill password
  -> click submit
  -> expect Session.created
}
```
And the runtime handles:
- orchestration;
- replay;
- observability;
- security;
- caching;
- convergence;
- synchronization;
- projections;
- distributed consistency;
- event sourcing;
- semantic validation;
- polyglot persistence.
The AI only needs to become exceptional at generating one DSL.
The runtime performs the heavy lifting.
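To make the division of labor tangible, here is a hypothetical TypeScript sketch of what the runtime's contract could look like (every name below is invented for illustration; none of this is a final API):

```typescript
// Hypothetical sketch of the runtime contract — illustrative only.
// The AI emits a BE2E declaration; the runtime derives everything else.
interface DomainEvent {
  type: string;      // e.g. "Session.created"
  at: Date;
  data: unknown;
}

interface BehaviorDeclaration {
  entity: string;    // e.g. "User"
  name: string;      // e.g. "Login"
  steps: string[];   // the "-> fill email" style steps
  expects: string[]; // e.g. ["Session.created"]
}

interface SemanticRuntime {
  // Derive orchestration, event sourcing, projections, caching,
  // and observability from a single declaration.
  register(decl: BehaviorDeclaration): void;
  // Execute a behavior and return the events it produced.
  execute(entity: string, behavior: string): Promise<DomainEvent[]>;
  // Rebuild any projection by replaying the event history.
  replay(entity: string): Promise<void>;
}
```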
Why Hyper-Polyglot?
I’m building a hyper-polyglot architecture using more than 7 languages.
Why?
Because different problems deserve different execution models.
Current stack direction:
| Plane / Category | Technology / Language |
|---|---|
| UI Plane | TypeScript |
| Type System Plane | Haskell (Atomic Behavior Types) |
| Test Plane | Haskell |
| Legal/Compliance Plane | Prolog |
| AI Plane | Mojo/Python |
| Effects Plane | Koka |
| Linear Plane | Austral |
| Actors Plane | Gleam |
| Media & Buffer Plane | Zig |
| Crypto Plane | Rust |
| Gateway Plane | Go |
| Communication Plane | NATS/Kafka |
| Write Data Plane | PostgreSQL |
| Read Data Plane | MongoDB |
| Cache Data Plane | Redis |
| Vector Data Plane | Qdrant |
| Graph Data Plane | Neo4j |
| Trace Data Plane | Tempo |
| Log Data Plane | ClickHouse |
| Events Data Plane | EventStoreDB |
| Agent Event Sourcing Local Data Plane | BadgerDB |
| Analytics Data Plane | Cassandra |
And almost every language compiles to:
WASM
This is not “complexity for fun”.
This is:
- execution specialization;
- correctness specialization;
- safety specialization;
- runtime specialization.
I originally intended to use Erlang/Elixir heavily, but after discovering Gleam I realized it provides a much cleaner path for typed actor orchestration.
Why Event Sourcing, Graph, Vector and Observability Are Mandatory
Modern AI systems cannot remain:
- CRUD-centric;
- relational-only;
- stateless;
- opaque.
Any professional AI-native system MUST have:
- Event Sourcing
- Observability
- Cache-first architecture
- Vector storage
- Graph storage
Why?
Because intelligence requires:
- memory;
- relationships;
- semantics;
- retrieval;
- causality;
- traceability.
Vectors allow:
- semantic retrieval;
- contextual memory;
- embeddings;
- similarity reasoning.
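As a toy illustration of what similarity reasoning means mechanically (the runtime delegates the real work to Qdrant; this is just the underlying principle, in TypeScript):

```typescript
// Toy cosine-similarity retrieval — the principle behind vector search.
// Real systems delegate this to a vector database such as Qdrant.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the index of the stored embedding closest in meaning to the query.
function nearest(query: number[], memory: number[][]): number {
  let best = 0, bestScore = -Infinity;
  memory.forEach((vec, i) => {
    const score = cosineSimilarity(query, vec);
    if (score > bestScore) { bestScore = score; best = i; }
  });
  return best;
}
```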
Graphs allow:
- relationships;
- causality;
- semantic traversal;
- knowledge structures.
Observability is mandatory because:
- AI systems are probabilistic;
- distributed;
- emergent;
- dynamically evolving.
Without observability:
- you cannot debug;
- explain;
- benchmark;
- trust;
- evolve agents safely.
And Event Sourcing becomes critical because:
- replayability;
- causality;
- convergence;
- auditability;
- semantic reconstruction
are foundational for intelligent runtimes.
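A minimal sketch of the principle (illustrative TypeScript, not the runtime's implementation): state is never stored directly, it is a pure fold over an append-only log, which is exactly what makes replay, audit, and reconstruction possible.

```typescript
// Minimal event sourcing: state is a pure fold over an append-only log.
type SessionEvent =
  | { type: "Session.created"; user: string }
  | { type: "Session.closed"; user: string };

const log: SessionEvent[] = [];    // append-only event store

function append(event: SessionEvent): void {
  log.push(event);                 // events are facts; never mutated
}

// Replay: rebuild current state from the full history at any time.
function activeSessions(events: SessionEvent[]): Set<string> {
  const active = new Set<string>();
  for (const e of events) {
    if (e.type === "Session.created") active.add(e.user);
    if (e.type === "Session.closed") active.delete(e.user);
  }
  return active;
}

append({ type: "Session.created", user: "alice" });
append({ type: "Session.closed", user: "alice" });
console.log(activeSessions(log)); // Set {} — state is derived, never stored
```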
“Why Not Just Use Postgres?”
I could absolutely build everything with only Postgres.
And I will support that version too.
But honestly:
I see zero problem in running one Docker container per specialized database.
We are no longer in 2012.
Storage engines exist for different purposes:
- Redis for cache;
- Qdrant for vector retrieval;
- Neo4j for graph semantics;
- ClickHouse for observability;
- EventStoreDB for event sourcing;
- MongoDB for read projections;
- PostgreSQL for transactional writes.
The runtime should orchestrate this complexity automatically.
The developer should not suffer because the architecture is advanced.
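To show how small that operational cost has become, the entire specialized storage layer above fits in one Compose file. This is a minimal sketch using the official images; real deployments of course need pinned versions, volumes, ports, and credentials:

```yaml
# Illustrative docker-compose sketch — one container per specialized engine.
services:
  postgres:   { image: postgres }                       # transactional writes
  mongodb:    { image: mongo }                          # read projections
  redis:      { image: redis }                          # cache
  qdrant:     { image: qdrant/qdrant }                  # vector retrieval
  neo4j:      { image: neo4j }                          # graph semantics
  clickhouse: { image: clickhouse/clickhouse-server }   # observability
  eventstore: { image: eventstore/eventstore }          # event sourcing
```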
Everything-as-Code Taken to the Extreme
Most frameworks still think in:
- APIs;
- services;
- routes;
- tables;
- DTOs.
I think in:
- behaviors;
- semantic transformations;
- convergence;
- guarantees;
- orchestration;
- runtime algebra.
The real product is not the code.
The real product is:
- the semantic model;
- the runtime guarantees;
- the orchestration engine;
- the reusable execution semantics.
That’s why I call it:
State-of-the-Art-Driven Development
Because the architecture itself is continuously shaped by:
- the best type systems;
- the best distributed systems theories;
- the best runtime models;
- the best security patterns;
- the best semantic computation techniques available today.
The End Goal
The end goal is not another framework.
The end goal is:
A semantic runtime where building complex distributed systems becomes ridiculously easy through natural language.
Where developers no longer fight:
- infrastructure;
- orchestration;
- synchronization;
- replay;
- observability;
- consistency;
- distributed complexity.
They simply declare:
- behaviors;
- guarantees;
- constraints;
- capabilities;
- transformations.
And the runtime derives:
- the system;
- the topology;
- the orchestration;
- the storage strategy;
- the synchronization model;
- the observability;
- the convergence guarantees.
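In the same spirit as the User.Login example earlier, such a declaration might read like this (speculative syntax; the BE2E grammar is not final):

```
behavior Order.Checkout {
  requires Cart.not_empty
  -> reserve inventory
  -> charge payment
  -> expect Order.confirmed
  guarantees exactly_once
  guarantees replayable
}
```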
That is what I mean by:
VibeCoding State-of-the-Art-Driven Development
A future where:
- AI amplifies architecture instead of just generating snippets;
- semantic systems replace repetitive implementation;
- and developers spend their time designing meaning instead of wiring infrastructure.