My Robot Said Kitchen 47 Times — And Never Knew It Had Already Been There


Last Tuesday, my robot announced "Entering kitchen" for the 47th time. Same kitchen. Same Tuesday. Same confused voice.

I'd given it a state-of-the-art vector database. Semantic search. Cosine similarity. The whole thing.

What I hadn't given it? The ability to remember when it last visited the kitchen. Or how long it stayed. Or whether the coffee machine was already on.

The robot knew what a kitchen felt like — but it had no memory of actually being there.

This is the gap nobody talks about.

The Vector Search Trap

Every AI robotics tutorial starts the same way: "Store your robot's experiences as vectors. Use similarity search to find relevant memories."

It's not wrong. Vector search is incredible. But it creates a specific, subtle failure mode that only shows up when your robot encounters the same situation twice.

The robot thinks every kitchen is the same kitchen.

When the robot asks the vector database for the closest kitchen memory, it gets back a description of a past visit — but not the actual temporal or spatial context of what happened. In robotics, context is everything.
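To make the failure concrete, here is a minimal cosine-similarity lookup in Rust. The 4-dimensional embeddings and their values are made up for illustration (real embeddings have hundreds of dimensions): two kitchen visits a year apart score almost identically against the same query, so the nearest match carries no information about when it happened.

```rust
// Cosine similarity between two embeddings.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

fn main() {
    // Hypothetical embeddings of two "kitchen" visits a year apart.
    let visit_last_week = [0.81, 0.12, 0.55, 0.07];
    let visit_last_year = [0.80, 0.13, 0.54, 0.08];
    let query = [0.82, 0.11, 0.56, 0.06];

    let s1 = cosine(&query, &visit_last_week);
    let s2 = cosine(&query, &visit_last_year);
    // Both scores are effectively indistinguishable — the top result
    // says nothing about *when* that kitchen was visited.
    println!("last week: {s1:.4}, last year: {s2:.4}");
}
```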

The Three Things Vector Search Cant Tell You

1. When Did This Actually Happen?

Semantic similarity doesn't preserve time. The kitchen from last week looks identical to the kitchen from last year — unless you manually tag timestamps on every embedding.

But even if you add timestamps, you now have to filter by time before you search semantically. Two separate operations. Two different failure modes.
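Here is a sketch of what those two separate operations look like in code, assuming a plain in-memory store. The `Memory` struct and `recall` function are hypothetical names, not any library's API:

```rust
use std::time::{Duration, SystemTime};

// One memory per visit, with a timestamp bolted on after the fact.
struct Memory {
    embedding: [f32; 3],
    recorded_at: SystemTime,
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

// Operation 1: filter by recency. Operation 2: rank survivors by
// similarity. Each step fails in its own way — an empty time window,
// or two near-identical scores — and neither knows about the other.
fn recall(memories: &[Memory], query: &[f32; 3], within: Duration) -> Option<usize> {
    let now = SystemTime::now();
    memories
        .iter()
        .enumerate()
        .filter(|(_, m)| {
            now.duration_since(m.recorded_at)
                .map_or(false, |age| age <= within)
        })
        .max_by(|(_, a), (_, b)| {
            cosine(query, &a.embedding)
                .partial_cmp(&cosine(query, &b.embedding))
                .unwrap()
        })
        .map(|(i, _)| i)
}

fn main() {
    let now = SystemTime::now();
    let week = Duration::from_secs(7 * 24 * 3600);
    let memories = [
        // Last year's kitchen: the most similar embedding, but stale.
        Memory { embedding: [0.9, 0.1, 0.1], recorded_at: now - 52 * week },
        // Last Tuesday's kitchen: slightly less similar, but recent.
        Memory { embedding: [0.8, 0.2, 0.1], recorded_at: now - week / 2 },
    ];
    let query = [0.9, 0.1, 0.1];
    // Without the time filter, index 0 (last year's visit) would win.
    assert_eq!(recall(&memories, &query, week), Some(1));
}
```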

2. How Long Did It Take?

My robot spent 8 minutes in the kitchen last Tuesday. It spent 23 minutes the time before that. It spent 2 minutes the time before that — because it immediately bumped into the dog bowl and panicked.

These durations tell you something important: the robot's state was different on each visit. Vector search gives you none of this.

3. Where Exactly In the State Space?

When the robot says "kitchen," it doesn't mean just the room. It means proximity to the counter and the refrigerator, whether the stove was on, what the lighting level was, whether the dog was in the room.

These are structured attributes, not vector embeddings.
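A plain struct captures these attributes naturally. The field names below are illustrative, not a real schema — the point is that they support predicate queries an embedding can't express:

```rust
// Structured, queryable attributes of one visit — exactly the details
// an embedding flattens away. Field names are hypothetical.
struct KitchenVisit {
    entered_at_unix: u64,
    duration_secs: u32,
    stove_on: bool,
    lighting_lux: f32,
    dog_present: bool,
}

fn main() {
    let visits = [
        KitchenVisit { entered_at_unix: 1_700_000_000, duration_secs: 480, stove_on: true, lighting_lux: 300.0, dog_present: false },
        KitchenVisit { entered_at_unix: 1_699_400_000, duration_secs: 120, stove_on: false, lighting_lux: 40.0, dog_present: true },
    ];
    // A predicate no similarity score can express:
    // "visits where the dog was present and we left within 3 minutes".
    let panicked = visits
        .iter()
        .filter(|v| v.dog_present && v.duration_secs < 180)
        .count();
    println!("{panicked} panicked visit(s)");
}
```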

The Franken-database Problem

So what did I do? I bolted on a time-series database for temporal data. A graph database for spatial relationships. A Redis cache for real-time state.

Four databases. Four different query languages. One robot that still said "Entering kitchen" 47 times.

The problem wasn't that these tools were bad. The problem was that they weren't designed to work together — and robotics requires all of them simultaneously, in real time, on edge hardware.

What I Actually Needed

I needed one database that could handle:

  • Vectors — semantic similarity for context matching
  • Time-series — when did X happen, and for how long?
  • Structured data — key-value attributes that aren't vectors
  • Single binary — running on a Raspberry Pi, not a data center
  • Offline-first — the robot can't call home when the WiFi drops

There wasn't anything off-the-shelf. So I built moteDB — a Rust-native embedded database that handles vectors, time-series, and structured data in a single engine. No server. No cloud. No dependency hell.

The robot now stores location, timestamp, duration, structured state, and embeddings all in one place. One query. All dimensions.
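The post doesn't show moteDB's actual query API, so here is a plain-Rust sketch of what "one query, all dimensions" means — a hypothetical `Episode` record and `last_similar_visit` function (not moteDB's real schema) where time, structured state, and similarity are evaluated in a single pass:

```rust
// Hypothetical record shape — NOT moteDB's real API — showing one
// episode carrying every dimension at once.
struct Episode {
    embedding: [f32; 3],
    started_at_unix: u64,
    duration_secs: u32,
    room: &'static str,
    stove_on: bool,
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

// One query: temporal, structured, and semantic constraints together.
fn last_similar_visit<'a>(
    log: &'a [Episode],
    query: &[f32; 3],
    room: &str,
    since_unix: u64,
) -> Option<&'a Episode> {
    log.iter()
        .filter(|e| e.room == room && e.started_at_unix >= since_unix)
        .max_by(|a, b| {
            cosine(query, &a.embedding)
                .partial_cmp(&cosine(query, &b.embedding))
                .unwrap()
        })
}

fn main() {
    let log = [
        Episode { embedding: [0.9, 0.1, 0.0], started_at_unix: 1_000, duration_secs: 1_380, room: "kitchen", stove_on: false },
        Episode { embedding: [0.8, 0.2, 0.0], started_at_unix: 5_000, duration_secs: 480, room: "kitchen", stove_on: true },
    ];
    if let Some(e) = last_similar_visit(&log, &[0.9, 0.1, 0.0], "kitchen", 2_000) {
        // The robot can now answer "have I been here recently, and was
        // the stove on?" instead of announcing the room yet again.
        println!("stayed {}s, stove_on={}", e.duration_secs, e.stove_on);
    }
}
```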

The Question Nobody Asks

We spend so much energy making robots smarter — better models, better sensors, better LLMs.

But maybe the bottleneck isn't intelligence. Maybe it's memory.

Not "how do I make the robot understand this moment?" — but "how do I make the robot remember the moments that came before?"

Vector search is a powerful tool. But it's not a memory system. It's a similarity engine.

What does your robot's memory stack look like? Are you using separate tools for vectors, time-series, and structured data — or have you found a better way to unify them?

Source: dev.to
