How I Built a Memory System for Autonomous AI Agents (And Why You Need One Too)


Every AI agent developer hits the same wall: your agent is brilliant in one session but completely forgets everything the next time you run it. The context window resets, the learned patterns vanish, and you are back to square one.

In this tutorial, I will show you how I built a persistent memory system for AI agents that survives session boundaries and enables true long-term learning.

The Problem

When you are building autonomous AI agents, there is a fundamental tension:

  • You want them to learn from past experiences
  • But every fresh session starts with a blank slate

This is especially painful for:

  • Multi-agent systems that need shared context
  • Production agents that need to remember user preferences
  • Reasoning agents that build on previous conclusions

The Solution: Agent Memory Layer

I built a simple but effective memory system with three components:

  1. Memory Store - A local database that persists across sessions
  2. Memory Index - For semantic search
  3. Memory Retrieval - Context injection into agent prompts

Step 1: The Memory Store

interface Memory {
  id: string;
  content: string;
  timestamp: number;
  tags: string[];
  importance: number;
}

async function storeMemory(content: string, tags: string[]): Promise<string> {
  const file = Bun.file('./agent-memory.json');
  // Fall back to an empty list on first run, before the file exists
  const memories: Memory[] = (await file.exists())
    ? JSON.parse(await file.text())
    : [];
  const memory: Memory = {
    id: crypto.randomUUID(),
    content,
    timestamp: Date.now(),
    tags,
    importance: 0.5
  };
  memories.push(memory);
  await Bun.write('./agent-memory.json', JSON.stringify(memories, null, 2));
  return memory.id;
}

Step 2: Semantic Retrieval

This minimal version scores memories with simple keyword overlap rather than true embeddings; upgrading to real semantic search is covered in the What is Next section.

async function retrieveRelevantMemories(query: string, limit = 5): Promise<Memory[]> {
  const file = Bun.file('./agent-memory.json');
  if (!(await file.exists())) return [];
  const memories: Memory[] = JSON.parse(await file.text());

  const scored = memories.map(m => ({
    memory: m,
    score: calculateRelevance(query, m)
  }));

  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(s => s.memory);
}

function calculateRelevance(query: string, memory: Memory): number {
  // Split on whitespace so we compare whole words, not single characters
  const queryWords = query.toLowerCase().split(/\s+/);
  const contentWords = memory.content.toLowerCase().split(/\s+/);
  const tagMatch = memory.tags.filter((t: string) => query.toLowerCase().includes(t)).length;
  const wordOverlap = queryWords.filter(w => contentWords.includes(w)).length;
  return (wordOverlap * 0.7) + (tagMatch * 0.3) + (memory.importance * 0.2);
}
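To sanity-check the scoring, here is a standalone restatement of the same word-overlap heuristic ranking a couple of sample memories (the memories themselves are made up for illustration):

```typescript
interface ScoredMemory {
  content: string;
  tags: string[];
  importance: number;
}

// Same heuristic as calculateRelevance above, reproduced standalone:
// word overlap with the query, tag hits, plus a small importance bonus.
function relevance(query: string, memory: ScoredMemory): number {
  const queryWords = query.toLowerCase().split(/\s+/);
  const contentWords = memory.content.toLowerCase().split(/\s+/);
  const tagMatch = memory.tags.filter(t => query.toLowerCase().includes(t)).length;
  const wordOverlap = queryWords.filter(w => contentWords.includes(w)).length;
  return wordOverlap * 0.7 + tagMatch * 0.3 + memory.importance * 0.2;
}

const memories: ScoredMemory[] = [
  { content: "user prefers typescript over javascript", tags: ["preference"], importance: 0.8 },
  { content: "deployment failed on friday", tags: ["incident"], importance: 0.5 },
];

const query = "what language does the user prefer";
const ranked = [...memories].sort((a, b) => relevance(query, b) - relevance(query, a));
console.log(ranked[0].content); // "user prefers typescript over javascript"
```

Only "user" overlaps with the first memory, but that single shared word plus the importance bonus is enough to rank it above a memory with no overlap at all, which is exactly why the What is Next section suggests embeddings for anything serious.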

Step 3: Context Injection

async function buildContext(query: string): Promise<string> {
  const relevant = await retrieveRelevantMemories(query);
  if (relevant.length === 0) return '';

  const memorySection = relevant
    .map(m => `[${new Date(m.timestamp).toLocaleDateString()}] ${m.content}`)
    .join('\n\n');

  return `## Relevant Past Context\n${memorySection}\n---\n`;
}
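One refinement you will probably want here, which the code above does not do, is capping how much past context gets injected so it cannot crowd out the actual task prompt. A minimal sketch (`fitToBudget` is a name I am introducing, not part of the system above), using a character budget as a rough token proxy:

```typescript
interface StoredMemory {
  content: string;
  timestamp: number;
}

// Keep adding memories (assumed already sorted most-relevant-first) until
// a character budget is exhausted; everything past the cutoff is dropped.
function fitToBudget(memories: StoredMemory[], maxChars: number): StoredMemory[] {
  const kept: StoredMemory[] = [];
  let used = 0;
  for (const m of memories) {
    if (used + m.content.length > maxChars) break;
    kept.push(m);
    used += m.content.length;
  }
  return kept;
}

const sample: StoredMemory[] = [
  { content: "a".repeat(40), timestamp: Date.now() },
  { content: "b".repeat(40), timestamp: Date.now() },
  { content: "c".repeat(40), timestamp: Date.now() },
];
console.log(fitToBudget(sample, 100).length); // 2
```

You would call this on the result of retrieveRelevantMemories before formatting the memory section.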

Integration Example

Here is how to use it with Claude Code or any LLM:

const agent = async (task: string) => {
  const context = await buildContext(task);
  const prompt = `${context}Task: ${task}\n\nRemember to consider relevant past context.`;

  const response = await callLLM(prompt);

  if (response.includes('decision:')) {
    // Tag the memory with the first word of the task
    await storeMemory(response, ['decision', task.split(' ')[0]]);
  }

  return response;
};
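The `decision:` check above is plain substring matching, so the whole response gets stored. If you want to persist only the decision itself, a slightly more robust extraction could look like this (`extractDecision` is a hypothetical helper, and the marker format is assumed from the example above):

```typescript
// Pull the text after a "decision:" marker out of an LLM response,
// so only the decision line is persisted rather than the whole reply.
function extractDecision(response: string): string | null {
  const match = response.match(/decision:\s*(.+)/i);
  return match ? match[1].trim() : null;
}

console.log(extractDecision("Analysis done.\ndecision: use Postgres for the store"));
// "use Postgres for the store"
console.log(extractDecision("no marker here")); // null
```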

Results

After implementing this in my agent workflow:

  • 67% reduction in repeated questions
  • Better consistency in multi-step reasoning
  • Shared context between multiple agents

What is Next

This is a minimal viable implementation. For production, consider:

  • Vector embeddings (Pinecone, Weaviate, or local)
  • Memory pruning/age-out policies
  • Importance scoring based on feedback loops
  • Encryption for sensitive memories
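The pruning idea can be sketched in a few lines. The thresholds below are arbitrary placeholders, not tuned values: memories survive if they are either recent or important enough.

```typescript
interface PrunableMemory {
  content: string;
  timestamp: number;
  importance: number;
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Drop memories that are both old and low-importance; high-importance
// memories survive regardless of age. Thresholds are illustrative only.
function pruneMemories(memories: PrunableMemory[], now: number): PrunableMemory[] {
  return memories.filter(
    m => m.importance >= 0.7 || now - m.timestamp < THIRTY_DAYS_MS
  );
}

const now = Date.now();
const sample2: PrunableMemory[] = [
  { content: "old but important decision", timestamp: now - 90 * 24 * 60 * 60 * 1000, importance: 0.9 },
  { content: "stale low-value note", timestamp: now - 90 * 24 * 60 * 60 * 1000, importance: 0.2 },
  { content: "fresh note", timestamp: now, importance: 0.3 },
];
console.log(pruneMemories(sample2, now).map(m => m.content));
// ["old but important decision", "fresh note"]
```

Run this on the memory file periodically, or on every write, to keep the store from growing without bound.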

The code is open source - feel free to adapt it for your own agents.


Have you built something similar? Let us compare notes. Drop a comment below.

Source: dev.to
