Open source memory layer so any AI agent can do what Claude.ai and ChatGPT do
Stash makes your AI remember you. Every session. Forever. No more explaining yourself from scratch.
Stash is a persistent cognitive layer that sits between your AI agent and the world. It doesn't replace your model — it makes your model continuous. Episodes become facts. Facts become patterns. Patterns become wisdom.
Not all memory is equal. What your agent learns about you is different from what it learns about a project, which is different from what it knows about itself. Namespaces let the agent organize what it learns into clean, separate buckets — just like folders on your computer.
Each namespace is a path. Paths are hierarchical. Reading from /projects automatically includes everything under /projects/stash, /projects/cartona, and so on. You never have to think about it — the agent does.
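The prefix rule above can be shown in a few lines. This is an illustrative sketch of path-prefix scoping, not Stash's actual implementation; the function and variable names here are hypothetical:

```python
# Illustrative sketch: a read at one namespace path automatically
# includes every memory stored under its child paths.

def in_scope(read_path: str, memory_path: str) -> bool:
    """True if a memory at memory_path is visible when reading read_path."""
    read = read_path.rstrip("/")
    return memory_path == read or memory_path.startswith(read + "/")

memories = {
    "/projects/stash": "uses pgvector for embeddings",
    "/projects/cartona": "ships a React frontend",
    "/self": "prefers concise answers",
}

# Reading /projects pulls in /projects/stash and /projects/cartona,
# but leaves /self untouched.
visible = {p: m for p, m in memories.items() if in_scope("/projects", p)}
```

The point of the hierarchy is exactly this containment: one broad read covers every sub-project without naming them.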
A real conversation loop — two sessions, one agent, zero repetition. Watch what happens between them.
You've probably heard of RAG — Retrieval Augmented Generation. It's clever. But it's not memory. Here's the difference, in plain English.
You give it a pile of documents. When you ask a question, it searches those documents and hands you the relevant pages. That's it. It doesn't remember your conversation. It doesn't learn. It doesn't know you. Every question starts from scratch — it's just a smarter search engine over files you already wrote.
Stash learns from everything your agent experiences — conversations, decisions, successes, failures. It synthesizes raw observations into facts, connects facts into a knowledge graph, detects contradictions, tracks goals, and builds an understanding of you that deepens over time. You don't write anything. It figures it out.
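One of those steps, contradiction detection, can be sketched as a check over structured facts. This is a toy model under assumed shapes (facts as subject/attribute/value triples); Stash's real pipeline is richer:

```python
# Toy model of contradiction detection: two facts about the same
# subject and attribute with different values are flagged as a conflict.
from collections import defaultdict

def find_contradictions(facts):
    """facts: list of (subject, attribute, value) triples.
    Returns pairs of conflicting triples."""
    seen = defaultdict(list)
    conflicts = []
    for fact in facts:
        subject, attribute, value = fact
        for earlier in seen[(subject, attribute)]:
            if earlier[2] != value:
                conflicts.append((earlier, fact))
        seen[(subject, attribute)].append(fact)
    return conflicts

facts = [
    ("user", "editor", "vim"),
    ("user", "timezone", "UTC+2"),
    ("user", "editor", "emacs"),   # conflicts with the first fact
]
conflicts = find_contradictions(facts)
```

A real system would then resolve the conflict (newer fact wins, or ask the user) rather than just report it.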
Claude.ai has memory. ChatGPT has memory. But their memory works only for them — locked to one platform, one model, one company. Stash works for everyone, everywhere, forever. And it goes far deeper than any of them.
AI models reason brilliantly but remember nothing. Every session you re-explain who you are, what you need, and what you've already tried. You're training the same student every single day.
The workaround is stuffing full conversation history into every prompt. It's slow, expensive, and you still hit the context limit anyway. You're paying for tokens that repeat the same facts over and over.
Your agent tried something, it failed, and next session it tries the exact same thing again. There's no mechanism to carry lessons forward. Every failure is forgotten.
Only a handful of AI platforms offer memory — and only for their own models. Your custom agent, your local LLM, your Cursor setup? They all start blind. Memory shouldn't be a premium feature.
No infrastructure to set up. No dependencies to install manually. Docker Compose handles everything — Postgres, pgvector, Stash, all wired together and ready.
A background process continuously synthesizes your agent's experiences into structured knowledge. It runs on a schedule — your agent just lives.
Stash speaks MCP natively. Drop it into Claude Desktop, Cursor, or any MCP-compatible agent in under 5 minutes. No SDK. No vendor lock-in. Your agent remembers you everywhere.
28 tools covering the full cognitive stack — from raw remember and recall all the way to causal chains, contradiction resolution, and hypothesis management.
Call init and Stash creates a /self namespace scaffold. The agent uses its own memory layer to build and maintain a model of its own capabilities, limits, and preferences.
Give your agent a 5-minute research loop. It orients from past memory, researches a topic it chooses itself, invents new connections, consolidates what it learned, and closes gracefully — ready to pick up next time.
Run it as a cron job. Every 5 minutes, your agent gets smarter.
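The recurring loop can be sketched as a script a scheduler would invoke. All helpers below are stand-ins, not real Stash tool names; the only real claim is the shape of the cycle, where each run orients itself from what previous runs stored:

```python
# Skeleton of the recurring research loop: every run starts from what
# the last run left behind, and leaves new knowledge for the next run.
memory: list[str] = []  # stands in for the persistent store

def orient() -> str:
    """Recall where the previous cycle left off."""
    return memory[-1] if memory else "first run: no prior context"

def run_cycle(topic: str) -> None:
    context = orient()                            # 1. orient from past memory
    notes = f"{topic} (building on: {context})"   # 2. research the chosen topic
    memory.append(notes)                          # 3. consolidate for next time

run_cycle("vector indexes")
run_cycle("HNSW tuning")   # the second run sees the first run's notes
```

Schedule that every 5 minutes and the "gets smarter" claim is just this accumulation: each cycle's output becomes the next cycle's starting context.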
Stash uses one provider for both embedding and reasoning — any OpenAI-compatible backend works. Cloud, local, or self-hosted. No vendor lock-in, ever.
OpenRouter gives you access to hundreds of models — GPT, Claude, Gemini, Mistral — all behind one OpenAI-compatible endpoint. Point Stash at it and pick any model for embedding and reasoning.
Running Ollama locally? Stash works out of the box. Use Qwen, Llama, Mistral, or any model you've pulled — your memory stays fully private, fully offline.
vLLM, LM Studio, llama.cpp server, Together AI, Groq — if it speaks the OpenAI API format, Stash speaks it back. Same provider serves both models.
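Because every backend speaks the same API shape, switching providers comes down to changing a base URL and model names. A hedged sketch: the base URLs are the providers' published defaults, but the config shape and model ids are illustrative, not Stash's actual settings:

```python
# Each provider differs only in base_url and model names; the client
# configuration keeps the same shape everywhere.
PROVIDERS = {
    "openrouter": {
        "base_url": "https://openrouter.ai/api/v1",
        "reasoning_model": "openai/gpt-4o",          # illustrative id
        "embedding_model": "openai/text-embedding-3-small",
    },
    "ollama": {
        "base_url": "http://localhost:11434/v1",     # Ollama's OpenAI endpoint
        "reasoning_model": "qwen2.5",                # any pulled model works
        "embedding_model": "nomic-embed-text",
    },
}

def client_config(name: str) -> dict:
    """Build the (identical) client shape for any provider."""
    cfg = PROVIDERS[name]
    return {"base_url": cfg["base_url"], "api_key": "..."}  # key elided
```

Swapping cloud for local is a one-line change in config, not a code change — that is what "no vendor lock-in" means in practice.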
Open source. Apache 2.0 licensed. Backed by PostgreSQL. Works with any MCP-compatible agent.