radical mission

Building a unified memory service: vendor-agnostic storage that all models can access. Context packs via MCP, structured memory, context injection. One memory layer, pick model per message.

the problem

Switching between models means losing all context. No shared memory. Every switch means starting over. You end up stuck with one model even when another would be better.

Official chat UIs don't support memory injection, so we're building a universal client that injects context packs via system prompts on every request, plus an MCP-compliant server so models can pull memory as tools where supported.
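A minimal sketch of what that injection could look like, assuming an OpenAI-compatible chat completions endpoint; the URL, key, `sendWithMemory`, and field names are illustrative, not the project's actual client code.

```typescript
// Minimal sketch: inject the context pack as a system message on every request.
// Assumes an OpenAI-compatible chat endpoint; URL, key, and field names are placeholders.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const API_KEY = "sk-placeholder"; // illustrative only

async function sendWithMemory(
  model: string,
  userText: string,
  history: ChatMessage[],
  contextPack: string, // assembled by the memory service before every request
): Promise<string> {
  const messages: ChatMessage[] = [
    // The pack rides along as a system message, so whichever model is chosen sees the same memory.
    { role: "system", content: `Relevant memory:\n${contextPack}` },
    ...history,
    { role: "user", content: userText },
  ];

  const res = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
    body: JSON.stringify({ model, messages }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```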

the solution

Unified memory service that stores all conversations in a vendor-neutral format. Maintains structured memory: projects, preferences, decisions, tasks. Hybrid retrieval: semantic search (vector DB), keyword search, and optional graph relationships.
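One way the vendor-neutral format and structured memory could be shaped, sketched as TypeScript types; every field name here is illustrative rather than the actual schema.

```typescript
// Minimal sketch of a vendor-neutral storage format; field names are placeholders, not the real schema.

type Vendor = "openai" | "anthropic" | "google" | "other";

interface Turn {
  id: string;
  conversationId: string;
  role: "user" | "assistant";
  content: string;
  vendor: Vendor;       // which provider produced or consumed this turn
  model: string;        // model used for this message
  createdAt: string;    // ISO 8601 timestamp
}

type MemoryKind = "project" | "preference" | "decision" | "task";

interface MemoryItem {
  id: string;
  kind: MemoryKind;
  text: string;             // the distilled fact, e.g. "prefers TypeScript examples"
  sourceTurnIds: string[];  // provenance back to the raw turns
  embedding?: number[];     // filled in by the vector index
  pinned: boolean;          // pinned items always make it into context packs
  updatedAt: string;
}
```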

Context pack builder uses RAG-style retrieval to assemble relevant memory within each model's token budget. Universal chat app injects the context via model APIs. MCP server exposes memory tools over the standard protocol. Switch models instantly. Nothing resets.

what we're building

unified memory service

Vendor-agnostic storage for all conversations. Maintains structured memory: projects, preferences, decisions, tasks. Semantic search via a vector database, keyword indexing, and optional graph relationships.
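A toy sketch of the hybrid retrieval idea: blend vector similarity with keyword overlap. The weights and the in-memory scan are placeholders; a real deployment would lean on the vector database and keyword index.

```typescript
// Minimal sketch of hybrid scoring: weighted blend of cosine similarity and keyword overlap.

interface Indexed {
  id: string;
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function keywordScore(query: string, text: string): number {
  const terms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  const hits = words.filter((w) => terms.has(w)).length;
  return words.length ? hits / words.length : 0;
}

function hybridSearch(
  queryEmbedding: number[],
  queryText: string,
  items: Indexed[],
  k = 10,
  weights = { semantic: 0.7, keyword: 0.3 }, // placeholder weights
): Indexed[] {
  return items
    .map((item) => ({
      item,
      score:
        weights.semantic * cosine(queryEmbedding, item.embedding) +
        weights.keyword * keywordScore(queryText, item.text),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.item);
}
```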

context pack builder

Retrieval-augmented context assembly. Semantic search, recency weighting, token budgeting per model. Builds context packs that fit within each model's limits. Injects via system prompts; no fine-tuning required.
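A rough sketch of assembly under a token budget with recency weighting; the 4-characters-per-token estimate, the 30-day decay constant, and the pinned-item bonus are assumed values, not tuned ones.

```typescript
// Minimal sketch of pack assembly: rank by relevance x recency, then pack greedily under a token budget.

interface Candidate {
  text: string;
  relevance: number; // from hybrid retrieval, 0..1
  ageDays: number;   // age of the memory item
  pinned: boolean;
}

// Rough token estimate; a real builder would use the model's tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function buildContextPack(candidates: Candidate[], tokenBudget: number): string {
  const scored = candidates
    .map((c) => ({
      c,
      // Pinned items get a large bonus so they sort first; others decay exponentially with age.
      score: (c.pinned ? 1e6 : 0) + c.relevance * Math.exp(-c.ageDays / 30),
    }))
    .sort((a, b) => b.score - a.score);

  const picked: string[] = [];
  let used = 0;
  for (const { c } of scored) {
    const cost = estimateTokens(c.text);
    if (used + cost > tokenBudget) continue; // skip what doesn't fit, keep trying smaller items
    picked.push(c.text);
    used += cost;
  }
  return picked.join("\n");
}
```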

mcp server

MCP-compliant server exposing memory tools. Standard JSON-based interface for search, store, pin, forget. Any MCP-capable client can connect. Your memory becomes a universal tool backend.
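The tools could be exposed as MCP-style definitions (name, description, JSON Schema input). Tool names and fields below are illustrative; the real surface may differ.

```typescript
// Minimal sketch of the memory tool surface as MCP-style tool definitions.

const memoryTools = [
  {
    name: "memory_search",
    description: "Search stored memory by query; returns the most relevant items.",
    inputSchema: {
      type: "object",
      properties: {
        query: { type: "string" },
        limit: { type: "number", default: 10 },
      },
      required: ["query"],
    },
  },
  {
    name: "memory_store",
    description: "Store a new memory item (project, preference, decision, or task).",
    inputSchema: {
      type: "object",
      properties: {
        kind: { type: "string", enum: ["project", "preference", "decision", "task"] },
        text: { type: "string" },
      },
      required: ["kind", "text"],
    },
  },
  {
    name: "memory_pin",
    description: "Pin a memory item so it is always included in context packs.",
    inputSchema: {
      type: "object",
      properties: { id: { type: "string" } },
      required: ["id"],
    },
  },
  {
    name: "memory_forget",
    description: "Delete a memory item permanently.",
    inputSchema: {
      type: "object",
      properties: { id: { type: "string" } },
      required: ["id"],
    },
  },
];
```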

universal chat

One memory layer, pick model per message. Everything persists and transfers. Nothing resets.
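A minimal sketch of per-message model switching over one shared memory layer, reusing ChatMessage and sendWithMemory from the injection sketch above; fetchContextPack and the model names are placeholders.

```typescript
// Minimal sketch of per-message model switching; one memory layer backs every request.

async function fetchContextPack(conversationId: string, model: string): Promise<string> {
  // Placeholder for the real memory-service call, which would budget the pack to this model's limits.
  return `memory for ${conversationId}, trimmed for ${model}`;
}

async function demo() {
  const conversationId = "demo";
  const history: ChatMessage[] = [];

  // Each turn can target a different model; history and memory carry over untouched.
  const turns = [
    { model: "claude-sonnet", text: "Summarize where we left off on the billing refactor." },
    { model: "gpt-4o", text: "Draft the migration plan we decided on." },
  ];

  for (const turn of turns) {
    const pack = await fetchContextPack(conversationId, turn.model);
    const reply = await sendWithMemory(turn.model, turn.text, history, pack);
    history.push({ role: "user", content: turn.text }, { role: "assistant", content: reply });
  }
}

demo().catch(console.error);
```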