Self-managing memory for AI agents.
Memory that extracts itself. Evolves itself. Stays useful.
Vector DBs store whatever you give them; deciding what matters is left to you.
Agents forget everything between sessions. A context window isn't memory.
Memory bloat is real: low-signal storage degrades retrieval quality.
Feed raw text. EidolonDB extracts what is worth remembering.
short_term becomes episodic, episodic distills to semantic. Noise decays.
Hybrid search surfaces the right memories at the right time.
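One way to picture hybrid search is a single score blending vector similarity with recency and importance. The following is a minimal illustrative sketch: the field names, weights, and half-life are assumptions for exposition, not EidolonDB's actual internals.

```typescript
// Hypothetical shape of a scored memory; illustrative only.
interface ScoredMemory {
  text: string;
  similarity: number; // cosine similarity to the query, in [0, 1]
  ageDays: number;    // days since the memory was written
  importance: number; // assigned importance, in [0, 1]
}

// Blend vector similarity with exponential recency decay and importance.
// Weights (0.6 / 0.25 / 0.15) and the 30-day half-life are assumed values.
function hybridScore(m: ScoredMemory, halfLifeDays = 30): number {
  const recency = Math.pow(0.5, m.ageDays / halfLifeDays);
  return 0.6 * m.similarity + 0.25 * recency + 0.15 * m.importance;
}
```

Under this kind of scoring, a slightly less similar but fresh, important memory can outrank a stale exact match, which is what lets the right memories surface at the right time.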
import { EidolonDB } from '@eidolondb/client';
const db = new EidolonDB({ url: 'http://localhost:3000', tenant: 'my-app' });
// Memory that extracts itself
await db.ingest("Today we decided on Fastify for the API. Port 4000. Jordan leads backend.");
// Recall across sessions
const context = await db.recall("project decisions");
// → ["We're using Fastify on port 4000", "Jordan leads backend development"]| Metric | Without EidolonDB | With EidolonDB |
|---|---|---|
| Recall accuracy | 10% | 100% |
| Hallucinations | 1 | 0 |
| Overall score | 6% | 100% |
LLM extracts structured memories from raw text.
short_term / episodic / semantic with automatic lifecycle.
Episodic memories condense into lasting semantic knowledge.
Vector + recency + importance scoring.
First-pass Jaccard + vector similarity dedup.
REST API and a zero-dependency TypeScript SDK.
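The first-pass dedup can be sketched with token-set Jaccard similarity as a cheap pre-filter before the more expensive vector comparison. This is an illustrative sketch: the function names, tokenization, and 0.8 threshold are assumptions, not EidolonDB's API.

```typescript
// Token-set Jaccard similarity: |A ∩ B| / |A ∪ B| over lowercase word tokens.
function jaccard(a: string, b: string): number {
  const A = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const B = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  if (A.size === 0 && B.size === 0) return 1;
  let overlap = 0;
  for (const t of A) if (B.has(t)) overlap++;
  return overlap / (A.size + B.size - overlap);
}

// Hypothetical first-pass check: only candidates above the threshold
// would go on to a vector-similarity comparison.
function isLikelyDuplicate(a: string, b: string, threshold = 0.8): boolean {
  return jaccard(a, b) >= threshold;
}
```

Running the cheap set comparison first keeps the number of embedding lookups small, which matters when every ingested sentence is checked against existing memories.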
npm install @eidolondb/client