Give your AI agents persistent memory in 5 lines of Python. Zero config. Self-hosted. Open source.
```python
from engram import Engram

memory = Engram()

# Store a memory
memory.store("User prefers dark mode", tags=["preference"], importance=8)

# Search memories
results = memory.search("dark mode")

# Recall top memories
context = memory.recall(limit=10)
```
The Problem
Every conversation starts from scratch. Every context is forgotten. Your AI agents process thousands of interactions — and remember none of them. They ask the same questions, make the same mistakes, lose the same insights. Memory isn't a feature. It's the foundation.
Features
Built for developers who value simplicity, privacy, and performance.
No databases to set up. No Docker containers to manage. No environment variables to configure. Install and go.
Store, search, and recall memories with an API so simple it fits in a tweet. No boilerplate. No ceremony.
First-class Model Context Protocol support. Connect to Claude, Cursor, or any MCP-compatible client out of the box.
Isolated namespaces per agent. Shared memory pools when you need them. Built for swarm architectures.
Your data stays on your machine. No cloud. No telemetry. No third-party services. MIT licensed.
Importance scoring, automatic deduplication, full-text search, and semantic recall. Memories that matter surface first.
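To make the ranking behavior concrete, here is a minimal conceptual sketch of importance-ranked recall with deduplication. This is not Engram's actual implementation; the `MiniStore` class and its internals are hypothetical, illustrating only the idea that duplicates are skipped and higher-importance memories surface first:

```python
from dataclasses import dataclass, field

@dataclass
class MiniStore:
    """Toy model of importance-ranked memory with deduplication.
    NOT Engram's internals; names here are illustrative only."""
    memories: list = field(default_factory=list)

    def store(self, content: str, importance: int = 5) -> bool:
        # Automatic deduplication: skip exact duplicates of stored content
        if any(m["content"] == content for m in self.memories):
            return False
        self.memories.append({"content": content, "importance": importance})
        return True

    def recall(self, limit: int = 10, min_importance: int = 0) -> list:
        # Memories that matter surface first: sort by importance, descending
        ranked = [m for m in self.memories if m["importance"] >= min_importance]
        ranked.sort(key=lambda m: m["importance"], reverse=True)
        return ranked[:limit]

store = MiniStore()
store.store("User prefers dark mode", importance=8)
store.store("User prefers dark mode", importance=8)  # duplicate, ignored
store.store("User asked about pricing", importance=3)
top = store.recall(limit=5, min_importance=5)
# top contains only the dark-mode memory
```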
Comparison
We built Engram because existing solutions were too complex for what should be simple.
| Feature | Engram | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Setup | Zero-Config | Complex | Docker + PG | Cloud-only |
| Self-Hosted | Yes (trivial) | Yes (complex) | Yes (Docker) | Deprecated |
| Dependencies | None | Neo4j + Vector DB | PostgreSQL | Cloud-only |
| API Complexity | 5 lines | Medium | Complex | Medium |
| MCP Support | First-Class | Community | No | Yes |
| License | MIT | Apache 2.0 | Apache 2.0 | Cloud-only |
Pricing
Start free. Scale when you're ready. No credit card required.
Hosted Pro version coming soon. Join the waitlist for early access.
Examples
Copy, paste, run. That's it.
```python
from engram import Engram

# Initialize — that's it. No config needed.
memory = Engram()

# Store facts, preferences, decisions
memory.store(
    content="User prefers dark mode and compact layout",
    tags=["preference", "ui"],
    importance=8
)

# Search with full-text search
results = memory.search("dark mode")
for r in results:
    print(r.content, r.importance)

# Recall top memories for context injection
context = memory.recall(limit=20, min_importance=7)
```
```python
from engram import Engram

# Each agent gets its own namespace
researcher = Engram(namespace="researcher")
analyst = Engram(namespace="analyst")
writer = Engram(namespace="writer")

# Memories are isolated by default
researcher.store(
    "Found 3 papers on transformer architectures",
    tags=["finding", "ml"],
    importance=9
)

# But can be shared when needed
shared = Engram(namespace="shared")
shared.store(
    "Project deadline: March 15, 2026",
    tags=["project", "deadline"],
    importance=10
)

# All agents can access shared memory
deadlines = shared.search("deadline")
```
```json
{
  "mcpServers": {
    "engram": {
      "command": "python3",
      "args": ["-m", "engram.mcp_server"]
    }
  }
}
```

That's it. Claude Code now has persistent memory. Available tools:

- `memory_store`
- `memory_search`
- `memory_recall`
- `memory_delete`
- `memory_stats`
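For illustration, an MCP-compatible client invokes one of these tools with a standard JSON-RPC `tools/call` request. The argument names below (`content`, `tags`, `importance`) are an assumption, mirroring the Python API shown above rather than a documented tool schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_store",
    "arguments": {
      "content": "User prefers dark mode",
      "tags": ["preference"],
      "importance": 8
    }
  }
}
```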