# 🧠 neomem
**neomem** is a local-first vector memory engine derived from the open-source **Mem0** project.
It provides persistent, structured storage and semantic retrieval for AI companions like **Lyra** — with zero cloud dependencies.
---
## 🚀 Overview
- **Origin:** Forked from Mem0 OSS (Apache 2.0)
- **Purpose:** Replace Mem0 as Lyra's canonical on-prem memory backend
- **Core stack:**
  - FastAPI (API layer)
  - PostgreSQL + pgvector (structured + vector data)
  - Neo4j (entity graph)
- **Language:** Python 3.11+
- **License:** Apache 2.0 (original Mem0) + local modifications © 2025 ServersDown Labs
---
## ⚙️ Features
| Layer | Function | Notes |
|-------|-----------|-------|
| **FastAPI** | `/memories`, `/search` endpoints | Drop-in compatible with Mem0 |
| **Postgres (pgvector)** | Memory payload + embeddings | JSON payload schema |
| **Neo4j** | Entity graph relationships | Auto-linked per memory |
| **Local Embedding** | Via Ollama or OpenAI | Configurable in `.env` (see sketch below) |
| **Fully Offline Mode** | ✅ | No external SDK or telemetry |
| **Dockerized** | ✅ | `docker-compose.yml` included |
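The embedding provider is selected in `.env`. The sketch below is illustrative only; the key names are assumptions, so check the repo's `.env` template for the real ones:

```bash
# Illustrative .env sketch; the actual key names live in the repo's .env template.
EMBEDDING_PROVIDER=ollama                 # or "openai"
OLLAMA_BASE_URL=http://localhost:11434    # local Ollama endpoint
EMBEDDING_MODEL=nomic-embed-text          # any embedding model Ollama serves
# OPENAI_API_KEY=sk-...                   # only needed when EMBEDDING_PROVIDER=openai
```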
---
## 📦 Requirements
- Docker + Docker Compose
- Python 3.11 (if running bare-metal; see the sketch below)
- PostgreSQL 15+ with `pgvector` extension
- Neo4j 5.x
- Optional: Ollama for local embeddings
**Dependencies (requirements.txt):**
```txt
fastapi==0.115.8
uvicorn==0.34.0
pydantic==2.10.4
python-dotenv==1.0.1
psycopg>=3.2.8
ollama
```
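For a bare-metal run, a typical sequence looks like the sketch below. The `main:app` module path is an assumption; point `uvicorn` at wherever the FastAPI app object actually lives in this repo.

```bash
python3.11 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# Module path is assumed; adjust to the repo's actual FastAPI entry point.
uvicorn main:app --host 0.0.0.0 --port 7077
```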
---
## 🧩 Setup
1. **Clone & build**
```bash
git clone https://github.com/serversdown/neomem.git
cd neomem
docker compose -f docker-compose.neomem.yml up -d --build
```
2. **Verify startup**
```bash
docker compose -f docker-compose.neomem.yml logs -f
curl http://localhost:7077/docs
```
The logs should include:
```
✅ Connected to Neo4j on attempt 1
INFO: Uvicorn running on http://0.0.0.0:7077
```
and the `curl` call should return the interactive API docs.
---
## 🔌 API Endpoints
### Add Memory
```bash
POST /memories
```
```json
{
"messages": [
{"role": "user", "content": "I like coffee in the morning"}
],
"user_id": "brian"
}
```
### Search Memory
```bash
POST /search
```
```json
{
"query": "coffee",
"user_id": "brian"
}
```
---
## 🗄️ Data Flow
```
Request → FastAPI → Embedding (Ollama/OpenAI)
                        ↓
              Postgres (payload store)
                        ↓
               Neo4j (graph links)
                        ↓
                Search / Recall
```
---
## 🧱 Integration with Lyra
- Lyra Relay connects to `neomem-api:8000` (inside Docker) or `localhost:7077` (local); see the sketch below.
- Identical endpoints to Mem0 mean **no code changes** in Lyra Core.
- Designed for **persistent, private** operation on your own hardware.
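The setting name below is hypothetical; use whichever key Lyra Relay actually reads for its memory-backend URL. Pointing the Relay at neomem is only a base-URL change:

```bash
# Hypothetical key name; substitute the one Lyra Relay actually uses.
MEMORY_API_URL=http://neomem-api:8000    # inside the Docker network
# MEMORY_API_URL=http://localhost:7077   # when the Relay runs outside Docker
```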
---
## 🧯 Shutdown
```bash
docker compose -f docker-compose.neomem.yml down
```
Then power off the VM or Proxmox guest safely.
---
## 🧾 License
neomem is a derivative work based on the **Mem0 OSS** project (Apache 2.0).
It retains the original Apache 2.0 license and adds local modifications.
© 2025 ServersDown Labs / Terra-Mechanics.
All modifications released under Apache 2.0.
---
## 📅 Version
**neomem v0.1.0** — 2025-10-07
_Initial fork from Mem0 OSS with full independence and local-first architecture._