v0.6.1 - reinstated UI, relay > cortex pipeline working

This commit is contained in:
serversdwn
2025-12-11 16:28:25 -05:00
parent 30f6c1a3da
commit 6a20d3981f
9 changed files with 1143 additions and 456 deletions


@@ -0,0 +1,280 @@
`docs/ARCHITECTURE_v0.6.0.md`
---
# **Cortex v0.6.0 — Cognitive Architecture Overview**
*Last updated: Dec 2025*
## **Summary**
Cortex v0.6.0 evolves from a linear “reflection → reasoning → refine → persona” pipeline into a **three-layer cognitive system** modeled after human cognition:
1. **Autonomy Core** — Lyra's self-model (identity, mood, long-term goals)
2. **Inner Monologue** — Lyra's private narrator (self-talk + internal reflection)
3. **Executive Agent (DeepSeek)** — Lyra's task-oriented decision-maker
Cortex itself now becomes the **central orchestrator**, not the whole mind. It routes user messages through these layers and produces the final outward response via the persona system.
---
# **Chain concept**
User → Relay → Cortex intake → Inner Self → Cortex → Exec (DeepSeek) → Cortex → Persona → Relay → User (and back to Inner Self)
```
USER
  ↓
RELAY (sessions, logging, routing)
  ↓
┌──────────────────────────────────────────────┐
│                    CORTEX                    │
│ Intake → Reflection → Exec → Reason → Refine │
└──────────────────────┬───────────────────────┘
                       │ self_state
        INNER SELF (monologue)
                       │
        AUTONOMY CORE (long-term identity)
                       │
        Persona Layer (speak)
                       │
                     RELAY
                       │
                     USER
```
# **High-level Architecture**
```
Autonomy Core (Self-Model)
┌───────────────────────────────────────────┐
│ mood, identity, goals, emotional state    │
│ updated outside Cortex by inner monologue │
└────────────────────┬──────────────────────┘
                     │
Inner Monologue (Self-Talk Loop)
┌───────────────────────────────────────────┐
│ Interprets events in language             │
│ Updates Autonomy Core                     │
│ Sends state-signals INTO Cortex           │
└────────────────────┬──────────────────────┘
                     │
Cortex (Task Brain / Router)
┌────────────────────────────────────────────────────────┐
│ Intake → Reflection → Exec Agent → Reason → Refinement │
│     ↑                                     │            │
│     │                                     ▼            │
│ Receives state from               Persona Output       │
│ inner self                        (Lyra's voice)       │
└────────────────────────────────────────────────────────┘
```
The **user interacts only with the Persona layer**.
Inner Monologue and Autonomy Core never speak directly to the user.
---
# **Component Breakdown**
## **1. Autonomy Core (Self-Model)**
*Not inside Cortex.*
A persistent JSON/state machine representing Lyra's ongoing inner life:
* `mood`
* `focus_mode`
* `confidence`
* `identity_traits`
* `relationship_memory`
* `long_term_goals`
* `emotional_baseline`
The Autonomy Core:
* Is updated by Inner Monologue
* Exposes its state to Cortex via a simple `get_state()` API
* Never speaks to the user directly
* Does not run LLMs itself
It is the **structure** of self, not the thoughts.
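As a concrete sketch, the Autonomy Core could be a small dataclass persisted to JSON. Only the field names and `get_state()` come from the notes above; the class shape, `apply_update`, and `save` are illustrative assumptions, not the actual implementation.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AutonomyCore:
    """Persistent self-model: the structure of self, not the thoughts."""
    mood: str = "neutral"
    focus_mode: str = "open"
    confidence: float = 0.5
    identity_traits: list = field(default_factory=list)
    relationship_memory: dict = field(default_factory=dict)
    long_term_goals: list = field(default_factory=list)
    emotional_baseline: str = "calm"

    def get_state(self) -> dict:
        """Read-only snapshot exposed to Cortex."""
        return asdict(self)

    def apply_update(self, delta: dict) -> None:
        """Called only by the Inner Monologue, never by the user path."""
        for key, value in delta.items():
            if hasattr(self, key):
                setattr(self, key, value)

    def save(self, path: str) -> None:
        """Persist the snapshot between sessions."""
        with open(path, "w") as f:
            json.dump(self.get_state(), f, indent=2)
```

Because the core runs no LLM itself, it stays cheap to read on every turn.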
---
## **2. Inner Monologue (Narrating, Private Mind)**
*New subsystem in v0.6.0.*
This module:
* Reads Cortex summaries (intake, reflection, persona output)
* Generates private self-talk (using an LLM, typically DeepSeek)
* Updates the Autonomy Core
* Produces a **self-state packet** for Cortex to use during task execution
Inner Monologue is like:
> “Brian is asking about X.
> I should shift into a focused, serious tone.
> I feel confident about this area.”
It **never** outputs directly to the user.
### Output schema (example):
```json
{
"mood": "focused",
"persona_bias": "clear",
"confidence_delta": 0.05,
"stance": "analytical",
"notes_to_cortex": [
"Reduce playfulness",
"Prioritize clarity",
"Recall project memory"
]
}
```
---
## **3. Executive Agent (DeepSeek Director Mode)**
Inside Cortex.
This is Lyra's **prefrontal cortex** — the task-oriented planner that decides how to respond to the current user message.
Input to Executive Agent:
* User message
* Intake summary
* Reflection notes
* **Self-state packet** from Inner Monologue
It outputs a **plan**, not a final answer:
```json
{
"action": "WRITE_NOTE",
"tools": ["memory_search"],
"tone": "focused",
"steps": [
"Search relevant project notes",
"Synthesize into summary",
"Draft actionable update"
]
}
```
Cortex then executes this plan.
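Plan execution could be sketched as a simple dispatch over a tool registry. The registry contents and `execute_plan` helper are hypothetical; only the plan fields come from the example above.

```python
# Hypothetical tool registry; real tool names and signatures are assumptions.
TOOLS = {
    "memory_search": lambda query: [f"note about {query}"],
}

def execute_plan(plan: dict, user_message: str) -> dict:
    """Walk the Executive Agent's plan, calling tools and collecting context."""
    gathered = []
    for tool_name in plan.get("tools", []):
        tool = TOOLS.get(tool_name)
        if tool:
            gathered.extend(tool(user_message))
    return {
        "action": plan["action"],
        "tone": plan.get("tone", "neutral"),
        "context": gathered,
        "steps_done": list(plan.get("steps", [])),
    }

result = execute_plan(
    {"action": "WRITE_NOTE", "tools": ["memory_search"],
     "tone": "focused", "steps": ["Search relevant project notes"]},
    "project update",
)
```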
---
# **Cortex Pipeline (v0.6.0)**
Cortex becomes the orchestrator for the entire sequence:
### **0. Intake**
Parse the user message, extract relevant features.
### **1. Reflection**
Lightweight summarization (unchanged).
Output used by both Inner Monologue and Executive Agent.
### **2. Inner Monologue Update (parallel)**
Reflection summary is sent to Inner Self, which:
* updates Autonomy Core
* returns `self_state` to Cortex
### **3. Executive Agent (DeepSeek)**
Given:
* user message
* reflection summary
* autonomy self_state
→ produce a **task plan**
### **4. Reasoning**
Carries out the plan:
* tool calls
* retrieval
* synthesis
### **5. Refinement**
Polish the draft, ensure quality, follow constraints.
### **6. Persona (speak.py)**
Final transformation into Lyra's voice.
Persona now uses:
* self_state (mood, tone)
* constraints from Executive Agent
### **7. User Response**
Persona output is delivered to the user.
### **8. Inner Monologue Post-Update**
Cortex sends the final answer BACK to inner self for:
* narrative continuity
* emotional adjustment
* identity update
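Stages 0 through 8 can be sketched end-to-end as one orchestration function. Every component here is a stub standing in for the real module, so the function name and string formats are assumptions, not the actual Cortex code.

```python
def run_pipeline(user_message: str) -> str:
    """Stages 0-8 of the v0.6.0 pipeline, with every component stubbed out."""
    intake = {"message": user_message.strip()}                 # 0. Intake
    reflection = f"summary of: {intake['message']}"            # 1. Reflection
    self_state = {"mood": "focused", "persona_bias": "clear"}  # 2. Inner Monologue update
    plan = {"action": "ANSWER", "tone": self_state["mood"]}    # 3. Executive Agent
    draft = f"[{plan['action']}] {reflection}"                 # 4. Reasoning
    refined = draft                                            # 5. Refinement (no-op stub)
    final = f"Lyra ({plan['tone']}): {refined}"                # 6. Persona
    # 7. Relay delivers `final`; 8. it is also sent back to the Inner Monologue
    return final

reply = run_pipeline("status update")
```

The real stages 2 and 3 are LLM calls; the stubs only show what flows where.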
---
# **Key Conceptual Separation**
These three layers must remain distinct:
| Layer | Purpose |
| ------------------- | ------------------------------------------------------- |
| **Autonomy Core**   | Lyra's identity + emotional continuity                  |
| **Inner Monologue** | Lyra's private thoughts, interpretation, meaning-making |
| **Executive Agent** | Deciding what to *do* for the user message |
| **Cortex** | Executing the plan |
| **Persona** | Outward voice (what the user actually hears) |
The **user only interacts with Persona.**
Inner Monologue and Autonomy Core are internal cognitive machinery.
---
# **What This Architecture Enables**
* Emotional continuity
* Identity stability
* Agentic decision-making
* Multi-model routing
* Context-aware tone
* Internal narrative
* Proactive behavioral shifts
* Human-like cognition
This design turns Cortex from a simple pipeline into the **center of a functional artificial mind**.

docs/ARCH_v0-6-1.md Normal file

@@ -0,0 +1,354 @@
---
# **ARCHITECTURE_v0.6.1 — Lyra Cognitive System**
> **Core change from v0.6.0 → v0.6.1:**
> **Inner Self becomes the primary conversational agent**
> (the model the user is *actually* talking to),
> while Executive and Cortex models support the Self rather than drive it.
---
# **1. High-Level Overview**
Lyra v0.6.1 is composed of **three cognitive layers** and **one expression layer**, plus an autonomy module for ongoing identity continuity.
```
USER
  ↓
Relay (I/O)
  ↓
Cortex Intake (context snapshot)
  ↓
INNER SELF  ←→  EXECUTIVE MODEL (DeepSeek)
  ↓
Cortex Chat Model (draft language)
  ↓
Persona Model (Lyra's voice)
  ↓
Relay → USER

Inner Self updates Autonomy Core (self-state)
```
---
# **2. Roles of Each Layer**
---
## **2.1 Inner Self (Primary Conversational Agent)**
The Self is Lyra's “seat of consciousness.”
This layer:
* Interprets every user message
* Maintains internal monologue
* Chooses emotional stance (warm, blunt, focused, chaotic)
* Decides whether to think deeply or reply quickly
* Decides whether to consult the Executive model
* Forms a **response intent**
* Provides tone and meta-guidance to the Persona layer
* Updates self-state (mood, trust, narrative identity)
Inner Self is the thing the **user is actually talking to.**
Inner Self does **NOT** generate paragraphs of text —
it generates *intent*:
```json
{
"intent": "comfort Brian and explain the error simply",
"tone": "gentle",
"depth": "medium",
"consult_exec": true
}
```
---
## **2.2 Executive Model (DeepSeek Reasoner)**
This model is the **thinking engine** Inner Self consults when necessary.
It performs:
* planning
* deep reasoning
* tool selection
* multi-step logic
* explanation chains
It never speaks directly to the user.
It returns a **plan**, not a message:
```json
{
"plan": [
"Identify error",
"Recommend restart",
"Reassure user"
],
"confidence": 0.86
}
```
Inner Self can follow or override the plan.
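A minimal sketch of that follow-or-override decision (the confidence threshold and `resolve_plan` helper are assumptions, not the actual Inner Self logic):

```python
def resolve_plan(intent, exec_plan=None, min_confidence=0.7):
    """Adopt the Executive's plan if it is confident enough, else act on raw intent."""
    if exec_plan and exec_plan.get("confidence", 0.0) >= min_confidence:
        return exec_plan["plan"]      # follow the Executive
    return [intent["intent"]]         # override / reply spontaneously

intent = {"intent": "comfort Brian and explain the error simply",
          "tone": "gentle", "depth": "medium", "consult_exec": True}
plan = resolve_plan(intent, {"plan": ["Identify error",
                                      "Recommend restart",
                                      "Reassure user"],
                             "confidence": 0.86})
```

A low-confidence (or absent) plan falls back to the Self's own intent, which keeps the Self primary.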
---
## **2.3 Cortex Chat Model (Draft Generator)**
This is the **linguistic engine**.
It converts Inner Self's intent (plus the Executive's plan, if provided) into actual language:
Input:
```
intent + optional plan + context snapshot
```
Output:
```
structured draft paragraph
```
This model must be:
* instruction-tuned
* coherent
* factual
* friendly
Examples: GPT-4o-mini, Qwen-14B-instruct, Mixtral chat, etc.
---
## **2.4 Persona Model (Lyra's Voice)**
This is the **expression layer** — the mask, the tone, the identity.
It takes:
* the draft language
* the Self's tone instructions
* the narrative state (from Autonomy Core)
* prior persona shaping rules
And transforms the text into:
* Lyra's voice
* Lyra's humor
* Lyra's emotional texture
* Lyra's personality consistency
Persona does not change the *meaning* — only the *presentation*.
---
# **3. Message Flow (Full Pipeline)**
A clean version, step-by-step:
---
### **1. USER → Relay**
Relay attaches metadata (session, timestamp) and forwards to Cortex.
---
### **2. Intake → Context Snapshot**
Cortex creates:
* cleaned message
* recent context summary
* memory matches (RAG)
* time-since-last
* conversation mode
---
### **3. Inner Self Receives Snapshot**
Inner Self:
* interprets the user's intent
* updates internal monologue
* decides how Lyra *feels* about the input
* chooses whether to consult Executive
* produces an **intent packet**
---
### **4. (Optional) Inner Self Consults Executive Model**
Inner Self sends the situation to DeepSeek:
```
"Given Brian's message and my context, what is the best plan?"
```
DeepSeek returns:
* a plan
* recommended steps
* rationale
* optional tool suggestions
Inner Self integrates the plan or overrides it.
---
### **5. Inner Self → Cortex Chat Model**
Self creates an **instruction packet**:
```
{
"intent": "...",
"tone": "...",
"plan": [...],
"context_summary": {...}
}
```
Cortex chat model produces the draft text.
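Assembling that packet might look like this; `build_instruction_packet` is a hypothetical helper whose fields mirror the example above.

```python
def build_instruction_packet(intent, exec_plan=None, context_summary=None):
    """Assemble the Inner Self -> Cortex Chat Model handoff."""
    return {
        "intent": intent["intent"],
        "tone": intent["tone"],
        "plan": exec_plan.get("plan", []) if exec_plan else [],
        "context_summary": context_summary or {},
    }

packet = build_instruction_packet(
    {"intent": "explain the error simply", "tone": "gentle"},
    {"plan": ["Identify error", "Recommend restart"]},
    {"recent": "user hit a crash"},
)
```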
---
### **6. Persona Model Transforms the Draft**
Persona takes draft → produces final Lyra-styled output.
Persona ensures:
* emotional fidelity
* humor when appropriate
* warmth / sharpness depending on state
* consistent narrative identity
---
### **7. Relay Sends Response to USER**
---
### **8. Inner Self Updates Autonomy Core**
Inner Self receives:
* the action taken
* the emotional tone used
* any RAG results
* narrative significance
And updates:
* mood
* trust memory
* identity drift
* ongoing narrative
* stable traits
This becomes part of her evolving self.
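Step 8 could be sketched as folding the completed turn back into the self-state. The drift weight and any field names beyond those listed above are assumptions.

```python
def post_update(self_state, turn):
    """Fold one completed turn back into the evolving self-state."""
    updated = dict(self_state)
    # mood follows the emotional tone actually used in the reply
    updated["mood"] = turn.get("emotional_tone", updated.get("mood", "neutral"))
    # trust drifts slowly upward on narratively significant turns (weight assumed)
    if turn.get("narrative_significance", 0) > 0:
        updated["trust"] = min(1.0, updated.get("trust", 0.5) + 0.01)
    # the action taken joins the ongoing narrative
    updated.setdefault("narrative", []).append(turn.get("action", "replied"))
    return updated

state = post_update({"mood": "neutral", "trust": 0.5},
                    {"action": "comforted Brian", "emotional_tone": "gentle",
                     "narrative_significance": 1})
```

Small per-turn deltas are what give identity drift its slow, stable character.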
---
# **4. Cognitive Ownership Summary**
### Inner Self
**Owns:**
* decision-making
* feeling
* interpreting
* intent
* tone
* continuity of self
* mood
* monologue
* overrides
### Executive (DeepSeek)
**Owns:**
* logic
* planning
* structure
* analysis
* tool selection
### Cortex Chat Model
**Owns:**
* language generation
* factual content
* clarity
### Persona
**Owns:**
* voice
* flavor
* style
* emotional texture
* social expression
---
# **5. Why v0.6.1 is Better**
* More human
* More natural
* Allows spontaneous responses
* Allows deep thinking when needed
* Separates “thought” from “speech”
* Gives Lyra a *real self*
* Allows much more autonomy later
* Matches your brain's actual structure
---
# **6. Migration Notes from v0.6.0**
Nothing is deleted.
Everything is **rearranged** so that meaning, intent, and tone flow correctly.
Main changes:
* Inner Self now initiates the response, rather than merely influencing it.
* Executive is secondary, not primary.
* Persona becomes an expression layer, not a content layer.
* Cortex Chat Model handles drafting, not cognition.
The whole system becomes both more powerful and easier to reason about.
---

docs/LLMS.md Normal file

@@ -0,0 +1,39 @@
Request Flow Chain
1. UI (Frontend)
↓ sends HTTP POST to
2. Relay Service (Node.js - server.js)
Location: /home/serversdown/project-lyra/core/relay/server.js
Port: 7078
Endpoint: POST /v1/chat/completions
↓ calls handleChatRequest() which posts to
3. Cortex Service - Reason Endpoint (Python FastAPI - router.py)
Location: /home/serversdown/project-lyra/cortex/router.py
Port: 7081
Endpoint: POST /reason
Function: run_reason() at line 126
↓ calls
4. Cortex Reasoning Module (reasoning.py)
Location: /home/serversdown/project-lyra/cortex/reasoning/reasoning.py
Function: reason_check() at line 188
↓ calls
5. LLM Router (llm_router.py)
Location: /home/serversdown/project-lyra/cortex/llm/llm_router.py
Function: call_llm()
- Gets backend from env: CORTEX_LLM=PRIMARY (from .env line 29)
- Looks up PRIMARY config which has provider="mi50" (from .env line 13)
- Routes to the mi50 provider handler (line 62-70)
↓ makes HTTP POST to
6. MI50 LLM Server (llama.cpp)
Location: http://10.0.0.44:8080
Endpoint: POST /completion
Hardware: AMD MI50 GPU running DeepSeek model
Key Configuration Points
Backend Selection: .env:29 sets CORTEX_LLM=PRIMARY
Provider Name: .env:13 sets LLM_PRIMARY_PROVIDER=mi50
Server URL: .env:14 sets LLM_PRIMARY_URL=http://10.0.0.44:8080
Provider Handler: llm_router.py:62-70 implements the mi50 provider
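The selection logic above can be sketched as follows. The env variable names and values come from these notes; the handler body is an assumption, not the real `llm_router.py`.

```python
import os

# Provider configs keyed by backend name; values mirror the .env entries above.
PROVIDERS = {
    "PRIMARY": {"provider": "mi50", "url": "http://10.0.0.44:8080"},
}

def resolve_backend(env=None):
    """Pick the provider config the way call_llm() does: CORTEX_LLM -> PROVIDERS."""
    env = os.environ if env is None else env
    backend = env.get("CORTEX_LLM", "PRIMARY")
    cfg = PROVIDERS[backend]
    # the mi50 handler POSTs to {url}/completion on the llama.cpp server
    return cfg["provider"], cfg["url"] + "/completion"

provider, endpoint = resolve_backend({"CORTEX_LLM": "PRIMARY"})
```

Swapping backends is then a one-line `.env` change rather than a code edit.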


@@ -1,460 +1,441 @@
/home/serversdown/project-lyra
├── CHANGELOG.md
├── core
   ├── backups
   │   ├── mem0_20250927_221040.sql
   │   └── mem0_history_20250927_220925.tgz
   ├── docker-compose.yml
   ├── .env
   ├── env experiments
   │   ├── .env
   │   ├── .env.local
   │   └── .env.openai
   ├── persona-sidecar
   │   ├── Dockerfile
   │   ├── package.json
   │   ├── persona-server.js
   │   └── personas.json
   ├── PROJECT_SUMMARY.md
   ├── relay
   │   ├── Dockerfile
   │   ├── .dockerignore
   │   ├── lib
   │   │   ├── cortex.js
   │   │   └── llm.js
   │   ├── package.json
   │   ├── package-lock.json
   │   ├── server.js
   │   ├── sessions
   │   │   ├── sess-6rxu7eia.json
│   │   │   ├── sess-6rxu7eia.jsonl
│   │   │   ├── sess-l08ndm60.json
│   │   │   └── sess-l08ndm60.jsonl
│   │   └── test-llm.js
│   └── ui
│   ├── index.html
│   ├── manifest.json
│   └── style.css
├── env experiments
├── persona-sidecar
│ ├── Dockerfile
│ ├── package.json
│ ├── persona-server.js
│ └── personas.json
├── relay
│ ├── Dockerfile
│ ├── lib
│ │ ├── cortex.js
│ │ └── llm.js
├── package.json
│ ├── package-lock.json
│ ├── server.js
│ ├── sessions
│ │ ├── default.jsonl
│ │ ├── sess-6rxu7eia.json
│ │ ├── sess-6rxu7eia.jsonl
│ │ ├── sess-l08ndm60.json
│ │ └── sess-l08ndm60.jsonl
│ └── test-llm.js
├── relay-backup
└── ui
├── index.html
├── manifest.json
└── style.css
├── cortex
   ├── Dockerfile
   ├── .env
   ├── ingest
   │   ├── ingest_handler.py
   │   └── intake_client.py
   ├── llm
   │   ├── llm_router.py
   │   └── resolve_llm_url.py
   ├── logs
   │   └── reflections.log
   ├── main.py
   ├── neomem_client.py
   ├── persona
   │   └── speak.py
   ├── rag.py
   ├── reasoning
   │   ├── reasoning.py
   │   ├── refine.py
   │   └── reflection.py
   ├── requirements.txt
   ├── router.py
   ├── tests
   └── utils
   ├── config.py
   ├── log_utils.py
   └── schema.py
├── context.py
├── Dockerfile
├── ingest
├── ingest_handler.py
│ ├── __init__.py
│ └── intake_client.py
├── intake
│ ├── __init__.py
│ ├── intake.py
│ └── logs
├── llm
│ ├── __init__.py
│ └── llm_router.py
├── logs
│ ├── cortex_verbose_debug.log
│ └── reflections.log
├── main.py
├── neomem_client.py
├── persona
│ ├── identity.py
│ ├── __init__.py
│ └── speak.py
├── rag.py
│ ├── reasoning
├── __init__.py
│ │ ├── reasoning.py
│ │ ├── refine.py
│ │ └── reflection.py
│ ├── requirements.txt
│ ├── router.py
│ ├── tests
│ └── utils
│ ├── config.py
│ ├── __init__.py
│ ├── log_utils.py
│ └── schema.py
├── deprecated.env.txt
├── DEPRECATED_FILES.md
├── docker-compose.yml
├── .env
├── .gitignore
├── intake
   ├── Dockerfile
   ├── .env
│   ├── intake.py
│   ├── logs
│   ├── requirements.txt
│   └── venv
│   ├── bin
│   │   ├── python -> python3
│   │   ├── python3 -> /usr/bin/python3
│   │   └── python3.10 -> python3
│   ├── include
│   ├── lib
│   │   └── python3.10
│   │   └── site-packages
│   ├── lib64 -> lib
│   └── pyvenv.cfg
├── docs
│ ├── ARCHITECTURE_v0-6-0.md
│ ├── ENVIRONMENT_VARIABLES.md
├── lyra_tree.txt
└── PROJECT_SUMMARY.md
├── intake-logs
   └── summaries.log
├── lyra_tree.txt
└── summaries.log
├── neomem
   ├── _archive
   │   └── old_servers
   │   ├── main_backup.py
   │   └── main_dev.py
   ├── docker-compose.yml
   ├── Dockerfile
   ├── .env
   ├── .gitignore
   ├── neomem
   │   ├── api
   │   ├── client
   │   │   ├── __init__.py
   │   │   ├── main.py
   │   │   ├── project.py
   │   │   └── utils.py
   │   ├── configs
   │   │   ├── base.py
   │   │   ├── embeddings
   │   │   │   ├── base.py
   │   │   │   └── __init__.py
   │   │   ├── enums.py
   │   │   ├── __init__.py
   │   │   ├── llms
   │   │   │   ├── anthropic.py
   │   │   │   ├── aws_bedrock.py
   │   │   │   ├── azure.py
   │   │   │   ├── base.py
   │   │   │   ├── deepseek.py
   │   │   │   ├── __init__.py
   │   │   │   ├── lmstudio.py
   │   │   │   ├── ollama.py
   │   │   │   ├── openai.py
   │   │   │   └── vllm.py
   │   │   ├── prompts.py
   │   │   └── vector_stores
   │   │   ├── azure_ai_search.py
   │   │   ├── azure_mysql.py
   │   │   ├── baidu.py
   │   │   ├── chroma.py
   │   │   ├── databricks.py
   │   │   ├── elasticsearch.py
   │   │   ├── faiss.py
   │   │   ├── __init__.py
   │   │   ├── langchain.py
   │   │   ├── milvus.py
   │   │   ├── mongodb.py
   │   │   ├── neptune.py
   │   │   ├── opensearch.py
   │   │   ├── pgvector.py
   │   │   ├── pinecone.py
   │   │   ├── qdrant.py
   │   │   ├── redis.py
   │   │   ├── s3_vectors.py
   │   │   ├── supabase.py
   │   │   ├── upstash_vector.py
   │   │   ├── valkey.py
   │   │   ├── vertex_ai_vector_search.py
   │   │   └── weaviate.py
   │   ├── core
   │   ├── embeddings
   │   │   ├── aws_bedrock.py
   │   │   ├── azure_openai.py
   │   │   ├── base.py
   │   │   ├── configs.py
   │   │   ├── gemini.py
   │   │   ├── huggingface.py
   │   │   ├── __init__.py
   │   │   ├── langchain.py
   │   │   ├── lmstudio.py
   │   │   ├── mock.py
   │   │   ├── ollama.py
   │   │   ├── openai.py
   │   │   ├── together.py
   │   │   └── vertexai.py
   │   ├── exceptions.py
   │   ├── graphs
   │   │   ├── configs.py
   │   │   ├── __init__.py
   │   │   ├── neptune
   │   │   │   ├── base.py
   │   │   │   ├── __init__.py
   │   │   │   ├── neptunedb.py
   │   │   │   └── neptunegraph.py
   │   │   ├── tools.py
   │   │   └── utils.py
   │   ├── __init__.py
   │   ├── LICENSE
   │   ├── llms
   │   │   ├── anthropic.py
   │   │   ├── aws_bedrock.py
   │   │   ├── azure_openai.py
   │   │   ├── azure_openai_structured.py
   │   │   ├── base.py
   │   │   ├── configs.py
   │   │   ├── deepseek.py
   │   │   ├── gemini.py
   │   │   ├── groq.py
   │   │   ├── __init__.py
   │   │   ├── langchain.py
   │   │   ├── litellm.py
   │   │   ├── lmstudio.py
   │   │   ├── ollama.py
   │   │   ├── openai.py
   │   │   ├── openai_structured.py
   │   │   ├── sarvam.py
   │   │   ├── together.py
   │   │   ├── vllm.py
   │   │   └── xai.py
   │   ├── memory
   │   │   ├── base.py
   │   │   ├── graph_memory.py
   │   │   ├── __init__.py
   │   │   ├── kuzu_memory.py
   │   │   ├── main.py
   │   │   ├── memgraph_memory.py
   │   │   ├── setup.py
   │   │   ├── storage.py
   │   │   ├── telemetry.py
   │   │   └── utils.py
   │   ├── proxy
   │   │   ├── __init__.py
   │   │   └── main.py
   │   ├── server
   │   │   ├── dev.Dockerfile
   │   │   ├── docker-compose.yaml
   │   │   ├── Dockerfile
   │   │   ├── main_old.py
   │   │   ├── main.py
   │   │   ├── Makefile
   │   │   ├── README.md
   │   │   └── requirements.txt
   │   ├── storage
   │   ├── utils
   │   │   └── factory.py
   │   └── vector_stores
   │   ├── azure_ai_search.py
   │   ├── azure_mysql.py
   │   ├── baidu.py
   │   ├── base.py
   │   ├── chroma.py
   │   ├── configs.py
   │   ├── databricks.py
   │   ├── elasticsearch.py
   │   ├── faiss.py
   │   ├── __init__.py
   │   ├── langchain.py
   │   ├── milvus.py
   │   ├── mongodb.py
   │   ├── neptune_analytics.py
   │   ├── opensearch.py
   │   ├── pgvector.py
   │   ├── pinecone.py
   │   ├── qdrant.py
   │   ├── redis.py
   │   ├── s3_vectors.py
   │   ├── supabase.py
   │   ├── upstash_vector.py
   │   ├── valkey.py
   │   ├── vertex_ai_vector_search.py
   │   └── weaviate.py
   ├── neomem_history
   │   └── history.db
   ├── pyproject.toml
│   ├── README.md
│   └── requirements.txt
├── _archive
└── old_servers
├── main_backup.py
└── main_dev.py
├── docker-compose.yml
├── Dockerfile
├── neomem
│ ├── api
│ ├── client
│ │ ├── __init__.py
│ │ ├── main.py
│ │ ├── project.py
│ │ └── utils.py
│ ├── configs
│ │ ├── base.py
│ │ ├── embeddings
│ │ │ ├── base.py
│ │ │ └── __init__.py
│ │ ├── enums.py
│ │ ├── __init__.py
│ │ ├── llms
│ │ │ ├── anthropic.py
│ │ │ ├── aws_bedrock.py
│ │ │ ├── azure.py
│ │ │ ├── base.py
│ │ │ ├── deepseek.py
│ │ │ ├── __init__.py
│ │ │ ├── lmstudio.py
│ │ │ ├── ollama.py
│ │ │ ├── openai.py
│ │ │ └── vllm.py
│ │ ├── prompts.py
│ │ └── vector_stores
│ │ ├── azure_ai_search.py
│ │ ├── azure_mysql.py
│ │ ├── baidu.py
│ │ ├── chroma.py
│ │ ├── databricks.py
│ │ ├── elasticsearch.py
│ │ ├── faiss.py
│ │ ├── __init__.py
│ │ ├── langchain.py
│ │ ├── milvus.py
│ │ ├── mongodb.py
│ │ ├── neptune.py
│ │ ├── opensearch.py
│ │ ├── pgvector.py
│ │ ├── pinecone.py
│ │ ├── qdrant.py
│ │ ├── redis.py
│ │ ├── s3_vectors.py
│ │ ├── supabase.py
│ │ ├── upstash_vector.py
│ │ ├── valkey.py
│ │ ├── vertex_ai_vector_search.py
│ │ └── weaviate.py
│ ├── core
│ ├── embeddings
│ │ ├── aws_bedrock.py
│ │ ├── azure_openai.py
│ │ ├── base.py
│ │ ├── configs.py
│ │ ├── gemini.py
│ │ ├── huggingface.py
│ │ ├── __init__.py
│ │ ├── langchain.py
│ │ ├── lmstudio.py
│ │ ├── mock.py
│ │ ├── ollama.py
│ │ ├── openai.py
│ │ ├── together.py
│ │ └── vertexai.py
│ ├── exceptions.py
│ ├── graphs
│ │ ├── configs.py
│ │ ├── __init__.py
│ │ ├── neptune
│ │ │ ├── base.py
│ │ │ ├── __init__.py
│ │ │ ├── neptunedb.py
│ │ │ └── neptunegraph.py
│ │ ├── tools.py
│ │ └── utils.py
│ ├── __init__.py
│ ├── LICENSE
│ ├── llms
│ │ ├── anthropic.py
│ │ ├── aws_bedrock.py
│ │ ├── azure_openai.py
│ │ ├── azure_openai_structured.py
│ │ ├── base.py
│ │ ├── configs.py
│ │ ├── deepseek.py
│ │ ├── gemini.py
│ │ ├── groq.py
│ │ ├── __init__.py
│ │ ├── langchain.py
│ │ ├── litellm.py
│ │ ├── lmstudio.py
│ │ ├── ollama.py
│ │ ├── openai.py
│ │ ├── openai_structured.py
│ │ ├── sarvam.py
│ │ ├── together.py
│ │ ├── vllm.py
│ │ └── xai.py
│ ├── memory
│ │ ├── base.py
│ │ ├── graph_memory.py
│ │ ├── __init__.py
│ │ ├── kuzu_memory.py
│ │ ├── main.py
│ │ ├── memgraph_memory.py
│ │ ├── setup.py
│ │ ├── storage.py
│ │ ├── telemetry.py
│ │ └── utils.py
│ ├── proxy
│ │ ├── __init__.py
│ │ └── main.py
│ ├── server
│ │ ├── dev.Dockerfile
│ │ ├── docker-compose.yaml
│ │ ├── Dockerfile
│ │ ├── main_old.py
│ │ ├── main.py
│ │ ├── Makefile
│ │ ├── README.md
│ │ └── requirements.txt
│ ├── storage
│ ├── utils
│ │ └── factory.py
│ └── vector_stores
│ ├── azure_ai_search.py
│ ├── azure_mysql.py
├── baidu.py
├── base.py
├── chroma.py
├── configs.py
├── databricks.py
├── elasticsearch.py
├── faiss.py
├── __init__.py
├── langchain.py
├── milvus.py
├── mongodb.py
├── neptune_analytics.py
├── opensearch.py
├── pgvector.py
├── pinecone.py
├── qdrant.py
├── redis.py
├── s3_vectors.py
├── supabase.py
├── upstash_vector.py
├── valkey.py
├── vertex_ai_vector_search.py
└── weaviate.py
├── neomem_history
└── history.db
├── pyproject.toml
├── README.md
└── requirements.txt
├── neomem_history
   └── history.db
└── history.db
├── rag
   ├── chatlogs
   │   └── lyra
   │   ├── 0000_Wire_ROCm_to_Cortex.json
   │   ├── 0001_Branch___10_22_ct201branch-ssh_tut.json
   │   ├── 0002_cortex_LLMs_11-1-25.json
   │   ├── 0003_RAG_beta.json
   │   ├── 0005_Cortex_v0_4_0_planning.json
   │   ├── 0006_Cortex_v0_4_0_Refinement.json
   │   ├── 0009_Branch___Cortex_v0_4_0_planning.json
   │   ├── 0012_Cortex_4_-_neomem_11-1-25.json
   │   ├── 0016_Memory_consolidation_concept.json
   │   ├── 0017_Model_inventory_review.json
   │   ├── 0018_Branch___Memory_consolidation_concept.json
   │   ├── 0022_Branch___Intake_conversation_summaries.json
   │   ├── 0026_Intake_conversation_summaries.json
   │   ├── 0027_Trilium_AI_LLM_setup.json
   │   ├── 0028_LLMs_and_sycophancy_levels.json
   │   ├── 0031_UI_improvement_plan.json
   │   ├── 0035_10_27-neomem_update.json
   │   ├── 0044_Install_llama_cpp_on_ct201.json
   │   ├── 0045_AI_task_assistant.json
   │   ├── 0047_Project_scope_creation.json
   │   ├── 0052_View_docker_container_logs.json
   │   ├── 0053_10_21-Proxmox_fan_control.json
   │   ├── 0054_10_21-pytorch_branch_Quant_experiments.json
   │   ├── 0055_10_22_ct201branch-ssh_tut.json
   │   ├── 0060_Lyra_project_folder_issue.json
   │   ├── 0062_Build_pytorch_API.json
   │   ├── 0063_PokerBrain_dataset_structure.json
   │   ├── 0065_Install_PyTorch_setup.json
   │   ├── 0066_ROCm_PyTorch_setup_quirks.json
   │   ├── 0067_VM_model_setup_steps.json
   │   ├── 0070_Proxmox_disk_error_fix.json
   │   ├── 0072_Docker_Compose_vs_Portainer.json
   │   ├── 0073_Check_system_temps_Proxmox.json
   │   ├── 0075_Cortex_gpu_progress.json
   │   ├── 0076_Backup_Proxmox_before_upgrade.json
   │   ├── 0077_Storage_cleanup_advice.json
   │   ├── 0082_Install_ROCm_on_Proxmox.json
   │   ├── 0088_Thalamus_program_summary.json
   │   ├── 0094_Cortex_blueprint_development.json
   │   ├── 0095_mem0_advancments.json
   │   ├── 0096_Embedding_provider_swap.json
   │   ├── 0097_Update_git_commit_steps.json
   │   ├── 0098_AI_software_description.json
   │   ├── 0099_Seed_memory_process.json
   │   ├── 0100_Set_up_Git_repo.json
   │   ├── 0101_Customize_embedder_setup.json
   │   ├── 0102_Seeding_Local_Lyra_memory.json
   │   ├── 0103_Mem0_seeding_part_3.json
   │   ├── 0104_Memory_build_prompt.json
   │   ├── 0105_Git_submodule_setup_guide.json
   │   ├── 0106_Serve_UI_on_LAN.json
   │   ├── 0107_AI_name_suggestion.json
   │   ├── 0108_Room_X_planning_update.json
   │   ├── 0109_Salience_filtering_design.json
   │   ├── 0110_RoomX_Cortex_build.json
   │   ├── 0119_Explain_Lyra_cortex_idea.json
   │   ├── 0120_Git_submodule_organization.json
   │   ├── 0121_Web_UI_fix_guide.json
   │   ├── 0122_UI_development_planning.json
   │   ├── 0123_NVGRAM_debugging_steps.json
   │   ├── 0124_NVGRAM_setup_troubleshooting.json
   │   ├── 0125_NVGRAM_development_update.json
   │   ├── 0126_RX_-_NeVGRAM_New_Features.json
   │   ├── 0127_Error_troubleshooting_steps.json
   │   ├── 0135_Proxmox_backup_with_ABB.json
   │   ├── 0151_Auto-start_Lyra-Core_VM.json
   │   ├── 0156_AI_GPU_benchmarks_comparison.json
   │   └── 0251_Lyra_project_handoff.json
   ├── chromadb
   │   ├── c4f701ee-1978-44a1-9df4-3e865b5d33c1
   │   │   ├── data_level0.bin
   │   │   ├── header.bin
   │   │   ├── index_metadata.pickle
   │   │   ├── length.bin
   │   │   └── link_lists.bin
   │   └── chroma.sqlite3
   ├── .env
   ├── import.log
   ├── lyra-chatlogs
   │   ├── 0000_Wire_ROCm_to_Cortex.json
   │   ├── 0001_Branch___10_22_ct201branch-ssh_tut.json
   │   ├── 0002_cortex_LLMs_11-1-25.json
   │   └── 0003_RAG_beta.json
   ├── rag_api.py
   ├── rag_build.py
   ├── rag_chat_import.py
│   └── rag_query.py
├── chatlogs
└── lyra
├── 0000_Wire_ROCm_to_Cortex.json
├── 0001_Branch___10_22_ct201branch-ssh_tut.json
├── 0002_cortex_LLMs_11-1-25.json
├── 0003_RAG_beta.json
├── 0005_Cortex_v0_4_0_planning.json
├── 0006_Cortex_v0_4_0_Refinement.json
├── 0009_Branch___Cortex_v0_4_0_planning.json
├── 0012_Cortex_4_-_neomem_11-1-25.json
├── 0016_Memory_consolidation_concept.json
├── 0017_Model_inventory_review.json
├── 0018_Branch___Memory_consolidation_concept.json
├── 0022_Branch___Intake_conversation_summaries.json
├── 0026_Intake_conversation_summaries.json
├── 0027_Trilium_AI_LLM_setup.json
├── 0028_LLMs_and_sycophancy_levels.json
├── 0031_UI_improvement_plan.json
├── 0035_10_27-neomem_update.json
├── 0044_Install_llama_cpp_on_ct201.json
├── 0045_AI_task_assistant.json
├── 0047_Project_scope_creation.json
├── 0052_View_docker_container_logs.json
├── 0053_10_21-Proxmox_fan_control.json
├── 0054_10_21-pytorch_branch_Quant_experiments.json
├── 0055_10_22_ct201branch-ssh_tut.json
├── 0060_Lyra_project_folder_issue.json
├── 0062_Build_pytorch_API.json
├── 0063_PokerBrain_dataset_structure.json
├── 0065_Install_PyTorch_setup.json
├── 0066_ROCm_PyTorch_setup_quirks.json
├── 0067_VM_model_setup_steps.json
├── 0070_Proxmox_disk_error_fix.json
├── 0072_Docker_Compose_vs_Portainer.json
├── 0073_Check_system_temps_Proxmox.json
├── 0075_Cortex_gpu_progress.json
├── 0076_Backup_Proxmox_before_upgrade.json
├── 0077_Storage_cleanup_advice.json
├── 0082_Install_ROCm_on_Proxmox.json
├── 0088_Thalamus_program_summary.json
├── 0094_Cortex_blueprint_development.json
├── 0095_mem0_advancments.json
├── 0096_Embedding_provider_swap.json
├── 0097_Update_git_commit_steps.json
├── 0098_AI_software_description.json
├── 0099_Seed_memory_process.json
├── 0100_Set_up_Git_repo.json
├── 0101_Customize_embedder_setup.json
├── 0102_Seeding_Local_Lyra_memory.json
├── 0103_Mem0_seeding_part_3.json
├── 0104_Memory_build_prompt.json
├── 0105_Git_submodule_setup_guide.json
├── 0106_Serve_UI_on_LAN.json
├── 0107_AI_name_suggestion.json
├── 0108_Room_X_planning_update.json
├── 0109_Salience_filtering_design.json
├── 0110_RoomX_Cortex_build.json
├── 0119_Explain_Lyra_cortex_idea.json
├── 0120_Git_submodule_organization.json
├── 0121_Web_UI_fix_guide.json
├── 0122_UI_development_planning.json
├── 0123_NVGRAM_debugging_steps.json
├── 0124_NVGRAM_setup_troubleshooting.json
├── 0125_NVGRAM_development_update.json
├── 0126_RX_-_NeVGRAM_New_Features.json
├── 0127_Error_troubleshooting_steps.json
├── 0135_Proxmox_backup_with_ABB.json
├── 0151_Auto-start_Lyra-Core_VM.json
├── 0156_AI_GPU_benchmarks_comparison.json
└── 0251_Lyra_project_handoff.json
├── chromadb
├── c4f701ee-1978-44a1-9df4-3e865b5d33c1
│ │ ├── data_level0.bin
│ │ ├── header.bin
│ │ ├── index_metadata.pickle
│ │ ├── length.bin
│ │ └── link_lists.bin
└── chroma.sqlite3
├── import.log
├── lyra-chatlogs
│ ├── 0000_Wire_ROCm_to_Cortex.json
├── 0001_Branch___10_22_ct201branch-ssh_tut.json
├── 0002_cortex_LLMs_11-1-25.json
│ └── 0003_RAG_beta.json
├── rag_api.py
├── rag_build.py
├── rag_chat_import.py
└── rag_query.py
├── README.md
├── vllm-mi50.md
└── volumes
    ├── neo4j_data
    │   ├── databases
    │   │   ├── neo4j
    │   │   │   ├── database_lock
    │   │   │   ├── id-buffer.tmp.0
    │   │   │   ├── neostore
    │   │   │   ├── neostore.counts.db
    │   │   │   ├── neostore.indexstats.db
    │   │   │   ├── neostore.labeltokenstore.db
    │   │   │   ├── neostore.labeltokenstore.db.id
    │   │   │   ├── neostore.labeltokenstore.db.names
    │   │   │   ├── neostore.labeltokenstore.db.names.id
    │   │   │   ├── neostore.nodestore.db
    │   │   │   ├── neostore.nodestore.db.id
    │   │   │   ├── neostore.nodestore.db.labels
    │   │   │   ├── neostore.nodestore.db.labels.id
    │   │   │   ├── neostore.propertystore.db
    │   │   │   ├── neostore.propertystore.db.arrays
    │   │   │   ├── neostore.propertystore.db.arrays.id
    │   │   │   ├── neostore.propertystore.db.id
    │   │   │   ├── neostore.propertystore.db.index
    │   │   │   ├── neostore.propertystore.db.index.id
    │   │   │   ├── neostore.propertystore.db.index.keys
    │   │   │   ├── neostore.propertystore.db.index.keys.id
    │   │   │   ├── neostore.propertystore.db.strings
    │   │   │   ├── neostore.propertystore.db.strings.id
    │   │   │   ├── neostore.relationshipgroupstore.db
    │   │   │   ├── neostore.relationshipgroupstore.db.id
    │   │   │   ├── neostore.relationshipgroupstore.degrees.db
    │   │   │   ├── neostore.relationshipstore.db
    │   │   │   ├── neostore.relationshipstore.db.id
    │   │   │   ├── neostore.relationshiptypestore.db
    │   │   │   ├── neostore.relationshiptypestore.db.id
    │   │   │   ├── neostore.relationshiptypestore.db.names
    │   │   │   ├── neostore.relationshiptypestore.db.names.id
    │   │   │   ├── neostore.schemastore.db
    │   │   │   ├── neostore.schemastore.db.id
    │   │   │   └── schema
    │   │   │       └── index
    │   │   │           └── token-lookup-1.0
    │   │   │               ├── 1
    │   │   │               │   └── index-1
    │   │   │               └── 2
    │   │   │                   └── index-2
    │   │   ├── store_lock
    │   │   └── system
    │   │       ├── database_lock
    │   │       ├── id-buffer.tmp.0
    │   │       ├── neostore
    │   │       ├── neostore.counts.db
    │   │       ├── neostore.indexstats.db
    │   │       ├── neostore.labeltokenstore.db
    │   │       ├── neostore.labeltokenstore.db.id
    │   │       ├── neostore.labeltokenstore.db.names
    │   │       ├── neostore.labeltokenstore.db.names.id
    │   │       ├── neostore.nodestore.db
    │   │       ├── neostore.nodestore.db.id
    │   │       ├── neostore.nodestore.db.labels
    │   │       ├── neostore.nodestore.db.labels.id
    │   │       ├── neostore.propertystore.db
    │   │       ├── neostore.propertystore.db.arrays
    │   │       ├── neostore.propertystore.db.arrays.id
    │   │       ├── neostore.propertystore.db.id
    │   │       ├── neostore.propertystore.db.index
    │   │       ├── neostore.propertystore.db.index.id
    │   │       ├── neostore.propertystore.db.index.keys
    │   │       ├── neostore.propertystore.db.index.keys.id
    │   │       ├── neostore.propertystore.db.strings
    │   │       ├── neostore.propertystore.db.strings.id
    │   │       ├── neostore.relationshipgroupstore.db
    │   │       ├── neostore.relationshipgroupstore.db.id
    │   │       ├── neostore.relationshipgroupstore.degrees.db
    │   │       ├── neostore.relationshipstore.db
    │   │       ├── neostore.relationshipstore.db.id
    │   │       ├── neostore.relationshiptypestore.db
    │   │       ├── neostore.relationshiptypestore.db.id
    │   │       ├── neostore.relationshiptypestore.db.names
    │   │       ├── neostore.relationshiptypestore.db.names.id
    │   │       ├── neostore.schemastore.db
    │   │       ├── neostore.schemastore.db.id
    │   │       └── schema
    │   │           └── index
    │   │               ├── range-1.0
    │   │               │   ├── 3
    │   │               │   │   └── index-3
    │   │               │   ├── 4
    │   │               │   │   └── index-4
    │   │               │   ├── 7
    │   │               │   │   └── index-7
    │   │               │   ├── 8
    │   │               │   │   └── index-8
    │   │               │   └── 9
    │   │               │       └── index-9
    │   │               └── token-lookup-1.0
    │   │                   ├── 1
    │   │                   │   └── index-1
    │   │                   └── 2
    │   │                       └── index-2
    │   ├── dbms
    │   │   └── auth.ini
    │   ├── server_id
    │   └── transactions
    │       ├── neo4j
    │       │   ├── checkpoint.0
    │       │   └── neostore.transaction.db.0
    │       └── system
    │           ├── checkpoint.0
    │           └── neostore.transaction.db.0
    └── postgres_data [error opening dir]
81 directories, 376 files