v0.6.1 - reinstated UI, relay > cortex pipeline working

serversdwn
2025-12-11 16:28:25 -05:00
parent 30f6c1a3da
commit 6a20d3981f
9 changed files with 1143 additions and 456 deletions


@@ -0,0 +1,280 @@
`docs/ARCHITECTURE_v0.6.0.md`
This reflects **everything we clarified**, expressed cleanly and updated to the new 3-brain design.
---
# **Cortex v0.6.0 — Cognitive Architecture Overview**
*Last updated: Dec 2025*
## **Summary**
Cortex v0.6.0 evolves from a linear “reflection → reasoning → refine → persona” pipeline into a **three-layer cognitive system** modeled after human cognition:
1. **Autonomy Core** — Lyra's self-model (identity, mood, long-term goals)
2. **Inner Monologue** — Lyra's private narrator (self-talk + internal reflection)
3. **Executive Agent (DeepSeek)** — Lyra's task-oriented decision-maker
Cortex itself now becomes the **central orchestrator**, not the whole mind. It routes user messages through these layers and produces the final outward response via the persona system.
---
# **Chain concept**
User → Relay → Cortex intake → Inner Self → Cortex → Exec (DeepSeek) → Cortex → Persona → Relay → User (and Inner Self)
```
USER
  │
RELAY
(sessions, logging, routing)
  │
┌────────────────────────────────────────────────┐
│                     CORTEX                     │
│  Intake → Reflection → Exec → Reason → Refine  │
└───────────────────────┬────────────────────────┘
                        │ self_state
INNER SELF (monologue)
  │
AUTONOMY CORE
(long-term identity)
  │
Persona Layer (speak)
  │
RELAY
  │
USER
```
# **High-level Architecture**
```
Autonomy Core (Self-Model)
┌────────────────────────────────────────────┐
│ mood, identity, goals, emotional state     │
│ updated outside Cortex by Inner Monologue  │
└─────────────────────┬──────────────────────┘
Inner Monologue (Self-Talk Loop)
┌────────────────────────────────────────────┐
│ Interprets events in language              │
│ Updates Autonomy Core                      │
│ Sends state signals INTO Cortex            │
└─────────────────────┬──────────────────────┘
Cortex (Task Brain / Router)
┌────────────────────────────────────────────────────────┐
│ Intake → Reflection → Exec Agent → Reason → Refinement │
│     ↑                                     │            │
│     │                                     ▼            │
│ Receives state from                 Persona Output     │
│ inner self                          (Lyra's voice)     │
└────────────────────────────────────────────────────────┘
```
The **user interacts only with the Persona layer**.
Inner Monologue and Autonomy Core never speak directly to the user.
---
# **Component Breakdown**
## **1. Autonomy Core (Self-Model)**
*Not inside Cortex.*
A persistent JSON/state machine representing Lyra's ongoing inner life:
* `mood`
* `focus_mode`
* `confidence`
* `identity_traits`
* `relationship_memory`
* `long_term_goals`
* `emotional_baseline`
The Autonomy Core:
* Is updated by Inner Monologue
* Exposes its state to Cortex via a simple `get_state()` API
* Never speaks to the user directly
* Does not run LLMs itself
It is the **structure** of self, not the thoughts.
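
As a rough sketch, the Autonomy Core could be little more than a small persistent store holding the fields above and exposing `get_state()`. The class name, file path, and `apply_update()` helper below are illustrative assumptions, not the shipped implementation:
```python
# autonomy_core.py - illustrative sketch; path and apply_update() are assumptions
import json
from pathlib import Path

STATE_FILE = Path("state/autonomy_core.json")  # assumed location

DEFAULT_STATE = {
    "mood": "neutral",
    "focus_mode": "open",
    "confidence": 0.5,
    "identity_traits": [],
    "relationship_memory": {},
    "long_term_goals": [],
    "emotional_baseline": "calm",
}

class AutonomyCore:
    """Persistent self-model: holds structure, never runs an LLM, never speaks."""

    def __init__(self, path: Path = STATE_FILE):
        self.path = path
        if path.exists():
            # Merge saved state over defaults so newly added fields get sane values.
            self.state = {**DEFAULT_STATE, **json.loads(path.read_text())}
        else:
            self.state = dict(DEFAULT_STATE)

    def get_state(self) -> dict:
        # Read-only snapshot exposed to Cortex.
        return dict(self.state)

    def apply_update(self, delta: dict) -> None:
        # Called only by the Inner Monologue.
        self.state.update(delta)
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.state, indent=2))
```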
---
## **2. Inner Monologue (Narrating, Private Mind)**
*New subsystem in v0.6.0.*
This module:
* Reads Cortex summaries (intake, reflection, persona output)
* Generates private self-talk (using an LLM, typically DeepSeek)
* Updates the Autonomy Core
* Produces a **self-state packet** for Cortex to use during task execution
Inner Monologue is like:
> “Brian is asking about X.
> I should shift into a focused, serious tone.
> I feel confident about this area.”
It **never** outputs directly to the user.
### Output schema (example):
```json
{
  "mood": "focused",
  "persona_bias": "clear",
  "confidence_delta": 0.05,
  "stance": "analytical",
  "notes_to_cortex": [
    "Reduce playfulness",
    "Prioritize clarity",
    "Recall project memory"
  ]
}
```
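
To make the flow concrete, here is a minimal sketch of how the Inner Monologue might produce that packet and fold it into the Autonomy Core (reusing the assumed `apply_update()` helper above). The prompt wording, the `llm_complete` stand-in for the DeepSeek call, and the update rule are all assumptions:
```python
# inner_monologue.py - illustrative sketch; llm_complete() stands in for the DeepSeek call
import json

SELF_TALK_PROMPT = """You are Lyra's private inner voice.
Read the reflection summary below, think briefly, then emit ONLY a JSON object
with keys: mood, persona_bias, confidence_delta, stance, notes_to_cortex.

Reflection summary:
{summary}
"""

def run_inner_monologue(reflection_summary: str, core, llm_complete) -> dict:
    """Generate private self-talk, fold it into the Autonomy Core, return self_state."""
    raw = llm_complete(SELF_TALK_PROMPT.format(summary=reflection_summary))
    packet = json.loads(raw)  # expects the schema shown above

    # Fold the packet into long-lived state (assumed update rule).
    state = core.get_state()
    core.apply_update({
        "mood": packet.get("mood", state["mood"]),
        "confidence": round(state["confidence"] + packet.get("confidence_delta", 0.0), 3),
    })
    return packet  # handed to Cortex as the self_state packet
```
The returned packet is what Cortex receives as `self_state` in step 2 of the pipeline below.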
---
## **3. Executive Agent (DeepSeek Director Mode)**
*Inside Cortex.*
This is Lyra's **prefrontal cortex** — the task-oriented planner that decides how to respond to the current user message.
Input to Executive Agent:
* User message
* Intake summary
* Reflection notes
* **Self-state packet** from Inner Monologue
It outputs a **plan**, not a final answer:
```json
{
  "action": "WRITE_NOTE",
  "tools": ["memory_search"],
  "tone": "focused",
  "steps": [
    "Search relevant project notes",
    "Synthesize into summary",
    "Draft actionable update"
  ]
}
```
Cortex then executes this plan.
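
A sketch of what "executes this plan" could mean in code, assuming a simple tool registry keyed by the names in `plan["tools"]`; the registry contents and the synthesis stub are placeholders, not the real Reasoning stage:
```python
# cortex/executor.py - illustrative sketch of plan execution
from typing import Callable

# Assumed tool registry; only memory_search appears in the example plan above.
TOOLS: dict[str, Callable[[str], str]] = {
    "memory_search": lambda query: f"(memory results for {query!r})",
}

def execute_plan(plan: dict, user_message: str) -> str:
    """Walk the Executive Agent's plan: run its tools, then draft along its steps."""
    gathered = []
    for tool_name in plan.get("tools", []):
        tool = TOOLS.get(tool_name)
        if tool is None:
            continue  # unknown tool: skip rather than fail the whole turn
        gathered.append(tool(user_message))

    # Placeholder synthesis: a real implementation would reason over `gathered`
    # while following plan["steps"] and plan["tone"].
    return "\n".join(gathered + plan.get("steps", []))
```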
---
# **Cortex Pipeline (v0.6.0)**
Cortex becomes the orchestrator for the entire sequence:
### **0. Intake**
Parse the user message, extract relevant features.
### **1. Reflection**
Lightweight summarization (unchanged).
Output used by both Inner Monologue and Executive Agent.
### **2. Inner Monologue Update (parallel)**
Reflection summary is sent to Inner Self, which:
* updates Autonomy Core
* returns `self_state` to Cortex
### **3. Executive Agent (DeepSeek)**
Given:
* user message
* reflection summary
* autonomy self_state
→ produce a **task plan**
### **4. Reasoning**
Carries out the plan:
* tool calls
* retrieval
* synthesis
### **5. Refinement**
Polish the draft, ensure quality, follow constraints.
### **6. Persona (speak.py)**
Final transformation into Lyra's voice.
Persona now uses:
* self_state (mood, tone)
* constraints from Executive Agent
### **7. User Response**
Persona output is delivered to the user.
### **8. Inner Monologue Post-Update**
Cortex sends the final answer BACK to inner self for:
* narrative continuity
* emotional adjustment
* identity update
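
Putting the whole turn together, a minimal orchestration sketch might look like the following. Every callable here is an assumed signature named after the stages above, and step 2 is shown sequentially even though it can run in parallel:
```python
# cortex/pipeline.py - illustrative end-to-end sketch of the v0.6.0 turn
def handle_turn(user_message, intake, reflect, inner_self, exec_agent,
                reason, refine, persona, core):
    features = intake(user_message)                        # 0. Intake
    summary = reflect(user_message, features)              # 1. Reflection
    self_state = inner_self(summary, core)                 # 2. Inner Monologue update -> self_state
    plan = exec_agent(user_message, summary, self_state)   # 3. Executive Agent (DeepSeek) -> task plan
    draft = reason(plan, user_message)                     # 4. Reasoning: tools, retrieval, synthesis
    polished = refine(draft, plan)                         # 5. Refinement
    reply = persona(polished, self_state, plan)            # 6. Persona (speak.py): Lyra's voice
    inner_self(reply, core)                                # 8. Post-update for narrative continuity
    return reply                                           # 7. Delivered back to the user via Relay
```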
---
# **Key Conceptual Separation**
The three cognitive layers, together with Cortex and Persona, must remain distinct:
| Layer | Purpose |
| ------------------- | ------------------------------------------------------- |
| **Autonomy Core**   | Lyra's identity + emotional continuity                    |
| **Inner Monologue** | Lyra's private thoughts, interpretation, meaning-making   |
| **Executive Agent** | Deciding what to *do* for the user message |
| **Cortex** | Executing the plan |
| **Persona** | Outward voice (what the user actually hears) |
The **user only interacts with Persona.**
Inner Monologue and Autonomy Core are internal cognitive machinery.
---
# **What This Architecture Enables**
* Emotional continuity
* Identity stability
* Agentic decision-making
* Multi-model routing
* Context-aware tone
* Internal narrative
* Proactive behavioral shifts
* Human-like cognition
This design turns Cortex from a simple pipeline into the **center of a functional artificial mind**.