Update to v0.9.1 #1
CHANGELOG.md
@@ -9,6 +9,105 @@ Format based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/) and [Se
---

## [0.6.0] - 2025-12-18

### Added - Autonomy System (Phase 1 & 2)

**Autonomy Phase 1** - Self-Awareness & Planning Foundation

- **Executive Planning Module** [cortex/autonomy/executive/planner.py](cortex/autonomy/executive/planner.py)
  - Autonomous goal setting and task planning capabilities
  - Multi-step reasoning for complex objectives
  - Integration with self-state tracking
- **Self-State Management** [cortex/data/self_state.json](cortex/data/self_state.json)
  - Persistent state tracking across sessions
  - Memory of past actions and outcomes
  - Self-awareness metadata storage
- **Self Analyzer** [cortex/autonomy/self/analyzer.py](cortex/autonomy/self/analyzer.py)
  - Analyzes own performance and decision patterns
  - Identifies areas for improvement
  - Tracks cognitive patterns over time
- **Test Suite** [cortex/tests/test_autonomy_phase1.py](cortex/tests/test_autonomy_phase1.py)
  - Unit tests for Phase 1 autonomy features

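The self-state persistence described above boils down to reading, updating, and rewriting a small JSON file. A minimal sketch, assuming the field names shown in `cortex/data/self_state.json` later in this diff; the helper functions themselves are illustrative, not the project's actual API:

```python
# Sketch of self-state persistence across sessions. Field names mirror
# cortex/data/self_state.json; load/record/save helpers are hypothetical.
import json
from datetime import datetime
from pathlib import Path

STATE_PATH = Path("cortex/data/self_state.json")

DEFAULT_STATE = {
    "focus": "user_request",
    "confidence": 0.7,
    "curiosity": 1.0,
    "last_updated": None,
    "interaction_count": 0,
    "learning_queue": [],
    "active_goals": [],
    "preferences": {},
}

def load_state() -> dict:
    """Return persisted state if present, else a fresh default."""
    if STATE_PATH.exists():
        return json.loads(STATE_PATH.read_text())
    return dict(DEFAULT_STATE)

def record_interaction(state: dict) -> dict:
    """Bump the interaction counter and timestamp the update."""
    state["interaction_count"] += 1
    state["last_updated"] = datetime.now().isoformat()
    return state

def save_state(state: dict) -> None:
    STATE_PATH.parent.mkdir(parents=True, exist_ok=True)
    STATE_PATH.write_text(json.dumps(state, indent=2))
```

Because the file survives process restarts, each session picks up where the previous one left off.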
**Autonomy Phase 2** - Decision Making & Proactive Behavior

- **Autonomous Actions Module** [cortex/autonomy/actions/autonomous_actions.py](cortex/autonomy/actions/autonomous_actions.py)
  - Self-initiated action execution
  - Context-aware decision implementation
  - Action logging and tracking
- **Pattern Learning System** [cortex/autonomy/learning/pattern_learner.py](cortex/autonomy/learning/pattern_learner.py)
  - Learns from interaction patterns
  - Identifies recurring user needs
  - Adapts behavior based on learned patterns
- **Proactive Monitor** [cortex/autonomy/proactive/monitor.py](cortex/autonomy/proactive/monitor.py)
  - Monitors system state for intervention opportunities
  - Detects patterns requiring proactive response
  - Background monitoring capabilities
- **Decision Engine** [cortex/autonomy/tools/decision_engine.py](cortex/autonomy/tools/decision_engine.py)
  - Autonomous decision-making framework
  - Weighs options and selects optimal actions
  - Integrates with orchestrator for coordinated decisions
- **Orchestrator** [cortex/autonomy/tools/orchestrator.py](cortex/autonomy/tools/orchestrator.py)
  - Coordinates multiple autonomy subsystems
  - Manages tool selection and execution
  - Handles NeoMem integration (with disable capability)
- **Test Suite** [cortex/tests/test_autonomy_phase2.py](cortex/tests/test_autonomy_phase2.py)
  - Unit tests for Phase 2 autonomy features

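The decision engine's "weighs options and selects optimal actions" step can be pictured as weighted-score maximization. A minimal sketch, with hypothetical criterion names; this is not the real `decision_engine.py` interface:

```python
# Minimal sketch of weighing options and selecting the best one.
# The Option type, criteria, and weights are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    scores: dict = field(default_factory=dict)  # criterion -> score in [0, 1]

def decide(options: list[Option], weights: dict[str, float]) -> Option:
    """Return the option with the highest weighted score."""
    def weighted(o: Option) -> float:
        return sum(weights.get(c, 0.0) * s for c, s in o.scores.items())
    return max(options, key=weighted)

# Example: utility vs. safety trade-off between two candidate actions.
best = decide(
    [
        Option("answer_directly", {"utility": 0.9, "safety": 0.3}),
        Option("ask_clarifying_question", {"utility": 0.5, "safety": 0.9}),
    ],
    weights={"utility": 0.7, "safety": 0.3},
)
# answer_directly scores 0.7*0.9 + 0.3*0.3 = 0.72; the question scores 0.62
```

Keeping the weights in one place makes the trade-off explicit and easy to test, which matches the test-driven approach noted below.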
**Autonomy Phase 2.5** - Pipeline Refinement

- Tightened integration between autonomy modules and reasoning pipeline
- Enhanced self-state persistence and tracking
- Improved orchestrator reliability
- NeoMem integration refinements in vector store handling [neomem/neomem/vector_stores/qdrant.py](neomem/neomem/vector_stores/qdrant.py)

### Added - Documentation

- **Complete AI Agent Breakdown** [docs/PROJECT_LYRA_COMPLETE_BREAKDOWN.md](docs/PROJECT_LYRA_COMPLETE_BREAKDOWN.md)
  - Comprehensive system architecture documentation
  - Detailed component descriptions
  - Data flow diagrams
  - Integration points and API specifications

### Changed - Core Integration

- **Router Updates** [cortex/router.py](cortex/router.py)
  - Integrated autonomy subsystems into main routing logic
  - Added endpoints for autonomous decision-making
  - Enhanced state management across requests
- **Reasoning Pipeline** [cortex/reasoning/reasoning.py](cortex/reasoning/reasoning.py)
  - Integrated autonomy-aware reasoning
  - Self-state consideration in reasoning process
- **Persona Layer** [cortex/persona/speak.py](cortex/persona/speak.py)
  - Autonomy-aware response generation
  - Self-state reflection in personality expression
- **Context Handling** [cortex/context.py](cortex/context.py)
  - NeoMem disable capability for flexible deployment

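The NeoMem disable capability in `cortex/context.py` amounts to a feature flag that lets the pipeline run without memory recall. A hedged sketch of that pattern; the `NEOMEM_ENABLED` variable and both function names are assumptions, not the actual implementation:

```python
# Sketch of a feature-flag guard around memory recall.
# NEOMEM_ENABLED and these helpers are hypothetical names.
import os

def neomem_enabled() -> bool:
    """v0.6.0 behaviour: NeoMem is off unless explicitly enabled."""
    return os.getenv("NEOMEM_ENABLED", "false").lower() in ("1", "true", "yes")

def recall_memories(query: str) -> list[str]:
    if not neomem_enabled():
        return []  # degrade gracefully: reasoning continues without recall
    # When re-enabled, this would call NeoMem's /search endpoint.
    raise NotImplementedError("NeoMem disabled in v0.6.0")
```

Returning an empty list rather than raising keeps the rest of the pipeline oblivious to whether memory is on.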
### Changed - Development Environment

- Updated [.gitignore](.gitignore) for better workspace management
- Cleaned up VSCode settings
- Removed [.vscode/settings.json](.vscode/settings.json) from repository

### Technical Improvements

- Modular autonomy architecture with clear separation of concerns
- Test-driven development for new autonomy features
- Enhanced state persistence across system restarts
- Flexible NeoMem integration with enable/disable controls

### Architecture - Autonomy System Design

The autonomy system operates in layers:

1. **Executive Layer** - High-level planning and goal setting
2. **Decision Layer** - Evaluates options and makes choices
3. **Action Layer** - Executes autonomous decisions
4. **Learning Layer** - Adapts behavior based on patterns
5. **Monitoring Layer** - Proactive awareness of system state

All layers coordinate through the orchestrator and maintain state in `self_state.json`.

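The layered coordination above can be pictured as callables the orchestrator runs in order, each threading the shared state. This is a schematic sketch only; the layer functions are toy stand-ins, not the real modules:

```python
# Schematic: the orchestrator threads shared self-state through each
# autonomy layer in order. Layer functions here are toy stand-ins.
class Orchestrator:
    def __init__(self, layers):
        self.layers = layers  # e.g. [plan, decide, act, learn, monitor]

    def tick(self, state: dict) -> dict:
        for layer in self.layers:
            state = layer(state)  # each layer reads and may update the state
        return state

def plan(s):   return {**s, "goal": "answer user"}   # executive layer
def decide(s): return {**s, "choice": "respond"}     # decision layer
def act(s):    return {**s, "acted": True}           # action layer

state = Orchestrator([plan, decide, act]).tick({"interaction_count": 0})
```

Ordering the layers as a plain list keeps subsystems independently testable and makes it trivial to drop one (e.g. a disabled integration) from the chain.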
---

## [0.5.2] - 2025-12-12

### Fixed - LLM Router & Async HTTP

README.md
@@ -1,10 +1,12 @@
# Project Lyra - README v0.6.0

Lyra is a modular persistent AI companion system with advanced reasoning capabilities and autonomous decision-making.
It provides memory-backed chat using **Relay** + **Cortex** with an integrated **Autonomy System**,
featuring a multi-stage reasoning pipeline powered by HTTP-based LLM backends.

**Current Version:** v0.6.0 (2025-12-18)

> **Note:** As of v0.6.0, NeoMem is **disabled by default** while we work out integration hiccups in the pipeline. The autonomy system is being refined independently before full memory integration.

## Mission Statement

@@ -24,7 +26,8 @@ Project Lyra operates as a **single docker-compose deployment** with multiple Do
- OpenAI-compatible endpoint: `POST /v1/chat/completions`
- Internal endpoint: `POST /chat`
- Routes messages through Cortex reasoning pipeline
- Manages async calls to Cortex ingest
- *(NeoMem integration currently disabled in v0.6.0)*

**2. UI** (Static HTML)
- Browser-based chat interface with cyberpunk theme
@@ -32,18 +35,20 @@ Project Lyra operates as a **single docker-compose deployment** with multiple Do
- Saves and loads sessions
- OpenAI-compatible message format

**3. NeoMem** (Python/FastAPI) - Port 7077 - **DISABLED IN v0.6.0**
- Long-term memory database (fork of Mem0 OSS)
- Vector storage (PostgreSQL + pgvector) + Graph storage (Neo4j)
- RESTful API: `/memories`, `/search`
- Semantic memory updates and retrieval
- No external SDK dependencies - fully local
- **Status:** Currently disabled while pipeline integration is refined

### Reasoning Layer

**4. Cortex** (Python/FastAPI) - Port 7081
- Primary reasoning engine with multi-stage pipeline and autonomy system
- **Includes embedded Intake module** (no separate service as of v0.5.1)
- **Integrated Autonomy System** (NEW in v0.6.0) - See Autonomy System section below
- **4-Stage Processing:**
  1. **Reflection** - Generates meta-awareness notes about conversation
  2. **Reasoning** - Creates initial draft answer using context
@@ -82,9 +87,49 @@ Project Lyra operates as a **single docker-compose deployment** with multiple Do

Each module can be configured to use a different backend via environment variables.

### Autonomy System (NEW in v0.6.0)

**Cortex Autonomy Subsystems** - Multi-layered autonomous decision-making and learning

- **Executive Layer** [cortex/autonomy/executive/](cortex/autonomy/executive/)
  - High-level planning and goal setting
  - Multi-step reasoning for complex objectives
  - Strategic decision making
- **Decision Engine** [cortex/autonomy/tools/decision_engine.py](cortex/autonomy/tools/decision_engine.py)
  - Autonomous decision-making framework
  - Option evaluation and selection
  - Coordinated decision orchestration
- **Autonomous Actions** [cortex/autonomy/actions/](cortex/autonomy/actions/)
  - Self-initiated action execution
  - Context-aware behavior implementation
  - Action logging and tracking
- **Pattern Learning** [cortex/autonomy/learning/](cortex/autonomy/learning/)
  - Learns from interaction patterns
  - Identifies recurring user needs
  - Adaptive behavior refinement
- **Proactive Monitoring** [cortex/autonomy/proactive/](cortex/autonomy/proactive/)
  - System state monitoring
  - Intervention opportunity detection
  - Background awareness capabilities
- **Self-Analysis** [cortex/autonomy/self/](cortex/autonomy/self/)
  - Performance tracking and analysis
  - Cognitive pattern identification
  - Self-state persistence in [cortex/data/self_state.json](cortex/data/self_state.json)
- **Orchestrator** [cortex/autonomy/tools/orchestrator.py](cortex/autonomy/tools/orchestrator.py)
  - Coordinates all autonomy subsystems
  - Manages tool selection and execution
  - Handles external integrations (with enable/disable controls)

**Autonomy Architecture:**

The autonomy system operates in coordinated layers, all maintaining state in `self_state.json`:

1. Executive Layer → Planning and goals
2. Decision Layer → Evaluation and choices
3. Action Layer → Execution
4. Learning Layer → Pattern adaptation
5. Monitoring Layer → Proactive awareness

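The pattern-learning behaviour described above ("identifies recurring user needs") can be sketched as a frequency counter over observed topics. The class and method names here are illustrative, not the real `pattern_learner.py` API:

```python
# Illustrative frequency-based pattern learner; names are assumptions.
from collections import Counter

class PatternLearner:
    """Flags topics that recur across interactions."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.topic_counts = Counter()

    def observe(self, topics: list[str]) -> None:
        """Record the topics touched by one exchange."""
        self.topic_counts.update(topics)

    def recurring_needs(self) -> list[str]:
        """Topics seen at least `threshold` times, in first-seen order."""
        return [t for t, n in self.topic_counts.items() if n >= self.threshold]
```

Anything that crosses the threshold becomes a candidate for proactive behaviour, which is where the monitoring layer picks up.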
---

## Data Flow Architecture (v0.6.0)

### Normal Message Flow:

@@ -97,11 +142,13 @@ Cortex (7081)
↓ (internal Python call)
Intake module → summarize_context()
↓
Autonomy System → Decision evaluation & pattern learning
↓
Cortex processes (4 stages):
1. reflection.py → meta-awareness notes (CLOUD backend)
2. reasoning.py → draft answer (PRIMARY backend, autonomy-aware)
3. refine.py → refined answer (PRIMARY backend)
4. persona/speak.py → Lyra personality (CLOUD backend, autonomy-aware)
↓
Returns persona answer to Relay
↓
@@ -109,9 +156,11 @@ Relay → POST /ingest (async)
↓
Cortex → add_exchange_internal() → SESSIONS buffer
↓
Autonomy System → Update self_state.json (pattern tracking)
↓
Relay → UI (returns final response)

Note: NeoMem integration disabled in v0.6.0
```

### Cortex 4-Stage Reasoning Pipeline:

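The four stages chain into a single call sequence. A simplified sketch, with a pluggable `llm(backend, prompt)` callable standing in for the real stage modules and HTTP backends:

```python
# Simplified stand-in for the 4-stage pipeline (reflection → reasoning →
# refine → persona). The llm callable abstracts the HTTP backends.
def run_pipeline(user_msg: str, context: str, llm) -> str:
    notes = llm("CLOUD", f"Reflect on: {user_msg}")                    # 1. reflection
    draft = llm("PRIMARY", f"{context}\n{notes}\nAnswer: {user_msg}")  # 2. reasoning
    refined = llm("PRIMARY", f"Refine: {draft}")                       # 3. refine
    return llm("CLOUD", f"Speak as Lyra: {refined}")                   # 4. persona
```

Injecting `llm` as a parameter is what lets each stage route to a different backend (CLOUD vs. PRIMARY) purely through configuration.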
@@ -239,13 +288,13 @@ rag/
All services run in a single docker-compose stack with the following containers:

**Active Services:**
- **relay** - Main orchestrator (port 7078)
- **cortex** - Reasoning engine with embedded Intake and Autonomy System (port 7081)

**Disabled Services (v0.6.0):**
- **neomem-postgres** - PostgreSQL with pgvector extension (port 5432) - *disabled while refining pipeline*
- **neomem-neo4j** - Neo4j graph database (ports 7474, 7687) - *disabled while refining pipeline*
- **neomem-api** - NeoMem memory service (port 7077) - *disabled while refining pipeline*
- **intake** - No longer needed (embedded in Cortex as of v0.5.1)
- **rag** - Beta Lyrae RAG service (port 7090) - currently disabled

@@ -278,7 +327,32 @@ The following LLM backends are accessed via HTTP (not part of docker-compose):

## Version History

### v0.6.0 (2025-12-18) - Current Release

**Major Feature: Autonomy System (Phase 1, 2, and 2.5)**
- ✅ Added autonomous decision-making framework
- ✅ Implemented executive planning and goal-setting layer
- ✅ Added pattern learning system for adaptive behavior
- ✅ Implemented proactive monitoring capabilities
- ✅ Created self-analysis and performance tracking system
- ✅ Integrated self-state persistence (`cortex/data/self_state.json`)
- ✅ Built decision engine with orchestrator coordination
- ✅ Added autonomous action execution framework
- ✅ Integrated autonomy into reasoning and persona layers
- ✅ Created comprehensive test suites for autonomy features
- ✅ Added complete system breakdown documentation

**Architecture Changes:**
- Autonomy system integrated into Cortex reasoning pipeline
- Multi-layered autonomous decision-making architecture
- Self-state tracking across sessions
- NeoMem disabled by default while refining pipeline integration
- Enhanced orchestrator with flexible service controls

**Documentation:**
- Added [PROJECT_LYRA_COMPLETE_BREAKDOWN.md](docs/PROJECT_LYRA_COMPLETE_BREAKDOWN.md)
- Updated changelog with comprehensive autonomy system details

### v0.5.1 (2025-12-11)

**Critical Intake Integration Fixes:**
- ✅ Fixed `bg_summarize()` NameError preventing SESSIONS persistence
- ✅ Fixed `/ingest` endpoint unreachable code

@@ -320,17 +394,19 @@ The following LLM backends are accessed via HTTP (not part of docker-compose):

---

## Known Issues (v0.6.0)

### Temporarily Disabled (v0.6.0)
- **NeoMem disabled by default** - Being refined independently before full integration
- PostgreSQL + pgvector storage inactive
- Neo4j graph database inactive
- Memory persistence endpoints not active
- RAG service (Beta Lyrae) currently disabled in docker-compose.yml

### Non-Critical
- Session management endpoints not fully implemented in Relay
- Full autonomy system integration still being refined
- Memory retrieval integration pending NeoMem re-enablement

### Operational Notes
- **Single-worker constraint**: Cortex must run with single Uvicorn worker to maintain SESSIONS state
@@ -338,12 +414,14 @@ The following LLM backends are accessed via HTTP (not part of docker-compose):
- Diagnostic endpoints (`/debug/sessions`, `/debug/summary`) available for troubleshooting

### Future Enhancements
- Re-enable NeoMem integration after pipeline refinement
- Full autonomy system maturation and optimization
- Re-enable RAG service integration
- Implement full session persistence
- Migrate SESSIONS to Redis for multi-worker support
- Add request correlation IDs for tracing
- Comprehensive health checks across all services
- Enhanced pattern learning with long-term memory integration

---

@@ -576,12 +654,16 @@ NeoMem is a derivative work based on Mem0 OSS (Apache 2.0).

## Development Notes

### Cortex Architecture (v0.6.0)
- Cortex contains embedded Intake module at `cortex/intake/`
- Intake is imported as: `from intake.intake import add_exchange_internal, SESSIONS`
- SESSIONS is a module-level global dictionary (singleton pattern)
- Single-worker constraint required to maintain SESSIONS state
- Diagnostic endpoints available for debugging: `/debug/sessions`, `/debug/summary`
- **NEW:** Autonomy system integrated at `cortex/autonomy/`
  - Executive, decision, action, learning, and monitoring layers
  - Self-state persistence in `cortex/data/self_state.json`
  - Coordinated via orchestrator with flexible service controls

### Adding New LLM Backends
1. Add backend URL to `.env`:
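The `.env` step above is truncated by the diff. The per-module pattern it refers to ("each module can be configured to use a different backend via environment variables") might look like this; the variable naming scheme (`REASONING_BACKEND_URL` etc.) is an assumption, not the project's actual keys:

```python
# Hypothetical per-module backend resolution; REASONING_BACKEND_URL etc.
# are illustrative variable names, not the real .env keys.
import os

def backend_url(module: str, default: str = "http://localhost:8000") -> str:
    """Resolve a module's LLM backend URL from e.g. REASONING_BACKEND_URL."""
    return os.getenv(f"{module.upper()}_BACKEND_URL", default)
```

With this convention, pointing `reasoning` at a local GPU box while `persona` stays on a cloud backend is a one-line `.env` change.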

cortex/data/self_state.json
@@ -4,8 +4,8 @@
  "focus": "user_request",
  "confidence": 0.7,
  "curiosity": 1.0,
  "last_updated": "2025-12-19T20:25:25.437557",
  "interaction_count": 16,
  "learning_queue": [],
  "active_goals": [],
  "preferences": {