Cortex Logging Quick Reference
🎯 TL;DR
Finding weak links in the LLM chain?
export LOG_DETAIL_LEVEL=detailed
export VERBOSE_DEBUG=true
Production use?
export LOG_DETAIL_LEVEL=summary
📊 Log Levels Comparison
| Level | Output Lines/Message | Use Case | Raw LLM Output? |
|---|---|---|---|
| minimal | 1-2 | Silent production | ❌ No |
| summary | 5-7 | Production (DEFAULT) | ❌ No |
| detailed | 30-50 | Debugging, finding bottlenecks | ✅ Parsed only |
| verbose | 100+ | Deep debugging, seeing raw data | ✅ Full JSON |
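The actual gate lives in cortex/utils/logging_utils.py (not shown here); a minimal sketch of how the four levels in the table might be ordered and checked, with all names hypothetical:

```python
import os

# Hypothetical ordering of the four documented levels, least to most verbose.
LEVELS = {"minimal": 0, "summary": 1, "detailed": 2, "verbose": 3}

def current_level() -> int:
    """Read LOG_DETAIL_LEVEL, falling back to the documented default (summary)."""
    name = os.environ.get("LOG_DETAIL_LEVEL", "summary").lower()
    return LEVELS.get(name, LEVELS["summary"])

def log_at(level_name: str, message: str) -> None:
    """Emit a line only when the configured level is at least `level_name`."""
    if current_level() >= LEVELS[level_name]:
        print(message)

# At the default (summary): summary lines print, verbose lines are suppressed.
log_at("summary", "✅ [LLM] PRIMARY | Reply: ...")
log_at("verbose", "╭─ RAW RESPONSE ...")
```

This is why `detailed` shows parsed LLM output but only `verbose` shows raw JSON: each log site declares the minimum level at which it fires.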
🔍 Common Debugging Tasks
See Raw LLM Outputs
export LOG_DETAIL_LEVEL=verbose
Look for:
╭─ RAW RESPONSE ────────────────────────────────────
│ { "choices": [ { "message": { "content": "..." } } ] }
╰───────────────────────────────────────────────────
Find Performance Bottlenecks
export LOG_DETAIL_LEVEL=detailed
Look for:
⏱️ Stage Timings:
reasoning : 3450ms ( 76.0%) ← SLOW!
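The percentages in the timing block are each stage's share of the total. A sketch of the arithmetic (function name and the 50% "SLOW" threshold are illustrative, not the actual cortex/router.py code):

```python
def stage_timings(timings_ms: dict[str, int]) -> list[str]:
    """Format per-stage timings as 'name : Nms ( P%)' lines, flagging
    any stage that takes more than half the pipeline."""
    total = sum(timings_ms.values())
    lines = []
    for name, ms in timings_ms.items():
        pct = 100.0 * ms / total
        flag = " ← SLOW!" if pct > 50 else ""
        lines.append(f"{name:<10}: {ms}ms ({pct:5.1f}%){flag}")
    return lines
```

For example, a 3450ms reasoning stage in a 4540ms pipeline renders as `reasoning : 3450ms ( 76.0%) ← SLOW!`.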
Check Which RAG Memories Are Used
export LOG_DETAIL_LEVEL=detailed
Look for:
╭─ RAG RESULTS (5) ──────────────────────────────
│ [1] 0.923 | Memory content...
╰────────────────────────────────────────────────
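The number after each index is the similarity score of the retrieved memory, highest first. A sketch of how such a box could be rendered, assuming results arrive as (score, text) pairs (not the actual cortex implementation):

```python
def format_rag_results(results: list[tuple[float, str]]) -> str:
    """Render retrieved memories in the box style shown above.
    `results` is assumed pre-sorted by similarity score, descending."""
    lines = [f"╭─ RAG RESULTS ({len(results)}) " + "─" * 30]
    for i, (score, text) in enumerate(results, 1):
        lines.append(f"│ [{i}] {score:.3f} | {text}")
    lines.append("╰" + "─" * 48)
    return "\n".join(lines)
```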
Detect Loops
export ENABLE_DUPLICATE_DETECTION=true # (default)
Look for:
⚠️ DUPLICATE MESSAGE DETECTED
🔁 LOOP DETECTED - Returning cached context
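Duplicate detection typically boils down to hashing each incoming message and comparing it with the previous one for the same session. A sketch under that assumption (class and method names are illustrative, not the actual cortex/context.py code):

```python
import hashlib

class DuplicateDetector:
    """Flag a message as a duplicate when it matches the previous
    message in the same session, compared by content hash."""

    def __init__(self) -> None:
        self._last: dict[str, str] = {}  # session_id -> digest of last message

    def is_duplicate(self, session_id: str, message: str) -> bool:
        digest = hashlib.sha256(message.encode()).hexdigest()
        if self._last.get(session_id) == digest:
            return True  # caller logs "⚠️ DUPLICATE MESSAGE DETECTED"
        self._last[session_id] = digest
        return False
```

On a repeat, the pipeline can short-circuit and return the cached context instead of rebuilding it, which is what the 🔁 LOOP DETECTED line signals.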
See All Backend Failures
export LOG_DETAIL_LEVEL=summary # or higher
Look for:
⚠️ [LLM] PRIMARY failed | Connection timeout
⚠️ [LLM] SECONDARY failed | Model not found
✅ [LLM] CLOUD | Reply: Based on...
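Those three lines show the failover chain: each backend is tried in order until one answers. A sketch of the pattern the log output implies (the backend list and call signature are assumptions; the real logic is in core/relay/lib/llm.js):

```python
def call_with_failover(backends, prompt):
    """Try each (name, call) backend in order; log failures and
    return the first successful reply."""
    for name, call in backends:
        try:
            reply = call(prompt)
            print(f"✅ [LLM] {name} | Reply: {reply[:20]}...")
            return reply
        except Exception as exc:
            print(f"⚠️ [LLM] {name} failed | {exc}")
    raise RuntimeError("all LLM backends failed")
```

At summary level and above, every failed attempt is logged, so a reply tagged CLOUD tells you both PRIMARY and SECONDARY went down.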
🛠️ Environment Variables Cheat Sheet
# Verbosity Control
LOG_DETAIL_LEVEL=detailed # minimal | summary | detailed | verbose
VERBOSE_DEBUG=false # true = maximum verbosity (legacy)
# Raw Data Visibility
LOG_RAW_CONTEXT_DATA=false # Show full intake L1-L30 dumps
# Loop Protection
ENABLE_DUPLICATE_DETECTION=true # Detect duplicate messages
MAX_MESSAGE_HISTORY=100 # Trim history after N messages
SESSION_TTL_HOURS=24 # Expire sessions after N hours
# Features
NEOMEM_ENABLED=false # Enable long-term memory
ENABLE_AUTONOMOUS_TOOLS=true # Enable tool invocation
ENABLE_PROACTIVE_MONITORING=true # Enable suggestions
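Since all of these are strings in the environment, booleans and integers need parsing with a fallback to the documented default. A minimal sketch (helper names are hypothetical):

```python
import os

def env_bool(name: str, default: bool) -> bool:
    """Treat '1', 'true', 'yes' (any case) as true; everything else as false."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

def env_int(name: str, default: int) -> int:
    """Parse an integer env var, falling back to the default on bad input."""
    try:
        return int(os.environ.get(name, default))
    except ValueError:
        return default

# Defaults mirror the cheat sheet above.
ENABLE_DUPLICATE_DETECTION = env_bool("ENABLE_DUPLICATE_DETECTION", True)
MAX_MESSAGE_HISTORY = env_int("MAX_MESSAGE_HISTORY", 100)
SESSION_TTL_HOURS = env_int("SESSION_TTL_HOURS", 24)
```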
📋 Sample Output
Summary Mode (Default - Production)
✅ [LLM] PRIMARY | 14:23:45.123 | Reply: Based on your question...
📊 Context | Session: abc123 | Messages: 42 | Last: 5.2min | RAG: 5 results
🧠 Monologue | question | Tone: curious
✨ PIPELINE COMPLETE | Session: abc123 | Total: 1250ms
📤 Output: 342 characters
Detailed Mode (Debugging)
════════════════════════════════════════════════════════════════════════════
🚀 PIPELINE START | Session: abc123 | 14:23:45.123
════════════════════════════════════════════════════════════════════════════
📝 User: What is the meaning of life?
────────────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────────────
🧠 LLM CALL | Backend: PRIMARY | 14:23:45.234
────────────────────────────────────────────────────────────────────────────
📝 Prompt: You are Lyra, a thoughtful AI assistant...
💬 Reply: Based on philosophical perspectives...
📊 Context | Session: abc123 | Messages: 42 | Last: 5.2min | RAG: 5 results
╭─ RAG RESULTS (5) ──────────────────────────────
│ [1] 0.923 | Previous philosophy discussion...
│ [2] 0.891 | Existential note...
╰────────────────────────────────────────────────
════════════════════════════════════════════════════════════════════════════
✨ PIPELINE COMPLETE | Session: abc123 | Total: 1250ms
════════════════════════════════════════════════════════════════════════════
⏱️ Stage Timings:
context : 150ms ( 12.0%)
reasoning : 450ms ( 36.0%) ← Largest component
persona : 140ms ( 11.2%)
📤 Output: 342 characters
════════════════════════════════════════════════════════════════════════════
⚡ Quick Troubleshooting
| Symptom | Check | Fix |
|---|---|---|
| Logs too verbose | Current level | Set LOG_DETAIL_LEVEL=summary |
| Can't see LLM outputs | Current level | Set LOG_DETAIL_LEVEL=detailed or verbose |
| Repeating operations | Loop warnings | Check for 🔁 LOOP DETECTED messages |
| Slow responses | Stage timings | Look for stages >1000ms in detailed mode |
| Missing RAG data | NEOMEM_ENABLED | Set NEOMEM_ENABLED=true |
| Out of memory | Message history | Lower MAX_MESSAGE_HISTORY |
📁 Key Files
- .env.logging.example - Full configuration guide
- LOGGING_REFACTOR_SUMMARY.md - Detailed explanation
- cortex/utils/logging_utils.py - Logging utilities
- cortex/context.py - Context + loop protection
- cortex/router.py - Pipeline stages
- core/relay/lib/llm.js - LLM backend logging
Need more detail? See LOGGING_REFACTOR_SUMMARY.md