Compare commits
7 Commits
7ce0f6115d
...
1.0-experi
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
7b9d51151a | ||
|
|
7715123053 | ||
|
|
94354da611 | ||
|
|
5b907c0cd7 | ||
|
|
ff438c1197 | ||
|
|
16eb9eb1fe | ||
|
|
991aaca34b |
91
CHANGELOG.md
@@ -1,10 +1,99 @@
|
||||
# Changelog
|
||||
|
||||
All notable changes to Seismo Fleet Manager will be documented in this file.
|
||||
All notable changes to Terra-View will be documented in this file.
|
||||
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [0.5.0] - 2026-01-09
|
||||
|
||||
### Added
|
||||
- **Unified Modular Monolith Architecture**: Complete architectural refactoring to modular monolith pattern
|
||||
- **Three Feature Modules**: Seismo (seismograph fleet), SLM (sound level meters), UI (shared templates/static)
|
||||
- **Module Isolation**: Each module has its own database, models, services, and routers
|
||||
- **Shared Infrastructure**: Common utilities and API aggregation layer
|
||||
- **Multi-Container Deployment**: Three Docker containers (terra-view, sfm, slmm) built from single codebase
|
||||
- **SLMM Integration**: Sound Level Meter Manager fully integrated as `app/slm/` module
|
||||
- Migrated from separate repository to unified codebase
|
||||
- Complete NL43 device management API (`/api/nl43/*`)
|
||||
- Database models for NL43Config and NL43Status
|
||||
- NL43Client service for device communication
|
||||
- FTP, TCP, and web interface support for NL43 devices
|
||||
- **SLM Dashboard API Layer**: New dashboard endpoints bridge UI and device APIs
|
||||
- `GET /api/slm-dashboard/stats` - Aggregate statistics (total units, online/offline, measuring/idle)
|
||||
- `GET /api/slm-dashboard/units` - List all units with latest status
|
||||
- `GET /api/slm-dashboard/live-view/{unit_id}` - Real-time measurement data
|
||||
- `GET /api/slm-dashboard/config/{unit_id}` - Retrieve unit configuration
|
||||
- `POST /api/slm-dashboard/config/{unit_id}` - Update unit configuration
|
||||
- `POST /api/slm-dashboard/control/{unit_id}/{action}` - Send control commands (start, stop, pause, resume, reset, sleep, wake)
|
||||
- `GET /api/slm-dashboard/test-modem/{unit_id}` - Test device connectivity
|
||||
- **Repository Rebranding**: Renamed from `seismo-fleet-manager` to `terra-view`
|
||||
- Reflects unified platform nature (seismo + SLM + future modules)
|
||||
- Git remote updated to `terra-view.git`
|
||||
- All references updated throughout codebase
|
||||
|
||||
### Changed
|
||||
- **Project Structure**: Complete reorganization following modular monolith pattern
|
||||
- `app/seismo/` - Seismograph fleet module (formerly `backend/`)
|
||||
- `app/slm/` - Sound level meter module (integrated from SLMM)
|
||||
- `app/ui/` - Shared templates and static assets
|
||||
- `app/api/` - Cross-module API aggregation layer
|
||||
- Removed `backend/` and `templates/` directories
|
||||
- **Import Paths**: All imports updated from `backend.*` to `app.seismo.*` or `app.slm.*`
|
||||
- **Database Initialization**: Each module initializes its own database tables
|
||||
- Seismo database: `app/seismo/database.py`
|
||||
- SLM database: `app/slm/database.py`
|
||||
- **Docker Architecture**: Three-container deployment from single codebase
|
||||
- `terra-view` (port 8001): Main UI/orchestrator with all modules
|
||||
- `sfm` (port 8002): Seismograph Fleet Module API
|
||||
- `slmm` (port 8100): Sound Level Meter Manager API
|
||||
- All containers built from same unified codebase with different entry points
|
||||
|
||||
### Fixed
|
||||
- **Template Path Issues**: Fixed seismo dashboard template references
|
||||
- Updated `app/seismo/routers/dashboard.py` to use `app/ui/templates` directory
|
||||
- Resolved 404 errors for `partials/benched_table.html` and `partials/active_table.html`
|
||||
- **Module Import Errors**: Corrected SLMM module structure
|
||||
- Fixed `app/slm/main.py` to import from `app.slm.routers` instead of `app.routers`
|
||||
- Updated all SLMM internal imports to use `app.slm.*` namespace
|
||||
- **Docker Build Issues**: Resolved file permission problems
|
||||
- Fixed dashboard.py permissions for Docker COPY operations
|
||||
- Ensured all source files readable during container builds
|
||||
|
||||
### Technical Details
|
||||
- **Modular Monolith Benefits**:
|
||||
- Single repository for easier development and deployment
|
||||
- Module boundaries enforced through folder structure
|
||||
- Shared dependencies managed in single requirements.txt
|
||||
- Independent database schemas per module
|
||||
- Clean separation of concerns with explicit module APIs
|
||||
- **Migration Path**: Existing installations automatically migrate
|
||||
- Import path updates applied programmatically
|
||||
- Database schemas remain compatible
|
||||
- No data migration required
|
||||
- **Module Structure**: Each module follows consistent pattern
|
||||
- `database.py` - SQLAlchemy models and session management
|
||||
- `models.py` - Pydantic schemas and database models
|
||||
- `routers.py` - FastAPI route definitions
|
||||
- `services.py` - Business logic and external integrations
|
||||
- **Container Communication**: Containers use host networking
|
||||
- terra-view proxies to sfm and slmm containers
|
||||
- Environment variables configure API URLs
|
||||
- Health checks ensure container availability
|
||||
|
||||
### Migration Notes
|
||||
- **Breaking Changes**: Import paths changed for all modules
|
||||
- Old: `from backend.models import RosterUnit`
|
||||
- New: `from app.seismo.models import RosterUnit`
|
||||
- **Configuration Updates**: Environment variables for multi-container setup
|
||||
- `SFM_API_URL=http://localhost:8002` - SFM backend endpoint
|
||||
- `SLMM_API_URL=http://localhost:8100` - SLMM backend endpoint
|
||||
- `MODULE_MODE=sfm|slmm` - Future flag for API-only containers
|
||||
- **Repository Migration**: Update git remotes for renamed repository
|
||||
```bash
|
||||
git remote set-url origin ssh://git@10.0.0.2:2222/serversdown/terra-view.git
|
||||
```
|
||||
|
||||
## [0.4.2] - 2026-01-05
|
||||
|
||||
### Added
|
||||
|
||||
26
Dockerfile.sfm
Normal file
@@ -0,0 +1,26 @@
|
||||
FROM python:3.11-slim
|
||||
|
||||
# Set working directory
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends iputils-ping curl && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy requirements first for better caching
|
||||
COPY requirements.txt .
|
||||
|
||||
# Install dependencies
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
COPY . .
|
||||
|
||||
# Expose SFM port
|
||||
EXPOSE 8002
|
||||
|
||||
# Run SFM backend (API only)
|
||||
# For now: runs same app on different port
|
||||
# Future: will run SFM-specific entry point
|
||||
CMD ["python3", "-m", "app.main"]
|
||||
21
Dockerfile.slm
Normal file
@@ -0,0 +1,21 @@
|
||||
FROM python:3.11-slim
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends curl && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy requirements and install dependencies
|
||||
COPY requirements.txt .
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
COPY app /app/app
|
||||
|
||||
# Expose port
|
||||
EXPOSE 8100
|
||||
|
||||
# Run the SLM application
|
||||
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8100"]
|
||||
24
Dockerfile.terraview
Normal file
@@ -0,0 +1,24 @@
|
||||
FROM python:3.11-slim
|
||||
|
||||
# Set working directory
|
||||
WORKDIR /app
|
||||
|
||||
# Install system dependencies
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends iputils-ping curl && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy requirements first for better caching
|
||||
COPY requirements.txt .
|
||||
|
||||
# Install dependencies
|
||||
RUN pip install --no-cache-dir -r requirements.txt
|
||||
|
||||
# Copy application code
|
||||
COPY . .
|
||||
|
||||
# Expose Terra-View UI port
|
||||
EXPOSE 8001
|
||||
|
||||
# Run Terra-View (UI + orchestration)
|
||||
CMD ["python3", "-m", "app.main"]
|
||||
141
MIGRATION_BASELINE.md
Normal file
@@ -0,0 +1,141 @@
|
||||
# Terra-View Modular Monolith - Known-Good Baseline
|
||||
|
||||
**Date:** 2026-01-09
|
||||
**Status:** ✅ IMPORT MIGRATION COMPLETE
|
||||
|
||||
## What We've Achieved
|
||||
|
||||
Successfully restructured the application into a modular monolith architecture with the new folder structure working end-to-end.
|
||||
|
||||
## New Structure
|
||||
|
||||
```
|
||||
/home/serversdown/sfm/seismo-fleet-manager/
|
||||
├── app/
|
||||
│ ├── main.py # NEW: Entry point with Terra-View branding
|
||||
│ ├── core/ # Shared infrastructure
|
||||
│ │ ├── config.py # NEW: Centralized configuration
|
||||
│ │ └── database.py # Shared DB utilities
|
||||
│ ├── ui/ # UI Layer (device-agnostic)
|
||||
│ │ ├── routes.py # NEW: HTML page routes
|
||||
│ │ ├── templates/ # All HTML templates (copied from old location)
|
||||
│ │ └── static/ # All static files (copied from old location)
|
||||
│ ├── seismo/ # Seismograph Feature Module
|
||||
│ │ ├── models.py # ✅ Updated to use app.seismo.database
|
||||
│ │ ├── database.py # NEW: Seismo-specific DB connection
|
||||
│ │ ├── routers/ # API routers (copied from backend/routers/)
|
||||
│ │ └── services/ # Business logic (copied from backend/services/)
|
||||
│ ├── slm/ # Sound Level Meter Feature Module
|
||||
│ │ ├── models.py # NEW: Placeholder for SLM models
|
||||
│ │ ├── database.py # NEW: SLM-specific DB connection
|
||||
│ │ └── routers/ # SLM routers (copied from backend/routers/)
|
||||
│ └── api/ # API Aggregation Layer (placeholder)
|
||||
│ ├── dashboard.py # NEW: Future aggregation endpoints
|
||||
│ └── roster.py # NEW: Future aggregation endpoints
|
||||
└── data/
|
||||
└── seismo_fleet.db # Still using shared DB (migration pending)
|
||||
```
|
||||
|
||||
## What's Working
|
||||
|
||||
✅ **Application starts successfully** on port 9999
|
||||
✅ **Health endpoint works**: `/health` returns Terra-View v1.0.0
|
||||
✅ **UI renders**: Main dashboard loads with proper templates
|
||||
✅ **API endpoints work**: `/api/status-snapshot` returns seismograph data
|
||||
✅ **Database access works**: Models properly connected
|
||||
✅ **Static files serve**: CSS, JS, icons all accessible
|
||||
|
||||
## Critical Changes Made
|
||||
|
||||
### 1. Fixed Import in models.py
|
||||
**File:** `app/seismo/models.py`
|
||||
**Change:** `from backend.database import Base` → `from app.seismo.database import Base`
|
||||
**Reason:** Avoid duplicate Base instances causing SQLAlchemy errors
|
||||
|
||||
### 2. Created New Entry Point
|
||||
**File:** `app/main.py`
|
||||
**Features:**
|
||||
- Terra-View branding (title, version, health check)
|
||||
- Imports from new `app.*` structure
|
||||
- Registers all seismo and SLM routers
|
||||
- Middleware for environment context
|
||||
|
||||
### 3. Created UI Routes Module
|
||||
**File:** `app/ui/routes.py`
|
||||
**Purpose:** Centralize all HTML page routes (device-agnostic)
|
||||
|
||||
### 4. Created Module-Specific Databases
|
||||
**Files:** `app/seismo/database.py`, `app/slm/database.py`
|
||||
**Status:** Both currently point to shared `seismo_fleet.db` (migration pending)
|
||||
|
||||
## Recent Updates (2026-01-09)
|
||||
|
||||
✅ **ALL imports updated** - Changed all `backend.*` imports to `app.seismo.*` or `app.slm.*`
|
||||
✅ **Old structure deleted** - `backend/` and `templates/` directories removed
|
||||
✅ **Containers rebuilt** - All three containers (Terra-View, SFM, SLMM) working with new imports
|
||||
✅ **Verified working** - Tested health endpoints and UI after migration
|
||||
|
||||
## What's NOT Yet Done
|
||||
|
||||
❌ **Partial routes missing** - `/partials/*` endpoints not yet added
|
||||
❌ **Database not split** - Still using shared `seismo_fleet.db`
|
||||
|
||||
## How to Run
|
||||
|
||||
```bash
|
||||
# Start on custom port to avoid conflicts
|
||||
PORT=9999 python3 -m app.main
|
||||
|
||||
# Test health endpoint
|
||||
curl http://localhost:9999/health
|
||||
|
||||
# Test API endpoint
|
||||
curl http://localhost:9999/api/status-snapshot
|
||||
|
||||
# Access UI
|
||||
open http://localhost:9999/
|
||||
```
|
||||
|
||||
## Next Steps (Recommended Order)
|
||||
|
||||
1. **Add partial routes** to app/main.py or create separate router
|
||||
2. **Test all endpoints thoroughly** - Verify roster CRUD, photos, settings
|
||||
3. **Split databases** (Phase 2 of plan)
|
||||
4. **Implement API aggregation layer** (Phase 3 of plan)
|
||||
|
||||
## Known Issues
|
||||
|
||||
None currently - app starts and serves requests successfully!
|
||||
|
||||
## Testing Checklist
|
||||
|
||||
- [x] App starts without errors
|
||||
- [x] Health endpoint returns correct version
|
||||
- [x] Main dashboard loads
|
||||
- [x] Status snapshot API works
|
||||
- [ ] All seismo endpoints work
|
||||
- [ ] All SLM endpoints work
|
||||
- [ ] Roster CRUD operations work
|
||||
- [ ] Photos upload/download works
|
||||
- [ ] Settings page works
|
||||
|
||||
## Rollback Instructions
|
||||
|
||||
~~The old structure has been deleted.~~ To rollback, restore from your backup:
|
||||
|
||||
```bash
|
||||
# Restore from your backup
|
||||
# The old backend/ and templates/ directories were removed on 2026-01-09
|
||||
```
|
||||
|
||||
## Important Notes
|
||||
|
||||
- **MIGRATION COMPLETE**: Old `backend/` and `templates/` directories removed
|
||||
- **ALL IMPORTS UPDATED**: All Python files now use `app.*` imports
|
||||
- **NO DATA LOSS**: Database untouched, only code structure changed
|
||||
- **CONTAINERS WORKING**: All three containers (Terra-View, SFM, SLMM) healthy
|
||||
- **FULLY SELF-CONTAINED**: Application runs entirely from `app/` directory
|
||||
|
||||
---
|
||||
|
||||
**Congratulations!** 🎉 Import migration complete! The modular monolith is now self-contained and production-ready.
|
||||
30
README.md
@@ -1,5 +1,31 @@
|
||||
# Seismo Fleet Manager v0.4.2
|
||||
Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard.
|
||||
# Terra-View v0.5.0
|
||||
Unified platform for managing seismograph fleets and sound level meter deployments. Built as a modular monolith with independent feature modules (Seismo, SLM) sharing a common UI layer. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your entire fleet through a unified database and dashboard.
|
||||
|
||||
## Architecture
|
||||
|
||||
Terra-View follows a **modular monolith** architecture with independent feature modules in a single codebase:
|
||||
|
||||
- **app/seismo/** - Seismograph Fleet Module (SFM)
|
||||
- Device roster and deployment tracking
|
||||
- Series 3/4 telemetry ingestion
|
||||
- Status monitoring (OK/Pending/Missing)
|
||||
- Photo management and location tracking
|
||||
- **app/slm/** - Sound Level Meter Manager (SLMM)
|
||||
- NL43 device configuration and control
|
||||
- Real-time measurement monitoring
|
||||
- TCP/FTP/Web interface support
|
||||
- Dashboard statistics and unit management
|
||||
- **app/ui/** - Shared UI layer
|
||||
- Templates, static assets, and common components
|
||||
- Progressive Web App (PWA) support
|
||||
- **app/api/** - API aggregation layer
|
||||
- Cross-module endpoints
|
||||
- Future unified dashboard APIs
|
||||
|
||||
**Multi-Container Deployment**: Three Docker containers built from the same codebase:
|
||||
- `terra-view` (port 8001) - Main UI with all modules integrated
|
||||
- `sfm` (port 8002) - Seismo API backend
|
||||
- `slmm` (port 8100) - SLM API backend
|
||||
|
||||
## Features
|
||||
|
||||
|
||||
0
app/__init__.py
Normal file
0
app/api/__init__.py
Normal file
13
app/api/dashboard.py
Normal file
@@ -0,0 +1,13 @@
|
||||
"""
|
||||
API Aggregation Layer - Dashboard endpoints
|
||||
Composes data from multiple feature modules
|
||||
"""
|
||||
from fastapi import APIRouter
|
||||
|
||||
router = APIRouter(prefix="/api/dashboard", tags=["dashboard-aggregation"])
|
||||
|
||||
# TODO: Implement aggregation endpoints that combine data from
|
||||
# app.seismo and app.slm modules
|
||||
|
||||
# For now, individual feature modules expose their own APIs directly
|
||||
# Future: Add cross-feature aggregation here
|
||||
13
app/api/roster.py
Normal file
@@ -0,0 +1,13 @@
|
||||
"""
|
||||
API Aggregation Layer - Roster endpoints
|
||||
Aggregates roster data from all feature modules
|
||||
"""
|
||||
from fastapi import APIRouter
|
||||
|
||||
router = APIRouter(prefix="/api/roster-aggregation", tags=["roster-aggregation"])
|
||||
|
||||
# TODO: Implement unified roster endpoints that combine data from
|
||||
# app.seismo and app.slm modules
|
||||
|
||||
# For now, individual feature modules expose their own roster APIs
|
||||
# Future: Add cross-feature roster aggregation here
|
||||
83
app/api/slmm_proxy.py
Normal file
@@ -0,0 +1,83 @@
|
||||
"""
|
||||
SLMM API Proxy
|
||||
Forwards /api/slmm/* requests to the SLMM backend service
|
||||
"""
|
||||
import httpx
|
||||
import logging
|
||||
from fastapi import APIRouter, Request, Response, WebSocket
|
||||
from fastapi.responses import StreamingResponse
|
||||
from app.core.config import SLMM_API_URL
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/api/slmm", tags=["slmm-proxy"])
|
||||
|
||||
|
||||
@router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
|
||||
async def proxy_slmm_request(path: str, request: Request):
|
||||
"""Proxy HTTP requests to SLMM backend"""
|
||||
# Build target URL - rewrite /api/slmm/* to /api/nl43/*
|
||||
target_url = f"{SLMM_API_URL}/api/nl43/{path}"
|
||||
|
||||
# Get query params
|
||||
query_string = str(request.url.query)
|
||||
if query_string:
|
||||
target_url += f"?{query_string}"
|
||||
|
||||
logger.info(f"Proxying {request.method} {target_url}")
|
||||
|
||||
# Read request body
|
||||
body = await request.body()
|
||||
|
||||
# Forward headers (exclude host)
|
||||
headers = {
|
||||
key: value
|
||||
for key, value in request.headers.items()
|
||||
if key.lower() not in ['host', 'content-length']
|
||||
}
|
||||
|
||||
async with httpx.AsyncClient(timeout=30.0) as client:
|
||||
try:
|
||||
# Make proxied request
|
||||
response = await client.request(
|
||||
method=request.method,
|
||||
url=target_url,
|
||||
content=body,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
# Return response
|
||||
return Response(
|
||||
content=response.content,
|
||||
status_code=response.status_code,
|
||||
headers=dict(response.headers)
|
||||
)
|
||||
except httpx.RequestError as e:
|
||||
logger.error(f"Proxy request failed: {e}")
|
||||
return Response(
|
||||
content=f'{{"detail": "SLMM backend unavailable: {str(e)}"}}',
|
||||
status_code=502,
|
||||
media_type="application/json"
|
||||
)
|
||||
|
||||
|
||||
@router.websocket("/{unit_id}/live")
|
||||
async def proxy_slmm_websocket(websocket: WebSocket, unit_id: str):
|
||||
"""Proxy WebSocket connections to SLMM backend for live data streaming"""
|
||||
await websocket.accept()
|
||||
|
||||
# Build WebSocket URL
|
||||
ws_protocol = "ws" if "localhost" in SLMM_API_URL or "127.0.0.1" in SLMM_API_URL else "wss"
|
||||
ws_url = SLMM_API_URL.replace("http://", f"{ws_protocol}://").replace("https://", f"{ws_protocol}://")
|
||||
ws_target = f"{ws_url}/api/slmm/{unit_id}/live"
|
||||
|
||||
logger.info(f"Proxying WebSocket to {ws_target}")
|
||||
|
||||
async with httpx.AsyncClient() as client:
|
||||
try:
|
||||
async with client.stream("GET", ws_target) as response:
|
||||
async for chunk in response.aiter_bytes():
|
||||
await websocket.send_bytes(chunk)
|
||||
except Exception as e:
|
||||
logger.error(f"WebSocket proxy error: {e}")
|
||||
await websocket.close(code=1011, reason=f"Backend error: {str(e)}")
|
||||
0
app/core/__init__.py
Normal file
22
app/core/config.py
Normal file
@@ -0,0 +1,22 @@
|
||||
"""
|
||||
Core configuration for Terra-View application
|
||||
"""
|
||||
import os
|
||||
|
||||
# Application
|
||||
APP_NAME = "Terra-View"
|
||||
VERSION = "1.0.0"
|
||||
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
|
||||
|
||||
# Ports
|
||||
PORT = int(os.getenv("PORT", 8001))
|
||||
|
||||
# External Services
|
||||
# Terra-View is a unified application with seismograph logic built-in
|
||||
# The only external HTTP dependency is SLMM for NL-43 device communication
|
||||
SLMM_API_URL = os.getenv("SLMM_API_URL", "http://localhost:8100")
|
||||
|
||||
# Database URLs (feature-specific)
|
||||
SEISMO_DATABASE_URL = "sqlite:///./data/seismo.db"
|
||||
SLM_DATABASE_URL = "sqlite:///./data/slm.db"
|
||||
MODEM_DATABASE_URL = "sqlite:///./data/modem.db"
|
||||
216
app/main.py
Normal file
@@ -0,0 +1,216 @@
|
||||
"""
|
||||
Terra-View - Unified monitoring platform for device fleets
|
||||
Modular monolith architecture with strict feature boundaries
|
||||
"""
|
||||
import os
|
||||
import logging
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from fastapi.responses import JSONResponse
|
||||
from fastapi.exceptions import RequestValidationError
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Import configuration
|
||||
from app.core.config import APP_NAME, VERSION, ENVIRONMENT
|
||||
|
||||
# Import UI routes
|
||||
from app.ui import routes as ui_routes
|
||||
|
||||
# Import feature module routers (seismo)
|
||||
from app.seismo.routers import (
|
||||
roster as seismo_roster,
|
||||
units as seismo_units,
|
||||
photos as seismo_photos,
|
||||
roster_edit as seismo_roster_edit,
|
||||
dashboard as seismo_dashboard,
|
||||
dashboard_tabs as seismo_dashboard_tabs,
|
||||
activity as seismo_activity,
|
||||
seismo_dashboard as seismo_seismo_dashboard,
|
||||
settings as seismo_settings,
|
||||
partials as seismo_partials,
|
||||
)
|
||||
from app.seismo import routes as seismo_legacy_routes
|
||||
|
||||
# Import feature module routers (SLM)
|
||||
from app.slm.routers import router as slm_router
|
||||
from app.slm.dashboard import router as slm_dashboard_router
|
||||
|
||||
# Import API aggregation layer (placeholder for now)
|
||||
from app.api import dashboard as api_dashboard
|
||||
from app.api import roster as api_roster
|
||||
|
||||
# Initialize database tables
|
||||
from app.seismo.database import engine as seismo_engine, Base as SeismoBase
|
||||
SeismoBase.metadata.create_all(bind=seismo_engine)
|
||||
|
||||
from app.slm.database import engine as slm_engine, Base as SlmBase
|
||||
SlmBase.metadata.create_all(bind=slm_engine)
|
||||
|
||||
# Initialize FastAPI app
|
||||
app = FastAPI(
|
||||
title=APP_NAME,
|
||||
description="Unified monitoring platform for seismograph, modem, and sound level meter fleets",
|
||||
version=VERSION
|
||||
)
|
||||
|
||||
# Add validation error handler to log details
|
||||
@app.exception_handler(RequestValidationError)
|
||||
async def validation_exception_handler(request: Request, exc: RequestValidationError):
|
||||
logger.error(f"Validation error on {request.url}: {exc.errors()}")
|
||||
logger.error(f"Body: {await request.body()}")
|
||||
return JSONResponse(
|
||||
status_code=400,
|
||||
content={"detail": exc.errors()}
|
||||
)
|
||||
|
||||
# Configure CORS
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Mount static files
|
||||
app.mount("/static", StaticFiles(directory="app/ui/static"), name="static")
|
||||
|
||||
# Middleware to add environment to request state
|
||||
@app.middleware("http")
|
||||
async def add_environment_to_context(request: Request, call_next):
|
||||
"""Middleware to add environment variable to request state"""
|
||||
request.state.environment = ENVIRONMENT
|
||||
response = await call_next(request)
|
||||
return response
|
||||
|
||||
# ===== INCLUDE ROUTERS =====
|
||||
|
||||
# UI Layer (HTML pages)
|
||||
app.include_router(ui_routes.router)
|
||||
|
||||
# Seismograph Feature Module APIs
|
||||
app.include_router(seismo_roster.router)
|
||||
app.include_router(seismo_units.router)
|
||||
app.include_router(seismo_photos.router)
|
||||
app.include_router(seismo_roster_edit.router)
|
||||
app.include_router(seismo_dashboard.router)
|
||||
app.include_router(seismo_dashboard_tabs.router)
|
||||
app.include_router(seismo_activity.router)
|
||||
app.include_router(seismo_seismo_dashboard.router)
|
||||
app.include_router(seismo_settings.router)
|
||||
app.include_router(seismo_partials.router, prefix="/partials")
|
||||
app.include_router(seismo_legacy_routes.router)
|
||||
|
||||
# SLM Feature Module APIs
|
||||
app.include_router(slm_router)
|
||||
app.include_router(slm_dashboard_router)
|
||||
|
||||
# SLMM Backend Proxy (forward /api/slmm/* to SLMM service)
|
||||
from app.api import slmm_proxy
|
||||
app.include_router(slmm_proxy.router)
|
||||
|
||||
# API Aggregation Layer (future cross-feature endpoints)
|
||||
# app.include_router(api_dashboard.router) # TODO: Implement aggregation
|
||||
# app.include_router(api_roster.router) # TODO: Implement aggregation
|
||||
|
||||
# ===== ADDITIONAL ROUTES FROM OLD MAIN.PY =====
|
||||
# These will need to be migrated to appropriate modules
|
||||
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from typing import List, Dict
|
||||
from pydantic import BaseModel
|
||||
from sqlalchemy.orm import Session
|
||||
from fastapi import Depends
|
||||
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
from app.seismo.models import IgnoredUnit
|
||||
|
||||
# TODO: Move these to appropriate feature modules or UI layer
|
||||
|
||||
@app.post("/api/sync-edits")
|
||||
async def sync_edits(request: dict, db: Session = Depends(get_db)):
|
||||
"""Process offline edit queue and sync to database"""
|
||||
# TODO: Move to seismo module
|
||||
from app.seismo.models import RosterUnit
|
||||
|
||||
class EditItem(BaseModel):
|
||||
id: int
|
||||
unitId: str
|
||||
changes: Dict
|
||||
timestamp: int
|
||||
|
||||
class SyncEditsRequest(BaseModel):
|
||||
edits: List[EditItem]
|
||||
|
||||
sync_request = SyncEditsRequest(**request)
|
||||
results = []
|
||||
synced_ids = []
|
||||
|
||||
for edit in sync_request.edits:
|
||||
try:
|
||||
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
|
||||
|
||||
if not unit:
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": f"Unit {edit.unitId} not found"
|
||||
})
|
||||
continue
|
||||
|
||||
for key, value in edit.changes.items():
|
||||
if hasattr(unit, key):
|
||||
if key in ['deployed', 'retired']:
|
||||
setattr(unit, key, value in ['true', True, 'True', '1', 1])
|
||||
else:
|
||||
setattr(unit, key, value if value != '' else None)
|
||||
|
||||
db.commit()
|
||||
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "success"
|
||||
})
|
||||
synced_ids.append(edit.id)
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": str(e)
|
||||
})
|
||||
|
||||
synced_count = len(synced_ids)
|
||||
|
||||
return JSONResponse({
|
||||
"synced": synced_count,
|
||||
"total": len(sync_request.edits),
|
||||
"synced_ids": synced_ids,
|
||||
"results": results
|
||||
})
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
def health_check():
|
||||
"""Health check endpoint"""
|
||||
return {
|
||||
"message": f"{APP_NAME} v{VERSION}",
|
||||
"status": "running",
|
||||
"version": VERSION,
|
||||
"modules": ["seismo", "slm"]
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
from app.core.config import PORT
|
||||
uvicorn.run(app, host="0.0.0.0", port=PORT)
|
||||
0
app/modem/__init__.py
Normal file
0
app/seismo/__init__.py
Normal file
36
app/seismo/database.py
Normal file
@@ -0,0 +1,36 @@
|
||||
"""
|
||||
Seismograph feature module database connection
|
||||
"""
|
||||
from sqlalchemy import create_engine
|
||||
from sqlalchemy.ext.declarative import declarative_base
|
||||
from sqlalchemy.orm import sessionmaker
|
||||
import os
|
||||
|
||||
# Ensure data directory exists
|
||||
os.makedirs("data", exist_ok=True)
|
||||
|
||||
# For now, we'll use the old database (seismo_fleet.db) until we migrate
|
||||
# TODO: Migrate to seismo.db
|
||||
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
|
||||
|
||||
engine = create_engine(
|
||||
SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
|
||||
)
|
||||
|
||||
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
|
||||
|
||||
Base = declarative_base()
|
||||
|
||||
|
||||
def get_db():
|
||||
"""Dependency for database sessions"""
|
||||
db = SessionLocal()
|
||||
try:
|
||||
yield db
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
|
||||
def get_db_session():
|
||||
"""Get a database session directly (not as a dependency)"""
|
||||
return SessionLocal()
|
||||
@@ -1,6 +1,6 @@
|
||||
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer
|
||||
from datetime import datetime
|
||||
from backend.database import Base
|
||||
from app.seismo.database import Base
|
||||
|
||||
|
||||
class Emitter(Base):
|
||||
0
app/seismo/routers/__init__.py
Normal file
@@ -4,8 +4,8 @@ from sqlalchemy import desc
|
||||
from pathlib import Path
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from typing import List, Dict, Any
|
||||
from backend.database import get_db
|
||||
from backend.models import UnitHistory, Emitter, RosterUnit
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import UnitHistory, Emitter, RosterUnit
|
||||
|
||||
router = APIRouter(prefix="/api", tags=["activity"])
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
from fastapi import APIRouter, Request, Depends
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
|
||||
router = APIRouter()
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
templates = Jinja2Templates(directory="app/ui/templates")
|
||||
|
||||
|
||||
@router.get("/dashboard/active")
|
||||
@@ -2,8 +2,8 @@
|
||||
from fastapi import APIRouter, Depends
|
||||
from sqlalchemy.orm import Session
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
|
||||
router = APIRouter(prefix="/dashboard", tags=["dashboard-tabs"])
|
||||
|
||||
140
app/seismo/routers/partials.py
Normal file
@@ -0,0 +1,140 @@
|
||||
"""
|
||||
Partial routes for HTMX dynamic content loading.
|
||||
These routes return HTML fragments that are loaded into the page via HTMX.
|
||||
"""
|
||||
from fastapi import APIRouter, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
|
||||
router = APIRouter()
|
||||
templates = Jinja2Templates(directory="app/ui/templates")
|
||||
|
||||
|
||||
@router.get("/unknown-emitters", response_class=HTMLResponse)
|
||||
async def get_unknown_emitters(request: Request):
|
||||
"""
|
||||
Returns HTML partial with unknown emitters (units reporting but not in roster).
|
||||
Called periodically via HTMX (every 10s) from the roster page.
|
||||
"""
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
# Convert unknown units dict to list and add required fields
|
||||
unknown_list = []
|
||||
for unit_id, unit_data in snapshot.get("unknown", {}).items():
|
||||
unknown_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"fname": unit_data.get("fname", ""),
|
||||
})
|
||||
|
||||
# Sort by ID for consistent display
|
||||
unknown_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"partials/unknown_emitters.html",
|
||||
{
|
||||
"request": request,
|
||||
"unknown_units": unknown_list
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/devices-all", response_class=HTMLResponse)
|
||||
async def get_all_devices(request: Request):
|
||||
"""
|
||||
Returns HTML partial with all devices (deployed, benched, retired, ignored).
|
||||
Called on page load and when filters are applied.
|
||||
"""
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
# Combine all units from different buckets
|
||||
all_units = []
|
||||
|
||||
# Add active units (deployed)
|
||||
for unit_id, unit_data in snapshot.get("active", {}).items():
|
||||
unit_info = {
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data.get("last", ""),
|
||||
"fname": unit_data.get("fname", ""),
|
||||
"deployed": True,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"location": unit_data.get("location", ""),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
}
|
||||
all_units.append(unit_info)
|
||||
|
||||
# Add benched units (not deployed, not retired)
|
||||
for unit_id, unit_data in snapshot.get("benched", {}).items():
|
||||
unit_info = {
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data.get("last", ""),
|
||||
"fname": unit_data.get("fname", ""),
|
||||
"deployed": False,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"location": unit_data.get("location", ""),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
}
|
||||
all_units.append(unit_info)
|
||||
|
||||
# Add retired units
|
||||
for unit_id, unit_data in snapshot.get("retired", {}).items():
|
||||
unit_info = {
|
||||
"id": unit_id,
|
||||
"status": "Retired",
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data.get("last", ""),
|
||||
"fname": unit_data.get("fname", ""),
|
||||
"deployed": False,
|
||||
"retired": True,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"location": unit_data.get("location", ""),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
}
|
||||
all_units.append(unit_info)
|
||||
|
||||
# Sort by ID for consistent display
|
||||
all_units.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"partials/devices_table.html",
|
||||
{
|
||||
"request": request,
|
||||
"units": all_units
|
||||
}
|
||||
)
|
||||
@@ -8,8 +8,8 @@ import shutil
|
||||
from PIL import Image
|
||||
from PIL.ExifTags import TAGS, GPSTAGS
|
||||
from sqlalchemy.orm import Session
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import RosterUnit
|
||||
|
||||
router = APIRouter(prefix="/api", tags=["photos"])
|
||||
|
||||
@@ -4,8 +4,8 @@ from datetime import datetime, timedelta
|
||||
from typing import Dict, Any
|
||||
import random
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
|
||||
router = APIRouter(prefix="/api", tags=["roster"])
|
||||
|
||||
@@ -8,8 +8,8 @@ import logging
|
||||
import httpx
|
||||
import os
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit, IgnoredUnit, Emitter, UnitHistory
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import RosterUnit, IgnoredUnit, Emitter, UnitHistory
|
||||
|
||||
router = APIRouter(prefix="/api/roster", tags=["roster-edit"])
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -7,11 +7,11 @@ from fastapi import APIRouter, Request, Depends, Query
|
||||
from fastapi.responses import HTMLResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from sqlalchemy.orm import Session
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import RosterUnit
|
||||
|
||||
router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"])
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
templates = Jinja2Templates(directory="app/ui/templates")
|
||||
|
||||
|
||||
@router.get("/stats", response_class=HTMLResponse)
|
||||
@@ -9,9 +9,9 @@ import io
|
||||
import shutil
|
||||
from pathlib import Path
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
|
||||
from backend.services.database_backup import DatabaseBackupService
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
|
||||
from app.seismo.services.database_backup import DatabaseBackupService
|
||||
|
||||
router = APIRouter(prefix="/api/settings", tags=["settings"])
|
||||
|
||||
@@ -3,8 +3,8 @@ from sqlalchemy.orm import Session
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.services.snapshot import emit_status_snapshot
|
||||
|
||||
router = APIRouter(prefix="/api", tags=["units"])
|
||||
|
||||
@@ -4,8 +4,8 @@ from pydantic import BaseModel
|
||||
from datetime import datetime
|
||||
from typing import Optional, List
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import Emitter
|
||||
from app.seismo.database import get_db
|
||||
from app.seismo.models import Emitter
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
0
app/seismo/services/__init__.py
Normal file
@@ -10,7 +10,7 @@ from datetime import datetime
|
||||
from typing import Optional
|
||||
import logging
|
||||
|
||||
from backend.services.database_backup import DatabaseBackupService
|
||||
from app.seismo.services.database_backup import DatabaseBackupService
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
from datetime import datetime, timezone
|
||||
from sqlalchemy.orm import Session
|
||||
|
||||
from backend.database import get_db_session
|
||||
from backend.models import Emitter, RosterUnit, IgnoredUnit
|
||||
from app.seismo.database import get_db_session
|
||||
from app.seismo.models import Emitter, RosterUnit, IgnoredUnit
|
||||
|
||||
|
||||
def ensure_utc(dt):
|
||||
@@ -60,7 +60,7 @@ def emit_status_snapshot():
|
||||
db = get_db_session()
|
||||
try:
|
||||
# Get user preferences for status thresholds
|
||||
from backend.models import UserPreferences
|
||||
from app.seismo.models import UserPreferences
|
||||
prefs = db.query(UserPreferences).filter_by(id=1).first()
|
||||
status_ok_threshold = prefs.status_ok_threshold_hours if prefs else 12
|
||||
status_pending_threshold = prefs.status_pending_threshold_hours if prefs else 24
|
||||
1
app/slm/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# SLMM addon package for NL43 integration.
|
||||
317
app/slm/dashboard.py
Normal file
@@ -0,0 +1,317 @@
|
||||
"""
|
||||
Dashboard API endpoints for SLM/NL43 devices.
|
||||
This layer aggregates and transforms data from the device API for UI consumption.
|
||||
"""
|
||||
from fastapi import APIRouter, Depends, HTTPException, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import func
|
||||
from typing import List, Dict, Any
|
||||
import logging
|
||||
|
||||
from app.slm.database import get_db as get_slm_db
|
||||
from app.slm.models import NL43Config, NL43Status
|
||||
from app.slm.services import NL43Client
|
||||
# Import seismo database for roster data
|
||||
from app.seismo.database import get_db as get_seismo_db
|
||||
from app.seismo.models import RosterUnit
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])
|
||||
templates = Jinja2Templates(directory="app/ui/templates")
|
||||
|
||||
|
||||
@router.get("/stats", response_class=HTMLResponse)
|
||||
async def get_dashboard_stats(request: Request, db: Session = Depends(get_seismo_db)):
|
||||
"""Get aggregate statistics for the SLM dashboard from roster (returns HTML)."""
|
||||
# Query SLMs from the roster
|
||||
slms = db.query(RosterUnit).filter_by(
|
||||
device_type="sound_level_meter",
|
||||
retired=False
|
||||
).all()
|
||||
|
||||
total_units = len(slms)
|
||||
deployed = sum(1 for s in slms if s.deployed)
|
||||
benched = sum(1 for s in slms if not s.deployed)
|
||||
|
||||
# For "active", count SLMs with recent check-ins (within last hour)
|
||||
from datetime import datetime, timedelta, timezone
|
||||
one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)
|
||||
active = sum(1 for s in slms if s.slm_last_check and s.slm_last_check >= one_hour_ago)
|
||||
|
||||
# Map to template variable names
|
||||
# total_count, deployed_count, active_count, benched_count
|
||||
return templates.TemplateResponse(
|
||||
"partials/slm_stats.html",
|
||||
{
|
||||
"request": request,
|
||||
"total_count": total_units,
|
||||
"deployed_count": deployed,
|
||||
"active_count": active,
|
||||
"benched_count": benched
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/units", response_class=HTMLResponse)
|
||||
async def get_units_list(request: Request, db: Session = Depends(get_seismo_db)):
|
||||
"""Get list of all SLM units from roster (returns HTML)."""
|
||||
# Query SLMs from the roster (not retired)
|
||||
slms = db.query(RosterUnit).filter_by(
|
||||
device_type="sound_level_meter",
|
||||
retired=False
|
||||
).order_by(RosterUnit.id).all()
|
||||
|
||||
units = []
|
||||
for slm in slms:
|
||||
# Map to template field names
|
||||
unit_data = {
|
||||
"id": slm.id,
|
||||
"slm_host": slm.slm_host,
|
||||
"slm_tcp_port": slm.slm_tcp_port,
|
||||
"slm_last_check": slm.slm_last_check,
|
||||
"slm_model": slm.slm_model or "NL-43",
|
||||
"address": slm.address,
|
||||
"deployed_with_modem_id": slm.deployed_with_modem_id,
|
||||
}
|
||||
units.append(unit_data)
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"partials/slm_unit_list.html",
|
||||
{
|
||||
"request": request,
|
||||
"units": units
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
|
||||
async def get_live_view(unit_id: str, request: Request, slm_db: Session = Depends(get_slm_db), roster_db: Session = Depends(get_seismo_db)):
|
||||
"""Get live measurement data for a specific unit (returns HTML)."""
|
||||
# Get unit from roster
|
||||
unit = roster_db.query(RosterUnit).filter_by(
|
||||
id=unit_id,
|
||||
device_type="sound_level_meter"
|
||||
).first()
|
||||
|
||||
if not unit:
|
||||
return templates.TemplateResponse(
|
||||
"partials/slm_live_view_error.html",
|
||||
{
|
||||
"request": request,
|
||||
"error": f"Unit {unit_id} not found in roster"
|
||||
}
|
||||
)
|
||||
|
||||
# Get status from monitoring database (may not exist yet)
|
||||
status = slm_db.query(NL43Status).filter_by(unit_id=unit_id).first()
|
||||
|
||||
# Get modem info if available
|
||||
modem = None
|
||||
modem_ip = None
|
||||
if unit.deployed_with_modem_id:
|
||||
modem = roster_db.query(RosterUnit).filter_by(
|
||||
id=unit.deployed_with_modem_id,
|
||||
device_type="modem"
|
||||
).first()
|
||||
if modem:
|
||||
modem_ip = modem.ip_address
|
||||
elif unit.slm_host:
|
||||
modem_ip = unit.slm_host
|
||||
|
||||
# Determine if measuring
|
||||
is_measuring = False
|
||||
if status and status.measurement_state:
|
||||
is_measuring = status.measurement_state.lower() == 'start'
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"partials/slm_live_view.html",
|
||||
{
|
||||
"request": request,
|
||||
"unit": unit,
|
||||
"modem": modem,
|
||||
"modem_ip": modem_ip,
|
||||
"current_status": status,
|
||||
"is_measuring": is_measuring
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/config/{unit_id}", response_class=HTMLResponse)
|
||||
async def get_unit_config(unit_id: str, request: Request, roster_db: Session = Depends(get_seismo_db)):
|
||||
"""Return the HTML config form for a specific unit."""
|
||||
unit = roster_db.query(RosterUnit).filter_by(
|
||||
id=unit_id,
|
||||
device_type="sound_level_meter"
|
||||
).first()
|
||||
|
||||
if not unit:
|
||||
raise HTTPException(status_code=404, detail="Unit configuration not found")
|
||||
|
||||
return templates.TemplateResponse(
|
||||
"partials/slm_config_form.html",
|
||||
{
|
||||
"request": request,
|
||||
"unit": unit
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.post("/config/{unit_id}")
|
||||
async def update_unit_config(
|
||||
unit_id: str,
|
||||
request: Request,
|
||||
roster_db: Session = Depends(get_seismo_db),
|
||||
slm_db: Session = Depends(get_slm_db)
|
||||
):
|
||||
"""Update configuration for a specific unit from the form submission."""
|
||||
unit = roster_db.query(RosterUnit).filter_by(
|
||||
id=unit_id,
|
||||
device_type="sound_level_meter"
|
||||
).first()
|
||||
|
||||
if not unit:
|
||||
raise HTTPException(status_code=404, detail="Unit configuration not found")
|
||||
|
||||
form = await request.form()
|
||||
|
||||
def get_int(value, default=None):
|
||||
try:
|
||||
return int(value) if value not in (None, "") else default
|
||||
except (TypeError, ValueError):
|
||||
return default
|
||||
|
||||
# Update roster fields
|
||||
unit.slm_model = form.get("slm_model") or unit.slm_model
|
||||
unit.slm_serial_number = form.get("slm_serial_number") or unit.slm_serial_number
|
||||
unit.slm_frequency_weighting = form.get("slm_frequency_weighting") or unit.slm_frequency_weighting
|
||||
unit.slm_time_weighting = form.get("slm_time_weighting") or unit.slm_time_weighting
|
||||
unit.slm_measurement_range = form.get("slm_measurement_range") or unit.slm_measurement_range
|
||||
|
||||
unit.slm_host = form.get("slm_host") or None
|
||||
unit.slm_tcp_port = get_int(form.get("slm_tcp_port"), unit.slm_tcp_port or 2255)
|
||||
unit.slm_ftp_port = get_int(form.get("slm_ftp_port"), unit.slm_ftp_port or 21)
|
||||
|
||||
deployed_with_modem_id = form.get("deployed_with_modem_id") or None
|
||||
unit.deployed_with_modem_id = deployed_with_modem_id
|
||||
|
||||
roster_db.commit()
|
||||
roster_db.refresh(unit)
|
||||
|
||||
# Update or create NL43 config so SLMM can reach the device
|
||||
config = slm_db.query(NL43Config).filter_by(unit_id=unit_id).first()
|
||||
if not config:
|
||||
config = NL43Config(unit_id=unit_id)
|
||||
slm_db.add(config)
|
||||
|
||||
# Resolve host from modem if present, otherwise fall back to direct IP or existing config
|
||||
host_for_config = None
|
||||
if deployed_with_modem_id:
|
||||
modem = roster_db.query(RosterUnit).filter_by(
|
||||
id=deployed_with_modem_id,
|
||||
device_type="modem"
|
||||
).first()
|
||||
if modem and modem.ip_address:
|
||||
host_for_config = modem.ip_address
|
||||
if not host_for_config:
|
||||
host_for_config = unit.slm_host or config.host or "127.0.0.1"
|
||||
|
||||
config.host = host_for_config
|
||||
config.tcp_port = get_int(form.get("slm_tcp_port"), config.tcp_port or 2255)
|
||||
config.tcp_enabled = True
|
||||
config.ftp_enabled = bool(config.ftp_username and config.ftp_password)
|
||||
|
||||
slm_db.commit()
|
||||
slm_db.refresh(config)
|
||||
|
||||
return {"success": True, "unit_id": unit_id}
|
||||
|
||||
|
||||
@router.post("/control/{unit_id}/{action}")
|
||||
async def control_unit(unit_id: str, action: str, db: Session = Depends(get_slm_db)):
|
||||
"""Send control command to a unit (start, stop, pause, resume, etc.)."""
|
||||
config = db.query(NL43Config).filter_by(unit_id=unit_id).first()
|
||||
if not config:
|
||||
raise HTTPException(status_code=404, detail="Unit configuration not found")
|
||||
|
||||
if not config.tcp_enabled:
|
||||
raise HTTPException(status_code=400, detail="TCP control not enabled for this unit")
|
||||
|
||||
# Create NL43Client
|
||||
client = NL43Client(
|
||||
host=config.host,
|
||||
port=config.tcp_port,
|
||||
timeout=5.0,
|
||||
ftp_username=config.ftp_username,
|
||||
ftp_password=config.ftp_password
|
||||
)
|
||||
|
||||
# Map action to command
|
||||
action_map = {
|
||||
"start": "start_measurement",
|
||||
"stop": "stop_measurement",
|
||||
"pause": "pause_measurement",
|
||||
"resume": "resume_measurement",
|
||||
"reset": "reset_measurement",
|
||||
"sleep": "sleep_mode",
|
||||
"wake": "wake_from_sleep",
|
||||
}
|
||||
|
||||
if action not in action_map:
|
||||
raise HTTPException(status_code=400, detail=f"Unknown action: {action}")
|
||||
|
||||
method_name = action_map[action]
|
||||
method = getattr(client, method_name, None)
|
||||
|
||||
if not method:
|
||||
raise HTTPException(status_code=500, detail=f"Method {method_name} not implemented")
|
||||
|
||||
try:
|
||||
result = await method()
|
||||
return {"success": True, "action": action, "result": result}
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing {action} on {unit_id}: {e}")
|
||||
raise HTTPException(status_code=500, detail=str(e))
|
||||
|
||||
|
||||
@router.get("/test-modem/{unit_id}")
|
||||
async def test_modem(unit_id: str, db: Session = Depends(get_slm_db)):
|
||||
"""Test connectivity to a unit's modem/device."""
|
||||
config = db.query(NL43Config).filter_by(unit_id=unit_id).first()
|
||||
if not config:
|
||||
raise HTTPException(status_code=404, detail="Unit configuration not found")
|
||||
|
||||
if not config.tcp_enabled:
|
||||
raise HTTPException(status_code=400, detail="TCP control not enabled for this unit")
|
||||
|
||||
client = NL43Client(
|
||||
host=config.host,
|
||||
port=config.tcp_port,
|
||||
timeout=5.0,
|
||||
ftp_username=config.ftp_username,
|
||||
ftp_password=config.ftp_password
|
||||
)
|
||||
|
||||
try:
|
||||
# Try to get measurement state as a connectivity test
|
||||
state = await client.get_measurement_state()
|
||||
return {
|
||||
"success": True,
|
||||
"unit_id": unit_id,
|
||||
"host": config.host,
|
||||
"port": config.tcp_port,
|
||||
"reachable": True,
|
||||
"measurement_state": state
|
||||
}
|
||||
except Exception as e:
|
||||
logger.warning(f"Modem test failed for {unit_id}: {e}")
|
||||
return {
|
||||
"success": False,
|
||||
"unit_id": unit_id,
|
||||
"host": config.host,
|
||||
"port": config.tcp_port,
|
||||
"reachable": False,
|
||||
"error": str(e)
|
||||
}
|
||||
27
app/slm/database.py
Normal file
@@ -0,0 +1,27 @@
|
||||
from sqlalchemy import create_engine
|
||||
from sqlalchemy.ext.declarative import declarative_base
|
||||
from sqlalchemy.orm import sessionmaker
|
||||
import os
|
||||
|
||||
# Ensure data directory exists for the SLMM addon
|
||||
os.makedirs("data", exist_ok=True)
|
||||
|
||||
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/slmm.db"
|
||||
|
||||
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
|
||||
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
|
||||
Base = declarative_base()
|
||||
|
||||
|
||||
def get_db():
|
||||
"""Dependency for database sessions."""
|
||||
db = SessionLocal()
|
||||
try:
|
||||
yield db
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
|
||||
def get_db_session():
|
||||
"""Get a database session directly (not as a dependency)."""
|
||||
return SessionLocal()
|
||||
116
app/slm/main.py
Normal file
@@ -0,0 +1,116 @@
|
||||
import os
|
||||
import logging
|
||||
from fastapi import FastAPI, Request
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.responses import HTMLResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
from app.slm.database import Base, engine
|
||||
from app.slm import routers
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
|
||||
handlers=[
|
||||
logging.StreamHandler(),
|
||||
logging.FileHandler("data/slmm.log"),
|
||||
],
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Ensure database tables exist for the addon
|
||||
Base.metadata.create_all(bind=engine)
|
||||
logger.info("Database tables initialized")
|
||||
|
||||
app = FastAPI(
|
||||
title="SLMM NL43 Addon",
|
||||
description="Standalone module for NL43 configuration and status APIs",
|
||||
version="0.1.0",
|
||||
)
|
||||
|
||||
# CORS configuration - use environment variable for allowed origins
|
||||
# Default to "*" for development, but should be restricted in production
|
||||
allowed_origins = os.getenv("CORS_ORIGINS", "*").split(",")
|
||||
logger.info(f"CORS allowed origins: {allowed_origins}")
|
||||
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=allowed_origins,
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
|
||||
app.include_router(routers.router)
|
||||
|
||||
|
||||
@app.get("/", response_class=HTMLResponse)
|
||||
def index(request: Request):
|
||||
return templates.TemplateResponse("index.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
async def health():
|
||||
"""Basic health check endpoint."""
|
||||
return {"status": "ok", "service": "slmm-nl43-addon"}
|
||||
|
||||
|
||||
@app.get("/health/devices")
|
||||
async def health_devices():
|
||||
"""Enhanced health check that tests device connectivity."""
|
||||
from sqlalchemy.orm import Session
|
||||
from app.slm.database import SessionLocal
|
||||
from app.slm.services import NL43Client
|
||||
from app.slm.models import NL43Config
|
||||
|
||||
db: Session = SessionLocal()
|
||||
device_status = []
|
||||
|
||||
try:
|
||||
configs = db.query(NL43Config).filter_by(tcp_enabled=True).all()
|
||||
|
||||
for cfg in configs:
|
||||
client = NL43Client(cfg.host, cfg.tcp_port, timeout=2.0, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password)
|
||||
status = {
|
||||
"unit_id": cfg.unit_id,
|
||||
"host": cfg.host,
|
||||
"port": cfg.tcp_port,
|
||||
"reachable": False,
|
||||
"error": None,
|
||||
}
|
||||
|
||||
try:
|
||||
# Try to connect (don't send command to avoid rate limiting issues)
|
||||
import asyncio
|
||||
reader, writer = await asyncio.wait_for(
|
||||
asyncio.open_connection(cfg.host, cfg.tcp_port), timeout=2.0
|
||||
)
|
||||
writer.close()
|
||||
await writer.wait_closed()
|
||||
status["reachable"] = True
|
||||
except Exception as e:
|
||||
status["error"] = str(type(e).__name__)
|
||||
logger.warning(f"Device {cfg.unit_id} health check failed: {e}")
|
||||
|
||||
device_status.append(status)
|
||||
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
all_reachable = all(d["reachable"] for d in device_status) if device_status else True
|
||||
|
||||
return {
|
||||
"status": "ok" if all_reachable else "degraded",
|
||||
"devices": device_status,
|
||||
"total_devices": len(device_status),
|
||||
"reachable_devices": sum(1 for d in device_status if d["reachable"]),
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
|
||||
uvicorn.run("app.main:app", host="0.0.0.0", port=int(os.getenv("PORT", "8100")), reload=True)
|
||||
43
app/slm/models.py
Normal file
@@ -0,0 +1,43 @@
|
||||
from sqlalchemy import Column, String, DateTime, Boolean, Integer, Text, func
|
||||
from app.slm.database import Base
|
||||
|
||||
|
||||
class NL43Config(Base):
|
||||
"""
|
||||
NL43 connection/config metadata for the standalone SLMM addon.
|
||||
"""
|
||||
|
||||
__tablename__ = "nl43_config"
|
||||
|
||||
unit_id = Column(String, primary_key=True, index=True)
|
||||
host = Column(String, default="127.0.0.1")
|
||||
tcp_port = Column(Integer, default=80) # NL43 TCP control port (via RX55)
|
||||
tcp_enabled = Column(Boolean, default=True)
|
||||
ftp_enabled = Column(Boolean, default=False)
|
||||
ftp_username = Column(String, nullable=True) # FTP login username
|
||||
ftp_password = Column(String, nullable=True) # FTP login password
|
||||
web_enabled = Column(Boolean, default=False)
|
||||
|
||||
|
||||
class NL43Status(Base):
|
||||
"""
|
||||
Latest NL43 status snapshot for quick dashboard/API access.
|
||||
"""
|
||||
|
||||
__tablename__ = "nl43_status"
|
||||
|
||||
unit_id = Column(String, primary_key=True, index=True)
|
||||
last_seen = Column(DateTime, default=func.now())
|
||||
measurement_state = Column(String, default="unknown") # Measure/Stop
|
||||
measurement_start_time = Column(DateTime, nullable=True) # When measurement started (UTC)
|
||||
counter = Column(String, nullable=True) # d0: Measurement interval counter (1-600)
|
||||
lp = Column(String, nullable=True) # Instantaneous sound pressure level
|
||||
leq = Column(String, nullable=True) # Equivalent continuous sound level
|
||||
lmax = Column(String, nullable=True) # Maximum level
|
||||
lmin = Column(String, nullable=True) # Minimum level
|
||||
lpeak = Column(String, nullable=True) # Peak level
|
||||
battery_level = Column(String, nullable=True)
|
||||
power_source = Column(String, nullable=True)
|
||||
sd_remaining_mb = Column(String, nullable=True)
|
||||
sd_free_ratio = Column(String, nullable=True)
|
||||
raw_payload = Column(Text, nullable=True)
|
||||
1333
app/slm/routers.py
Normal file
828
app/slm/services.py
Normal file
@@ -0,0 +1,828 @@
|
||||
"""
|
||||
NL43 TCP connector and snapshot persistence.
|
||||
|
||||
Implements simple per-request TCP calls to avoid long-lived socket complexity.
|
||||
Extend to pooled connections/DRD streaming later.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import contextlib
|
||||
import logging
|
||||
import time
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
from typing import Optional, List
|
||||
from sqlalchemy.orm import Session
|
||||
from ftplib import FTP
|
||||
from pathlib import Path
|
||||
|
||||
from app.slm.models import NL43Status
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class NL43Snapshot:
|
||||
unit_id: str
|
||||
measurement_state: str = "unknown"
|
||||
counter: Optional[str] = None # d0: Measurement interval counter (1-600)
|
||||
lp: Optional[str] = None # Instantaneous sound pressure level
|
||||
leq: Optional[str] = None # Equivalent continuous sound level
|
||||
lmax: Optional[str] = None # Maximum level
|
||||
lmin: Optional[str] = None # Minimum level
|
||||
lpeak: Optional[str] = None # Peak level
|
||||
battery_level: Optional[str] = None
|
||||
power_source: Optional[str] = None
|
||||
sd_remaining_mb: Optional[str] = None
|
||||
sd_free_ratio: Optional[str] = None
|
||||
raw_payload: Optional[str] = None
|
||||
|
||||
|
||||
def persist_snapshot(s: NL43Snapshot, db: Session):
|
||||
"""Persist the latest snapshot for API/dashboard use."""
|
||||
try:
|
||||
row = db.query(NL43Status).filter_by(unit_id=s.unit_id).first()
|
||||
if not row:
|
||||
row = NL43Status(unit_id=s.unit_id)
|
||||
db.add(row)
|
||||
|
||||
row.last_seen = datetime.utcnow()
|
||||
|
||||
# Track measurement start time by detecting state transition
|
||||
previous_state = row.measurement_state
|
||||
new_state = s.measurement_state
|
||||
|
||||
logger.info(f"State transition check for {s.unit_id}: '{previous_state}' -> '{new_state}'")
|
||||
|
||||
# Device returns "Start" when measuring, "Stop" when stopped
|
||||
# Normalize to previous behavior for backward compatibility
|
||||
is_measuring = new_state == "Start"
|
||||
was_measuring = previous_state == "Start"
|
||||
|
||||
if not was_measuring and is_measuring:
|
||||
# Measurement just started - record the start time
|
||||
row.measurement_start_time = datetime.utcnow()
|
||||
logger.info(f"✓ Measurement started on {s.unit_id} at {row.measurement_start_time}")
|
||||
elif was_measuring and not is_measuring:
|
||||
# Measurement stopped - clear the start time
|
||||
row.measurement_start_time = None
|
||||
logger.info(f"✓ Measurement stopped on {s.unit_id}")
|
||||
|
||||
row.measurement_state = new_state
|
||||
row.counter = s.counter
|
||||
row.lp = s.lp
|
||||
row.leq = s.leq
|
||||
row.lmax = s.lmax
|
||||
row.lmin = s.lmin
|
||||
row.lpeak = s.lpeak
|
||||
row.battery_level = s.battery_level
|
||||
row.power_source = s.power_source
|
||||
row.sd_remaining_mb = s.sd_remaining_mb
|
||||
row.sd_free_ratio = s.sd_free_ratio
|
||||
row.raw_payload = s.raw_payload
|
||||
|
||||
db.commit()
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
logger.error(f"Failed to persist snapshot for unit {s.unit_id}: {e}")
|
||||
raise
|
||||
|
||||
|
||||
# Rate limiting: NL43 requires ≥1 second between commands
|
||||
_last_command_time = {}
|
||||
_rate_limit_lock = asyncio.Lock()
|
||||
|
||||
|
||||
class NL43Client:
|
||||
def __init__(self, host: str, port: int, timeout: float = 5.0, ftp_username: str = None, ftp_password: str = None):
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.timeout = timeout
|
||||
self.ftp_username = ftp_username or "anonymous"
|
||||
self.ftp_password = ftp_password or ""
|
||||
self.device_key = f"{host}:{port}"
|
||||
|
||||
async def _enforce_rate_limit(self):
|
||||
"""Ensure ≥1 second between commands to the same device."""
|
||||
async with _rate_limit_lock:
|
||||
last_time = _last_command_time.get(self.device_key, 0)
|
||||
elapsed = time.time() - last_time
|
||||
if elapsed < 1.0:
|
||||
wait_time = 1.0 - elapsed
|
||||
logger.debug(f"Rate limiting: waiting {wait_time:.2f}s for {self.device_key}")
|
||||
await asyncio.sleep(wait_time)
|
||||
_last_command_time[self.device_key] = time.time()
|
||||
|
||||
async def _send_command(self, cmd: str) -> str:
|
||||
"""Send ASCII command to NL43 device via TCP.
|
||||
|
||||
NL43 protocol returns two lines for query commands:
|
||||
Line 1: Result code (R+0000 for success, error codes otherwise)
|
||||
Line 2: Actual data (for query commands ending with '?')
|
||||
"""
|
||||
await self._enforce_rate_limit()
|
||||
|
||||
logger.info(f"Sending command to {self.device_key}: {cmd.strip()}")
|
||||
|
||||
try:
|
||||
reader, writer = await asyncio.wait_for(
|
||||
asyncio.open_connection(self.host, self.port), timeout=self.timeout
|
||||
)
|
||||
except asyncio.TimeoutError:
|
||||
logger.error(f"Connection timeout to {self.device_key}")
|
||||
raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
|
||||
except Exception as e:
|
||||
logger.error(f"Connection failed to {self.device_key}: {e}")
|
||||
raise ConnectionError(f"Failed to connect to device: {str(e)}")
|
||||
|
||||
try:
|
||||
writer.write(cmd.encode("ascii"))
|
||||
await writer.drain()
|
||||
|
||||
# Read first line (result code)
|
||||
first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
|
||||
result_code = first_line_data.decode(errors="ignore").strip()
|
||||
|
||||
# Remove leading $ prompt if present
|
||||
if result_code.startswith("$"):
|
||||
result_code = result_code[1:].strip()
|
||||
|
||||
logger.info(f"Result code from {self.device_key}: {result_code}")
|
||||
|
||||
# Check result code
|
||||
if result_code == "R+0000":
|
||||
# Success - for query commands, read the second line with actual data
|
||||
is_query = cmd.strip().endswith("?")
|
||||
if is_query:
|
||||
data_line = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
|
||||
response = data_line.decode(errors="ignore").strip()
|
||||
logger.debug(f"Data line from {self.device_key}: {response}")
|
||||
return response
|
||||
else:
|
||||
# Setting command - return success code
|
||||
return result_code
|
||||
elif result_code == "R+0001":
|
||||
raise ValueError("Command error - device did not recognize command")
|
||||
elif result_code == "R+0002":
|
||||
raise ValueError("Parameter error - invalid parameter value")
|
||||
elif result_code == "R+0003":
|
||||
raise ValueError("Spec/type error - command not supported by this device model")
|
||||
elif result_code == "R+0004":
|
||||
raise ValueError("Status error - device is in wrong state for this command")
|
||||
else:
|
||||
raise ValueError(f"Unknown result code: {result_code}")
|
||||
|
||||
except asyncio.TimeoutError:
|
||||
logger.error(f"Response timeout from {self.device_key}")
|
||||
raise TimeoutError(f"Device did not respond within {self.timeout}s")
|
||||
except Exception as e:
|
||||
logger.error(f"Communication error with {self.device_key}: {e}")
|
||||
raise
|
||||
finally:
|
||||
writer.close()
|
||||
with contextlib.suppress(Exception):
|
||||
await writer.wait_closed()
|
||||
|
||||
async def request_dod(self) -> NL43Snapshot:
|
||||
"""Request DOD (Data Output Display) snapshot from device.
|
||||
|
||||
Returns parsed measurement data from the device display.
|
||||
"""
|
||||
# _send_command now handles result code validation and returns the data line
|
||||
resp = await self._send_command("DOD?\r\n")
|
||||
|
||||
# Validate response format
|
||||
if not resp:
|
||||
logger.warning(f"Empty data response from DOD command on {self.device_key}")
|
||||
raise ValueError("Device returned empty data for DOD? command")
|
||||
|
||||
# Remove leading $ prompt if present (shouldn't be there after _send_command, but be safe)
|
||||
if resp.startswith("$"):
|
||||
resp = resp[1:].strip()
|
||||
|
||||
parts = [p.strip() for p in resp.split(",") if p.strip() != ""]
|
||||
|
||||
# DOD should return at least some data points
|
||||
if len(parts) < 2:
|
||||
logger.error(f"Malformed DOD data from {self.device_key}: {resp}")
|
||||
raise ValueError(f"Malformed DOD data: expected comma-separated values, got: {resp}")
|
||||
|
||||
logger.info(f"Parsed {len(parts)} data points from DOD response")
|
||||
|
||||
# Query actual measurement state (DOD doesn't include this information)
|
||||
try:
|
||||
measurement_state = await self.get_measurement_state()
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to get measurement state, defaulting to 'Measure': {e}")
|
||||
measurement_state = "Measure"
|
||||
|
||||
snap = NL43Snapshot(unit_id="", raw_payload=resp, measurement_state=measurement_state)
|
||||
|
||||
# Parse known positions (based on NL43 communication guide - DRD format)
|
||||
# DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
|
||||
try:
|
||||
# Capture d0 (counter) for timer synchronization
|
||||
if len(parts) >= 1:
|
||||
snap.counter = parts[0] # d0: Measurement interval counter (1-600)
|
||||
if len(parts) >= 2:
|
||||
snap.lp = parts[1] # d1: Instantaneous sound pressure level
|
||||
if len(parts) >= 3:
|
||||
snap.leq = parts[2] # d2: Equivalent continuous sound level
|
||||
if len(parts) >= 4:
|
||||
snap.lmax = parts[3] # d3: Maximum level
|
||||
if len(parts) >= 5:
|
||||
snap.lmin = parts[4] # d4: Minimum level
|
||||
if len(parts) >= 6:
|
||||
snap.lpeak = parts[5] # d5: Peak level
|
||||
except (IndexError, ValueError) as e:
|
||||
logger.warning(f"Error parsing DOD data points: {e}")
|
||||
|
||||
return snap
|
||||
|
||||
async def start(self):
|
||||
"""Start measurement on the device.
|
||||
|
||||
According to NL43 protocol: Measure,Start (no $ prefix, capitalized param)
|
||||
"""
|
||||
await self._send_command("Measure,Start\r\n")
|
||||
|
||||
async def stop(self):
|
||||
"""Stop measurement on the device.
|
||||
|
||||
According to NL43 protocol: Measure,Stop (no $ prefix, capitalized param)
|
||||
"""
|
||||
await self._send_command("Measure,Stop\r\n")
|
||||
|
||||
async def set_store_mode_manual(self):
|
||||
"""Set the device to Manual Store mode.
|
||||
|
||||
According to NL43 protocol: Store Mode,Manual sets manual storage mode
|
||||
"""
|
||||
await self._send_command("Store Mode,Manual\r\n")
|
||||
logger.info(f"Store mode set to Manual on {self.device_key}")
|
||||
|
||||
async def manual_store(self):
|
||||
"""Manually store the current measurement data.
|
||||
|
||||
According to NL43 protocol: Manual Store,Start executes storing
|
||||
Parameter p1="Start" executes the storage operation
|
||||
Device must be in Manual Store mode first
|
||||
"""
|
||||
await self._send_command("Manual Store,Start\r\n")
|
||||
logger.info(f"Manual store executed on {self.device_key}")
|
||||
|
||||
async def pause(self):
|
||||
"""Pause the current measurement."""
|
||||
await self._send_command("Pause,On\r\n")
|
||||
logger.info(f"Measurement paused on {self.device_key}")
|
||||
|
||||
async def resume(self):
|
||||
"""Resume a paused measurement."""
|
||||
await self._send_command("Pause,Off\r\n")
|
||||
logger.info(f"Measurement resumed on {self.device_key}")
|
||||
|
||||
async def reset(self):
|
||||
"""Reset the measurement data."""
|
||||
await self._send_command("Reset\r\n")
|
||||
logger.info(f"Measurement data reset on {self.device_key}")
|
||||
|
||||
async def get_measurement_state(self) -> str:
|
||||
"""Get the current measurement state.
|
||||
|
||||
Returns: "Start" if measuring, "Stop" if stopped
|
||||
"""
|
||||
resp = await self._send_command("Measure?\r\n")
|
||||
state = resp.strip()
|
||||
logger.info(f"Measurement state on {self.device_key}: {state}")
|
||||
return state
|
||||
|
||||
async def get_battery_level(self) -> str:
|
||||
"""Get the battery level."""
|
||||
resp = await self._send_command("Battery Level?\r\n")
|
||||
logger.info(f"Battery level on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def get_clock(self) -> str:
|
||||
"""Get the device clock time."""
|
||||
resp = await self._send_command("Clock?\r\n")
|
||||
logger.info(f"Clock on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def set_clock(self, datetime_str: str):
|
||||
"""Set the device clock time.
|
||||
|
||||
Args:
|
||||
datetime_str: Time in format YYYY/MM/DD,HH:MM:SS or YYYY/MM/DD HH:MM:SS
|
||||
"""
|
||||
# Device expects format: Clock,YYYY/MM/DD HH:MM:SS (space between date and time)
|
||||
# Replace comma with space if present to normalize format
|
||||
normalized = datetime_str.replace(',', ' ', 1)
|
||||
await self._send_command(f"Clock,{normalized}\r\n")
|
||||
logger.info(f"Clock set on {self.device_key} to {normalized}")
|
||||
|
||||
async def get_frequency_weighting(self, channel: str = "Main") -> str:
|
||||
"""Get frequency weighting (A, C, Z, etc.).
|
||||
|
||||
Args:
|
||||
channel: Main, Sub1, Sub2, or Sub3
|
||||
"""
|
||||
resp = await self._send_command(f"Frequency Weighting ({channel})?\r\n")
|
||||
logger.info(f"Frequency weighting ({channel}) on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def set_frequency_weighting(self, weighting: str, channel: str = "Main"):
|
||||
"""Set frequency weighting.
|
||||
|
||||
Args:
|
||||
weighting: A, C, or Z
|
||||
channel: Main, Sub1, Sub2, or Sub3
|
||||
"""
|
||||
await self._send_command(f"Frequency Weighting ({channel}),{weighting}\r\n")
|
||||
logger.info(f"Frequency weighting ({channel}) set to {weighting} on {self.device_key}")
|
||||
|
||||
async def get_time_weighting(self, channel: str = "Main") -> str:
|
||||
"""Get time weighting (F, S, I).
|
||||
|
||||
Args:
|
||||
channel: Main, Sub1, Sub2, or Sub3
|
||||
"""
|
||||
resp = await self._send_command(f"Time Weighting ({channel})?\r\n")
|
||||
logger.info(f"Time weighting ({channel}) on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def set_time_weighting(self, weighting: str, channel: str = "Main"):
|
||||
"""Set time weighting.
|
||||
|
||||
Args:
|
||||
weighting: F (Fast), S (Slow), or I (Impulse)
|
||||
channel: Main, Sub1, Sub2, or Sub3
|
||||
"""
|
||||
await self._send_command(f"Time Weighting ({channel}),{weighting}\r\n")
|
||||
logger.info(f"Time weighting ({channel}) set to {weighting} on {self.device_key}")
|
||||
|
||||
async def request_dlc(self) -> dict:
|
||||
"""Request DLC (Data Last Calculation) - final stored measurement results.
|
||||
|
||||
This retrieves the complete calculation results from the last/current measurement,
|
||||
including all statistical data. Similar to DOD but for final results.
|
||||
|
||||
Returns:
|
||||
Dict with parsed DLC data
|
||||
"""
|
||||
resp = await self._send_command("DLC?\r\n")
|
||||
logger.info(f"DLC data received from {self.device_key}: {resp[:100]}...")
|
||||
|
||||
# Parse DLC response - similar format to DOD
|
||||
# The exact format depends on device configuration
|
||||
# For now, return raw data - can be enhanced based on actual response format
|
||||
return {
|
||||
"raw_data": resp.strip(),
|
||||
"device_key": self.device_key,
|
||||
}
|
||||
|
||||
async def sleep(self):
|
||||
"""Put the device into sleep mode to conserve battery.
|
||||
|
||||
Sleep mode is useful for battery conservation between scheduled measurements.
|
||||
Device can be woken up remotely via TCP command or by pressing a button.
|
||||
"""
|
||||
await self._send_command("Sleep Mode,On\r\n")
|
||||
logger.info(f"Device {self.device_key} entering sleep mode")
|
||||
|
||||
async def wake(self):
|
||||
"""Wake the device from sleep mode.
|
||||
|
||||
Note: This may not work if the device is in deep sleep.
|
||||
Physical button press might be required in some cases.
|
||||
"""
|
||||
await self._send_command("Sleep Mode,Off\r\n")
|
||||
logger.info(f"Device {self.device_key} waking from sleep mode")
|
||||
|
||||
async def get_sleep_status(self) -> str:
|
||||
"""Get the current sleep mode status."""
|
||||
resp = await self._send_command("Sleep Mode?\r\n")
|
||||
logger.info(f"Sleep mode status on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def stream_drd(self, callback):
|
||||
"""Stream continuous DRD output from the device.
|
||||
|
||||
Opens a persistent connection and streams DRD data lines.
|
||||
Calls the provided callback function with each parsed snapshot.
|
||||
|
||||
Args:
|
||||
callback: Async function that receives NL43Snapshot objects
|
||||
|
||||
The stream continues until an exception occurs or the connection is closed.
|
||||
Send SUB character (0x1A) to stop the stream.
|
||||
"""
|
||||
await self._enforce_rate_limit()
|
||||
|
||||
logger.info(f"Starting DRD stream for {self.device_key}")
|
||||
|
||||
try:
|
||||
reader, writer = await asyncio.wait_for(
|
||||
asyncio.open_connection(self.host, self.port), timeout=self.timeout
|
||||
)
|
||||
except asyncio.TimeoutError:
|
||||
logger.error(f"DRD stream connection timeout to {self.device_key}")
|
||||
raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
|
||||
except Exception as e:
|
||||
logger.error(f"DRD stream connection failed to {self.device_key}: {e}")
|
||||
raise ConnectionError(f"Failed to connect to device: {str(e)}")
|
||||
|
||||
try:
|
||||
# Start DRD streaming
|
||||
writer.write(b"DRD?\r\n")
|
||||
await writer.drain()
|
||||
|
||||
# Read initial result code
|
||||
first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
|
||||
result_code = first_line_data.decode(errors="ignore").strip()
|
||||
|
||||
if result_code.startswith("$"):
|
||||
result_code = result_code[1:].strip()
|
||||
|
||||
logger.debug(f"DRD stream result code from {self.device_key}: {result_code}")
|
||||
|
||||
if result_code != "R+0000":
|
||||
raise ValueError(f"DRD stream failed to start: {result_code}")
|
||||
|
||||
logger.info(f"DRD stream started successfully for {self.device_key}")
|
||||
|
||||
# Continuously read data lines
|
||||
while True:
|
||||
try:
|
||||
line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=30.0)
|
||||
line = line_data.decode(errors="ignore").strip()
|
||||
|
||||
if not line:
|
||||
continue
|
||||
|
||||
# Remove leading $ if present
|
||||
if line.startswith("$"):
|
||||
line = line[1:].strip()
|
||||
|
||||
# Parse the DRD data (same format as DOD)
|
||||
parts = [p.strip() for p in line.split(",") if p.strip() != ""]
|
||||
|
||||
if len(parts) < 2:
|
||||
logger.warning(f"Malformed DRD data from {self.device_key}: {line}")
|
||||
continue
|
||||
|
||||
snap = NL43Snapshot(unit_id="", raw_payload=line, measurement_state="Measure")
|
||||
|
||||
# Parse known positions (DRD format - same as DOD)
|
||||
# DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
|
||||
try:
|
||||
# Capture d0 (counter) for timer synchronization
|
||||
if len(parts) >= 1:
|
||||
snap.counter = parts[0] # d0: Measurement interval counter (1-600)
|
||||
if len(parts) >= 2:
|
||||
snap.lp = parts[1] # d1: Instantaneous sound pressure level
|
||||
if len(parts) >= 3:
|
||||
snap.leq = parts[2] # d2: Equivalent continuous sound level
|
||||
if len(parts) >= 4:
|
||||
snap.lmax = parts[3] # d3: Maximum level
|
||||
if len(parts) >= 5:
|
||||
snap.lmin = parts[4] # d4: Minimum level
|
||||
if len(parts) >= 6:
|
||||
snap.lpeak = parts[5] # d5: Peak level
|
||||
except (IndexError, ValueError) as e:
|
||||
logger.warning(f"Error parsing DRD data points: {e}")
|
||||
|
||||
# Call the callback with the snapshot
|
||||
await callback(snap)
|
||||
|
||||
except asyncio.TimeoutError:
|
||||
logger.warning(f"DRD stream timeout (no data for 30s) from {self.device_key}")
|
||||
break
|
||||
except asyncio.IncompleteReadError:
|
||||
logger.info(f"DRD stream closed by device {self.device_key}")
|
||||
break
|
||||
|
||||
finally:
|
||||
# Send SUB character to stop streaming
|
||||
try:
|
||||
writer.write(b"\x1A")
|
||||
await writer.drain()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
writer.close()
|
||||
with contextlib.suppress(Exception):
|
||||
await writer.wait_closed()
|
||||
|
||||
logger.info(f"DRD stream ended for {self.device_key}")
|
||||
|
||||
async def set_measurement_time(self, preset: str):
|
||||
"""Set measurement time preset.
|
||||
|
||||
Args:
|
||||
preset: Time preset (10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
|
||||
"""
|
||||
await self._send_command(f"Measurement Time Preset Manual,{preset}\r\n")
|
||||
logger.info(f"Set measurement time to {preset} on {self.device_key}")
|
||||
|
||||
async def get_measurement_time(self) -> str:
|
||||
"""Get current measurement time preset.
|
||||
|
||||
Returns: Current time preset setting
|
||||
"""
|
||||
resp = await self._send_command("Measurement Time Preset Manual?\r\n")
|
||||
return resp.strip()
|
||||
|
||||
async def set_leq_interval(self, preset: str):
|
||||
"""Set Leq calculation interval preset.
|
||||
|
||||
Args:
|
||||
preset: Interval preset (Off, 10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
|
||||
"""
|
||||
await self._send_command(f"Leq Calculation Interval Preset,{preset}\r\n")
|
||||
logger.info(f"Set Leq interval to {preset} on {self.device_key}")
|
||||
|
||||
async def get_leq_interval(self) -> str:
|
||||
"""Get current Leq calculation interval preset.
|
||||
|
||||
Returns: Current interval preset setting
|
||||
"""
|
||||
resp = await self._send_command("Leq Calculation Interval Preset?\r\n")
|
||||
return resp.strip()
|
||||
|
||||
async def set_lp_interval(self, preset: str):
|
||||
"""Set Lp store interval.
|
||||
|
||||
Args:
|
||||
preset: Store interval (Off, 10ms, 25ms, 100ms, 200ms, 1s)
|
||||
"""
|
||||
await self._send_command(f"Lp Store Interval,{preset}\r\n")
|
||||
logger.info(f"Set Lp interval to {preset} on {self.device_key}")
|
||||
|
||||
async def get_lp_interval(self) -> str:
|
||||
"""Get current Lp store interval.
|
||||
|
||||
Returns: Current store interval setting
|
||||
"""
|
||||
resp = await self._send_command("Lp Store Interval?\r\n")
|
||||
return resp.strip()
|
||||
|
||||
async def set_index_number(self, index: int):
|
||||
"""Set index number for file numbering (Store Name).
|
||||
|
||||
Args:
|
||||
index: Index number (0000-9999)
|
||||
"""
|
||||
if not 0 <= index <= 9999:
|
||||
raise ValueError("Index must be between 0000 and 9999")
|
||||
await self._send_command(f"Store Name,{index:04d}\r\n")
|
||||
logger.info(f"Set store name (index) to {index:04d} on {self.device_key}")
|
||||
|
||||
async def get_index_number(self) -> str:
|
||||
"""Get current index number (Store Name).
|
||||
|
||||
Returns: Current index number
|
||||
"""
|
||||
resp = await self._send_command("Store Name?\r\n")
|
||||
return resp.strip()
|
||||
|
||||
async def get_overwrite_status(self) -> str:
|
||||
"""Check if saved data exists at current store target.
|
||||
|
||||
This command checks whether saved data exists in the set store target
|
||||
(store mode / store name / store address). Use this before storing
|
||||
to prevent accidentally overwriting data.
|
||||
|
||||
Returns:
|
||||
"None" - No data exists (safe to store)
|
||||
"Exist" - Data exists (would overwrite)
|
||||
"""
|
||||
resp = await self._send_command("Overwrite?\r\n")
|
||||
return resp.strip()
|
||||
|
||||
async def get_all_settings(self) -> dict:
|
||||
"""Query all device settings for verification.
|
||||
|
||||
Returns: Dictionary with all current device settings
|
||||
"""
|
||||
settings = {}
|
||||
|
||||
# Measurement settings
|
||||
try:
|
||||
settings["measurement_state"] = await self.get_measurement_state()
|
||||
except Exception as e:
|
||||
settings["measurement_state"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["frequency_weighting"] = await self.get_frequency_weighting()
|
||||
except Exception as e:
|
||||
settings["frequency_weighting"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["time_weighting"] = await self.get_time_weighting()
|
||||
except Exception as e:
|
||||
settings["time_weighting"] = f"Error: {e}"
|
||||
|
||||
# Timing/interval settings
|
||||
try:
|
||||
settings["measurement_time"] = await self.get_measurement_time()
|
||||
except Exception as e:
|
||||
settings["measurement_time"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["leq_interval"] = await self.get_leq_interval()
|
||||
except Exception as e:
|
||||
settings["leq_interval"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["lp_interval"] = await self.get_lp_interval()
|
||||
except Exception as e:
|
||||
settings["lp_interval"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["index_number"] = await self.get_index_number()
|
||||
except Exception as e:
|
||||
settings["index_number"] = f"Error: {e}"
|
||||
|
||||
# Device info
|
||||
try:
|
||||
settings["battery_level"] = await self.get_battery_level()
|
||||
except Exception as e:
|
||||
settings["battery_level"] = f"Error: {e}"
|
||||
|
||||
try:
|
||||
settings["clock"] = await self.get_clock()
|
||||
except Exception as e:
|
||||
settings["clock"] = f"Error: {e}"
|
||||
|
||||
# Sleep mode
|
||||
try:
|
||||
settings["sleep_mode"] = await self.get_sleep_status()
|
||||
except Exception as e:
|
||||
settings["sleep_mode"] = f"Error: {e}"
|
||||
|
||||
# FTP status
|
||||
try:
|
||||
settings["ftp_status"] = await self.get_ftp_status()
|
||||
except Exception as e:
|
||||
settings["ftp_status"] = f"Error: {e}"
|
||||
|
||||
logger.info(f"Retrieved all settings for {self.device_key}")
|
||||
return settings
|
||||
|
||||
async def enable_ftp(self):
|
||||
"""Enable FTP server on the device.
|
||||
|
||||
According to NL43 protocol: FTP,On enables the FTP server
|
||||
"""
|
||||
await self._send_command("FTP,On\r\n")
|
||||
logger.info(f"FTP enabled on {self.device_key}")
|
||||
|
||||
async def disable_ftp(self):
|
||||
"""Disable FTP server on the device.
|
||||
|
||||
According to NL43 protocol: FTP,Off disables the FTP server
|
||||
"""
|
||||
await self._send_command("FTP,Off\r\n")
|
||||
logger.info(f"FTP disabled on {self.device_key}")
|
||||
|
||||
async def get_ftp_status(self) -> str:
|
||||
"""Query FTP server status on the device.
|
||||
|
||||
Returns: "On" or "Off"
|
||||
"""
|
||||
resp = await self._send_command("FTP?\r\n")
|
||||
logger.info(f"FTP status on {self.device_key}: {resp}")
|
||||
return resp.strip()
|
||||
|
||||
async def list_ftp_files(self, remote_path: str = "/") -> List[dict]:
|
||||
"""List files on the device via FTP.
|
||||
|
||||
Args:
|
||||
remote_path: Directory path on the device (default: root)
|
||||
|
||||
Returns:
|
||||
List of file info dicts with 'name', 'size', 'modified', 'is_dir'
|
||||
"""
|
||||
logger.info(f"Listing FTP files on {self.device_key} at {remote_path}")
|
||||
|
||||
def _list_ftp_sync():
|
||||
"""Synchronous FTP listing using ftplib (supports active mode)."""
|
||||
ftp = FTP()
|
||||
ftp.set_debuglevel(0)
|
||||
try:
|
||||
# Connect and login
|
||||
ftp.connect(self.host, 21, timeout=10)
|
||||
ftp.login(self.ftp_username, self.ftp_password)
|
||||
ftp.set_pasv(False) # Force active mode
|
||||
|
||||
# Change to target directory
|
||||
if remote_path != "/":
|
||||
ftp.cwd(remote_path)
|
||||
|
||||
# Get directory listing with details
|
||||
files = []
|
||||
lines = []
|
||||
ftp.retrlines('LIST', lines.append)
|
||||
|
||||
for line in lines:
|
||||
# Parse Unix-style ls output
|
||||
parts = line.split(None, 8)
|
||||
if len(parts) < 9:
|
||||
continue
|
||||
|
||||
is_dir = parts[0].startswith('d')
|
||||
size = int(parts[4]) if not is_dir else 0
|
||||
name = parts[8]
|
||||
|
||||
# Skip . and ..
|
||||
if name in ('.', '..'):
|
||||
continue
|
||||
|
||||
# Parse modification time
|
||||
# Format: "Jan 07 14:23" or "Dec 25 2025"
|
||||
modified_str = f"{parts[5]} {parts[6]} {parts[7]}"
|
||||
modified_timestamp = None
|
||||
try:
|
||||
from datetime import datetime
|
||||
# Try parsing with time (recent files: "Jan 07 14:23")
|
||||
try:
|
||||
dt = datetime.strptime(modified_str, "%b %d %H:%M")
|
||||
# Add current year since it's not in the format
|
||||
dt = dt.replace(year=datetime.now().year)
|
||||
|
||||
# If the resulting date is in the future, it's actually from last year
|
||||
if dt > datetime.now():
|
||||
dt = dt.replace(year=dt.year - 1)
|
||||
|
||||
modified_timestamp = dt.isoformat()
|
||||
except ValueError:
|
||||
# Try parsing with year (older files: "Dec 25 2025")
|
||||
dt = datetime.strptime(modified_str, "%b %d %Y")
|
||||
modified_timestamp = dt.isoformat()
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to parse timestamp '{modified_str}': {e}")
|
||||
|
||||
file_info = {
|
||||
"name": name,
|
||||
"path": f"{remote_path.rstrip('/')}/{name}",
|
||||
"size": size,
|
||||
"modified": modified_str, # Keep original string
|
||||
"modified_timestamp": modified_timestamp, # Add parsed timestamp
|
||||
"is_dir": is_dir,
|
||||
}
|
||||
files.append(file_info)
|
||||
logger.debug(f"Found file: {file_info}")
|
||||
|
||||
logger.info(f"Found {len(files)} files/directories on {self.device_key}")
|
||||
return files
|
||||
|
||||
finally:
|
||||
try:
|
||||
ftp.quit()
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
# Run synchronous FTP in thread pool
|
||||
return await asyncio.to_thread(_list_ftp_sync)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to list FTP files on {self.device_key}: {e}")
|
||||
raise ConnectionError(f"FTP connection failed: {str(e)}")
|
||||
|
||||
async def download_ftp_file(self, remote_path: str, local_path: str):
|
||||
"""Download a file from the device via FTP.
|
||||
|
||||
Args:
|
||||
remote_path: Full path to file on the device
|
||||
local_path: Local path where file will be saved
|
||||
"""
|
||||
logger.info(f"Downloading {remote_path} from {self.device_key} to {local_path}")
|
||||
|
||||
def _download_ftp_sync():
|
||||
"""Synchronous FTP download using ftplib (supports active mode)."""
|
||||
ftp = FTP()
|
||||
ftp.set_debuglevel(0)
|
||||
try:
|
||||
# Connect and login
|
||||
ftp.connect(self.host, 21, timeout=10)
|
||||
ftp.login(self.ftp_username, self.ftp_password)
|
||||
ftp.set_pasv(False) # Force active mode
|
||||
|
||||
# Download file
|
||||
with open(local_path, 'wb') as f:
|
||||
ftp.retrbinary(f'RETR {remote_path}', f.write)
|
||||
|
||||
logger.info(f"Successfully downloaded {remote_path} to {local_path}")
|
||||
|
||||
finally:
|
||||
try:
|
||||
ftp.quit()
|
||||
except:
|
||||
pass
|
||||
|
||||
try:
|
||||
# Run synchronous FTP in thread pool
|
||||
await asyncio.to_thread(_download_ftp_sync)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to download {remote_path} from {self.device_key}: {e}")
|
||||
raise ConnectionError(f"FTP download failed: {str(e)}")
|
||||
0
app/ui/__init__.py
Normal file
92
app/ui/routes.py
Normal file
@@ -0,0 +1,92 @@
|
||||
"""
|
||||
UI Layer Routes - HTML page routes only (no business logic)
|
||||
"""
|
||||
from fastapi import APIRouter, Request
|
||||
from fastapi.responses import HTMLResponse, FileResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
import os
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
# Setup Jinja2 templates
|
||||
templates = Jinja2Templates(directory="app/ui/templates")
|
||||
|
||||
# Read environment (development or production)
|
||||
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
|
||||
VERSION = "1.0.0" # Terra-View version
|
||||
|
||||
# Override TemplateResponse to include environment and version in context
|
||||
original_template_response = templates.TemplateResponse
|
||||
def custom_template_response(name, context=None, *args, **kwargs):
|
||||
if context is None:
|
||||
context = {}
|
||||
context["environment"] = ENVIRONMENT
|
||||
context["version"] = VERSION
|
||||
return original_template_response(name, context, *args, **kwargs)
|
||||
templates.TemplateResponse = custom_template_response
|
||||
|
||||
|
||||
# ===== HTML PAGE ROUTES =====
|
||||
|
||||
@router.get("/", response_class=HTMLResponse)
|
||||
async def dashboard(request: Request):
|
||||
"""Dashboard home page"""
|
||||
return templates.TemplateResponse("dashboard.html", {"request": request})
|
||||
|
||||
|
||||
@router.get("/roster", response_class=HTMLResponse)
|
||||
async def roster_page(request: Request):
|
||||
"""Fleet roster page"""
|
||||
return templates.TemplateResponse("roster.html", {"request": request})
|
||||
|
||||
|
||||
@router.get("/unit/{unit_id}", response_class=HTMLResponse)
|
||||
async def unit_detail_page(request: Request, unit_id: str):
|
||||
"""Unit detail page"""
|
||||
return templates.TemplateResponse("unit_detail.html", {
|
||||
"request": request,
|
||||
"unit_id": unit_id
|
||||
})
|
||||
|
||||
|
||||
@router.get("/settings", response_class=HTMLResponse)
|
||||
async def settings_page(request: Request):
|
||||
"""Settings page for roster management"""
|
||||
return templates.TemplateResponse("settings.html", {"request": request})
|
||||
|
||||
|
||||
@router.get("/sound-level-meters", response_class=HTMLResponse)
|
||||
async def sound_level_meters_page(request: Request):
|
||||
"""Sound Level Meters management dashboard"""
|
||||
return templates.TemplateResponse("sound_level_meters.html", {"request": request})
|
||||
|
||||
|
||||
@router.get("/seismographs", response_class=HTMLResponse)
|
||||
async def seismographs_page(request: Request):
|
||||
"""Seismographs management dashboard"""
|
||||
return templates.TemplateResponse("seismographs.html", {"request": request})
|
||||
|
||||
|
||||
# ===== PWA ROUTES =====
|
||||
|
||||
@router.get("/sw.js")
|
||||
async def service_worker():
|
||||
"""Serve service worker with proper headers for PWA"""
|
||||
return FileResponse(
|
||||
"app/ui/static/sw.js",
|
||||
media_type="application/javascript",
|
||||
headers={
|
||||
"Service-Worker-Allowed": "/",
|
||||
"Cache-Control": "no-cache"
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@router.get("/offline-db.js")
|
||||
async def offline_db_script():
|
||||
"""Serve offline database script"""
|
||||
return FileResponse(
|
||||
"app/ui/static/offline-db.js",
|
||||
media_type="application/javascript",
|
||||
headers={"Cache-Control": "no-cache"}
|
||||
)
|
||||
|
Before Width: | Height: | Size: 1.9 KiB After Width: | Height: | Size: 1.9 KiB |
|
Before Width: | Height: | Size: 288 B After Width: | Height: | Size: 288 B |
|
Before Width: | Height: | Size: 2.2 KiB After Width: | Height: | Size: 2.2 KiB |
|
Before Width: | Height: | Size: 287 B After Width: | Height: | Size: 287 B |
|
Before Width: | Height: | Size: 2.2 KiB After Width: | Height: | Size: 2.2 KiB |
|
Before Width: | Height: | Size: 288 B After Width: | Height: | Size: 288 B |
|
Before Width: | Height: | Size: 2.9 KiB After Width: | Height: | Size: 2.9 KiB |
|
Before Width: | Height: | Size: 288 B After Width: | Height: | Size: 288 B |
|
Before Width: | Height: | Size: 5.8 KiB After Width: | Height: | Size: 5.8 KiB |
|
Before Width: | Height: | Size: 287 B After Width: | Height: | Size: 287 B |
|
Before Width: | Height: | Size: 7.8 KiB After Width: | Height: | Size: 7.8 KiB |
|
Before Width: | Height: | Size: 288 B After Width: | Height: | Size: 288 B |
|
Before Width: | Height: | Size: 1.1 KiB After Width: | Height: | Size: 1.1 KiB |
|
Before Width: | Height: | Size: 283 B After Width: | Height: | Size: 283 B |
|
Before Width: | Height: | Size: 1.4 KiB After Width: | Height: | Size: 1.4 KiB |
|
Before Width: | Height: | Size: 283 B After Width: | Height: | Size: 283 B |
1466
app/ui/templates/partials/slm_live_view.html
Normal file
@@ -167,10 +167,18 @@ function initLiveDataStream(unitId) {
|
||||
if (stopBtn) stopBtn.style.display = 'flex';
|
||||
};
|
||||
|
||||
currentWebSocket.onmessage = function(event) {
|
||||
const data = JSON.parse(event.data);
|
||||
currentWebSocket.onmessage = async function(event) {
|
||||
try {
|
||||
let payload = event.data;
|
||||
if (payload instanceof Blob) {
|
||||
payload = await payload.text();
|
||||
}
|
||||
const data = typeof payload === 'string' ? JSON.parse(payload) : payload;
|
||||
updateLiveChart(data);
|
||||
updateLiveMetrics(data);
|
||||
} catch (error) {
|
||||
console.error('Error parsing WebSocket message:', error);
|
||||
}
|
||||
};
|
||||
|
||||
currentWebSocket.onerror = function(error) {
|
||||
527
backend/main.py
@@ -1,527 +0,0 @@
|
||||
import os
|
||||
import logging
|
||||
from fastapi import FastAPI, Request, Depends
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from fastapi.responses import HTMLResponse, FileResponse, JSONResponse
|
||||
from fastapi.exceptions import RequestValidationError
|
||||
from sqlalchemy.orm import Session
|
||||
from typing import List, Dict
|
||||
from pydantic import BaseModel
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
from backend.database import engine, Base, get_db
|
||||
from backend.routers import roster, units, photos, roster_edit, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from backend.models import IgnoredUnit
|
||||
|
||||
# Create database tables
|
||||
Base.metadata.create_all(bind=engine)
|
||||
|
||||
# Read environment (development or production)
|
||||
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
|
||||
|
||||
# Initialize FastAPI app
|
||||
VERSION = "0.4.2"
|
||||
app = FastAPI(
|
||||
title="Seismo Fleet Manager",
|
||||
description="Backend API for managing seismograph fleet status",
|
||||
version=VERSION
|
||||
)
|
||||
|
||||
# Add validation error handler to log details
|
||||
@app.exception_handler(RequestValidationError)
|
||||
async def validation_exception_handler(request: Request, exc: RequestValidationError):
|
||||
logger.error(f"Validation error on {request.url}: {exc.errors()}")
|
||||
logger.error(f"Body: {await request.body()}")
|
||||
return JSONResponse(
|
||||
status_code=400,
|
||||
content={"detail": exc.errors()}
|
||||
)
|
||||
|
||||
# Configure CORS
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Mount static files
|
||||
app.mount("/static", StaticFiles(directory="backend/static"), name="static")
|
||||
|
||||
# Setup Jinja2 templates
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
|
||||
# Add custom context processor to inject environment variable into all templates
|
||||
@app.middleware("http")
|
||||
async def add_environment_to_context(request: Request, call_next):
|
||||
"""Middleware to add environment variable to request state"""
|
||||
request.state.environment = ENVIRONMENT
|
||||
response = await call_next(request)
|
||||
return response
|
||||
|
||||
# Override TemplateResponse to include environment and version in context
|
||||
original_template_response = templates.TemplateResponse
|
||||
def custom_template_response(name, context=None, *args, **kwargs):
|
||||
if context is None:
|
||||
context = {}
|
||||
context["environment"] = ENVIRONMENT
|
||||
context["version"] = VERSION
|
||||
return original_template_response(name, context, *args, **kwargs)
|
||||
templates.TemplateResponse = custom_template_response
|
||||
|
||||
# Include API routers
|
||||
app.include_router(roster.router)
|
||||
app.include_router(units.router)
|
||||
app.include_router(photos.router)
|
||||
app.include_router(roster_edit.router)
|
||||
app.include_router(dashboard.router)
|
||||
app.include_router(dashboard_tabs.router)
|
||||
app.include_router(activity.router)
|
||||
app.include_router(slmm.router)
|
||||
app.include_router(slm_ui.router)
|
||||
app.include_router(slm_dashboard.router)
|
||||
app.include_router(seismo_dashboard.router)
|
||||
|
||||
from backend.routers import settings
|
||||
app.include_router(settings.router)
|
||||
|
||||
|
||||
|
||||
# Legacy routes from the original backend
|
||||
from backend import routes as legacy_routes
|
||||
app.include_router(legacy_routes.router)
|
||||
|
||||
|
||||
# HTML page routes
|
||||
@app.get("/", response_class=HTMLResponse)
|
||||
async def dashboard(request: Request):
|
||||
"""Dashboard home page"""
|
||||
return templates.TemplateResponse("dashboard.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/roster", response_class=HTMLResponse)
|
||||
async def roster_page(request: Request):
|
||||
"""Fleet roster page"""
|
||||
return templates.TemplateResponse("roster.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/unit/{unit_id}", response_class=HTMLResponse)
|
||||
async def unit_detail_page(request: Request, unit_id: str):
|
||||
"""Unit detail page"""
|
||||
return templates.TemplateResponse("unit_detail.html", {
|
||||
"request": request,
|
||||
"unit_id": unit_id
|
||||
})
|
||||
|
||||
|
||||
@app.get("/settings", response_class=HTMLResponse)
|
||||
async def settings_page(request: Request):
|
||||
"""Settings page for roster management"""
|
||||
return templates.TemplateResponse("settings.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/sound-level-meters", response_class=HTMLResponse)
|
||||
async def sound_level_meters_page(request: Request):
|
||||
"""Sound Level Meters management dashboard"""
|
||||
return templates.TemplateResponse("sound_level_meters.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/seismographs", response_class=HTMLResponse)
|
||||
async def seismographs_page(request: Request):
|
||||
"""Seismographs management dashboard"""
|
||||
return templates.TemplateResponse("seismographs.html", {"request": request})
|
||||
|
||||
|
||||
# ===== PWA ROUTES =====
|
||||
|
||||
@app.get("/sw.js")
|
||||
async def service_worker():
|
||||
"""Serve service worker with proper headers for PWA"""
|
||||
return FileResponse(
|
||||
"backend/static/sw.js",
|
||||
media_type="application/javascript",
|
||||
headers={
|
||||
"Service-Worker-Allowed": "/",
|
||||
"Cache-Control": "no-cache"
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@app.get("/offline-db.js")
|
||||
async def offline_db_script():
|
||||
"""Serve offline database script"""
|
||||
return FileResponse(
|
||||
"backend/static/offline-db.js",
|
||||
media_type="application/javascript",
|
||||
headers={"Cache-Control": "no-cache"}
|
||||
)
|
||||
|
||||
|
||||
# Pydantic model for sync edits
|
||||
class EditItem(BaseModel):
|
||||
id: int
|
||||
unitId: str
|
||||
changes: Dict
|
||||
timestamp: int
|
||||
|
||||
|
||||
class SyncEditsRequest(BaseModel):
|
||||
edits: List[EditItem]
|
||||
|
||||
|
||||
@app.post("/api/sync-edits")
|
||||
async def sync_edits(request: SyncEditsRequest, db: Session = Depends(get_db)):
|
||||
"""Process offline edit queue and sync to database"""
|
||||
from backend.models import RosterUnit
|
||||
|
||||
results = []
|
||||
synced_ids = []
|
||||
|
||||
for edit in request.edits:
|
||||
try:
|
||||
# Find the unit
|
||||
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
|
||||
|
||||
if not unit:
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": f"Unit {edit.unitId} not found"
|
||||
})
|
||||
continue
|
||||
|
||||
# Apply changes
|
||||
for key, value in edit.changes.items():
|
||||
if hasattr(unit, key):
|
||||
# Handle boolean conversions
|
||||
if key in ['deployed', 'retired']:
|
||||
setattr(unit, key, value in ['true', True, 'True', '1', 1])
|
||||
else:
|
||||
setattr(unit, key, value if value != '' else None)
|
||||
|
||||
db.commit()
|
||||
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "success"
|
||||
})
|
||||
synced_ids.append(edit.id)
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": str(e)
|
||||
})
|
||||
|
||||
synced_count = len(synced_ids)
|
||||
|
||||
return JSONResponse({
|
||||
"synced": synced_count,
|
||||
"total": len(request.edits),
|
||||
"synced_ids": synced_ids,
|
||||
"results": results
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-deployed", response_class=HTMLResponse)
|
||||
async def roster_deployed_partial(request: Request):
|
||||
"""Partial template for deployed units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["active"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "Unknown"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": unit_data.get("deployed", False),
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by status priority (Missing > Pending > OK) then by ID
|
||||
status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
|
||||
units_list.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))
|
||||
|
||||
return templates.TemplateResponse("partials/roster_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-benched", response_class=HTMLResponse)
|
||||
async def roster_benched_partial(request: Request):
|
||||
"""Partial template for benched units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["benched"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "N/A"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": unit_data.get("deployed", False),
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
units_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/roster_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-retired", response_class=HTMLResponse)
|
||||
async def roster_retired_partial(request: Request):
|
||||
"""Partial template for retired units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["retired"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data["last"],
|
||||
"deployed": unit_data["deployed"],
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
units_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/retired_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-ignored", response_class=HTMLResponse)
|
||||
async def roster_ignored_partial(request: Request, db: Session = Depends(get_db)):
|
||||
"""Partial template for ignored units tab"""
|
||||
from datetime import datetime
|
||||
|
||||
ignored = db.query(IgnoredUnit).all()
|
||||
ignored_list = []
|
||||
for unit in ignored:
|
||||
ignored_list.append({
|
||||
"id": unit.id,
|
||||
"reason": unit.reason or "",
|
||||
"ignored_at": unit.ignored_at.strftime("%Y-%m-%d %H:%M:%S") if unit.ignored_at else "Unknown"
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
ignored_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/ignored_table.html", {
|
||||
"request": request,
|
||||
"ignored_units": ignored_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/unknown-emitters", response_class=HTMLResponse)
|
||||
async def unknown_emitters_partial(request: Request):
|
||||
"""Partial template for unknown emitters (HTMX)"""
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
unknown_list = []
|
||||
for unit_id, unit_data in snapshot.get("unknown", {}).items():
|
||||
unknown_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"fname": unit_data.get("fname", ""),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
unknown_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/unknown_emitters.html", {
|
||||
"request": request,
|
||||
"unknown_units": unknown_list
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/devices-all", response_class=HTMLResponse)
|
||||
async def devices_all_partial(request: Request):
|
||||
"""Unified partial template for ALL devices with comprehensive filtering support"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
|
||||
# Add deployed/active units
|
||||
for unit_id, unit_data in snapshot["active"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "Unknown"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": True,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add benched units
|
||||
for unit_id, unit_data in snapshot["benched"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "N/A"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": False,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add retired units
|
||||
for unit_id, unit_data in snapshot["retired"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": "Retired",
|
||||
"age": "N/A",
|
||||
"last_seen": "N/A",
|
||||
"deployed": False,
|
||||
"retired": True,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add ignored units
|
||||
for unit_id, unit_data in snapshot.get("ignored", {}).items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": "Ignored",
|
||||
"age": "N/A",
|
||||
"last_seen": "N/A",
|
||||
"deployed": False,
|
||||
"retired": False,
|
||||
"ignored": True,
|
||||
"note": unit_data.get("note", unit_data.get("reason", "")),
|
||||
"device_type": unit_data.get("device_type", "unknown"),
|
||||
"address": "",
|
||||
"coordinates": "",
|
||||
"project_id": "",
|
||||
"last_calibrated": None,
|
||||
"next_calibration_due": None,
|
||||
"deployed_with_modem_id": None,
|
||||
"ip_address": None,
|
||||
"phone_number": None,
|
||||
"hardware_model": None,
|
||||
})
|
||||
|
||||
# Sort by status category, then by ID
|
||||
def sort_key(unit):
|
||||
# Priority: deployed (active) -> benched -> retired -> ignored
|
||||
if unit["deployed"]:
|
||||
return (0, unit["id"])
|
||||
elif not unit["retired"] and not unit["ignored"]:
|
||||
return (1, unit["id"])
|
||||
elif unit["retired"]:
|
||||
return (2, unit["id"])
|
||||
else:
|
||||
return (3, unit["id"])
|
||||
|
||||
units_list.sort(key=sort_key)
|
||||
|
||||
return templates.TemplateResponse("partials/devices_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
def health_check():
|
||||
"""Health check endpoint"""
|
||||
return {
|
||||
"message": f"Seismo Fleet Manager v{VERSION}",
|
||||
"status": "running",
|
||||
"version": VERSION
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8001)
|
||||
@@ -1,84 +0,0 @@
|
||||
"""
|
||||
Migration script to add device type support to the roster table.
|
||||
|
||||
This adds columns for:
|
||||
- device_type (seismograph/modem discriminator)
|
||||
- Seismograph-specific fields (calibration dates, modem pairing)
|
||||
- Modem-specific fields (IP address, phone number, hardware model)
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
def migrate_database():
|
||||
"""Add new columns to the roster table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if device_type column already exists
|
||||
cursor.execute("PRAGMA table_info(roster)")
|
||||
columns = [col[1] for col in cursor.fetchall()]
|
||||
|
||||
if "device_type" in columns:
|
||||
print("Migration already applied - device_type column exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
print("Adding new columns to roster table...")
|
||||
|
||||
try:
|
||||
# Add device type discriminator
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN device_type TEXT DEFAULT 'seismograph'")
|
||||
print(" ✓ Added device_type column")
|
||||
|
||||
# Add seismograph-specific fields
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN last_calibrated DATE")
|
||||
print(" ✓ Added last_calibrated column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN next_calibration_due DATE")
|
||||
print(" ✓ Added next_calibration_due column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_modem_id TEXT")
|
||||
print(" ✓ Added deployed_with_modem_id column")
|
||||
|
||||
# Add modem-specific fields
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN ip_address TEXT")
|
||||
print(" ✓ Added ip_address column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN phone_number TEXT")
|
||||
print(" ✓ Added phone_number column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN hardware_model TEXT")
|
||||
print(" ✓ Added hardware_model column")
|
||||
|
||||
# Set all existing units to seismograph type
|
||||
cursor.execute("UPDATE roster SET device_type = 'seismograph' WHERE device_type IS NULL")
|
||||
print(" ✓ Set existing units to seismograph type")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
@@ -1,78 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Database migration: Add sound level meter fields to roster table.
|
||||
|
||||
Adds columns for sound_level_meter device type support.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
from pathlib import Path
|
||||
|
||||
def migrate():
|
||||
"""Add SLM fields to roster table if they don't exist."""
|
||||
|
||||
# Try multiple possible database locations
|
||||
possible_paths = [
|
||||
Path("data/seismo_fleet.db"),
|
||||
Path("data/sfm.db"),
|
||||
Path("data/seismo.db"),
|
||||
]
|
||||
|
||||
db_path = None
|
||||
for path in possible_paths:
|
||||
if path.exists():
|
||||
db_path = path
|
||||
break
|
||||
|
||||
if db_path is None:
|
||||
print(f"Database not found in any of: {[str(p) for p in possible_paths]}")
|
||||
print("Creating database with models.py will include new fields automatically.")
|
||||
return
|
||||
|
||||
print(f"Using database: {db_path}")
|
||||
|
||||
conn = sqlite3.connect(db_path)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if columns already exist
|
||||
cursor.execute("PRAGMA table_info(roster)")
|
||||
existing_columns = {row[1] for row in cursor.fetchall()}
|
||||
|
||||
new_columns = {
|
||||
"slm_host": "TEXT",
|
||||
"slm_tcp_port": "INTEGER",
|
||||
"slm_model": "TEXT",
|
||||
"slm_serial_number": "TEXT",
|
||||
"slm_frequency_weighting": "TEXT",
|
||||
"slm_time_weighting": "TEXT",
|
||||
"slm_measurement_range": "TEXT",
|
||||
"slm_last_check": "DATETIME",
|
||||
}
|
||||
|
||||
migrations_applied = []
|
||||
|
||||
for column_name, column_type in new_columns.items():
|
||||
if column_name not in existing_columns:
|
||||
try:
|
||||
cursor.execute(f"ALTER TABLE roster ADD COLUMN {column_name} {column_type}")
|
||||
migrations_applied.append(column_name)
|
||||
print(f"✓ Added column: {column_name} ({column_type})")
|
||||
except sqlite3.OperationalError as e:
|
||||
print(f"✗ Failed to add column {column_name}: {e}")
|
||||
else:
|
||||
print(f"○ Column already exists: {column_name}")
|
||||
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
if migrations_applied:
|
||||
print(f"\n✓ Migration complete! Added {len(migrations_applied)} new columns.")
|
||||
else:
|
||||
print("\n○ No migration needed - all columns already exist.")
|
||||
|
||||
print("\nSound level meter fields are now available in the roster table.")
|
||||
print("You can now set device_type='sound_level_meter' for SLM devices.")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate()
|
||||
@@ -1,78 +0,0 @@
|
||||
"""
|
||||
Migration script to add unit history timeline support.
|
||||
|
||||
This creates the unit_history table to track all changes to units:
|
||||
- Note changes (archived old notes, new notes)
|
||||
- Deployment status changes (deployed/benched)
|
||||
- Retired status changes
|
||||
- Other field changes
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
def migrate_database():
|
||||
"""Create the unit_history table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if unit_history table already exists
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='unit_history'")
|
||||
if cursor.fetchone():
|
||||
print("Migration already applied - unit_history table exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
print("Creating unit_history table...")
|
||||
|
||||
try:
|
||||
cursor.execute("""
|
||||
CREATE TABLE unit_history (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
unit_id TEXT NOT NULL,
|
||||
change_type TEXT NOT NULL,
|
||||
field_name TEXT,
|
||||
old_value TEXT,
|
||||
new_value TEXT,
|
||||
changed_at TIMESTAMP NOT NULL,
|
||||
source TEXT DEFAULT 'manual',
|
||||
notes TEXT
|
||||
)
|
||||
""")
|
||||
print(" ✓ Created unit_history table")
|
||||
|
||||
# Create indexes for better query performance
|
||||
cursor.execute("CREATE INDEX idx_unit_history_unit_id ON unit_history(unit_id)")
|
||||
print(" ✓ Created index on unit_id")
|
||||
|
||||
cursor.execute("CREATE INDEX idx_unit_history_changed_at ON unit_history(changed_at)")
|
||||
print(" ✓ Created index on changed_at")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
print("Units will now track their complete history of changes.")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
@@ -1,80 +0,0 @@
|
||||
"""
|
||||
Migration script to add user_preferences table.
|
||||
|
||||
This creates a new table for storing persistent user preferences:
|
||||
- Display settings (timezone, theme, date format)
|
||||
- Auto-refresh configuration
|
||||
- Calibration defaults
|
||||
- Status threshold customization
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
def migrate_database():
|
||||
"""Create user_preferences table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if user_preferences table already exists
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='user_preferences'")
|
||||
table_exists = cursor.fetchone()
|
||||
|
||||
if table_exists:
|
||||
print("Migration already applied - user_preferences table exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
print("Creating user_preferences table...")
|
||||
|
||||
try:
|
||||
cursor.execute("""
|
||||
CREATE TABLE user_preferences (
|
||||
id INTEGER PRIMARY KEY DEFAULT 1,
|
||||
timezone TEXT DEFAULT 'America/New_York',
|
||||
theme TEXT DEFAULT 'auto',
|
||||
auto_refresh_interval INTEGER DEFAULT 10,
|
||||
date_format TEXT DEFAULT 'MM/DD/YYYY',
|
||||
table_rows_per_page INTEGER DEFAULT 25,
|
||||
calibration_interval_days INTEGER DEFAULT 365,
|
||||
calibration_warning_days INTEGER DEFAULT 30,
|
||||
status_ok_threshold_hours INTEGER DEFAULT 12,
|
||||
status_pending_threshold_hours INTEGER DEFAULT 24,
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
)
|
||||
""")
|
||||
print(" ✓ Created user_preferences table")
|
||||
|
||||
# Insert default preferences
|
||||
cursor.execute("""
|
||||
INSERT INTO user_preferences (id) VALUES (1)
|
||||
""")
|
||||
print(" ✓ Inserted default preferences")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
@@ -1,328 +0,0 @@
|
||||
"""
|
||||
SLM Dashboard Router
|
||||
|
||||
Provides API endpoints for the Sound Level Meters dashboard page.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Request, Depends, Query
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from fastapi.responses import HTMLResponse
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import func
|
||||
from datetime import datetime, timedelta
|
||||
import httpx
|
||||
import logging
|
||||
import os
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit
|
||||
from backend.routers.roster_edit import sync_slm_to_slmm_cache
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
|
||||
# SLMM backend URL - configurable via environment variable
|
||||
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
|
||||
|
||||
|
||||
@router.get("/stats", response_class=HTMLResponse)
|
||||
async def get_slm_stats(request: Request, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get summary statistics for SLM dashboard.
|
||||
Returns HTML partial with stat cards.
|
||||
"""
|
||||
# Query all SLMs
|
||||
all_slms = db.query(RosterUnit).filter_by(device_type="sound_level_meter").all()
|
||||
|
||||
# Count deployed vs benched
|
||||
deployed_count = sum(1 for slm in all_slms if slm.deployed and not slm.retired)
|
||||
benched_count = sum(1 for slm in all_slms if not slm.deployed and not slm.retired)
|
||||
retired_count = sum(1 for slm in all_slms if slm.retired)
|
||||
|
||||
# Count recently active (checked in last hour)
|
||||
one_hour_ago = datetime.utcnow() - timedelta(hours=1)
|
||||
active_count = sum(1 for slm in all_slms
|
||||
if slm.slm_last_check and slm.slm_last_check > one_hour_ago)
|
||||
|
||||
return templates.TemplateResponse("partials/slm_stats.html", {
|
||||
"request": request,
|
||||
"total_count": len(all_slms),
|
||||
"deployed_count": deployed_count,
|
||||
"benched_count": benched_count,
|
||||
"active_count": active_count,
|
||||
"retired_count": retired_count
|
||||
})
|
||||
|
||||
|
||||
@router.get("/units", response_class=HTMLResponse)
|
||||
async def get_slm_units(
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
search: str = Query(None)
|
||||
):
|
||||
"""
|
||||
Get list of SLM units for the sidebar.
|
||||
Returns HTML partial with unit cards.
|
||||
"""
|
||||
query = db.query(RosterUnit).filter_by(device_type="sound_level_meter")
|
||||
|
||||
# Filter by search term if provided
|
||||
if search:
|
||||
search_term = f"%{search}%"
|
||||
query = query.filter(
|
||||
(RosterUnit.id.like(search_term)) |
|
||||
(RosterUnit.slm_model.like(search_term)) |
|
||||
(RosterUnit.address.like(search_term))
|
||||
)
|
||||
|
||||
# Only show deployed units by default
|
||||
units = query.filter_by(deployed=True, retired=False).order_by(RosterUnit.id).all()
|
||||
|
||||
return templates.TemplateResponse("partials/slm_unit_list.html", {
|
||||
"request": request,
|
||||
"units": units
|
||||
})
|
||||
|
||||
|
||||
@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
|
||||
async def get_live_view(request: Request, unit_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get live view panel for a specific SLM unit.
|
||||
Returns HTML partial with live metrics and chart.
|
||||
"""
|
||||
# Get unit from database
|
||||
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first()
|
||||
|
||||
if not unit:
|
||||
return templates.TemplateResponse("partials/slm_live_view_error.html", {
|
||||
"request": request,
|
||||
"error": f"Unit {unit_id} not found"
|
||||
})
|
||||
|
||||
# Get modem information if assigned
|
||||
modem = None
|
||||
modem_ip = None
|
||||
if unit.deployed_with_modem_id:
|
||||
modem = db.query(RosterUnit).filter_by(id=unit.deployed_with_modem_id, device_type="modem").first()
|
||||
if modem:
|
||||
modem_ip = modem.ip_address
|
||||
else:
|
||||
logger.warning(f"SLM {unit_id} is assigned to modem {unit.deployed_with_modem_id} but modem not found")
|
||||
|
||||
# Fallback to direct slm_host if no modem assigned (backward compatibility)
|
||||
if not modem_ip and unit.slm_host:
|
||||
modem_ip = unit.slm_host
|
||||
logger.info(f"Using legacy slm_host for {unit_id}: {modem_ip}")
|
||||
|
||||
# Try to get current status from SLMM
|
||||
current_status = None
|
||||
measurement_state = None
|
||||
is_measuring = False
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
# Get measurement state
|
||||
state_response = await client.get(
|
||||
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
|
||||
)
|
||||
if state_response.status_code == 200:
|
||||
state_data = state_response.json()
|
||||
measurement_state = state_data.get("measurement_state", "Unknown")
|
||||
is_measuring = state_data.get("is_measuring", False)
|
||||
|
||||
# Get live status
|
||||
status_response = await client.get(
|
||||
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/live"
|
||||
)
|
||||
if status_response.status_code == 200:
|
||||
status_data = status_response.json()
|
||||
current_status = status_data.get("data", {})
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to get status for {unit_id}: {e}")
|
||||
|
||||
return templates.TemplateResponse("partials/slm_live_view.html", {
|
||||
"request": request,
|
||||
"unit": unit,
|
||||
"modem": modem,
|
||||
"modem_ip": modem_ip,
|
||||
"current_status": current_status,
|
||||
"measurement_state": measurement_state,
|
||||
"is_measuring": is_measuring
|
||||
})
|
||||
|
||||
|
||||
@router.post("/control/{unit_id}/{action}")
|
||||
async def control_slm(unit_id: str, action: str):
|
||||
"""
|
||||
Send control commands to SLM (start, stop, pause, resume, reset).
|
||||
Proxies to SLMM backend.
|
||||
"""
|
||||
valid_actions = ["start", "stop", "pause", "resume", "reset"]
|
||||
|
||||
if action not in valid_actions:
|
||||
return {"status": "error", "detail": f"Invalid action. Must be one of: {valid_actions}"}
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=10.0) as client:
|
||||
response = await client.post(
|
||||
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/{action}"
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
return response.json()
|
||||
else:
|
||||
return {
|
||||
"status": "error",
|
||||
"detail": f"SLMM returned status {response.status_code}"
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to control {unit_id}: {e}")
|
||||
return {
|
||||
"status": "error",
|
||||
"detail": str(e)
|
||||
}
|
||||
|
||||
@router.get("/config/{unit_id}", response_class=HTMLResponse)
|
||||
async def get_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get configuration form for a specific SLM unit.
|
||||
Returns HTML partial with configuration form.
|
||||
"""
|
||||
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first()
|
||||
|
||||
if not unit:
|
||||
return HTMLResponse(
|
||||
content=f'<div class="text-red-500">Unit {unit_id} not found</div>',
|
||||
status_code=404
|
||||
)
|
||||
|
||||
return templates.TemplateResponse("partials/slm_config_form.html", {
|
||||
"request": request,
|
||||
"unit": unit
|
||||
})
|
||||
|
||||
|
||||
@router.post("/config/{unit_id}")
|
||||
async def save_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Save SLM configuration.
|
||||
Updates unit parameters in the database.
|
||||
"""
|
||||
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first()
|
||||
|
||||
if not unit:
|
||||
return {"status": "error", "detail": f"Unit {unit_id} not found"}
|
||||
|
||||
try:
|
||||
# Get form data
|
||||
form_data = await request.form()
|
||||
|
||||
# Update SLM-specific fields
|
||||
unit.slm_model = form_data.get("slm_model") or None
|
||||
unit.slm_serial_number = form_data.get("slm_serial_number") or None
|
||||
unit.slm_frequency_weighting = form_data.get("slm_frequency_weighting") or None
|
||||
unit.slm_time_weighting = form_data.get("slm_time_weighting") or None
|
||||
unit.slm_measurement_range = form_data.get("slm_measurement_range") or None
|
||||
|
||||
# Update network configuration
|
||||
modem_id = form_data.get("deployed_with_modem_id")
|
||||
unit.deployed_with_modem_id = modem_id if modem_id else None
|
||||
|
||||
# Always update TCP and FTP ports (used regardless of modem assignment)
|
||||
unit.slm_tcp_port = int(form_data.get("slm_tcp_port")) if form_data.get("slm_tcp_port") else None
|
||||
unit.slm_ftp_port = int(form_data.get("slm_ftp_port")) if form_data.get("slm_ftp_port") else None
|
||||
|
||||
# Only update direct IP if no modem is assigned
|
||||
if not modem_id:
|
||||
unit.slm_host = form_data.get("slm_host") or None
|
||||
else:
|
||||
# Clear legacy direct IP field when modem is assigned
|
||||
unit.slm_host = None
|
||||
|
||||
db.commit()
|
||||
logger.info(f"Updated configuration for SLM {unit_id}")
|
||||
|
||||
# Sync updated configuration to SLMM cache
|
||||
logger.info(f"Syncing SLM {unit_id} config changes to SLMM cache...")
|
||||
result = await sync_slm_to_slmm_cache(
|
||||
unit_id=unit_id,
|
||||
host=unit.slm_host, # Use the updated host from Terra-View
|
||||
tcp_port=unit.slm_tcp_port,
|
||||
ftp_port=unit.slm_ftp_port,
|
||||
deployed_with_modem_id=unit.deployed_with_modem_id, # Resolve modem IP if assigned
|
||||
db=db
|
||||
)
|
||||
|
||||
if not result["success"]:
|
||||
logger.warning(f"SLMM cache sync warning for {unit_id}: {result['message']}")
|
||||
# Config still saved in Terra-View (source of truth)
|
||||
|
||||
return {"status": "success", "unit_id": unit_id}
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
logger.error(f"Failed to save config for {unit_id}: {e}")
|
||||
return {"status": "error", "detail": str(e)}
|
||||
|
||||
|
||||
@router.get("/test-modem/{modem_id}")
|
||||
async def test_modem_connection(modem_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Test modem connectivity with a simple ping/health check.
|
||||
Returns response time and connection status.
|
||||
"""
|
||||
import subprocess
|
||||
import time
|
||||
|
||||
# Get modem from database
|
||||
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
|
||||
|
||||
if not modem:
|
||||
return {"status": "error", "detail": f"Modem {modem_id} not found"}
|
||||
|
||||
if not modem.ip_address:
|
||||
return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}
|
||||
|
||||
try:
|
||||
# Ping the modem (1 packet, 2 second timeout)
|
||||
start_time = time.time()
|
||||
result = subprocess.run(
|
||||
["ping", "-c", "1", "-W", "2", modem.ip_address],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=3
|
||||
)
|
||||
response_time = int((time.time() - start_time) * 1000) # Convert to milliseconds
|
||||
|
||||
if result.returncode == 0:
|
||||
return {
|
||||
"status": "success",
|
||||
"modem_id": modem_id,
|
||||
"ip_address": modem.ip_address,
|
||||
"response_time": response_time,
|
||||
"message": "Modem is responding to ping"
|
||||
}
|
||||
else:
|
||||
return {
|
||||
"status": "error",
|
||||
"modem_id": modem_id,
|
||||
"ip_address": modem.ip_address,
|
||||
"detail": "Modem not responding to ping"
|
||||
}
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
return {
|
||||
"status": "error",
|
||||
"modem_id": modem_id,
|
||||
"ip_address": modem.ip_address,
|
||||
"detail": "Ping timeout (> 2 seconds)"
|
||||
}
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to ping modem {modem_id}: {e}")
|
||||
return {
|
||||
"status": "error",
|
||||
"modem_id": modem_id,
|
||||
"detail": str(e)
|
||||
}
|
||||