Compare commits
28 Commits
1.0-experi...62fd963c07

| SHA1 |
|---|
| 62fd963c07 |
| 5ee6f5eb28 |
| 6492fdff82 |
| 44d7841852 |
| 38c600aca3 |
| eeda94926f |
| 57be9bf1f1 |
| 8431784708 |
| c771a86675 |
| 65ea0920db |
| 1f3fa7a718 |
| a9c9b1fd48 |
| 4c213c96ee |
| ff38b74548 |
| c8a030a3ba |
| d8a8330427 |
| 1ef0557ccb |
| 6c7ce5aad0 |
| 54754e2279 |
| 8787a2dbb8 |
| 7971092509 |
| d349af9444 |
| be83cb3fe7 |
| e9216b9abc |
| d93785c230 |
| 98ee9d7cea |
| 04c66bdf9c |
| 8a5fadb5df |
.gitignore (vendored) · 2 changes

@@ -211,3 +211,5 @@ __marimo__/
*.db
*.db-journal
data/
.aider*
.aider*
CHANGELOG.md · 126 changes

@@ -5,94 +5,56 @@ All notable changes to Terra-View will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.0] - 2026-01-09
## [0.5.1] - 2026-01-27

### Added
- **Unified Modular Monolith Architecture**: Complete architectural refactoring to modular monolith pattern
  - **Three Feature Modules**: Seismo (seismograph fleet), SLM (sound level meters), UI (shared templates/static)
  - **Module Isolation**: Each module has its own database, models, services, and routers
  - **Shared Infrastructure**: Common utilities and API aggregation layer
  - **Multi-Container Deployment**: Three Docker containers (terra-view, sfm, slmm) built from single codebase
- **SLMM Integration**: Sound Level Meter Manager fully integrated as `app/slm/` module
  - Migrated from separate repository to unified codebase
  - Complete NL43 device management API (`/api/nl43/*`)
  - Database models for NL43Config and NL43Status
  - NL43Client service for device communication
  - FTP, TCP, and web interface support for NL43 devices
- **SLM Dashboard API Layer**: New dashboard endpoints bridge UI and device APIs (a usage sketch follows this list)
  - `GET /api/slm-dashboard/stats` - Aggregate statistics (total units, online/offline, measuring/idle)
  - `GET /api/slm-dashboard/units` - List all units with latest status
  - `GET /api/slm-dashboard/live-view/{unit_id}` - Real-time measurement data
  - `GET /api/slm-dashboard/config/{unit_id}` - Retrieve unit configuration
  - `POST /api/slm-dashboard/config/{unit_id}` - Update unit configuration
  - `POST /api/slm-dashboard/control/{unit_id}/{action}` - Send control commands (start, stop, pause, resume, reset, sleep, wake)
  - `GET /api/slm-dashboard/test-modem/{unit_id}` - Test device connectivity
- **Repository Rebranding**: Renamed from `seismo-fleet-manager` to `terra-view`
  - Reflects unified platform nature (seismo + SLM + future modules)
  - Git remote updated to `terra-view.git`
  - All references updated throughout codebase
- **Dashboard Schedule View**: Today's scheduled actions now display directly on the main dashboard
  - New "Today's Actions" panel showing upcoming and past scheduled events
  - Schedule list partial for project-specific schedule views
  - API endpoint for fetching today's schedule data
- **New Branding Assets**: Complete logo rework for Terra-View
  - New Terra-View logos for light and dark themes
  - Retina-ready (@2x) logo variants
  - Updated favicons (16px and 32px)
  - Refreshed PWA icons (72px through 512px)
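A minimal usage sketch for the dashboard endpoints above, assuming the main terra-view container on port 8001 and a hypothetical unit id (`SLM-43-01`, borrowed from the README examples later in this diff):

```python
# Illustrative client for the SLM dashboard endpoints listed above.
# Assumptions: Terra-View reachable at localhost:8001; "SLM-43-01" is a
# placeholder unit id, not a confirmed device.
import httpx

with httpx.Client(base_url="http://localhost:8001", timeout=10.0) as client:
    # Aggregate fleet statistics (total units, online/offline, measuring/idle)
    stats = client.get("/api/slm-dashboard/stats").json()
    print(stats)

    # Fetch one unit's configuration, then send a control command
    config = client.get("/api/slm-dashboard/config/SLM-43-01").json()
    started = client.post("/api/slm-dashboard/control/SLM-43-01/start")
    started.raise_for_status()
```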
### Changed
- **Project Structure**: Complete reorganization following modular monolith pattern
  - `app/seismo/` - Seismograph fleet module (formerly `backend/`)
  - `app/slm/` - Sound level meter module (integrated from SLMM)
  - `app/ui/` - Shared templates and static assets
  - `app/api/` - Cross-module API aggregation layer
  - Removed `backend/` and `templates/` directories
- **Import Paths**: All imports updated from `backend.*` to `app.seismo.*` or `app.slm.*`
- **Database Initialization**: Each module initializes its own database tables
  - Seismo database: `app/seismo/database.py`
  - SLM database: `app/slm/database.py`
- **Docker Architecture**: Three-container deployment from single codebase
  - `terra-view` (port 8001): Main UI/orchestrator with all modules
  - `sfm` (port 8002): Seismograph Fleet Module API
  - `slmm` (port 8100): Sound Level Meter Manager API
  - All containers built from same unified codebase with different entry points
- **Dashboard Layout**: Reorganized to include schedule information panel
- **Base Template**: Updated to use new Terra-View logos with theme-aware switching

## [0.5.0] - 2026-01-23

_Note: This version was not formally released; changes were included in v0.5.1._

## [0.4.4] - 2026-01-23

### Added
- **Recurring schedules**: New scheduler service, recurring schedule APIs, and schedule templates (calendar/interval/list).
- **Alerts UI + backend**: Alerting service plus dropdown/list templates for surfacing notifications.
- **Report templates + viewers**: CRUD API for report templates, report preview screen, and RND file viewer.
- **SLM tooling**: SLM settings modal and SLM project report generator workflow.

### Changed
- **Project data management**: Unified files view, refreshed FTP browser, and new project header/templates for file/session/unit/assignment lists.
- **Device/SLM sync**: Standardized SLM device types and tightened SLMM sync paths.
- **Docs/scripts**: Cleanup pass and expanded device-type documentation.

### Fixed
- **Template Path Issues**: Fixed seismo dashboard template references
  - Updated `app/seismo/routers/dashboard.py` to use `app/ui/templates` directory
  - Resolved 404 errors for `partials/benched_table.html` and `partials/active_table.html`
- **Module Import Errors**: Corrected SLMM module structure
  - Fixed `app/slm/main.py` to import from `app.slm.routers` instead of `app.routers`
  - Updated all SLMM internal imports to use `app.slm.*` namespace
- **Docker Build Issues**: Resolved file permission problems
  - Fixed dashboard.py permissions for Docker COPY operations
  - Ensured all source files readable during container builds
- **Scheduler actions**: Strict command definitions so actions run reliably.
- **Project view title**: Resolved JSON string rendering in project headers.

### Technical Details
- **Modular Monolith Benefits**:
  - Single repository for easier development and deployment
  - Module boundaries enforced through folder structure
  - Shared dependencies managed in single requirements.txt
  - Independent database schemas per module
  - Clean separation of concerns with explicit module APIs
- **Migration Path**: Existing installations automatically migrate
  - Import path updates applied programmatically
  - Database schemas remain compatible
  - No data migration required
- **Module Structure**: Each module follows consistent pattern (a minimal sketch follows this section)
  - `database.py` - SQLAlchemy models and session management
  - `models.py` - Pydantic schemas and database models
  - `routers.py` - FastAPI route definitions
  - `services.py` - Business logic and external integrations
- **Container Communication**: Containers use host networking
  - terra-view proxies to sfm and slmm containers
  - Environment variables configure API URLs
  - Health checks ensure container availability
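A minimal sketch of this per-module pattern, with placeholder names (`Thing`, `list_things`, `/api/things`) that are illustrative rather than taken from the codebase:

```python
# Illustrative module skeleton following the database.py / models.py /
# routers.py / services.py pattern described above. All names here are
# placeholders, not real Terra-View code.

# --- app/example/database.py: engine and session management ---
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///./data/example.db",
                       connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

# --- app/example/models.py: ORM models built on the module's own Base ---
class Thing(Base):
    __tablename__ = "things"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

# --- app/example/routers.py: FastAPI routes scoped to the module ---
from fastapi import APIRouter

router = APIRouter(prefix="/api/things", tags=["things"])

@router.get("/")
def list_things():
    # services.py would normally hold this business logic
    with SessionLocal() as db:
        return [{"id": t.id, "name": t.name} for t in db.query(Thing).all()]
```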
## [0.4.3] - 2026-01-14

### Migration Notes
- **Breaking Changes**: Import paths changed for all modules
  - Old: `from backend.models import RosterUnit`
  - New: `from app.seismo.models import RosterUnit`
- **Configuration Updates**: Environment variables for multi-container setup (a reading sketch follows these notes)
  - `SFM_API_URL=http://localhost:8002` - SFM backend endpoint
  - `SLMM_API_URL=http://localhost:8100` - SLMM backend endpoint
  - `MODULE_MODE=sfm|slmm` - Future flag for API-only containers
- **Repository Migration**: Update git remotes for renamed repository

```bash
git remote set-url origin ssh://git@10.0.0.2:2222/serversdown/terra-view.git
```
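A small sketch of consuming these variables at startup; the `SLMM_API_URL` default mirrors `app/core/config.py` later in this diff, while the `SFM_API_URL` and `MODULE_MODE` handling is assumed rather than confirmed code:

```python
# Sketch of reading the multi-container environment variables above.
# Only SLMM_API_URL appears in the config file shown later in this diff;
# SFM_API_URL and MODULE_MODE handling here is an assumption.
import os

SFM_API_URL = os.getenv("SFM_API_URL", "http://localhost:8002")
SLMM_API_URL = os.getenv("SLMM_API_URL", "http://localhost:8100")
MODULE_MODE = os.getenv("MODULE_MODE")  # None for the full terra-view container

if MODULE_MODE not in (None, "sfm", "slmm"):
    raise ValueError(f"Unsupported MODULE_MODE: {MODULE_MODE!r}")
```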
### Added
- **Sound Level Meter roster tooling**: Roster manager surfaces SLM metadata, supports rename-unit flows, and adds return-to-project navigation to keep SLM dashboard users oriented.
- **Project management templates**: New schedule and unit list templates plus file/session lists show what each project stores before teams dive into deployments.

### Changed
- **Project view refresh**: FTP browser now downloads folders locally, the countdown timer was rebuilt, and project/device templates gained edit modals for projects and locations so navigation feels smoother.
- **SLM control sync & accuracy**: Control center groundwork now runs inside the dev UI, configuration edits propagate to SLMM (which caches configs for faster responses), and the SLM live view reads the correct DRD fields after the refactor.

### Fixed
- **SLM UI syntax bug**: Resolved the unexpected token error that appeared in the refreshed SLM components.

## [0.4.2] - 2026-01-05

@@ -437,6 +399,10 @@ No database migration required for v0.4.0. All new features use existing database schemas.
- Photo management per unit
- Automated status categorization (OK/Pending/Missing)

[0.5.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.4...v0.5.0
[0.4.4]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.3...v0.4.4
[0.4.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.2...v0.4.3
[0.4.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.1...v0.4.2
[0.4.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.3...v0.4.0
@@ -1,26 +0,0 @@
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends iputils-ping curl && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose SFM port
EXPOSE 8002

# Run SFM backend (API only)
# For now: runs same app on different port
# Future: will run SFM-specific entry point
CMD ["python3", "-m", "app.main"]

@@ -1,21 +0,0 @@
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app /app/app

# Expose port
EXPOSE 8100

# Run the SLM application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8100"]

@@ -1,24 +0,0 @@
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends iputils-ping curl && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose Terra-View UI port
EXPOSE 8001

# Run Terra-View (UI + orchestration)
CMD ["python3", "-m", "app.main"]
@@ -1,141 +0,0 @@
# Terra-View Modular Monolith - Known-Good Baseline

**Date:** 2026-01-09
**Status:** ✅ IMPORT MIGRATION COMPLETE

## What We've Achieved

Successfully restructured the application into a modular monolith architecture with the new folder structure working end-to-end.

## New Structure

```
/home/serversdown/sfm/seismo-fleet-manager/
├── app/
│   ├── main.py              # NEW: Entry point with Terra-View branding
│   ├── core/                # Shared infrastructure
│   │   ├── config.py        # NEW: Centralized configuration
│   │   └── database.py      # Shared DB utilities
│   ├── ui/                  # UI Layer (device-agnostic)
│   │   ├── routes.py        # NEW: HTML page routes
│   │   ├── templates/       # All HTML templates (copied from old location)
│   │   └── static/          # All static files (copied from old location)
│   ├── seismo/              # Seismograph Feature Module
│   │   ├── models.py        # ✅ Updated to use app.seismo.database
│   │   ├── database.py      # NEW: Seismo-specific DB connection
│   │   ├── routers/         # API routers (copied from backend/routers/)
│   │   └── services/        # Business logic (copied from backend/services/)
│   ├── slm/                 # Sound Level Meter Feature Module
│   │   ├── models.py        # NEW: Placeholder for SLM models
│   │   ├── database.py      # NEW: SLM-specific DB connection
│   │   └── routers/         # SLM routers (copied from backend/routers/)
│   └── api/                 # API Aggregation Layer (placeholder)
│       ├── dashboard.py     # NEW: Future aggregation endpoints
│       └── roster.py        # NEW: Future aggregation endpoints
└── data/
    └── seismo_fleet.db      # Still using shared DB (migration pending)
```

## What's Working

✅ **Application starts successfully** on port 9999
✅ **Health endpoint works**: `/health` returns Terra-View v1.0.0
✅ **UI renders**: Main dashboard loads with proper templates
✅ **API endpoints work**: `/api/status-snapshot` returns seismograph data
✅ **Database access works**: Models properly connected
✅ **Static files serve**: CSS, JS, icons all accessible

## Critical Changes Made

### 1. Fixed Import in models.py
**File:** `app/seismo/models.py`
**Change:** `from backend.database import Base` → `from app.seismo.database import Base`
**Reason:** Avoid duplicate Base instances causing SQLAlchemy errors

### 2. Created New Entry Point
**File:** `app/main.py`
**Features:**
- Terra-View branding (title, version, health check)
- Imports from new `app.*` structure
- Registers all seismo and SLM routers
- Middleware for environment context

### 3. Created UI Routes Module
**File:** `app/ui/routes.py`
**Purpose:** Centralize all HTML page routes (device-agnostic)

### 4. Created Module-Specific Databases
**Files:** `app/seismo/database.py`, `app/slm/database.py`
**Status:** Both currently point to shared `seismo_fleet.db` (migration pending)
## Recent Updates (2026-01-09)

✅ **ALL imports updated** - Changed all `backend.*` imports to `app.seismo.*` or `app.slm.*`
✅ **Old structure deleted** - `backend/` and `templates/` directories removed
✅ **Containers rebuilt** - All three containers (Terra-View, SFM, SLMM) working with new imports
✅ **Verified working** - Tested health endpoints and UI after migration

## What's NOT Yet Done

❌ **Partial routes missing** - `/partials/*` endpoints not yet added
❌ **Database not split** - Still using shared `seismo_fleet.db`

## How to Run

```bash
# Start on custom port to avoid conflicts
PORT=9999 python3 -m app.main

# Test health endpoint
curl http://localhost:9999/health

# Test API endpoint
curl http://localhost:9999/api/status-snapshot

# Access UI
open http://localhost:9999/
```

## Next Steps (Recommended Order)

1. **Add partial routes** to app/main.py or create a separate router
2. **Test all endpoints thoroughly** - Verify roster CRUD, photos, settings
3. **Split databases** (Phase 2 of plan)
4. **Implement API aggregation layer** (Phase 3 of plan)

## Known Issues

None currently - app starts and serves requests successfully!

## Testing Checklist

- [x] App starts without errors
- [x] Health endpoint returns correct version
- [x] Main dashboard loads
- [x] Status snapshot API works
- [ ] All seismo endpoints work
- [ ] All SLM endpoints work
- [ ] Roster CRUD operations work
- [ ] Photos upload/download works
- [ ] Settings page works

## Rollback Instructions

~~The old structure has been deleted.~~ To roll back, restore from your backup:

```bash
# Restore from your backup
# The old backend/ and templates/ directories were removed on 2026-01-09
```

## Important Notes

- **MIGRATION COMPLETE**: Old `backend/` and `templates/` directories removed
- **ALL IMPORTS UPDATED**: All Python files now use `app.*` imports
- **NO DATA LOSS**: Database untouched, only code structure changed
- **CONTAINERS WORKING**: All three containers (Terra-View, SFM, SLMM) healthy
- **FULLY SELF-CONTAINED**: Application runs entirely from `app/` directory

---

**Congratulations!** 🎉 Import migration complete! The modular monolith is now self-contained and production-ready.
README.md · 85 changes

@@ -1,31 +1,5 @@
# Terra-View v0.5.0
Unified platform for managing seismograph fleets and sound level meter deployments. Built as a modular monolith with independent feature modules (Seismo, SLM) sharing a common UI layer. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your entire fleet through a unified database and dashboard.

## Architecture

Terra-View follows a **modular monolith** architecture with independent feature modules in a single codebase:

- **app/seismo/** - Seismograph Fleet Module (SFM)
  - Device roster and deployment tracking
  - Series 3/4 telemetry ingestion
  - Status monitoring (OK/Pending/Missing)
  - Photo management and location tracking
- **app/slm/** - Sound Level Meter Manager (SLMM)
  - NL43 device configuration and control
  - Real-time measurement monitoring
  - TCP/FTP/Web interface support
  - Dashboard statistics and unit management
- **app/ui/** - Shared UI layer
  - Templates, static assets, and common components
  - Progressive Web App (PWA) support
- **app/api/** - API aggregation layer
  - Cross-module endpoints
  - Future unified dashboard APIs

**Multi-Container Deployment**: Three Docker containers built from the same codebase (a connectivity sketch follows this list):
- `terra-view` (port 8001) - Main UI with all modules integrated
- `sfm` (port 8002) - Seismo API backend
- `slmm` (port 8100) - SLM API backend
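A quick connectivity sketch for the three containers, assuming default localhost ports; `/health` is confirmed for the main app in `app/main.py` later in this diff and assumed for the sfm and slmm backends:

```python
# Ping each container's health endpoint. Ports come from the list above;
# /health on the sfm and slmm containers is an assumption.
import httpx

SERVICES = {
    "terra-view": "http://localhost:8001",
    "sfm": "http://localhost:8002",
    "slmm": "http://localhost:8100",
}

for name, base in SERVICES.items():
    try:
        r = httpx.get(f"{base}/health", timeout=5.0)
        print(f"{name}: HTTP {r.status_code}")
    except httpx.RequestError as e:
        print(f"{name}: unreachable ({e})")
```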
# Terra-View v0.5.1
Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard.

## Features

@@ -334,7 +308,7 @@ print(response.json())
|-------|------|-------------|
| id | string | Unit identifier (primary key) |
| unit_type | string | Hardware model name (default: `series3`) |
| device_type | string | `seismograph` or `modem` discriminator |
| device_type | string | Device type: `"seismograph"`, `"modem"`, or `"slm"` (sound level meter) |
| deployed | boolean | Whether the unit is in the field |
| retired | boolean | Removes the unit from deployments but preserves history |
| note | string | Notes about the unit |

@@ -360,6 +334,39 @@ print(response.json())
| phone_number | string | Cellular number for the modem |
| hardware_model | string | Modem hardware reference |

**Sound Level Meter (SLM) fields**

| Field | Type | Description |
|-------|------|-------------|
| slm_host | string | Direct IP address for SLM (if not using modem) |
| slm_tcp_port | integer | TCP control port (default: 2255) |
| slm_ftp_port | integer | FTP file transfer port (default: 21) |
| slm_model | string | Device model (NL-43, NL-53) |
| slm_serial_number | string | Manufacturer serial number |
| slm_frequency_weighting | string | Frequency weighting setting (A, C, Z) |
| slm_time_weighting | string | Time weighting setting (F=Fast, S=Slow) |
| slm_measurement_range | string | Measurement range setting |
| slm_last_check | datetime | Last status check timestamp |
| deployed_with_modem_id | string | Modem pairing (shared with seismographs) |

### Device Type Schema

Terra-View supports three device types with the following standardized `device_type` values:

- **`"seismograph"`** (default) - Seismic monitoring devices (Series 3, Series 4, Micromate)
  - Uses: calibration dates, modem pairing
  - Examples: BE1234, UM12345 (Series 3/4 units)

- **`"modem"`** - Field modems and network equipment
  - Uses: IP address, phone number, hardware model
  - Examples: MDM001, MODEM-2025-01

- **`"slm"`** - Sound level meters (Rion NL-43/NL-53)
  - Uses: TCP/FTP configuration, measurement settings, modem pairing
  - Examples: SLM-43-01, NL43-001

**Important**: All `device_type` values must be lowercase. The legacy value `"sound_level_meter"` has been deprecated in favor of the shorter `"slm"`. Run `backend/migrate_standardize_device_types.py` to update existing databases.
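A hedged sketch of what this standardization amounts to; the real `backend/migrate_standardize_device_types.py` is not shown in this diff, so the lowercase-and-remap logic below is an assumption:

```python
# Sketch of device_type standardization as described above. The actual
# migration script is not included in this diff; this assumes it lowercases
# values and maps the legacy "sound_level_meter" to "slm". The "roster"
# table name matches the RosterUnit model shown later in this diff.
import sqlite3

LEGACY_MAP = {"sound_level_meter": "slm"}

def standardize(db_path: str = "data/seismo_fleet.db") -> None:
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT id, device_type FROM roster").fetchall()
        for unit_id, device_type in rows:
            value = (device_type or "seismograph").lower()
            value = LEGACY_MAP.get(value, value)
            if value != device_type:
                conn.execute("UPDATE roster SET device_type = ? WHERE id = ?",
                             (value, unit_id))
        conn.commit()
    finally:
        conn.close()
```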
### Emitter Table (Device Check-ins)

| Field | Type | Description |

@@ -489,6 +496,12 @@ docker compose down -v

## Release Highlights

### v0.4.3 — 2026-01-14
- **Sound Level Meter workflow**: Roster manager surfaces SLM metadata, supports rename actions, and adds return-to-project navigation plus schedule/unit templates for project planning.
- **Project insight panels**: Project dashboards now expose file and session lists so teams can see what each project stores before diving into units.
- **Project view polish**: FTP browser supports folder downloads, the timer display was reimplemented, and the project/device templates gained edit modals for projects and locations to streamline navigation.
- **SLM sync & accuracy**: Configuration edits now propagate to SLMM (which caches configs for faster responses) and the live view uses the correct DRD fields so telemetry aligns with the control center.

### v0.4.0 — 2025-12-16
- **Database Management System**: Complete backup and restore functionality with manual snapshots, restore operations, and upload/download capabilities
- **Remote Database Cloning**: New `clone_db_to_dev.py` script for copying production database to remote dev servers over WAN

@@ -558,9 +571,19 @@ MIT

## Version

**Current: 0.4.0** — Database management system with backup/restore and remote cloning (2025-12-16)
**Current: 0.5.1** — Dashboard schedule view with today's actions panel, new Terra-View branding and logo rework (2026-01-27)

Previous: 0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)
Previous: 0.4.4 — Recurring schedules, alerting UI, report templates + RND viewer, and SLM workflow polish (2026-01-23)

0.4.3 — SLM roster/project view refresh, project insight panels, FTP browser folder downloads, and SLMM sync (2026-01-14)

0.4.2 — SLM configuration interface with TCP/FTP controls, modem diagnostics, and dashboard endpoints for Sound Level Meters (2026-01-05)

0.4.1 — Sound Level Meter integration with full management UI for SLM units (2026-01-05)

0.4.0 — Database management system with backup/restore and remote cloning (2025-12-16)

0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)

0.3.2 — Progressive Web App with mobile optimization (2025-12-12)
@@ -1,13 +0,0 @@
"""
API Aggregation Layer - Dashboard endpoints
Composes data from multiple feature modules
"""
from fastapi import APIRouter

router = APIRouter(prefix="/api/dashboard", tags=["dashboard-aggregation"])

# TODO: Implement aggregation endpoints that combine data from
# app.seismo and app.slm modules

# For now, individual feature modules expose their own APIs directly
# Future: Add cross-feature aggregation here

@@ -1,13 +0,0 @@
"""
API Aggregation Layer - Roster endpoints
Aggregates roster data from all feature modules
"""
from fastapi import APIRouter

router = APIRouter(prefix="/api/roster-aggregation", tags=["roster-aggregation"])

# TODO: Implement unified roster endpoints that combine data from
# app.seismo and app.slm modules

# For now, individual feature modules expose their own roster APIs
# Future: Add cross-feature roster aggregation here
@@ -1,83 +0,0 @@
"""
SLMM API Proxy
Forwards /api/slmm/* requests to the SLMM backend service
"""
import httpx
import logging
from fastapi import APIRouter, Request, Response, WebSocket
from fastapi.responses import StreamingResponse
from app.core.config import SLMM_API_URL

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/slmm", tags=["slmm-proxy"])


@router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
async def proxy_slmm_request(path: str, request: Request):
    """Proxy HTTP requests to SLMM backend"""
    # Build target URL - rewrite /api/slmm/* to /api/nl43/*
    target_url = f"{SLMM_API_URL}/api/nl43/{path}"

    # Get query params
    query_string = str(request.url.query)
    if query_string:
        target_url += f"?{query_string}"

    logger.info(f"Proxying {request.method} {target_url}")

    # Read request body
    body = await request.body()

    # Forward headers (exclude host)
    headers = {
        key: value
        for key, value in request.headers.items()
        if key.lower() not in ['host', 'content-length']
    }

    async with httpx.AsyncClient(timeout=30.0) as client:
        try:
            # Make proxied request
            response = await client.request(
                method=request.method,
                url=target_url,
                content=body,
                headers=headers
            )

            # Return response
            return Response(
                content=response.content,
                status_code=response.status_code,
                headers=dict(response.headers)
            )
        except httpx.RequestError as e:
            logger.error(f"Proxy request failed: {e}")
            return Response(
                content=f'{{"detail": "SLMM backend unavailable: {str(e)}"}}',
                status_code=502,
                media_type="application/json"
            )


@router.websocket("/{unit_id}/live")
async def proxy_slmm_websocket(websocket: WebSocket, unit_id: str):
    """Proxy WebSocket connections to SLMM backend for live data streaming"""
    await websocket.accept()

    # Build WebSocket URL
    ws_protocol = "ws" if "localhost" in SLMM_API_URL or "127.0.0.1" in SLMM_API_URL else "wss"
    ws_url = SLMM_API_URL.replace("http://", f"{ws_protocol}://").replace("https://", f"{ws_protocol}://")
    ws_target = f"{ws_url}/api/slmm/{unit_id}/live"

    logger.info(f"Proxying WebSocket to {ws_target}")

    async with httpx.AsyncClient() as client:
        try:
            async with client.stream("GET", ws_target) as response:
                async for chunk in response.aiter_bytes():
                    await websocket.send_bytes(chunk)
        except Exception as e:
            logger.error(f"WebSocket proxy error: {e}")
            await websocket.close(code=1011, reason=f"Backend error: {str(e)}")
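A note on the live-view proxy above: `client.stream("GET", ...)` issues a plain HTTP GET and relays response bytes one way only; no WebSocket handshake is made with the backend, and client messages are never forwarded upstream. A bidirectional sketch, assuming the third-party `websockets` package (not used in the original file):

```python
# Bidirectional WebSocket bridge sketch (assumes the `websockets` package;
# not part of the original file). Relays frames both ways instead of
# one-way HTTP byte streaming; either side closing ends the bridge.
import asyncio
import websockets
from fastapi import WebSocket

async def bridge(client_ws: WebSocket, ws_target: str) -> None:
    async with websockets.connect(ws_target) as backend:
        async def upstream():
            while True:
                await backend.send(await client_ws.receive_text())

        async def downstream():
            async for message in backend:
                await client_ws.send_text(message)

        # Run both directions until one side disconnects, then cancel the other
        done, pending = await asyncio.wait(
            [asyncio.create_task(upstream()), asyncio.create_task(downstream())],
            return_when=asyncio.FIRST_COMPLETED,
        )
        for task in pending:
            task.cancel()
```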
@@ -1,22 +0,0 @@
"""
Core configuration for Terra-View application
"""
import os

# Application
APP_NAME = "Terra-View"
VERSION = "1.0.0"
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")

# Ports
PORT = int(os.getenv("PORT", 8001))

# External Services
# Terra-View is a unified application with seismograph logic built-in
# The only external HTTP dependency is SLMM for NL-43 device communication
SLMM_API_URL = os.getenv("SLMM_API_URL", "http://localhost:8100")

# Database URLs (feature-specific)
SEISMO_DATABASE_URL = "sqlite:///./data/seismo.db"
SLM_DATABASE_URL = "sqlite:///./data/slm.db"
MODEM_DATABASE_URL = "sqlite:///./data/modem.db"
app/main.py · 216 changes

@@ -1,216 +0,0 @@
"""
Terra-View - Unified monitoring platform for device fleets
Modular monolith architecture with strict feature boundaries
"""
import os
import logging
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import JSONResponse
from fastapi.exceptions import RequestValidationError

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Import configuration
from app.core.config import APP_NAME, VERSION, ENVIRONMENT

# Import UI routes
from app.ui import routes as ui_routes

# Import feature module routers (seismo)
from app.seismo.routers import (
    roster as seismo_roster,
    units as seismo_units,
    photos as seismo_photos,
    roster_edit as seismo_roster_edit,
    dashboard as seismo_dashboard,
    dashboard_tabs as seismo_dashboard_tabs,
    activity as seismo_activity,
    seismo_dashboard as seismo_seismo_dashboard,
    settings as seismo_settings,
    partials as seismo_partials,
)
from app.seismo import routes as seismo_legacy_routes

# Import feature module routers (SLM)
from app.slm.routers import router as slm_router
from app.slm.dashboard import router as slm_dashboard_router

# Import API aggregation layer (placeholder for now)
from app.api import dashboard as api_dashboard
from app.api import roster as api_roster

# Initialize database tables
from app.seismo.database import engine as seismo_engine, Base as SeismoBase
SeismoBase.metadata.create_all(bind=seismo_engine)

from app.slm.database import engine as slm_engine, Base as SlmBase
SlmBase.metadata.create_all(bind=slm_engine)

# Initialize FastAPI app
app = FastAPI(
    title=APP_NAME,
    description="Unified monitoring platform for seismograph, modem, and sound level meter fleets",
    version=VERSION
)

# Add validation error handler to log details
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
    logger.error(f"Validation error on {request.url}: {exc.errors()}")
    logger.error(f"Body: {await request.body()}")
    return JSONResponse(
        status_code=400,
        content={"detail": exc.errors()}
    )

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Mount static files
app.mount("/static", StaticFiles(directory="app/ui/static"), name="static")

# Middleware to add environment to request state
@app.middleware("http")
async def add_environment_to_context(request: Request, call_next):
    """Middleware to add environment variable to request state"""
    request.state.environment = ENVIRONMENT
    response = await call_next(request)
    return response
# ===== INCLUDE ROUTERS =====

# UI Layer (HTML pages)
app.include_router(ui_routes.router)

# Seismograph Feature Module APIs
app.include_router(seismo_roster.router)
app.include_router(seismo_units.router)
app.include_router(seismo_photos.router)
app.include_router(seismo_roster_edit.router)
app.include_router(seismo_dashboard.router)
app.include_router(seismo_dashboard_tabs.router)
app.include_router(seismo_activity.router)
app.include_router(seismo_seismo_dashboard.router)
app.include_router(seismo_settings.router)
app.include_router(seismo_partials.router, prefix="/partials")
app.include_router(seismo_legacy_routes.router)

# SLM Feature Module APIs
app.include_router(slm_router)
app.include_router(slm_dashboard_router)

# SLMM Backend Proxy (forward /api/slmm/* to SLMM service)
from app.api import slmm_proxy
app.include_router(slmm_proxy.router)

# API Aggregation Layer (future cross-feature endpoints)
# app.include_router(api_dashboard.router)  # TODO: Implement aggregation
# app.include_router(api_roster.router)  # TODO: Implement aggregation

# ===== ADDITIONAL ROUTES FROM OLD MAIN.PY =====
# These will need to be migrated to appropriate modules

from fastapi.templating import Jinja2Templates
from typing import List, Dict
from pydantic import BaseModel
from sqlalchemy.orm import Session
from fastapi import Depends

from app.seismo.database import get_db
from app.seismo.services.snapshot import emit_status_snapshot
from app.seismo.models import IgnoredUnit

# TODO: Move these to appropriate feature modules or UI layer

@app.post("/api/sync-edits")
async def sync_edits(request: dict, db: Session = Depends(get_db)):
    """Process offline edit queue and sync to database"""
    # TODO: Move to seismo module
    from app.seismo.models import RosterUnit

    class EditItem(BaseModel):
        id: int
        unitId: str
        changes: Dict
        timestamp: int

    class SyncEditsRequest(BaseModel):
        edits: List[EditItem]

    sync_request = SyncEditsRequest(**request)
    results = []
    synced_ids = []

    for edit in sync_request.edits:
        try:
            unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()

            if not unit:
                results.append({
                    "id": edit.id,
                    "status": "error",
                    "reason": f"Unit {edit.unitId} not found"
                })
                continue

            for key, value in edit.changes.items():
                if hasattr(unit, key):
                    if key in ['deployed', 'retired']:
                        setattr(unit, key, value in ['true', True, 'True', '1', 1])
                    else:
                        setattr(unit, key, value if value != '' else None)

            db.commit()

            results.append({
                "id": edit.id,
                "status": "success"
            })
            synced_ids.append(edit.id)

        except Exception as e:
            db.rollback()
            results.append({
                "id": edit.id,
                "status": "error",
                "reason": str(e)
            })

    synced_count = len(synced_ids)

    return JSONResponse({
        "synced": synced_count,
        "total": len(sync_request.edits),
        "synced_ids": synced_ids,
        "results": results
    })


@app.get("/health")
def health_check():
    """Health check endpoint"""
    return {
        "message": f"{APP_NAME} v{VERSION}",
        "status": "running",
        "version": VERSION,
        "modules": ["seismo", "slm"]
    }


if __name__ == "__main__":
    import uvicorn
    from app.core.config import PORT
    uvicorn.run(app, host="0.0.0.0", port=PORT)
@@ -1,36 +0,0 @@
"""
Seismograph feature module database connection
"""
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os

# Ensure data directory exists
os.makedirs("data", exist_ok=True)

# For now, we'll use the old database (seismo_fleet.db) until we migrate
# TODO: Migrate to seismo.db
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()


def get_db():
    """Dependency for database sessions"""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


def get_db_session():
    """Get a database session directly (not as a dependency)"""
    return SessionLocal()
@@ -1,110 +0,0 @@
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer
from datetime import datetime
from app.seismo.database import Base


class Emitter(Base):
    __tablename__ = "emitters"

    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, nullable=False)
    last_seen = Column(DateTime, default=datetime.utcnow)
    last_file = Column(String, nullable=False)
    status = Column(String, nullable=False)
    notes = Column(String, nullable=True)


class RosterUnit(Base):
    """
    Roster table: represents our *intended assignment* of a unit.
    This is editable from the GUI.

    Supports multiple device types (seismograph, modem, sound_level_meter) with type-specific fields.
    """
    __tablename__ = "roster"

    # Core fields (all device types)
    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, default="series3")  # Backward compatibility
    device_type = Column(String, default="seismograph")  # "seismograph" | "modem" | "sound_level_meter"
    deployed = Column(Boolean, default=True)
    retired = Column(Boolean, default=False)
    note = Column(String, nullable=True)
    project_id = Column(String, nullable=True)
    location = Column(String, nullable=True)  # Legacy field - use address/coordinates instead
    address = Column(String, nullable=True)  # Human-readable address
    coordinates = Column(String, nullable=True)  # Lat,Lon format: "34.0522,-118.2437"
    last_updated = Column(DateTime, default=datetime.utcnow)

    # Seismograph-specific fields (nullable for modems and SLMs)
    last_calibrated = Column(Date, nullable=True)
    next_calibration_due = Column(Date, nullable=True)

    # Modem assignment (shared by seismographs and SLMs)
    deployed_with_modem_id = Column(String, nullable=True)  # FK to another RosterUnit (device_type=modem)

    # Modem-specific fields (nullable for seismographs and SLMs)
    ip_address = Column(String, nullable=True)
    phone_number = Column(String, nullable=True)
    hardware_model = Column(String, nullable=True)

    # Sound Level Meter-specific fields (nullable for seismographs and modems)
    slm_host = Column(String, nullable=True)  # Device IP or hostname
    slm_tcp_port = Column(Integer, nullable=True)  # TCP control port (default 2255)
    slm_ftp_port = Column(Integer, nullable=True)  # FTP data retrieval port (default 21)
    slm_model = Column(String, nullable=True)  # NL-43, NL-53, etc.
    slm_serial_number = Column(String, nullable=True)  # Device serial number
    slm_frequency_weighting = Column(String, nullable=True)  # A, C, Z
    slm_time_weighting = Column(String, nullable=True)  # F (Fast), S (Slow), I (Impulse)
    slm_measurement_range = Column(String, nullable=True)  # e.g., "30-130 dB"
    slm_last_check = Column(DateTime, nullable=True)  # Last communication check


class IgnoredUnit(Base):
    """
    Ignored units: units that report but should be filtered out from unknown emitters.
    Used to suppress noise from old projects.
    """
    __tablename__ = "ignored_units"

    id = Column(String, primary_key=True, index=True)
    reason = Column(String, nullable=True)
    ignored_at = Column(DateTime, default=datetime.utcnow)


class UnitHistory(Base):
    """
    Unit history: complete timeline of changes to each unit.
    Tracks note changes, status changes, deployment/benched events, and more.
    """
    __tablename__ = "unit_history"

    id = Column(Integer, primary_key=True, autoincrement=True)
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id
    change_type = Column(String, nullable=False)  # note_change, deployed_change, retired_change, etc.
    field_name = Column(String, nullable=True)  # Which field changed
    old_value = Column(Text, nullable=True)  # Previous value
    new_value = Column(Text, nullable=True)  # New value
    changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
    source = Column(String, default="manual")  # manual, csv_import, telemetry, offline_sync
    notes = Column(Text, nullable=True)  # Optional reason/context for the change


class UserPreferences(Base):
    """
    User preferences: persistent storage for application settings.
    Single-row table (id=1) to store global user preferences.
    """
    __tablename__ = "user_preferences"

    id = Column(Integer, primary_key=True, default=1)
    timezone = Column(String, default="America/New_York")
    theme = Column(String, default="auto")  # auto, light, dark
    auto_refresh_interval = Column(Integer, default=10)  # seconds
    date_format = Column(String, default="MM/DD/YYYY")
    table_rows_per_page = Column(Integer, default=25)
    calibration_interval_days = Column(Integer, default=365)
    calibration_warning_days = Column(Integer, default=30)
    status_ok_threshold_hours = Column(Integer, default=12)
    status_pending_threshold_hours = Column(Integer, default=24)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
@@ -1,25 +0,0 @@
from fastapi import APIRouter, Request, Depends
from fastapi.templating import Jinja2Templates

from app.seismo.services.snapshot import emit_status_snapshot

router = APIRouter()
templates = Jinja2Templates(directory="app/ui/templates")


@router.get("/dashboard/active")
def dashboard_active(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/active_table.html",
        {"request": request, "units": snapshot["active"]}
    )


@router.get("/dashboard/benched")
def dashboard_benched(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/benched_table.html",
        {"request": request, "units": snapshot["benched"]}
    )
@@ -1,140 +0,0 @@
"""
Partial routes for HTMX dynamic content loading.
These routes return HTML fragments that are loaded into the page via HTMX.
"""
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

from app.seismo.services.snapshot import emit_status_snapshot

router = APIRouter()
templates = Jinja2Templates(directory="app/ui/templates")


@router.get("/unknown-emitters", response_class=HTMLResponse)
async def get_unknown_emitters(request: Request):
    """
    Returns HTML partial with unknown emitters (units reporting but not in roster).
    Called periodically via HTMX (every 10s) from the roster page.
    """
    snapshot = emit_status_snapshot()

    # Convert unknown units dict to list and add required fields
    unknown_list = []
    for unit_id, unit_data in snapshot.get("unknown", {}).items():
        unknown_list.append({
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "fname": unit_data.get("fname", ""),
        })

    # Sort by ID for consistent display
    unknown_list.sort(key=lambda x: x["id"])

    return templates.TemplateResponse(
        "partials/unknown_emitters.html",
        {
            "request": request,
            "unknown_units": unknown_list
        }
    )


@router.get("/devices-all", response_class=HTMLResponse)
async def get_all_devices(request: Request):
    """
    Returns HTML partial with all devices (deployed, benched, retired, ignored).
    Called on page load and when filters are applied.
    """
    snapshot = emit_status_snapshot()

    # Combine all units from different buckets
    all_units = []

    # Add active units (deployed)
    for unit_id, unit_data in snapshot.get("active", {}).items():
        unit_info = {
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "last_seen": unit_data.get("last", ""),
            "fname": unit_data.get("fname", ""),
            "deployed": True,
            "retired": False,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "location": unit_data.get("location", ""),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        }
        all_units.append(unit_info)

    # Add benched units (not deployed, not retired)
    for unit_id, unit_data in snapshot.get("benched", {}).items():
        unit_info = {
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "last_seen": unit_data.get("last", ""),
            "fname": unit_data.get("fname", ""),
            "deployed": False,
            "retired": False,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "location": unit_data.get("location", ""),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        }
        all_units.append(unit_info)

    # Add retired units
    for unit_id, unit_data in snapshot.get("retired", {}).items():
        unit_info = {
            "id": unit_id,
            "status": "Retired",
            "age": unit_data["age"],
            "last_seen": unit_data.get("last", ""),
            "fname": unit_data.get("fname", ""),
            "deployed": False,
            "retired": True,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "location": unit_data.get("location", ""),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        }
        all_units.append(unit_info)

    # Sort by ID for consistent display
    all_units.sort(key=lambda x: x["id"])

    return templates.TemplateResponse(
        "partials/devices_table.html",
        {
            "request": request,
            "units": all_units
        }
    )
@@ -1,720 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException, Form, UploadFile, File, Request
from fastapi.exceptions import RequestValidationError
from sqlalchemy.orm import Session
from datetime import datetime, date
import csv
import io
import logging
import httpx
import os

from app.seismo.database import get_db
from app.seismo.models import RosterUnit, IgnoredUnit, Emitter, UnitHistory

router = APIRouter(prefix="/api/roster", tags=["roster-edit"])
logger = logging.getLogger(__name__)

# SLMM backend URL for syncing device configs to cache
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")


def record_history(db: Session, unit_id: str, change_type: str, field_name: str = None,
                   old_value: str = None, new_value: str = None, source: str = "manual", notes: str = None):
    """Helper function to record a change in unit history"""
    history_entry = UnitHistory(
        unit_id=unit_id,
        change_type=change_type,
        field_name=field_name,
        old_value=old_value,
        new_value=new_value,
        changed_at=datetime.utcnow(),
        source=source,
        notes=notes
    )
    db.add(history_entry)
    # Note: caller is responsible for db.commit()
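# Usage sketch (not part of the original file): record one history row per
# field change inside a request handler, then commit once at the end. The
# unit id and values below are hypothetical.
#
#     record_history(db, unit_id="BE1234", change_type="note_change",
#                    field_name="note", old_value="", new_value="on site")
#     db.commit()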
|
||||
|
||||
def get_or_create_roster_unit(db: Session, unit_id: str):
|
||||
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
|
||||
if not unit:
|
||||
unit = RosterUnit(id=unit_id)
|
||||
db.add(unit)
|
||||
db.commit()
|
||||
db.refresh(unit)
|
||||
return unit
|
||||
|
||||
|
||||
async def sync_slm_to_slmm_cache(
|
||||
unit_id: str,
|
||||
host: str = None,
|
||||
tcp_port: int = None,
|
||||
ftp_port: int = None,
|
||||
ftp_username: str = None,
|
||||
ftp_password: str = None,
|
||||
deployed_with_modem_id: str = None,
|
||||
db: Session = None
|
||||
) -> dict:
|
||||
"""
|
||||
Sync SLM device configuration to SLMM backend cache.
|
||||
|
||||
Terra-View is the source of truth for device configs. This function updates
|
||||
SLMM's config cache (NL43Config table) so SLMM can look up device connection
|
||||
info by unit_id without Terra-View passing host:port with every request.
|
||||
|
||||
Args:
|
||||
unit_id: Unique identifier for the SLM device
|
||||
host: Direct IP address/hostname OR will be resolved from modem
|
||||
tcp_port: TCP control port (default: 2255)
|
||||
ftp_port: FTP port (default: 21)
|
||||
ftp_username: FTP username (optional)
|
||||
ftp_password: FTP password (optional)
|
||||
deployed_with_modem_id: If set, resolve modem IP as host
|
||||
db: Database session for modem lookup
|
||||
|
||||
Returns:
|
||||
dict: {"success": bool, "message": str}
|
||||
"""
|
||||
# Resolve host from modem if assigned
|
||||
if deployed_with_modem_id and db:
|
||||
modem = db.query(RosterUnit).filter_by(
|
||||
id=deployed_with_modem_id,
|
||||
device_type="modem"
|
||||
).first()
|
||||
if modem and modem.ip_address:
|
||||
host = modem.ip_address
|
||||
logger.info(f"Resolved host from modem {deployed_with_modem_id}: {host}")
|
||||
|
||||
# Validate required fields
|
||||
if not host:
|
||||
logger.warning(f"Cannot sync SLM {unit_id} to SLMM: no host/IP address provided")
|
||||
return {"success": False, "message": "No host IP address available"}
|
||||
|
||||
# Set defaults
|
||||
tcp_port = tcp_port or 2255
|
||||
ftp_port = ftp_port or 21
|
||||
|
||||
# Build SLMM cache payload
|
||||
config_payload = {
|
||||
"host": host,
|
||||
"tcp_port": tcp_port,
|
||||
"tcp_enabled": True,
|
||||
"ftp_enabled": bool(ftp_username and ftp_password),
|
||||
"web_enabled": False
|
||||
}
|
||||
|
||||
if ftp_username and ftp_password:
|
||||
config_payload["ftp_username"] = ftp_username
|
||||
config_payload["ftp_password"] = ftp_password
|
||||
|
||||
# Call SLMM cache update API
|
||||
slmm_url = f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config"
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=10.0) as client:
|
||||
response = await client.put(slmm_url, json=config_payload)
|
||||
|
||||
if response.status_code in [200, 201]:
|
||||
logger.info(f"Successfully synced SLM {unit_id} to SLMM cache")
|
||||
return {"success": True, "message": "Device config cached in SLMM"}
|
||||
else:
|
||||
logger.error(f"SLMM cache sync failed for {unit_id}: HTTP {response.status_code}")
|
||||
return {"success": False, "message": f"SLMM returned status {response.status_code}"}
|
||||
|
||||
except httpx.ConnectError:
|
||||
logger.error(f"Cannot connect to SLMM service at {SLMM_BASE_URL}")
|
||||
return {"success": False, "message": "SLMM service unavailable"}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing SLM {unit_id} to SLMM: {e}")
|
||||
return {"success": False, "message": str(e)}
|
||||
|
||||
|
||||
@router.post("/add")
async def add_roster_unit(
    id: str = Form(...),
    device_type: str = Form("seismograph"),
    unit_type: str = Form("series3"),
    deployed: str = Form(None),
    retired: str = Form(None),
    note: str = Form(""),
    project_id: str = Form(None),
    location: str = Form(None),
    address: str = Form(None),
    coordinates: str = Form(None),
    # Seismograph-specific fields
    last_calibrated: str = Form(None),
    next_calibration_due: str = Form(None),
    deployed_with_modem_id: str = Form(None),
    # Modem-specific fields
    ip_address: str = Form(None),
    phone_number: str = Form(None),
    hardware_model: str = Form(None),
    # Sound Level Meter-specific fields
    slm_host: str = Form(None),
    slm_tcp_port: str = Form(None),
    slm_ftp_port: str = Form(None),
    slm_model: str = Form(None),
    slm_serial_number: str = Form(None),
    slm_frequency_weighting: str = Form(None),
    slm_time_weighting: str = Form(None),
    slm_measurement_range: str = Form(None),
    db: Session = Depends(get_db)
):
    logger.info(f"Adding unit: id={id}, device_type={device_type}, deployed={deployed}, retired={retired}")

    # Convert boolean strings to actual booleans
    deployed_bool = deployed in ['true', 'True', '1', 'yes'] if deployed else False
    retired_bool = retired in ['true', 'True', '1', 'yes'] if retired else False

    # Convert port strings to integers
    slm_tcp_port_int = int(slm_tcp_port) if slm_tcp_port and slm_tcp_port.strip() else None
    slm_ftp_port_int = int(slm_ftp_port) if slm_ftp_port and slm_ftp_port.strip() else None

    if db.query(RosterUnit).filter(RosterUnit.id == id).first():
        raise HTTPException(status_code=400, detail="Unit already exists")

    # Parse date fields if provided
    last_cal_date = None
    if last_calibrated:
        try:
            last_cal_date = datetime.strptime(last_calibrated, "%Y-%m-%d").date()
        except ValueError:
            raise HTTPException(status_code=400, detail="Invalid last_calibrated date format. Use YYYY-MM-DD")

    next_cal_date = None
    if next_calibration_due:
        try:
            next_cal_date = datetime.strptime(next_calibration_due, "%Y-%m-%d").date()
        except ValueError:
            raise HTTPException(status_code=400, detail="Invalid next_calibration_due date format. Use YYYY-MM-DD")

    unit = RosterUnit(
        id=id,
        device_type=device_type,
        unit_type=unit_type,
        deployed=deployed_bool,
        retired=retired_bool,
        note=note,
        project_id=project_id,
        location=location,
        address=address,
        coordinates=coordinates,
        last_updated=datetime.utcnow(),
        # Seismograph-specific fields
        last_calibrated=last_cal_date,
        next_calibration_due=next_cal_date,
        deployed_with_modem_id=deployed_with_modem_id if deployed_with_modem_id else None,
        # Modem-specific fields
        ip_address=ip_address if ip_address else None,
        phone_number=phone_number if phone_number else None,
        hardware_model=hardware_model if hardware_model else None,
        # Sound Level Meter-specific fields
        slm_host=slm_host if slm_host else None,
        slm_tcp_port=slm_tcp_port_int,
        slm_ftp_port=slm_ftp_port_int,
        slm_model=slm_model if slm_model else None,
        slm_serial_number=slm_serial_number if slm_serial_number else None,
        slm_frequency_weighting=slm_frequency_weighting if slm_frequency_weighting else None,
        slm_time_weighting=slm_time_weighting if slm_time_weighting else None,
        slm_measurement_range=slm_measurement_range if slm_measurement_range else None,
    )
    db.add(unit)
    db.commit()

    # If sound level meter, sync config to SLMM cache
    if device_type == "sound_level_meter":
        logger.info(f"Syncing SLM {id} config to SLMM cache...")
        result = await sync_slm_to_slmm_cache(
            unit_id=id,
            host=slm_host,
            tcp_port=slm_tcp_port_int,
            ftp_port=slm_ftp_port_int,
            deployed_with_modem_id=deployed_with_modem_id,
            db=db
        )

        if not result["success"]:
            logger.warning(f"SLMM cache sync warning for {id}: {result['message']}")
            # Don't fail the operation - device is still added to Terra-View roster
            # User can manually sync later or SLMM will be synced on next config update

    return {"message": "Unit added", "id": id, "device_type": device_type}

@router.get("/modems")
def get_modems_list(db: Session = Depends(get_db)):
    """Get list of all modem units for dropdown selection"""
    modems = db.query(RosterUnit).filter_by(device_type="modem", retired=False).order_by(RosterUnit.id).all()

    return [
        {
            "id": modem.id,
            "ip_address": modem.ip_address,
            "phone_number": modem.phone_number,
            "hardware_model": modem.hardware_model,
            "deployed": modem.deployed
        }
        for modem in modems
    ]

@router.get("/{unit_id}")
def get_roster_unit(unit_id: str, db: Session = Depends(get_db)):
    """Get a single roster unit by ID"""
    unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
    if not unit:
        raise HTTPException(status_code=404, detail="Unit not found")

    return {
        "id": unit.id,
        "device_type": unit.device_type or "seismograph",
        "unit_type": unit.unit_type,
        "deployed": unit.deployed,
        "retired": unit.retired,
        "note": unit.note or "",
        "project_id": unit.project_id or "",
        "location": unit.location or "",
        "address": unit.address or "",
        "coordinates": unit.coordinates or "",
        "last_calibrated": unit.last_calibrated.isoformat() if unit.last_calibrated else "",
        "next_calibration_due": unit.next_calibration_due.isoformat() if unit.next_calibration_due else "",
        "deployed_with_modem_id": unit.deployed_with_modem_id or "",
        "ip_address": unit.ip_address or "",
        "phone_number": unit.phone_number or "",
        "hardware_model": unit.hardware_model or "",
        "slm_host": unit.slm_host or "",
        "slm_tcp_port": unit.slm_tcp_port or "",
        "slm_ftp_port": unit.slm_ftp_port or "",
        "slm_model": unit.slm_model or "",
        "slm_serial_number": unit.slm_serial_number or "",
        "slm_frequency_weighting": unit.slm_frequency_weighting or "",
        "slm_time_weighting": unit.slm_time_weighting or "",
        "slm_measurement_range": unit.slm_measurement_range or "",
    }

@router.post("/edit/{unit_id}")
def edit_roster_unit(
    unit_id: str,
    device_type: str = Form("seismograph"),
    unit_type: str = Form("series3"),
    deployed: str = Form(None),
    retired: str = Form(None),
    note: str = Form(""),
    project_id: str = Form(None),
    location: str = Form(None),
    address: str = Form(None),
    coordinates: str = Form(None),
    # Seismograph-specific fields
    last_calibrated: str = Form(None),
    next_calibration_due: str = Form(None),
    deployed_with_modem_id: str = Form(None),
    # Modem-specific fields
    ip_address: str = Form(None),
    phone_number: str = Form(None),
    hardware_model: str = Form(None),
    # Sound Level Meter-specific fields
    slm_host: str = Form(None),
    slm_tcp_port: str = Form(None),
    slm_ftp_port: str = Form(None),
    slm_model: str = Form(None),
    slm_serial_number: str = Form(None),
    slm_frequency_weighting: str = Form(None),
    slm_time_weighting: str = Form(None),
    slm_measurement_range: str = Form(None),
    db: Session = Depends(get_db)
):
    unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
    if not unit:
        raise HTTPException(status_code=404, detail="Unit not found")

    # Convert boolean strings to actual booleans
    deployed_bool = deployed in ['true', 'True', '1', 'yes'] if deployed else False
    retired_bool = retired in ['true', 'True', '1', 'yes'] if retired else False

    # Convert port strings to integers
    slm_tcp_port_int = int(slm_tcp_port) if slm_tcp_port and slm_tcp_port.strip() else None
    slm_ftp_port_int = int(slm_ftp_port) if slm_ftp_port and slm_ftp_port.strip() else None

    # Parse date fields if provided
    last_cal_date = None
    if last_calibrated:
        try:
            last_cal_date = datetime.strptime(last_calibrated, "%Y-%m-%d").date()
        except ValueError:
            raise HTTPException(status_code=400, detail="Invalid last_calibrated date format. Use YYYY-MM-DD")

    next_cal_date = None
    if next_calibration_due:
        try:
            next_cal_date = datetime.strptime(next_calibration_due, "%Y-%m-%d").date()
        except ValueError:
            raise HTTPException(status_code=400, detail="Invalid next_calibration_due date format. Use YYYY-MM-DD")

    # Track changes for history
    old_note = unit.note
    old_deployed = unit.deployed
    old_retired = unit.retired

    # Update all fields
    unit.device_type = device_type
    unit.unit_type = unit_type
    unit.deployed = deployed_bool
    unit.retired = retired_bool
    unit.note = note
    unit.project_id = project_id
    unit.location = location
    unit.address = address
    unit.coordinates = coordinates
    unit.last_updated = datetime.utcnow()

    # Seismograph-specific fields
    unit.last_calibrated = last_cal_date
    unit.next_calibration_due = next_cal_date
    unit.deployed_with_modem_id = deployed_with_modem_id if deployed_with_modem_id else None

    # Modem-specific fields
    unit.ip_address = ip_address if ip_address else None
    unit.phone_number = phone_number if phone_number else None
    unit.hardware_model = hardware_model if hardware_model else None

    # Sound Level Meter-specific fields
    unit.slm_host = slm_host if slm_host else None
    unit.slm_tcp_port = slm_tcp_port_int
    unit.slm_ftp_port = slm_ftp_port_int
    unit.slm_model = slm_model if slm_model else None
    unit.slm_serial_number = slm_serial_number if slm_serial_number else None
    unit.slm_frequency_weighting = slm_frequency_weighting if slm_frequency_weighting else None
    unit.slm_time_weighting = slm_time_weighting if slm_time_weighting else None
    unit.slm_measurement_range = slm_measurement_range if slm_measurement_range else None

    # Record history entries for changed fields
    if old_note != note:
        record_history(db, unit_id, "note_change", "note", old_note, note, "manual")

    # Compare against the parsed booleans; the raw form strings ("true"/"false")
    # never equal the stored boolean values, which produced spurious history entries.
    if old_deployed != deployed_bool:
        status_text = "deployed" if deployed_bool else "benched"
        old_status_text = "deployed" if old_deployed else "benched"
        record_history(db, unit_id, "deployed_change", "deployed", old_status_text, status_text, "manual")

    if old_retired != retired_bool:
        status_text = "retired" if retired_bool else "active"
        old_status_text = "retired" if old_retired else "active"
        record_history(db, unit_id, "retired_change", "retired", old_status_text, status_text, "manual")

    db.commit()
    return {"message": "Unit updated", "id": unit_id, "device_type": device_type}

@router.post("/set-deployed/{unit_id}")
def set_deployed(unit_id: str, deployed: bool = Form(...), db: Session = Depends(get_db)):
    unit = get_or_create_roster_unit(db, unit_id)
    old_deployed = unit.deployed
    unit.deployed = deployed
    unit.last_updated = datetime.utcnow()

    # Record history entry for deployed status change
    if old_deployed != deployed:
        status_text = "deployed" if deployed else "benched"
        old_status_text = "deployed" if old_deployed else "benched"
        record_history(
            db=db,
            unit_id=unit_id,
            change_type="deployed_change",
            field_name="deployed",
            old_value=old_status_text,
            new_value=status_text,
            source="manual"
        )

    db.commit()
    return {"message": "Updated", "id": unit_id, "deployed": deployed}

@router.post("/set-retired/{unit_id}")
def set_retired(unit_id: str, retired: bool = Form(...), db: Session = Depends(get_db)):
    unit = get_or_create_roster_unit(db, unit_id)
    old_retired = unit.retired
    unit.retired = retired
    unit.last_updated = datetime.utcnow()

    # Record history entry for retired status change
    if old_retired != retired:
        status_text = "retired" if retired else "active"
        old_status_text = "retired" if old_retired else "active"
        record_history(
            db=db,
            unit_id=unit_id,
            change_type="retired_change",
            field_name="retired",
            old_value=old_status_text,
            new_value=status_text,
            source="manual"
        )

    db.commit()
    return {"message": "Updated", "id": unit_id, "retired": retired}

@router.delete("/{unit_id}")
def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)):
    """
    Permanently delete a unit from the database.
    Checks roster, emitters, and ignored_units tables and deletes from any table where the unit exists.
    """
    deleted = False

    # Try to delete from roster table
    roster_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
    if roster_unit:
        db.delete(roster_unit)
        deleted = True

    # Try to delete from emitters table
    emitter = db.query(Emitter).filter(Emitter.id == unit_id).first()
    if emitter:
        db.delete(emitter)
        deleted = True

    # Try to delete from ignored_units table
    ignored_unit = db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first()
    if ignored_unit:
        db.delete(ignored_unit)
        deleted = True

    # If not found in any table, return error
    if not deleted:
        raise HTTPException(status_code=404, detail="Unit not found")

    db.commit()
    return {"message": "Unit deleted", "id": unit_id}

@router.post("/set-note/{unit_id}")
def set_note(unit_id: str, note: str = Form(""), db: Session = Depends(get_db)):
    unit = get_or_create_roster_unit(db, unit_id)
    old_note = unit.note
    unit.note = note
    unit.last_updated = datetime.utcnow()

    # Record history entry for note change
    if old_note != note:
        record_history(
            db=db,
            unit_id=unit_id,
            change_type="note_change",
            field_name="note",
            old_value=old_note,
            new_value=note,
            source="manual"
        )

    db.commit()
    return {"message": "Updated", "id": unit_id, "note": note}

@router.post("/import-csv")
async def import_csv(
    file: UploadFile = File(...),
    update_existing: bool = Form(True),
    db: Session = Depends(get_db)
):
    """
    Import roster units from CSV file.

    Expected CSV columns (unit_id is required, others are optional):
    - unit_id: Unique identifier for the unit
    - unit_type: Type of unit (default: "series3")
    - deployed: Boolean for deployment status (default: False)
    - retired: Boolean for retirement status (default: False)
    - note: Notes about the unit
    - project_id: Project identifier
    - location: Location description
    - address: Street address
    - coordinates: Coordinates string

    Args:
        file: CSV file upload
        update_existing: If True, update existing units; if False, skip them
    """

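    # Example input, matching the columns documented above (all values hypothetical):
    #
    #   unit_id,unit_type,deployed,retired,note,project_id,location
    #   BE-1001,series3,true,false,Fresh calibration,PRJ-42,North fence
    #   BE-1002,series3,false,false,,PRJ-42,Warehouse shelf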
    if not file.filename.endswith('.csv'):
        raise HTTPException(status_code=400, detail="File must be a CSV")

    # Read file content
    contents = await file.read()
    csv_text = contents.decode('utf-8')
    csv_reader = csv.DictReader(io.StringIO(csv_text))

    results = {
        "added": [],
        "updated": [],
        "skipped": [],
        "errors": []
    }

    for row_num, row in enumerate(csv_reader, start=2):  # Start at 2 to account for header
        try:
            # Validate required field
            unit_id = row.get('unit_id', '').strip()
            if not unit_id:
                results["errors"].append({
                    "row": row_num,
                    "error": "Missing required field: unit_id"
                })
                continue

            # Check if unit exists
            existing_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()

            if existing_unit:
                if not update_existing:
                    results["skipped"].append(unit_id)
                    continue

                # Update existing unit
                existing_unit.unit_type = row.get('unit_type', existing_unit.unit_type or 'series3')
                existing_unit.deployed = row.get('deployed', '').lower() in ('true', '1', 'yes') if row.get('deployed') else existing_unit.deployed
                existing_unit.retired = row.get('retired', '').lower() in ('true', '1', 'yes') if row.get('retired') else existing_unit.retired
                existing_unit.note = row.get('note', existing_unit.note or '')
                existing_unit.project_id = row.get('project_id', existing_unit.project_id)
                existing_unit.location = row.get('location', existing_unit.location)
                existing_unit.address = row.get('address', existing_unit.address)
                existing_unit.coordinates = row.get('coordinates', existing_unit.coordinates)
                existing_unit.last_updated = datetime.utcnow()

                results["updated"].append(unit_id)
            else:
                # Create new unit
                new_unit = RosterUnit(
                    id=unit_id,
                    unit_type=row.get('unit_type', 'series3'),
                    deployed=row.get('deployed', '').lower() in ('true', '1', 'yes'),
                    retired=row.get('retired', '').lower() in ('true', '1', 'yes'),
                    note=row.get('note', ''),
                    project_id=row.get('project_id'),
                    location=row.get('location'),
                    address=row.get('address'),
                    coordinates=row.get('coordinates'),
                    last_updated=datetime.utcnow()
                )
                db.add(new_unit)
                results["added"].append(unit_id)

        except Exception as e:
            results["errors"].append({
                "row": row_num,
                "unit_id": row.get('unit_id', 'unknown'),
                "error": str(e)
            })

    # Commit all changes
    try:
        db.commit()
    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Database error: {str(e)}")

    return {
        "message": "CSV import completed",
        "summary": {
            "added": len(results["added"]),
            "updated": len(results["updated"]),
            "skipped": len(results["skipped"]),
            "errors": len(results["errors"])
        },
        "details": results
    }

@router.post("/ignore/{unit_id}")
def ignore_unit(unit_id: str, reason: str = Form(""), db: Session = Depends(get_db)):
    """
    Add a unit to the ignore list to suppress it from unknown emitters.
    """
    # Check if already ignored
    if db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first():
        raise HTTPException(status_code=400, detail="Unit already ignored")

    ignored = IgnoredUnit(
        id=unit_id,
        reason=reason,
        ignored_at=datetime.utcnow()
    )
    db.add(ignored)
    db.commit()
    return {"message": "Unit ignored", "id": unit_id}

@router.delete("/ignore/{unit_id}")
def unignore_unit(unit_id: str, db: Session = Depends(get_db)):
    """
    Remove a unit from the ignore list.
    """
    ignored = db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first()
    if not ignored:
        raise HTTPException(status_code=404, detail="Unit not in ignore list")

    db.delete(ignored)
    db.commit()
    return {"message": "Unit unignored", "id": unit_id}

@router.get("/ignored")
def list_ignored_units(db: Session = Depends(get_db)):
    """
    Get list of all ignored units.

    Note: FastAPI matches routes in registration order, so the catch-all
    GET /{unit_id} route declared earlier in this module captures "/ignored"
    first; this route must be registered before it to be reachable.
    """
    ignored_units = db.query(IgnoredUnit).all()
    return {
        "ignored": [
            {
                "id": unit.id,
                "reason": unit.reason,
                "ignored_at": unit.ignored_at.isoformat()
            }
            for unit in ignored_units
        ]
    }

@router.get("/history/{unit_id}")
def get_unit_history(unit_id: str, db: Session = Depends(get_db)):
    """
    Get complete history timeline for a unit.
    Returns all historical changes ordered by most recent first.
    """
    history_entries = db.query(UnitHistory).filter(
        UnitHistory.unit_id == unit_id
    ).order_by(UnitHistory.changed_at.desc()).all()

    return {
        "unit_id": unit_id,
        "history": [
            {
                "id": entry.id,
                "change_type": entry.change_type,
                "field_name": entry.field_name,
                "old_value": entry.old_value,
                "new_value": entry.new_value,
                "changed_at": entry.changed_at.isoformat(),
                "source": entry.source,
                "notes": entry.notes
            }
            for entry in history_entries
        ]
    }

@router.delete("/history/{history_id}")
def delete_history_entry(history_id: int, db: Session = Depends(get_db)):
    """
    Delete a specific history entry by ID.
    Allows manual cleanup of old history entries.
    """
    history_entry = db.query(UnitHistory).filter(UnitHistory.id == history_id).first()
    if not history_entry:
        raise HTTPException(status_code=404, detail="History entry not found")

    db.delete(history_entry)
    db.commit()
    return {"message": "History entry deleted", "id": history_id}
@@ -1 +0,0 @@
# SLMM addon package for NL43 integration.
@@ -1,317 +0,0 @@
"""
Dashboard API endpoints for SLM/NL43 devices.
This layer aggregates and transforms data from the device API for UI consumption.
"""
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from sqlalchemy.orm import Session
from sqlalchemy import func
from typing import List, Dict, Any
import logging

from app.slm.database import get_db as get_slm_db
from app.slm.models import NL43Config, NL43Status
from app.slm.services import NL43Client
# Import seismo database for roster data
from app.seismo.database import get_db as get_seismo_db
from app.seismo.models import RosterUnit

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])
templates = Jinja2Templates(directory="app/ui/templates")

@router.get("/stats", response_class=HTMLResponse)
async def get_dashboard_stats(request: Request, db: Session = Depends(get_seismo_db)):
    """Get aggregate statistics for the SLM dashboard from roster (returns HTML)."""
    # Query SLMs from the roster
    slms = db.query(RosterUnit).filter_by(
        device_type="sound_level_meter",
        retired=False
    ).all()

    total_units = len(slms)
    deployed = sum(1 for s in slms if s.deployed)
    benched = sum(1 for s in slms if not s.deployed)

    # For "active", count SLMs with recent check-ins (within last hour).
    # Roster timestamps are stored as naive UTC (datetime.utcnow()), so compare
    # against a naive UTC cutoff; an aware datetime.now(timezone.utc) would raise
    # a TypeError when compared with the naive column values.
    from datetime import datetime, timedelta
    one_hour_ago = datetime.utcnow() - timedelta(hours=1)
    active = sum(1 for s in slms if s.slm_last_check and s.slm_last_check >= one_hour_ago)

    # Map to template variable names:
    # total_count, deployed_count, active_count, benched_count
    return templates.TemplateResponse(
        "partials/slm_stats.html",
        {
            "request": request,
            "total_count": total_units,
            "deployed_count": deployed,
            "active_count": active,
            "benched_count": benched
        }
    )

@router.get("/units", response_class=HTMLResponse)
async def get_units_list(request: Request, db: Session = Depends(get_seismo_db)):
    """Get list of all SLM units from roster (returns HTML)."""
    # Query SLMs from the roster (not retired)
    slms = db.query(RosterUnit).filter_by(
        device_type="sound_level_meter",
        retired=False
    ).order_by(RosterUnit.id).all()

    units = []
    for slm in slms:
        # Map to template field names
        unit_data = {
            "id": slm.id,
            "slm_host": slm.slm_host,
            "slm_tcp_port": slm.slm_tcp_port,
            "slm_last_check": slm.slm_last_check,
            "slm_model": slm.slm_model or "NL-43",
            "address": slm.address,
            "deployed_with_modem_id": slm.deployed_with_modem_id,
        }
        units.append(unit_data)

    return templates.TemplateResponse(
        "partials/slm_unit_list.html",
        {
            "request": request,
            "units": units
        }
    )

@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
async def get_live_view(unit_id: str, request: Request, slm_db: Session = Depends(get_slm_db), roster_db: Session = Depends(get_seismo_db)):
    """Get live measurement data for a specific unit (returns HTML)."""
    # Get unit from roster
    unit = roster_db.query(RosterUnit).filter_by(
        id=unit_id,
        device_type="sound_level_meter"
    ).first()

    if not unit:
        return templates.TemplateResponse(
            "partials/slm_live_view_error.html",
            {
                "request": request,
                "error": f"Unit {unit_id} not found in roster"
            }
        )

    # Get status from monitoring database (may not exist yet)
    status = slm_db.query(NL43Status).filter_by(unit_id=unit_id).first()

    # Get modem info if available
    modem = None
    modem_ip = None
    if unit.deployed_with_modem_id:
        modem = roster_db.query(RosterUnit).filter_by(
            id=unit.deployed_with_modem_id,
            device_type="modem"
        ).first()
        if modem:
            modem_ip = modem.ip_address
    elif unit.slm_host:
        modem_ip = unit.slm_host

    # Determine if measuring
    is_measuring = False
    if status and status.measurement_state:
        is_measuring = status.measurement_state.lower() == 'start'

    return templates.TemplateResponse(
        "partials/slm_live_view.html",
        {
            "request": request,
            "unit": unit,
            "modem": modem,
            "modem_ip": modem_ip,
            "current_status": status,
            "is_measuring": is_measuring
        }
    )

@router.get("/config/{unit_id}", response_class=HTMLResponse)
async def get_unit_config(unit_id: str, request: Request, roster_db: Session = Depends(get_seismo_db)):
    """Return the HTML config form for a specific unit."""
    unit = roster_db.query(RosterUnit).filter_by(
        id=unit_id,
        device_type="sound_level_meter"
    ).first()

    if not unit:
        raise HTTPException(status_code=404, detail="Unit configuration not found")

    return templates.TemplateResponse(
        "partials/slm_config_form.html",
        {
            "request": request,
            "unit": unit
        }
    )

@router.post("/config/{unit_id}")
async def update_unit_config(
    unit_id: str,
    request: Request,
    roster_db: Session = Depends(get_seismo_db),
    slm_db: Session = Depends(get_slm_db)
):
    """Update configuration for a specific unit from the form submission."""
    unit = roster_db.query(RosterUnit).filter_by(
        id=unit_id,
        device_type="sound_level_meter"
    ).first()

    if not unit:
        raise HTTPException(status_code=404, detail="Unit configuration not found")

    form = await request.form()

    def get_int(value, default=None):
        try:
            return int(value) if value not in (None, "") else default
        except (TypeError, ValueError):
            return default

    # Update roster fields
    unit.slm_model = form.get("slm_model") or unit.slm_model
    unit.slm_serial_number = form.get("slm_serial_number") or unit.slm_serial_number
    unit.slm_frequency_weighting = form.get("slm_frequency_weighting") or unit.slm_frequency_weighting
    unit.slm_time_weighting = form.get("slm_time_weighting") or unit.slm_time_weighting
    unit.slm_measurement_range = form.get("slm_measurement_range") or unit.slm_measurement_range

    unit.slm_host = form.get("slm_host") or None
    unit.slm_tcp_port = get_int(form.get("slm_tcp_port"), unit.slm_tcp_port or 2255)
    unit.slm_ftp_port = get_int(form.get("slm_ftp_port"), unit.slm_ftp_port or 21)

    deployed_with_modem_id = form.get("deployed_with_modem_id") or None
    unit.deployed_with_modem_id = deployed_with_modem_id

    roster_db.commit()
    roster_db.refresh(unit)

    # Update or create NL43 config so SLMM can reach the device
    config = slm_db.query(NL43Config).filter_by(unit_id=unit_id).first()
    if not config:
        config = NL43Config(unit_id=unit_id)
        slm_db.add(config)

    # Resolve host from modem if present, otherwise fall back to direct IP or existing config
    host_for_config = None
    if deployed_with_modem_id:
        modem = roster_db.query(RosterUnit).filter_by(
            id=deployed_with_modem_id,
            device_type="modem"
        ).first()
        if modem and modem.ip_address:
            host_for_config = modem.ip_address
    if not host_for_config:
        host_for_config = unit.slm_host or config.host or "127.0.0.1"

    config.host = host_for_config
    config.tcp_port = get_int(form.get("slm_tcp_port"), config.tcp_port or 2255)
    config.tcp_enabled = True
    config.ftp_enabled = bool(config.ftp_username and config.ftp_password)

    slm_db.commit()
    slm_db.refresh(config)

    return {"success": True, "unit_id": unit_id}

@router.post("/control/{unit_id}/{action}")
async def control_unit(unit_id: str, action: str, db: Session = Depends(get_slm_db)):
    """Send control command to a unit (start, stop, pause, resume, etc.)."""
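    # Example call (unit id hypothetical):
    #   POST /api/slm-dashboard/control/SLM-01/start  ->  dispatches NL43Client.start()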
    config = db.query(NL43Config).filter_by(unit_id=unit_id).first()
    if not config:
        raise HTTPException(status_code=404, detail="Unit configuration not found")

    if not config.tcp_enabled:
        raise HTTPException(status_code=400, detail="TCP control not enabled for this unit")

    # Create NL43Client
    client = NL43Client(
        host=config.host,
        port=config.tcp_port,
        timeout=5.0,
        ftp_username=config.ftp_username,
        ftp_password=config.ftp_password
    )

    # Map UI actions to NL43Client coroutine names. The dict doubles as a
    # whitelist; the client defines start, stop, pause, resume, reset, sleep
    # and wake (the previous "start_measurement"-style names do not exist on
    # the client, so every dispatch failed with a 500).
    action_map = {
        "start": "start",
        "stop": "stop",
        "pause": "pause",
        "resume": "resume",
        "reset": "reset",
        "sleep": "sleep",
        "wake": "wake",
    }

    if action not in action_map:
        raise HTTPException(status_code=400, detail=f"Unknown action: {action}")

    method_name = action_map[action]
    method = getattr(client, method_name, None)

    if not method:
        raise HTTPException(status_code=500, detail=f"Method {method_name} not implemented")

    try:
        result = await method()
        return {"success": True, "action": action, "result": result}
    except Exception as e:
        logger.error(f"Error executing {action} on {unit_id}: {e}")
        raise HTTPException(status_code=500, detail=str(e))

@router.get("/test-modem/{unit_id}")
async def test_modem(unit_id: str, db: Session = Depends(get_slm_db)):
    """Test connectivity to a unit's modem/device."""
    config = db.query(NL43Config).filter_by(unit_id=unit_id).first()
    if not config:
        raise HTTPException(status_code=404, detail="Unit configuration not found")

    if not config.tcp_enabled:
        raise HTTPException(status_code=400, detail="TCP control not enabled for this unit")

    client = NL43Client(
        host=config.host,
        port=config.tcp_port,
        timeout=5.0,
        ftp_username=config.ftp_username,
        ftp_password=config.ftp_password
    )

    try:
        # Try to get measurement state as a connectivity test
        state = await client.get_measurement_state()
        return {
            "success": True,
            "unit_id": unit_id,
            "host": config.host,
            "port": config.tcp_port,
            "reachable": True,
            "measurement_state": state
        }
    except Exception as e:
        logger.warning(f"Modem test failed for {unit_id}: {e}")
        return {
            "success": False,
            "unit_id": unit_id,
            "host": config.host,
            "port": config.tcp_port,
            "reachable": False,
            "error": str(e)
        }
@@ -1,27 +0,0 @@
from sqlalchemy import create_engine
# declarative_base lives in sqlalchemy.orm since SQLAlchemy 1.4; the old
# sqlalchemy.ext.declarative import is deprecated.
from sqlalchemy.orm import sessionmaker, declarative_base
import os

# Ensure data directory exists for the SLMM addon
os.makedirs("data", exist_ok=True)

SQLALCHEMY_DATABASE_URL = "sqlite:///./data/slmm.db"

engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()


def get_db():
    """Dependency for database sessions."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


def get_db_session():
    """Get a database session directly (not as a dependency)."""
    return SessionLocal()
116
app/slm/main.py
@@ -1,116 +0,0 @@
import os
import logging
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

from app.slm.database import Base, engine
from app.slm import routers

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler("data/slmm.log"),
    ],
)
logger = logging.getLogger(__name__)

# Ensure database tables exist for the addon
Base.metadata.create_all(bind=engine)
logger.info("Database tables initialized")

app = FastAPI(
    title="SLMM NL43 Addon",
    description="Standalone module for NL43 configuration and status APIs",
    version="0.1.0",
)

# CORS configuration - use environment variable for allowed origins
# Default to "*" for development, but should be restricted in production
allowed_origins = os.getenv("CORS_ORIGINS", "*").split(",")
logger.info(f"CORS allowed origins: {allowed_origins}")

app.add_middleware(
    CORSMiddleware,
    allow_origins=allowed_origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

templates = Jinja2Templates(directory="templates")

app.include_router(routers.router)


@app.get("/", response_class=HTMLResponse)
def index(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})


@app.get("/health")
async def health():
    """Basic health check endpoint."""
    return {"status": "ok", "service": "slmm-nl43-addon"}


@app.get("/health/devices")
async def health_devices():
    """Enhanced health check that tests device connectivity."""
    from sqlalchemy.orm import Session
    from app.slm.database import SessionLocal
    from app.slm.services import NL43Client
    from app.slm.models import NL43Config

    db: Session = SessionLocal()
    device_status = []

    try:
        configs = db.query(NL43Config).filter_by(tcp_enabled=True).all()

        for cfg in configs:
            client = NL43Client(cfg.host, cfg.tcp_port, timeout=2.0, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password)
            status = {
                "unit_id": cfg.unit_id,
                "host": cfg.host,
                "port": cfg.tcp_port,
                "reachable": False,
                "error": None,
            }

            try:
                # Try to connect (don't send command to avoid rate limiting issues)
                import asyncio
                reader, writer = await asyncio.wait_for(
                    asyncio.open_connection(cfg.host, cfg.tcp_port), timeout=2.0
                )
                writer.close()
                await writer.wait_closed()
                status["reachable"] = True
            except Exception as e:
                status["error"] = str(type(e).__name__)
                logger.warning(f"Device {cfg.unit_id} health check failed: {e}")

            device_status.append(status)

    finally:
        db.close()

    all_reachable = all(d["reachable"] for d in device_status) if device_status else True

    return {
        "status": "ok" if all_reachable else "degraded",
        "devices": device_status,
        "total_devices": len(device_status),
        "reachable_devices": sum(1 for d in device_status if d["reachable"]),
    }


if __name__ == "__main__":
    import uvicorn

    # Run the module at its actual import path (app/slm/main.py); the previous
    # target "app.main:app" pointed at a different application.
    uvicorn.run("app.slm.main:app", host="0.0.0.0", port=int(os.getenv("PORT", "8100")), reload=True)
@@ -1,43 +0,0 @@
from sqlalchemy import Column, String, DateTime, Boolean, Integer, Text, func
from app.slm.database import Base


class NL43Config(Base):
    """
    NL43 connection/config metadata for the standalone SLMM addon.
    """

    __tablename__ = "nl43_config"

    unit_id = Column(String, primary_key=True, index=True)
    host = Column(String, default="127.0.0.1")
    tcp_port = Column(Integer, default=80)  # NL43 TCP control port (via RX55)
    tcp_enabled = Column(Boolean, default=True)
    ftp_enabled = Column(Boolean, default=False)
    ftp_username = Column(String, nullable=True)  # FTP login username
    ftp_password = Column(String, nullable=True)  # FTP login password
    web_enabled = Column(Boolean, default=False)


class NL43Status(Base):
    """
    Latest NL43 status snapshot for quick dashboard/API access.
    """

    __tablename__ = "nl43_status"

    unit_id = Column(String, primary_key=True, index=True)
    last_seen = Column(DateTime, default=func.now())
    measurement_state = Column(String, default="unknown")  # "Start" while measuring, "Stop" otherwise
    measurement_start_time = Column(DateTime, nullable=True)  # When measurement started (UTC)
    counter = Column(String, nullable=True)  # d0: Measurement interval counter (1-600)
    lp = Column(String, nullable=True)  # Instantaneous sound pressure level
    leq = Column(String, nullable=True)  # Equivalent continuous sound level
    lmax = Column(String, nullable=True)  # Maximum level
    lmin = Column(String, nullable=True)  # Minimum level
    lpeak = Column(String, nullable=True)  # Peak level
    battery_level = Column(String, nullable=True)
    power_source = Column(String, nullable=True)
    sd_remaining_mb = Column(String, nullable=True)
    sd_free_ratio = Column(String, nullable=True)
    raw_payload = Column(Text, nullable=True)
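# Sketch of how these models might be instantiated (unit id and values are hypothetical):
#   NL43Config(unit_id="SLM-01", host="10.0.0.5", tcp_port=2255, tcp_enabled=True)
#   NL43Status(unit_id="SLM-01", measurement_state="Start", leq="63.8", battery_level="85")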
1333
app/slm/routers.py
@@ -1,828 +0,0 @@
"""
NL43 TCP connector and snapshot persistence.

Implements simple per-request TCP calls to avoid long-lived socket complexity.
Extend to pooled connections/DRD streaming later.
"""

import asyncio
import contextlib
import logging
import time
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, List
from sqlalchemy.orm import Session
from ftplib import FTP
from pathlib import Path

from app.slm.models import NL43Status

logger = logging.getLogger(__name__)


@dataclass
class NL43Snapshot:
    unit_id: str
    measurement_state: str = "unknown"
    counter: Optional[str] = None  # d0: Measurement interval counter (1-600)
    lp: Optional[str] = None  # Instantaneous sound pressure level
    leq: Optional[str] = None  # Equivalent continuous sound level
    lmax: Optional[str] = None  # Maximum level
    lmin: Optional[str] = None  # Minimum level
    lpeak: Optional[str] = None  # Peak level
    battery_level: Optional[str] = None
    power_source: Optional[str] = None
    sd_remaining_mb: Optional[str] = None
    sd_free_ratio: Optional[str] = None
    raw_payload: Optional[str] = None


def persist_snapshot(s: NL43Snapshot, db: Session):
    """Persist the latest snapshot for API/dashboard use."""
    try:
        row = db.query(NL43Status).filter_by(unit_id=s.unit_id).first()
        if not row:
            row = NL43Status(unit_id=s.unit_id)
            db.add(row)

        row.last_seen = datetime.utcnow()

        # Track measurement start time by detecting state transition
        previous_state = row.measurement_state
        new_state = s.measurement_state

        logger.info(f"State transition check for {s.unit_id}: '{previous_state}' -> '{new_state}'")

        # Device returns "Start" when measuring, "Stop" when stopped
        # Normalize to previous behavior for backward compatibility
        is_measuring = new_state == "Start"
        was_measuring = previous_state == "Start"

        if not was_measuring and is_measuring:
            # Measurement just started - record the start time
            row.measurement_start_time = datetime.utcnow()
            logger.info(f"✓ Measurement started on {s.unit_id} at {row.measurement_start_time}")
        elif was_measuring and not is_measuring:
            # Measurement stopped - clear the start time
            row.measurement_start_time = None
            logger.info(f"✓ Measurement stopped on {s.unit_id}")

        row.measurement_state = new_state
        row.counter = s.counter
        row.lp = s.lp
        row.leq = s.leq
        row.lmax = s.lmax
        row.lmin = s.lmin
        row.lpeak = s.lpeak
        row.battery_level = s.battery_level
        row.power_source = s.power_source
        row.sd_remaining_mb = s.sd_remaining_mb
        row.sd_free_ratio = s.sd_free_ratio
        row.raw_payload = s.raw_payload

        db.commit()
    except Exception as e:
        db.rollback()
        logger.error(f"Failed to persist snapshot for unit {s.unit_id}: {e}")
        raise

# Rate limiting: NL43 requires ≥1 second between commands
_last_command_time = {}
_rate_limit_lock = asyncio.Lock()


class NL43Client:
    def __init__(self, host: str, port: int, timeout: float = 5.0, ftp_username: str = None, ftp_password: str = None):
        self.host = host
        self.port = port
        self.timeout = timeout
        self.ftp_username = ftp_username or "anonymous"
        self.ftp_password = ftp_password or ""
        self.device_key = f"{host}:{port}"

    async def _enforce_rate_limit(self):
        """Ensure ≥1 second between commands to the same device."""
        async with _rate_limit_lock:
            last_time = _last_command_time.get(self.device_key, 0)
            elapsed = time.time() - last_time
            if elapsed < 1.0:
                wait_time = 1.0 - elapsed
                logger.debug(f"Rate limiting: waiting {wait_time:.2f}s for {self.device_key}")
                await asyncio.sleep(wait_time)
            _last_command_time[self.device_key] = time.time()

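    # e.g. two back-to-back calls against the same host:port are spaced out
    # automatically (method names from this class; timing approximate):
    #   await client.get_battery_level()   # sends immediately
    #   await client.get_clock()           # sleeps ~1s first (per-device limit)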
    async def _send_command(self, cmd: str) -> str:
        """Send ASCII command to NL43 device via TCP.

        NL43 protocol returns two lines for query commands:
        Line 1: Result code (R+0000 for success, error codes otherwise)
        Line 2: Actual data (for query commands ending with '?')
        """
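        # Illustrative exchange for a query command (device response values hypothetical):
        #   -> "Battery Level?\r\n"
        #   <- "R+0000"   (line 1: result code)
        #   <- "85"       (line 2: data, returned to the caller)
        # A setting command such as "Measure,Start\r\n" only produces line 1.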
        await self._enforce_rate_limit()

        logger.info(f"Sending command to {self.device_key}: {cmd.strip()}")

        try:
            reader, writer = await asyncio.wait_for(
                asyncio.open_connection(self.host, self.port), timeout=self.timeout
            )
        except asyncio.TimeoutError:
            logger.error(f"Connection timeout to {self.device_key}")
            raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
        except Exception as e:
            logger.error(f"Connection failed to {self.device_key}: {e}")
            raise ConnectionError(f"Failed to connect to device: {str(e)}")

        try:
            writer.write(cmd.encode("ascii"))
            await writer.drain()

            # Read first line (result code)
            first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
            result_code = first_line_data.decode(errors="ignore").strip()

            # Remove leading $ prompt if present
            if result_code.startswith("$"):
                result_code = result_code[1:].strip()

            logger.info(f"Result code from {self.device_key}: {result_code}")

            # Check result code
            if result_code == "R+0000":
                # Success - for query commands, read the second line with actual data
                is_query = cmd.strip().endswith("?")
                if is_query:
                    data_line = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
                    response = data_line.decode(errors="ignore").strip()
                    logger.debug(f"Data line from {self.device_key}: {response}")
                    return response
                else:
                    # Setting command - return success code
                    return result_code
            elif result_code == "R+0001":
                raise ValueError("Command error - device did not recognize command")
            elif result_code == "R+0002":
                raise ValueError("Parameter error - invalid parameter value")
            elif result_code == "R+0003":
                raise ValueError("Spec/type error - command not supported by this device model")
            elif result_code == "R+0004":
                raise ValueError("Status error - device is in wrong state for this command")
            else:
                raise ValueError(f"Unknown result code: {result_code}")

        except asyncio.TimeoutError:
            logger.error(f"Response timeout from {self.device_key}")
            raise TimeoutError(f"Device did not respond within {self.timeout}s")
        except Exception as e:
            logger.error(f"Communication error with {self.device_key}: {e}")
            raise
        finally:
            writer.close()
            with contextlib.suppress(Exception):
                await writer.wait_closed()

    async def request_dod(self) -> NL43Snapshot:
        """Request DOD (Data Output Display) snapshot from device.

        Returns parsed measurement data from the device display.
        """
        # _send_command now handles result code validation and returns the data line
        resp = await self._send_command("DOD?\r\n")

        # Validate response format
        if not resp:
            logger.warning(f"Empty data response from DOD command on {self.device_key}")
            raise ValueError("Device returned empty data for DOD? command")

        # Remove leading $ prompt if present (shouldn't be there after _send_command, but be safe)
        if resp.startswith("$"):
            resp = resp[1:].strip()

        parts = [p.strip() for p in resp.split(",") if p.strip() != ""]

        # DOD should return at least some data points
        if len(parts) < 2:
            logger.error(f"Malformed DOD data from {self.device_key}: {resp}")
            raise ValueError(f"Malformed DOD data: expected comma-separated values, got: {resp}")

        logger.info(f"Parsed {len(parts)} data points from DOD response")

        # Query actual measurement state (DOD doesn't include this information)
        try:
            measurement_state = await self.get_measurement_state()
        except Exception as e:
            logger.warning(f"Failed to get measurement state, defaulting to 'Measure': {e}")
            measurement_state = "Measure"

        snap = NL43Snapshot(unit_id="", raw_payload=resp, measurement_state=measurement_state)

        # Parse known positions (based on NL43 communication guide - DRD format)
        # DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
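        # Hypothetical raw payload following that layout: "42, 65.3, 63.8, 71.2, 55.9, 84.1"
        # -> counter=42, lp=65.3, leq=63.8, lmax=71.2, lmin=55.9, lpeak=84.1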
        try:
            # Capture d0 (counter) for timer synchronization
            if len(parts) >= 1:
                snap.counter = parts[0]  # d0: Measurement interval counter (1-600)
            if len(parts) >= 2:
                snap.lp = parts[1]  # d1: Instantaneous sound pressure level
            if len(parts) >= 3:
                snap.leq = parts[2]  # d2: Equivalent continuous sound level
            if len(parts) >= 4:
                snap.lmax = parts[3]  # d3: Maximum level
            if len(parts) >= 5:
                snap.lmin = parts[4]  # d4: Minimum level
            if len(parts) >= 6:
                snap.lpeak = parts[5]  # d5: Peak level
        except (IndexError, ValueError) as e:
            logger.warning(f"Error parsing DOD data points: {e}")

        return snap

    async def start(self):
        """Start measurement on the device.

        According to NL43 protocol: Measure,Start (no $ prefix, capitalized param)
        """
        await self._send_command("Measure,Start\r\n")

    async def stop(self):
        """Stop measurement on the device.

        According to NL43 protocol: Measure,Stop (no $ prefix, capitalized param)
        """
        await self._send_command("Measure,Stop\r\n")

    async def set_store_mode_manual(self):
        """Set the device to Manual Store mode.

        According to NL43 protocol: Store Mode,Manual sets manual storage mode
        """
        await self._send_command("Store Mode,Manual\r\n")
        logger.info(f"Store mode set to Manual on {self.device_key}")

    async def manual_store(self):
        """Manually store the current measurement data.

        According to NL43 protocol: Manual Store,Start executes storing
        Parameter p1="Start" executes the storage operation
        Device must be in Manual Store mode first
        """
        await self._send_command("Manual Store,Start\r\n")
        logger.info(f"Manual store executed on {self.device_key}")

    async def pause(self):
        """Pause the current measurement."""
        await self._send_command("Pause,On\r\n")
        logger.info(f"Measurement paused on {self.device_key}")

    async def resume(self):
        """Resume a paused measurement."""
        await self._send_command("Pause,Off\r\n")
        logger.info(f"Measurement resumed on {self.device_key}")

    async def reset(self):
        """Reset the measurement data."""
        await self._send_command("Reset\r\n")
        logger.info(f"Measurement data reset on {self.device_key}")

    async def get_measurement_state(self) -> str:
        """Get the current measurement state.

        Returns: "Start" if measuring, "Stop" if stopped
        """
        resp = await self._send_command("Measure?\r\n")
        state = resp.strip()
        logger.info(f"Measurement state on {self.device_key}: {state}")
        return state

    async def get_battery_level(self) -> str:
        """Get the battery level."""
        resp = await self._send_command("Battery Level?\r\n")
        logger.info(f"Battery level on {self.device_key}: {resp}")
        return resp.strip()

    async def get_clock(self) -> str:
        """Get the device clock time."""
        resp = await self._send_command("Clock?\r\n")
        logger.info(f"Clock on {self.device_key}: {resp}")
        return resp.strip()

    async def set_clock(self, datetime_str: str):
        """Set the device clock time.

        Args:
            datetime_str: Time in format YYYY/MM/DD,HH:MM:SS or YYYY/MM/DD HH:MM:SS
        """
        # Device expects format: Clock,YYYY/MM/DD HH:MM:SS (space between date and time)
        # Replace comma with space if present to normalize format
        normalized = datetime_str.replace(',', ' ', 1)
        await self._send_command(f"Clock,{normalized}\r\n")
        logger.info(f"Clock set on {self.device_key} to {normalized}")

    async def get_frequency_weighting(self, channel: str = "Main") -> str:
        """Get frequency weighting (A, C, Z, etc.).

        Args:
            channel: Main, Sub1, Sub2, or Sub3
        """
        resp = await self._send_command(f"Frequency Weighting ({channel})?\r\n")
        logger.info(f"Frequency weighting ({channel}) on {self.device_key}: {resp}")
        return resp.strip()

    async def set_frequency_weighting(self, weighting: str, channel: str = "Main"):
        """Set frequency weighting.

        Args:
            weighting: A, C, or Z
            channel: Main, Sub1, Sub2, or Sub3
        """
        await self._send_command(f"Frequency Weighting ({channel}),{weighting}\r\n")
        logger.info(f"Frequency weighting ({channel}) set to {weighting} on {self.device_key}")

    async def get_time_weighting(self, channel: str = "Main") -> str:
        """Get time weighting (F, S, I).

        Args:
            channel: Main, Sub1, Sub2, or Sub3
        """
        resp = await self._send_command(f"Time Weighting ({channel})?\r\n")
        logger.info(f"Time weighting ({channel}) on {self.device_key}: {resp}")
        return resp.strip()

    async def set_time_weighting(self, weighting: str, channel: str = "Main"):
        """Set time weighting.

        Args:
            weighting: F (Fast), S (Slow), or I (Impulse)
            channel: Main, Sub1, Sub2, or Sub3
        """
        await self._send_command(f"Time Weighting ({channel}),{weighting}\r\n")
        logger.info(f"Time weighting ({channel}) set to {weighting} on {self.device_key}")

    async def request_dlc(self) -> dict:
        """Request DLC (Data Last Calculation) - final stored measurement results.

        This retrieves the complete calculation results from the last/current measurement,
        including all statistical data. Similar to DOD but for final results.

        Returns:
            Dict with parsed DLC data
        """
        resp = await self._send_command("DLC?\r\n")
        logger.info(f"DLC data received from {self.device_key}: {resp[:100]}...")

        # Parse DLC response - similar format to DOD
        # The exact format depends on device configuration
        # For now, return raw data - can be enhanced based on actual response format
        return {
            "raw_data": resp.strip(),
            "device_key": self.device_key,
        }

    async def sleep(self):
        """Put the device into sleep mode to conserve battery.

        Sleep mode is useful for battery conservation between scheduled measurements.
        Device can be woken up remotely via TCP command or by pressing a button.
        """
        await self._send_command("Sleep Mode,On\r\n")
        logger.info(f"Device {self.device_key} entering sleep mode")

    async def wake(self):
        """Wake the device from sleep mode.

        Note: This may not work if the device is in deep sleep.
        Physical button press might be required in some cases.
        """
        await self._send_command("Sleep Mode,Off\r\n")
        logger.info(f"Device {self.device_key} waking from sleep mode")

    async def get_sleep_status(self) -> str:
        """Get the current sleep mode status."""
        resp = await self._send_command("Sleep Mode?\r\n")
        logger.info(f"Sleep mode status on {self.device_key}: {resp}")
        return resp.strip()

    async def stream_drd(self, callback):
        """Stream continuous DRD output from the device.

        Opens a persistent connection and streams DRD data lines.
        Calls the provided callback function with each parsed snapshot.

        Args:
            callback: Async function that receives NL43Snapshot objects

        The stream continues until an exception occurs or the connection is closed.
        Send SUB character (0x1A) to stop the stream.
        """
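        # Usage sketch (callback name hypothetical; requires a running event loop):
        #   async def on_snapshot(snap: NL43Snapshot):
        #       print(snap.lp, snap.leq)
        #   await client.stream_drd(on_snapshot)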
await self._enforce_rate_limit()
|
||||
|
||||
logger.info(f"Starting DRD stream for {self.device_key}")
|
||||
|
||||
try:
|
||||
reader, writer = await asyncio.wait_for(
|
||||
asyncio.open_connection(self.host, self.port), timeout=self.timeout
|
||||
)
|
||||
except asyncio.TimeoutError:
|
||||
logger.error(f"DRD stream connection timeout to {self.device_key}")
|
||||
raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
|
||||
except Exception as e:
|
||||
logger.error(f"DRD stream connection failed to {self.device_key}: {e}")
|
||||
raise ConnectionError(f"Failed to connect to device: {str(e)}")
|
||||
|
||||
try:
|
||||
# Start DRD streaming
|
||||
writer.write(b"DRD?\r\n")
|
||||
await writer.drain()
|
||||
|
||||
# Read initial result code
|
||||
first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
|
||||
result_code = first_line_data.decode(errors="ignore").strip()
|
||||
|
||||
if result_code.startswith("$"):
|
||||
result_code = result_code[1:].strip()
|
||||
|
||||
logger.debug(f"DRD stream result code from {self.device_key}: {result_code}")
|
||||
|
||||
if result_code != "R+0000":
|
||||
raise ValueError(f"DRD stream failed to start: {result_code}")
|
||||
|
||||
logger.info(f"DRD stream started successfully for {self.device_key}")
|
||||
|
||||
# Continuously read data lines
|
||||
while True:
|
||||
try:
|
||||
line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=30.0)
|
||||
line = line_data.decode(errors="ignore").strip()
|
||||
|
||||
if not line:
|
||||
continue
|
||||
|
||||
# Remove leading $ if present
|
||||
if line.startswith("$"):
|
||||
line = line[1:].strip()
|
||||
|
||||
# Parse the DRD data (same format as DOD)
|
||||
parts = [p.strip() for p in line.split(",") if p.strip() != ""]
|
||||
|
||||
if len(parts) < 2:
|
||||
logger.warning(f"Malformed DRD data from {self.device_key}: {line}")
|
||||
continue
|
||||
|
||||
snap = NL43Snapshot(unit_id="", raw_payload=line, measurement_state="Measure")
|
||||
|
||||
# Parse known positions (DRD format - same as DOD)
|
||||
# DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
|
||||
try:
|
||||
# Capture d0 (counter) for timer synchronization
|
||||
if len(parts) >= 1:
|
||||
snap.counter = parts[0] # d0: Measurement interval counter (1-600)
|
||||
if len(parts) >= 2:
|
||||
snap.lp = parts[1] # d1: Instantaneous sound pressure level
|
||||
if len(parts) >= 3:
|
||||
snap.leq = parts[2] # d2: Equivalent continuous sound level
|
||||
if len(parts) >= 4:
|
||||
snap.lmax = parts[3] # d3: Maximum level
|
||||
if len(parts) >= 5:
|
||||
snap.lmin = parts[4] # d4: Minimum level
|
||||
if len(parts) >= 6:
|
||||
snap.lpeak = parts[5] # d5: Peak level
|
||||
except (IndexError, ValueError) as e:
|
||||
logger.warning(f"Error parsing DRD data points: {e}")
|
||||
|
||||
# Call the callback with the snapshot
|
||||
await callback(snap)
|
||||
|
||||
except asyncio.TimeoutError:
|
||||
logger.warning(f"DRD stream timeout (no data for 30s) from {self.device_key}")
|
||||
break
|
||||
except asyncio.IncompleteReadError:
|
||||
logger.info(f"DRD stream closed by device {self.device_key}")
|
||||
break
|
||||
|
||||
finally:
|
||||
# Send SUB character to stop streaming
|
||||
try:
|
||||
writer.write(b"\x1A")
|
||||
await writer.drain()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
writer.close()
|
||||
with contextlib.suppress(Exception):
|
||||
await writer.wait_closed()
|
||||
|
||||
logger.info(f"DRD stream ended for {self.device_key}")

    async def set_measurement_time(self, preset: str):
        """Set measurement time preset.

        Args:
            preset: Time preset (10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
        """
        await self._send_command(f"Measurement Time Preset Manual,{preset}\r\n")
        logger.info(f"Set measurement time to {preset} on {self.device_key}")

    async def get_measurement_time(self) -> str:
        """Get current measurement time preset.

        Returns: Current time preset setting
        """
        resp = await self._send_command("Measurement Time Preset Manual?\r\n")
        return resp.strip()

    async def set_leq_interval(self, preset: str):
        """Set Leq calculation interval preset.

        Args:
            preset: Interval preset (Off, 10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
        """
        await self._send_command(f"Leq Calculation Interval Preset,{preset}\r\n")
        logger.info(f"Set Leq interval to {preset} on {self.device_key}")

    async def get_leq_interval(self) -> str:
        """Get current Leq calculation interval preset.

        Returns: Current interval preset setting
        """
        resp = await self._send_command("Leq Calculation Interval Preset?\r\n")
        return resp.strip()

    async def set_lp_interval(self, preset: str):
        """Set Lp store interval.

        Args:
            preset: Store interval (Off, 10ms, 25ms, 100ms, 200ms, 1s)
        """
        await self._send_command(f"Lp Store Interval,{preset}\r\n")
        logger.info(f"Set Lp interval to {preset} on {self.device_key}")

    async def get_lp_interval(self) -> str:
        """Get current Lp store interval.

        Returns: Current store interval setting
        """
        resp = await self._send_command("Lp Store Interval?\r\n")
        return resp.strip()

    async def set_index_number(self, index: int):
        """Set index number for file numbering (Store Name).

        Args:
            index: Index number (0000-9999)
        """
        if not 0 <= index <= 9999:
            raise ValueError("Index must be between 0000 and 9999")
        await self._send_command(f"Store Name,{index:04d}\r\n")
        logger.info(f"Set store name (index) to {index:04d} on {self.device_key}")

    async def get_index_number(self) -> str:
        """Get current index number (Store Name).

        Returns: Current index number
        """
        resp = await self._send_command("Store Name?\r\n")
        return resp.strip()

    async def get_overwrite_status(self) -> str:
        """Check if saved data exists at current store target.

        This command checks whether saved data exists in the set store target
        (store mode / store name / store address). Use this before storing
        to prevent accidentally overwriting data.

        Returns:
            "None" - No data exists (safe to store)
            "Exist" - Data exists (would overwrite)
        """
        resp = await self._send_command("Overwrite?\r\n")
        return resp.strip()
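
    # Configuration sequence sketch (illustrative; hypothetical `client`, not
    # part of the original file). Checks the store target before arming a run:
    #
    #     await client.set_measurement_time("15m")
    #     await client.set_leq_interval("1m")
    #     await client.set_lp_interval("100ms")
    #     await client.set_index_number(42)
    #     if await client.get_overwrite_status() == "Exist":
    #         raise RuntimeError("Store target already holds data")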

    async def get_all_settings(self) -> dict:
        """Query all device settings for verification.

        Returns: Dictionary with all current device settings
        """
        settings = {}

        # Measurement settings
        try:
            settings["measurement_state"] = await self.get_measurement_state()
        except Exception as e:
            settings["measurement_state"] = f"Error: {e}"

        try:
            settings["frequency_weighting"] = await self.get_frequency_weighting()
        except Exception as e:
            settings["frequency_weighting"] = f"Error: {e}"

        try:
            settings["time_weighting"] = await self.get_time_weighting()
        except Exception as e:
            settings["time_weighting"] = f"Error: {e}"

        # Timing/interval settings
        try:
            settings["measurement_time"] = await self.get_measurement_time()
        except Exception as e:
            settings["measurement_time"] = f"Error: {e}"

        try:
            settings["leq_interval"] = await self.get_leq_interval()
        except Exception as e:
            settings["leq_interval"] = f"Error: {e}"

        try:
            settings["lp_interval"] = await self.get_lp_interval()
        except Exception as e:
            settings["lp_interval"] = f"Error: {e}"

        try:
            settings["index_number"] = await self.get_index_number()
        except Exception as e:
            settings["index_number"] = f"Error: {e}"

        # Device info
        try:
            settings["battery_level"] = await self.get_battery_level()
        except Exception as e:
            settings["battery_level"] = f"Error: {e}"

        try:
            settings["clock"] = await self.get_clock()
        except Exception as e:
            settings["clock"] = f"Error: {e}"

        # Sleep mode
        try:
            settings["sleep_mode"] = await self.get_sleep_status()
        except Exception as e:
            settings["sleep_mode"] = f"Error: {e}"

        # FTP status
        try:
            settings["ftp_status"] = await self.get_ftp_status()
        except Exception as e:
            settings["ftp_status"] = f"Error: {e}"

        logger.info(f"Retrieved all settings for {self.device_key}")
        return settings
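
    # A more compact equivalent of the try/except ladder above (sketch only,
    # not part of the original file): iterate over (key, getter) pairs.
    #
    #     async def _get_all_settings_compact(self) -> dict:
    #         getters = {
    #             "measurement_state": self.get_measurement_state,
    #             "frequency_weighting": self.get_frequency_weighting,
    #             "time_weighting": self.get_time_weighting,
    #             "measurement_time": self.get_measurement_time,
    #             "leq_interval": self.get_leq_interval,
    #             "lp_interval": self.get_lp_interval,
    #             "index_number": self.get_index_number,
    #             "battery_level": self.get_battery_level,
    #             "clock": self.get_clock,
    #             "sleep_mode": self.get_sleep_status,
    #             "ftp_status": self.get_ftp_status,
    #         }
    #         settings = {}
    #         for key, getter in getters.items():
    #             try:
    #                 settings[key] = await getter()
    #             except Exception as e:
    #                 settings[key] = f"Error: {e}"
    #         return settings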

    async def enable_ftp(self):
        """Enable FTP server on the device.

        According to NL43 protocol: FTP,On enables the FTP server
        """
        await self._send_command("FTP,On\r\n")
        logger.info(f"FTP enabled on {self.device_key}")

    async def disable_ftp(self):
        """Disable FTP server on the device.

        According to NL43 protocol: FTP,Off disables the FTP server
        """
        await self._send_command("FTP,Off\r\n")
        logger.info(f"FTP disabled on {self.device_key}")

    async def get_ftp_status(self) -> str:
        """Query FTP server status on the device.

        Returns: "On" or "Off"
        """
        resp = await self._send_command("FTP?\r\n")
        logger.info(f"FTP status on {self.device_key}: {resp}")
        return resp.strip()

    async def list_ftp_files(self, remote_path: str = "/") -> List[dict]:
        """List files on the device via FTP.

        Args:
            remote_path: Directory path on the device (default: root)

        Returns:
            List of file info dicts with 'name', 'size', 'modified', 'is_dir'
        """
        logger.info(f"Listing FTP files on {self.device_key} at {remote_path}")

        def _list_ftp_sync():
            """Synchronous FTP listing using ftplib (supports active mode)."""
            ftp = FTP()
            ftp.set_debuglevel(0)
            try:
                # Connect and login
                ftp.connect(self.host, 21, timeout=10)
                ftp.login(self.ftp_username, self.ftp_password)
                ftp.set_pasv(False)  # Force active mode

                # Change to target directory
                if remote_path != "/":
                    ftp.cwd(remote_path)

                # Get directory listing with details
                files = []
                lines = []
                ftp.retrlines('LIST', lines.append)

                for line in lines:
                    # Parse Unix-style ls output
                    parts = line.split(None, 8)
                    if len(parts) < 9:
                        continue

                    is_dir = parts[0].startswith('d')
                    size = int(parts[4]) if not is_dir else 0
                    name = parts[8]

                    # Skip . and ..
                    if name in ('.', '..'):
                        continue

                    # Parse modification time
                    # Format: "Jan 07 14:23" or "Dec 25 2025"
                    modified_str = f"{parts[5]} {parts[6]} {parts[7]}"
                    modified_timestamp = None
                    try:
                        from datetime import datetime
                        # Try parsing with time (recent files: "Jan 07 14:23")
                        try:
                            dt = datetime.strptime(modified_str, "%b %d %H:%M")
                            # Add current year since it's not in the format
                            dt = dt.replace(year=datetime.now().year)

                            # If the resulting date is in the future, it's actually from last year
                            if dt > datetime.now():
                                dt = dt.replace(year=dt.year - 1)

                            modified_timestamp = dt.isoformat()
                        except ValueError:
                            # Try parsing with year (older files: "Dec 25 2025")
                            dt = datetime.strptime(modified_str, "%b %d %Y")
                            modified_timestamp = dt.isoformat()
                    except Exception as e:
                        logger.warning(f"Failed to parse timestamp '{modified_str}': {e}")

                    file_info = {
                        "name": name,
                        "path": f"{remote_path.rstrip('/')}/{name}",
                        "size": size,
                        "modified": modified_str,  # Keep original string
                        "modified_timestamp": modified_timestamp,  # Add parsed timestamp
                        "is_dir": is_dir,
                    }
                    files.append(file_info)
                    logger.debug(f"Found file: {file_info}")

                logger.info(f"Found {len(files)} files/directories on {self.device_key}")
                return files

            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        try:
            # Run synchronous FTP in thread pool
            return await asyncio.to_thread(_list_ftp_sync)
        except Exception as e:
            logger.error(f"Failed to list FTP files on {self.device_key}: {e}")
            raise ConnectionError(f"FTP connection failed: {str(e)}")

    async def download_ftp_file(self, remote_path: str, local_path: str):
        """Download a file from the device via FTP.

        Args:
            remote_path: Full path to file on the device
            local_path: Local path where file will be saved
        """
        logger.info(f"Downloading {remote_path} from {self.device_key} to {local_path}")

        def _download_ftp_sync():
            """Synchronous FTP download using ftplib (supports active mode)."""
            ftp = FTP()
            ftp.set_debuglevel(0)
            try:
                # Connect and login
                ftp.connect(self.host, 21, timeout=10)
                ftp.login(self.ftp_username, self.ftp_password)
                ftp.set_pasv(False)  # Force active mode

                # Download file
                with open(local_path, 'wb') as f:
                    ftp.retrbinary(f'RETR {remote_path}', f.write)

                logger.info(f"Successfully downloaded {remote_path} to {local_path}")

            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        try:
            # Run synchronous FTP in thread pool
            await asyncio.to_thread(_download_ftp_sync)
        except Exception as e:
            logger.error(f"Failed to download {remote_path} from {self.device_key}: {e}")
            raise ConnectionError(f"FTP download failed: {str(e)}")
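
# Usage sketch for the FTP helpers above (illustrative; not part of the
# original file). Assumes `client` is an instance of the NL43 client class
# with FTP credentials configured; `make_client` is a hypothetical factory.
# Downloads the most recently modified non-directory file from the device root:

import asyncio

async def download_latest(client, dest_dir: str = "data"):
    # Make sure the device's FTP server is up before listing
    if (await client.get_ftp_status()) != "On":
        await client.enable_ftp()

    files = [f for f in await client.list_ftp_files("/") if not f["is_dir"]]
    if not files:
        return None

    # Fall back to the raw string when the timestamp failed to parse
    newest = max(files, key=lambda f: f["modified_timestamp"] or "")
    local_path = f"{dest_dir}/{newest['name']}"
    await client.download_ftp_file(newest["path"], local_path)
    return local_path

# asyncio.run(download_latest(make_client("NL43-001")))  # make_client is hypothetical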
@@ -1,92 +0,0 @@
"""
UI Layer Routes - HTML page routes only (no business logic)
"""
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse, FileResponse
from fastapi.templating import Jinja2Templates
import os

router = APIRouter()

# Setup Jinja2 templates
templates = Jinja2Templates(directory="app/ui/templates")

# Read environment (development or production)
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
VERSION = "1.0.0"  # Terra-View version

# Override TemplateResponse to include environment and version in context
original_template_response = templates.TemplateResponse
def custom_template_response(name, context=None, *args, **kwargs):
    if context is None:
        context = {}
    context["environment"] = ENVIRONMENT
    context["version"] = VERSION
    return original_template_response(name, context, *args, **kwargs)
templates.TemplateResponse = custom_template_response


# ===== HTML PAGE ROUTES =====

@router.get("/", response_class=HTMLResponse)
async def dashboard(request: Request):
    """Dashboard home page"""
    return templates.TemplateResponse("dashboard.html", {"request": request})


@router.get("/roster", response_class=HTMLResponse)
async def roster_page(request: Request):
    """Fleet roster page"""
    return templates.TemplateResponse("roster.html", {"request": request})


@router.get("/unit/{unit_id}", response_class=HTMLResponse)
async def unit_detail_page(request: Request, unit_id: str):
    """Unit detail page"""
    return templates.TemplateResponse("unit_detail.html", {
        "request": request,
        "unit_id": unit_id
    })


@router.get("/settings", response_class=HTMLResponse)
async def settings_page(request: Request):
    """Settings page for roster management"""
    return templates.TemplateResponse("settings.html", {"request": request})


@router.get("/sound-level-meters", response_class=HTMLResponse)
async def sound_level_meters_page(request: Request):
    """Sound Level Meters management dashboard"""
    return templates.TemplateResponse("sound_level_meters.html", {"request": request})


@router.get("/seismographs", response_class=HTMLResponse)
async def seismographs_page(request: Request):
    """Seismographs management dashboard"""
    return templates.TemplateResponse("seismographs.html", {"request": request})


# ===== PWA ROUTES =====

@router.get("/sw.js")
async def service_worker():
    """Serve service worker with proper headers for PWA"""
    return FileResponse(
        "app/ui/static/sw.js",
        media_type="application/javascript",
        headers={
            "Service-Worker-Allowed": "/",
            "Cache-Control": "no-cache"
        }
    )


@router.get("/offline-db.js")
async def offline_db_script():
    """Serve offline database script"""
    return FileResponse(
        "app/ui/static/offline-db.js",
        media_type="application/javascript",
        headers={"Cache-Control": "no-cache"}
    )
Deleted binary images: 8 files (1.9 KiB, 2.2 KiB, 2.2 KiB, 2.9 KiB, 5.8 KiB, 7.8 KiB, 1.1 KiB, 1.4 KiB)
@@ -1,195 +0,0 @@
{% extends "base.html" %}

{% block title %}{{ unit_id }} - Sound Level Meter{% endblock %}

{% block content %}
<div class="mb-6">
    <a href="/roster" class="text-seismo-orange hover:text-seismo-orange-dark flex items-center">
        <svg class="w-4 h-4 mr-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
            <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15 19l-7-7 7-7"></path>
        </svg>
        Back to Roster
    </a>
</div>

<div class="mb-8">
    <div class="flex justify-between items-start">
        <div>
            <h1 class="text-3xl font-bold text-gray-900 dark:text-white flex items-center">
                <svg class="w-8 h-8 mr-3 text-seismo-orange" fill="none" stroke="currentColor" viewBox="0 0 24 24">
                    <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
                          d="M9 19V6l12-3v13M9 19c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zm12-3c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zM9 10l12-3">
                    </path>
                </svg>
                {{ unit_id }}
            </h1>
            <p class="text-gray-600 dark:text-gray-400 mt-1">
                {{ unit.slm_model or 'NL-43' }} Sound Level Meter
            </p>
        </div>
        <div class="flex gap-2">
            <span class="px-3 py-1 rounded-full text-sm font-medium
                  {% if unit.deployed %}bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200
                  {% else %}bg-gray-100 text-gray-800 dark:bg-gray-700 dark:text-gray-200{% endif %}">
                {% if unit.deployed %}Deployed{% else %}Benched{% endif %}
            </span>
        </div>
    </div>
</div>

<!-- Control Panel -->
<div class="mb-8">
    <h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Control Panel</h2>
    <div hx-get="/slm/partials/{{ unit_id }}/controls"
         hx-trigger="load, every 5s"
         hx-swap="innerHTML">
        <div class="text-center py-8 text-gray-500">Loading controls...</div>
    </div>
</div>

<!-- Real-time Data Stream -->
<div class="mb-8">
    <h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Real-time Measurements</h2>
    <div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
        <div id="slm-stream-container">
            <div class="text-center py-8">
                <button onclick="startStream()"
                        id="stream-start-btn"
                        class="px-6 py-3 bg-seismo-orange text-white rounded-lg hover:bg-seismo-orange-dark transition-colors">
                    Start Real-time Stream
                </button>
                <p class="text-sm text-gray-500 mt-2">Click to begin streaming live measurement data</p>
            </div>
            <div id="stream-data" class="hidden">
                <div class="grid grid-cols-2 md:grid-cols-4 gap-4 mb-4">
                    <div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
                        <div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lp (Instant)</div>
                        <div id="stream-lp" class="text-3xl font-bold text-gray-900 dark:text-white">--</div>
                        <div class="text-xs text-gray-500">dB</div>
                    </div>
                    <div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
                        <div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Leq (Average)</div>
                        <div id="stream-leq" class="text-3xl font-bold text-blue-600 dark:text-blue-400">--</div>
                        <div class="text-xs text-gray-500">dB</div>
                    </div>
                    <div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
                        <div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lmax</div>
                        <div id="stream-lmax" class="text-3xl font-bold text-red-600 dark:text-red-400">--</div>
                        <div class="text-xs text-gray-500">dB</div>
                    </div>
                    <div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
                        <div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lmin</div>
                        <div id="stream-lmin" class="text-3xl font-bold text-green-600 dark:text-green-400">--</div>
                        <div class="text-xs text-gray-500">dB</div>
                    </div>
                </div>
                <div class="flex justify-between items-center">
                    <div class="text-xs text-gray-500">
                        <span class="inline-block w-2 h-2 bg-green-500 rounded-full mr-2 animate-pulse"></span>
                        Streaming
                    </div>
                    <button onclick="stopStream()"
                            class="px-4 py-2 bg-red-600 text-white text-sm rounded-lg hover:bg-red-700 transition-colors">
                        Stop Stream
                    </button>
                </div>
            </div>
        </div>
    </div>
</div>

<!-- Device Information -->
<div class="mb-8">
    <h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Device Information</h2>
    <div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
        <div class="grid grid-cols-1 md:grid-cols-2 gap-4">
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">Model</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_model or 'NL-43' }}</div>
            </div>
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">Serial Number</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_serial_number or 'N/A' }}</div>
            </div>
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">Host</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_host or 'Not configured' }}</div>
            </div>
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">TCP Port</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_tcp_port or 'N/A' }}</div>
            </div>
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">Frequency Weighting</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_frequency_weighting or 'A' }}</div>
            </div>
            <div>
                <div class="text-sm text-gray-600 dark:text-gray-400">Time Weighting</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_time_weighting or 'F (Fast)' }}</div>
            </div>
            <div class="md:col-span-2">
                <div class="text-sm text-gray-600 dark:text-gray-400">Location</div>
                <div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.address or unit.location or 'Not specified' }}</div>
            </div>
            {% if unit.note %}
            <div class="md:col-span-2">
                <div class="text-sm text-gray-600 dark:text-gray-400">Notes</div>
                <div class="text-gray-900 dark:text-white">{{ unit.note }}</div>
            </div>
            {% endif %}
        </div>
    </div>
</div>

<script>
let ws = null;

function startStream() {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    const wsUrl = `${protocol}//${window.location.host}/api/slmm/{{ unit_id }}/stream`;

    ws = new WebSocket(wsUrl);

    ws.onopen = () => {
        document.getElementById('stream-start-btn').classList.add('hidden');
        document.getElementById('stream-data').classList.remove('hidden');
        console.log('WebSocket connected');
    };

    ws.onmessage = (event) => {
        const data = JSON.parse(event.data);

        if (data.error) {
            console.error('Stream error:', data.error);
            stopStream();
            alert('Error: ' + data.error);
            return;
        }

        // Update values
        document.getElementById('stream-lp').textContent = data.lp || '--';
        document.getElementById('stream-leq').textContent = data.leq || '--';
        document.getElementById('stream-lmax').textContent = data.lmax || '--';
        document.getElementById('stream-lmin').textContent = data.lmin || '--';
    };

    ws.onerror = (error) => {
        console.error('WebSocket error:', error);
        stopStream();
    };

    ws.onclose = () => {
        console.log('WebSocket closed');
    };
}

function stopStream() {
    if (ws) {
        ws.close();
        ws = null;
    }
    document.getElementById('stream-start-btn').classList.remove('hidden');
    document.getElementById('stream-data').classList.add('hidden');
}
</script>
{% endblock %}
@@ -1,257 +0,0 @@
{% extends "base.html" %}

{% block title %}Sound Level Meters - Seismo Fleet Manager{% endblock %}

{% block content %}
<div class="mb-8">
    <h1 class="text-3xl font-bold text-gray-900 dark:text-white">Sound Level Meters</h1>
    <p class="text-gray-600 dark:text-gray-400 mt-1">Monitor and manage sound level measurement devices</p>
</div>

<!-- Summary Stats -->
<div class="grid grid-cols-1 md:grid-cols-4 gap-6 mb-8"
     hx-get="/api/slm-dashboard/stats"
     hx-trigger="load, every 10s"
     hx-swap="innerHTML">
    <!-- Stats will be loaded here -->
    <div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
    <div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
    <div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
    <div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
</div>

<!-- Main Content Grid -->
<div class="grid grid-cols-1 lg:grid-cols-3 gap-6">
    <!-- SLM List -->
    <div class="lg:col-span-1">
        <div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
            <h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Active Units</h2>

            <!-- Search/Filter -->
            <div class="mb-4">
                <input type="text"
                       placeholder="Search units..."
                       class="w-full px-4 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-white"
                       hx-get="/api/slm-dashboard/units"
                       hx-trigger="keyup changed delay:300ms"
                       hx-target="#slm-list"
                       hx-include="this"
                       name="search">
            </div>

            <!-- SLM List -->
            <div id="slm-list"
                 class="space-y-2 max-h-[600px] overflow-y-auto"
                 hx-get="/api/slm-dashboard/units"
                 hx-trigger="load, every 10s"
                 hx-swap="innerHTML">
                <!-- Loading skeleton -->
                <div class="animate-pulse space-y-2">
                    <div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
                    <div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
                    <div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
                </div>
            </div>
        </div>
    </div>

    <!-- Live View Panel -->
    <div class="lg:col-span-2">
        <div id="live-view-panel" class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
            <!-- Initial state - no unit selected -->
            <div class="flex flex-col items-center justify-center h-[600px] text-gray-400 dark:text-gray-500">
                <svg class="w-24 h-24 mb-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
                    <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15.536 8.464a5 5 0 010 7.072m2.828-9.9a9 9 0 010 12.728M5.586 15H4a1 1 0 01-1-1v-4a1 1 0 011-1h1.586l4.707-4.707C10.923 3.663 12 4.109 12 5v14c0 .891-1.077 1.337-1.707.707L5.586 15z"></path>
                </svg>
                <p class="text-lg font-medium">No unit selected</p>
                <p class="text-sm mt-2">Select a sound level meter from the list to view live data</p>
            </div>
        </div>
    </div>
</div>

<!-- Configuration Modal -->
<div id="config-modal" class="hidden fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
    <div class="bg-white dark:bg-slate-800 rounded-xl p-6 max-w-2xl w-full mx-4 max-h-[90vh] overflow-y-auto">
        <div class="flex items-center justify-between mb-6">
            <h3 class="text-2xl font-bold text-gray-900 dark:text-white">Configure SLM</h3>
            <button onclick="closeConfigModal()" class="text-gray-500 hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-200">
                <svg class="w-6 h-6" fill="none" stroke="currentColor" viewBox="0 0 24 24">
                    <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12"></path>
                </svg>
            </button>
        </div>

        <div id="config-modal-content">
            <!-- Content loaded via HTMX -->
            <div class="animate-pulse space-y-4">
                <div class="h-4 bg-gray-200 dark:bg-gray-700 rounded w-3/4"></div>
                <div class="h-4 bg-gray-200 dark:bg-gray-700 rounded"></div>
                <div class="h-4 bg-gray-200 dark:bg-gray-700 rounded w-5/6"></div>
            </div>
        </div>
    </div>
</div>

<script>
// Function to select a unit and load live view
function selectUnit(unitId) {
    // Remove active state from all items
    document.querySelectorAll('.slm-unit-item').forEach(item => {
        item.classList.remove('bg-seismo-orange', 'text-white');
        item.classList.add('bg-gray-100', 'dark:bg-gray-700');
    });

    // Add active state to clicked item
    event.currentTarget.classList.remove('bg-gray-100', 'dark:bg-gray-700');
    event.currentTarget.classList.add('bg-seismo-orange', 'text-white');

    // Load live view for this unit
    htmx.ajax('GET', `/api/slm-dashboard/live-view/${unitId}`, {
        target: '#live-view-panel',
        swap: 'innerHTML'
    });
}

// Configuration modal functions
function openConfigModal(unitId) {
    const modal = document.getElementById('config-modal');
    modal.classList.remove('hidden');

    // Load configuration form via HTMX
    htmx.ajax('GET', `/api/slm-dashboard/config/${unitId}`, {
        target: '#config-modal-content',
        swap: 'innerHTML'
    });
}

function closeConfigModal() {
    document.getElementById('config-modal').classList.add('hidden');
}

// Close modal on escape key
document.addEventListener('keydown', function(e) {
    if (e.key === 'Escape') {
        closeConfigModal();
    }
});

// Close modal when clicking outside
document.getElementById('config-modal')?.addEventListener('click', function(e) {
    if (e.target === this) {
        closeConfigModal();
    }
});

// Initialize WebSocket for selected unit
let currentWebSocket = null;

function initLiveDataStream(unitId) {
    // Close existing connection if any
    if (currentWebSocket) {
        currentWebSocket.close();
    }

    // WebSocket URL for SLMM backend via proxy
    const wsProtocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    const wsUrl = `${wsProtocol}//${window.location.host}/api/slmm/${unitId}/live`;

    currentWebSocket = new WebSocket(wsUrl);

    currentWebSocket.onopen = function() {
        console.log('WebSocket connected');
        // Toggle button visibility
        const startBtn = document.getElementById('start-stream-btn');
        const stopBtn = document.getElementById('stop-stream-btn');
        if (startBtn) startBtn.style.display = 'none';
        if (stopBtn) stopBtn.style.display = 'flex';
    };

    currentWebSocket.onmessage = async function(event) {
        try {
            let payload = event.data;
            if (payload instanceof Blob) {
                payload = await payload.text();
            }
            const data = typeof payload === 'string' ? JSON.parse(payload) : payload;
            updateLiveChart(data);
            updateLiveMetrics(data);
        } catch (error) {
            console.error('Error parsing WebSocket message:', error);
        }
    };

    currentWebSocket.onerror = function(error) {
        console.error('WebSocket error:', error);
    };

    currentWebSocket.onclose = function() {
        console.log('WebSocket closed');
        // Toggle button visibility
        const startBtn = document.getElementById('start-stream-btn');
        const stopBtn = document.getElementById('stop-stream-btn');
        if (startBtn) startBtn.style.display = 'flex';
        if (stopBtn) stopBtn.style.display = 'none';
    };
}

function stopLiveDataStream() {
    if (currentWebSocket) {
        currentWebSocket.close();
        currentWebSocket = null;
    }
}

// Update live chart with new data point
let chartData = {
    timestamps: [],
    lp: [],
    leq: []
};

function updateLiveChart(data) {
    const now = new Date();
    chartData.timestamps.push(now.toLocaleTimeString());
    chartData.lp.push(parseFloat(data.lp || 0));
    chartData.leq.push(parseFloat(data.leq || 0));

    // Keep only last 60 data points (1 minute at 1 sample/sec)
    if (chartData.timestamps.length > 60) {
        chartData.timestamps.shift();
        chartData.lp.shift();
        chartData.leq.shift();
    }

    // Update chart (using Chart.js if available)
    if (window.liveChart) {
        window.liveChart.data.labels = chartData.timestamps;
        window.liveChart.data.datasets[0].data = chartData.lp;
        window.liveChart.data.datasets[1].data = chartData.leq;
        window.liveChart.update('none'); // Update without animation for smooth real-time
    }
}

function updateLiveMetrics(data) {
    // Update metric displays
    if (document.getElementById('live-lp')) {
        document.getElementById('live-lp').textContent = data.lp || '--';
    }
    if (document.getElementById('live-leq')) {
        document.getElementById('live-leq').textContent = data.leq || '--';
    }
    if (document.getElementById('live-lmax')) {
        document.getElementById('live-lmax').textContent = data.lmax || '--';
    }
    if (document.getElementById('live-lmin')) {
        document.getElementById('live-lmin').textContent = data.lmin || '--';
    }
}

// Cleanup on page unload
window.addEventListener('beforeunload', function() {
    if (currentWebSocket) {
        currentWebSocket.close();
    }
});
</script>
{% endblock %}
BIN assets/terra-view-icon_large.png Normal file (After Width: | Height: | Size: 36 KiB)

108 backend/init_projects_db.py Normal file
@@ -0,0 +1,108 @@
#!/usr/bin/env python3
"""
Database initialization script for Projects system.

This script creates the new project management tables and populates
the project_types table with default templates.

Usage:
    python -m backend.init_projects_db
"""

from sqlalchemy.orm import Session
from backend.database import engine, SessionLocal
from backend.models import (
    Base,
    ProjectType,
    Project,
    MonitoringLocation,
    UnitAssignment,
    ScheduledAction,
    RecordingSession,
    DataFile,
)
from datetime import datetime


def init_project_types(db: Session):
    """Initialize default project types."""
    project_types = [
        {
            "id": "sound_monitoring",
            "name": "Sound Monitoring",
            "description": "Noise monitoring projects with sound level meters and NRLs (Noise Recording Locations)",
            "icon": "volume-2",  # Lucide icon name
            "supports_sound": True,
            "supports_vibration": False,
        },
        {
            "id": "vibration_monitoring",
            "name": "Vibration Monitoring",
            "description": "Seismic/vibration monitoring projects with seismographs and monitoring points",
            "icon": "activity",  # Lucide icon name
            "supports_sound": False,
            "supports_vibration": True,
        },
        {
            "id": "combined",
            "name": "Combined Monitoring",
            "description": "Full-spectrum monitoring with both sound and vibration capabilities",
            "icon": "layers",  # Lucide icon name
            "supports_sound": True,
            "supports_vibration": True,
        },
    ]

    for pt_data in project_types:
        existing = db.query(ProjectType).filter_by(id=pt_data["id"]).first()
        if not existing:
            pt = ProjectType(**pt_data)
            db.add(pt)
            print(f"✓ Created project type: {pt_data['name']}")
        else:
            print(f"  Project type already exists: {pt_data['name']}")

    db.commit()


def create_tables():
    """Create all tables defined in models."""
    print("Creating project management tables...")
    Base.metadata.create_all(bind=engine)
    print("✓ Tables created successfully")


def main():
    print("=" * 60)
    print("Terra-View Projects System - Database Initialization")
    print("=" * 60)
    print()

    # Create tables
    create_tables()
    print()

    # Initialize project types
    db = SessionLocal()
    try:
        print("Initializing project types...")
        init_project_types(db)
        print()
        print("=" * 60)
        print("✓ Database initialization complete!")
        print("=" * 60)
        print()
        print("Next steps:")
        print("  1. Restart Terra-View to load new routes")
        print("  2. Navigate to /projects to create your first project")
        print("  3. Check documentation for API endpoints")
    except Exception as e:
        print(f"✗ Error during initialization: {e}")
        db.rollback()
        raise
    finally:
        db.close()


if __name__ == "__main__":
    main()
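
# Quick verification sketch (illustrative; not part of the original file):
# after running the script once, the three default templates are queryable.
#
#     from backend.database import SessionLocal
#     from backend.models import ProjectType
#
#     db = SessionLocal()
#     print(sorted(pt.id for pt in db.query(ProjectType).all()))
#     # ['combined', 'sound_monitoring', 'vibration_monitoring']
#     db.close()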
764 backend/main.py Normal file
@@ -0,0 +1,764 @@
import os
import logging
from fastapi import FastAPI, Request, Depends, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, FileResponse, JSONResponse
from fastapi.exceptions import RequestValidationError
from sqlalchemy.orm import Session
from typing import List, Dict, Optional
from pydantic import BaseModel

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

from backend.database import engine, Base, get_db
from backend.routers import roster, units, photos, roster_edit, roster_rename, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard, projects, project_locations, scheduler, modem_dashboard
from backend.services.snapshot import emit_status_snapshot
from backend.models import IgnoredUnit
from backend.utils.timezone import get_user_timezone

# Create database tables
Base.metadata.create_all(bind=engine)

# Read environment (development or production)
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")

# Initialize FastAPI app
VERSION = "0.5.1"
app = FastAPI(
    title="Seismo Fleet Manager",
    description="Backend API for managing seismograph fleet status",
    version=VERSION
)

# Add validation error handler to log details
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
    logger.error(f"Validation error on {request.url}: {exc.errors()}")
    logger.error(f"Body: {await request.body()}")
    return JSONResponse(
        status_code=400,
        content={"detail": exc.errors()}
    )

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Mount static files
app.mount("/static", StaticFiles(directory="backend/static"), name="static")

# Use shared templates configuration with timezone filters
from backend.templates_config import templates

# Add custom context processor to inject environment variable into all templates
@app.middleware("http")
async def add_environment_to_context(request: Request, call_next):
    """Middleware to add environment variable to request state"""
    request.state.environment = ENVIRONMENT
    response = await call_next(request)
    return response

# Override TemplateResponse to include environment and version in context
original_template_response = templates.TemplateResponse
def custom_template_response(name, context=None, *args, **kwargs):
    if context is None:
        context = {}
    context["environment"] = ENVIRONMENT
    context["version"] = VERSION
    return original_template_response(name, context, *args, **kwargs)
templates.TemplateResponse = custom_template_response

# Include API routers
app.include_router(roster.router)
app.include_router(units.router)
app.include_router(photos.router)
app.include_router(roster_edit.router)
app.include_router(roster_rename.router)
app.include_router(dashboard.router)
app.include_router(dashboard_tabs.router)
app.include_router(activity.router)
app.include_router(slmm.router)
app.include_router(slm_ui.router)
app.include_router(slm_dashboard.router)
app.include_router(seismo_dashboard.router)
app.include_router(modem_dashboard.router)

from backend.routers import settings
app.include_router(settings.router)

# Projects system routers
app.include_router(projects.router)
app.include_router(project_locations.router)
app.include_router(scheduler.router)

# Report templates router
from backend.routers import report_templates
app.include_router(report_templates.router)

# Alerts router
from backend.routers import alerts
app.include_router(alerts.router)

# Recurring schedules router
from backend.routers import recurring_schedules
app.include_router(recurring_schedules.router)

# Start scheduler service and device status monitor on application startup
from backend.services.scheduler import start_scheduler, stop_scheduler
from backend.services.device_status_monitor import start_device_status_monitor, stop_device_status_monitor

@app.on_event("startup")
async def startup_event():
    """Initialize services on app startup"""
    logger.info("Starting scheduler service...")
    await start_scheduler()
    logger.info("Scheduler service started")

    logger.info("Starting device status monitor...")
    await start_device_status_monitor()
    logger.info("Device status monitor started")

@app.on_event("shutdown")
def shutdown_event():
    """Clean up services on app shutdown"""
    logger.info("Stopping device status monitor...")
    stop_device_status_monitor()
    logger.info("Device status monitor stopped")

    logger.info("Stopping scheduler service...")
    stop_scheduler()
    logger.info("Scheduler service stopped")


# Legacy routes from the original backend
from backend import routes as legacy_routes
app.include_router(legacy_routes.router)


# HTML page routes
@app.get("/", response_class=HTMLResponse)
async def dashboard(request: Request):
    """Dashboard home page"""
    return templates.TemplateResponse("dashboard.html", {"request": request})


@app.get("/roster", response_class=HTMLResponse)
async def roster_page(request: Request):
    """Fleet roster page"""
    return templates.TemplateResponse("roster.html", {"request": request})


@app.get("/unit/{unit_id}", response_class=HTMLResponse)
async def unit_detail_page(request: Request, unit_id: str):
    """Unit detail page"""
    return templates.TemplateResponse("unit_detail.html", {
        "request": request,
        "unit_id": unit_id
    })


@app.get("/settings", response_class=HTMLResponse)
async def settings_page(request: Request):
    """Settings page for roster management"""
    return templates.TemplateResponse("settings.html", {"request": request})


@app.get("/sound-level-meters", response_class=HTMLResponse)
async def sound_level_meters_page(request: Request):
    """Sound Level Meters management dashboard"""
    return templates.TemplateResponse("sound_level_meters.html", {"request": request})


@app.get("/slm/{unit_id}", response_class=HTMLResponse)
async def slm_legacy_dashboard(
    request: Request,
    unit_id: str,
    from_project: Optional[str] = None,
    from_nrl: Optional[str] = None,
    db: Session = Depends(get_db)
):
    """Legacy SLM control center dashboard for a specific unit"""
    # Get project details if from_project is provided
    project = None
    if from_project:
        from backend.models import Project
        project = db.query(Project).filter_by(id=from_project).first()

    # Get NRL location details if from_nrl is provided
    nrl_location = None
    if from_nrl:
        from backend.models import NRLLocation
        nrl_location = db.query(NRLLocation).filter_by(id=from_nrl).first()

    return templates.TemplateResponse("slm_legacy_dashboard.html", {
        "request": request,
        "unit_id": unit_id,
        "from_project": from_project,
        "from_nrl": from_nrl,
        "project": project,
        "nrl_location": nrl_location
    })


@app.get("/seismographs", response_class=HTMLResponse)
async def seismographs_page(request: Request):
    """Seismographs management dashboard"""
    return templates.TemplateResponse("seismographs.html", {"request": request})


@app.get("/modems", response_class=HTMLResponse)
async def modems_page(request: Request):
    """Field modems management dashboard"""
    return templates.TemplateResponse("modems.html", {"request": request})


@app.get("/pair-devices", response_class=HTMLResponse)
async def pair_devices_page(request: Request, db: Session = Depends(get_db)):
    """
    Device pairing page - two-column layout for pairing recorders with modems.
    """
    from backend.models import RosterUnit

    # Get all non-retired recorders (seismographs and SLMs)
    recorders = db.query(RosterUnit).filter(
        RosterUnit.retired == False,
        RosterUnit.device_type.in_(["seismograph", "slm", None])  # None defaults to seismograph
    ).order_by(RosterUnit.id).all()

    # Get all non-retired modems
    modems = db.query(RosterUnit).filter(
        RosterUnit.retired == False,
        RosterUnit.device_type == "modem"
    ).order_by(RosterUnit.id).all()

    # Build existing pairings list
    pairings = []
    for recorder in recorders:
        if recorder.deployed_with_modem_id:
            modem = next((m for m in modems if m.id == recorder.deployed_with_modem_id), None)
            pairings.append({
                "recorder_id": recorder.id,
                "recorder_type": (recorder.device_type or "seismograph").upper(),
                "modem_id": recorder.deployed_with_modem_id,
                "modem_ip": modem.ip_address if modem else None
            })

    # Convert to dicts for template
    recorders_data = [
        {
            "id": r.id,
            "device_type": r.device_type or "seismograph",
            "deployed": r.deployed,
            "deployed_with_modem_id": r.deployed_with_modem_id
        }
        for r in recorders
    ]

    modems_data = [
        {
            "id": m.id,
            "deployed": m.deployed,
            "deployed_with_unit_id": m.deployed_with_unit_id,
            "ip_address": m.ip_address,
            "phone_number": m.phone_number
        }
        for m in modems
    ]

    return templates.TemplateResponse("pair_devices.html", {
        "request": request,
        "recorders": recorders_data,
        "modems": modems_data,
        "pairings": pairings
    })


@app.get("/projects", response_class=HTMLResponse)
async def projects_page(request: Request):
    """Projects management and overview"""
    return templates.TemplateResponse("projects/overview.html", {"request": request})


@app.get("/projects/{project_id}", response_class=HTMLResponse)
async def project_detail_page(request: Request, project_id: str):
    """Project detail dashboard"""
    return templates.TemplateResponse("projects/detail.html", {
        "request": request,
        "project_id": project_id
    })


@app.get("/projects/{project_id}/nrl/{location_id}", response_class=HTMLResponse)
async def nrl_detail_page(
    request: Request,
    project_id: str,
    location_id: str,
    db: Session = Depends(get_db)
):
    """NRL (Noise Recording Location) detail page with tabs"""
    from backend.models import Project, MonitoringLocation, UnitAssignment, RosterUnit, RecordingSession, DataFile
    from sqlalchemy import and_

    # Get project
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        return templates.TemplateResponse("404.html", {
            "request": request,
            "message": "Project not found"
        }, status_code=404)

    # Get location
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id
    ).first()

    if not location:
        return templates.TemplateResponse("404.html", {
            "request": request,
            "message": "Location not found"
        }, status_code=404)

    # Get active assignment
    assignment = db.query(UnitAssignment).filter(
        and_(
            UnitAssignment.location_id == location_id,
            UnitAssignment.status == "active"
        )
    ).first()

    assigned_unit = None
    if assignment:
        assigned_unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()

    # Get session count
    session_count = db.query(RecordingSession).filter_by(location_id=location_id).count()

    # Get file count (DataFile links to session, not directly to location)
    file_count = db.query(DataFile).join(
        RecordingSession,
        DataFile.session_id == RecordingSession.id
    ).filter(RecordingSession.location_id == location_id).count()

    # Check for active session
    active_session = db.query(RecordingSession).filter(
        and_(
            RecordingSession.location_id == location_id,
            RecordingSession.status == "recording"
        )
    ).first()

    return templates.TemplateResponse("nrl_detail.html", {
        "request": request,
        "project_id": project_id,
        "location_id": location_id,
        "project": project,
        "location": location,
        "assignment": assignment,
        "assigned_unit": assigned_unit,
        "session_count": session_count,
        "file_count": file_count,
        "active_session": active_session,
    })


# ===== PWA ROUTES =====

@app.get("/sw.js")
async def service_worker():
    """Serve service worker with proper headers for PWA"""
    return FileResponse(
        "backend/static/sw.js",
        media_type="application/javascript",
        headers={
            "Service-Worker-Allowed": "/",
            "Cache-Control": "no-cache"
        }
    )


@app.get("/offline-db.js")
async def offline_db_script():
    """Serve offline database script"""
    return FileResponse(
        "backend/static/offline-db.js",
        media_type="application/javascript",
        headers={"Cache-Control": "no-cache"}
    )


# Pydantic models for sync edits
class EditItem(BaseModel):
    id: int
    unitId: str
    changes: Dict
    timestamp: int


class SyncEditsRequest(BaseModel):
    edits: List[EditItem]
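
# Example payload accepted by POST /api/sync-edits (illustrative; not part of
# the original file). Field names mirror the Pydantic models above; the unit
# ID, changes, and timestamp are hypothetical:
#
#     {
#         "edits": [
#             {
#                 "id": 1,
#                 "unitId": "BE1234",
#                 "changes": {"deployed": "true", "note": "Moved to north fence"},
#                 "timestamp": 1737993600000
#             }
#         ]
#     }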
@app.post("/api/sync-edits")
|
||||
async def sync_edits(request: SyncEditsRequest, db: Session = Depends(get_db)):
|
||||
"""Process offline edit queue and sync to database"""
|
||||
from backend.models import RosterUnit
|
||||
|
||||
results = []
|
||||
synced_ids = []
|
||||
|
||||
for edit in request.edits:
|
||||
try:
|
||||
# Find the unit
|
||||
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
|
||||
|
||||
if not unit:
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": f"Unit {edit.unitId} not found"
|
||||
})
|
||||
continue
|
||||
|
||||
# Apply changes
|
||||
for key, value in edit.changes.items():
|
||||
if hasattr(unit, key):
|
||||
# Handle boolean conversions
|
||||
if key in ['deployed', 'retired']:
|
||||
setattr(unit, key, value in ['true', True, 'True', '1', 1])
|
||||
else:
|
||||
setattr(unit, key, value if value != '' else None)
|
||||
|
||||
db.commit()
|
||||
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "success"
|
||||
})
|
||||
synced_ids.append(edit.id)
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": str(e)
|
||||
})
|
||||
|
||||
synced_count = len(synced_ids)
|
||||
|
||||
return JSONResponse({
|
||||
"synced": synced_count,
|
||||
"total": len(request.edits),
|
||||
"synced_ids": synced_ids,
|
||||
"results": results
|
||||
})


@app.get("/partials/roster-deployed", response_class=HTMLResponse)
async def roster_deployed_partial(request: Request):
    """Partial template for deployed units tab"""
    from datetime import datetime
    snapshot = emit_status_snapshot()

    units_list = []
    for unit_id, unit_data in snapshot["active"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data.get("status", "Unknown"),
            "age": unit_data.get("age", "N/A"),
            "last_seen": unit_data.get("last", "Never"),
            "deployed": unit_data.get("deployed", False),
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "project_id": unit_data.get("project_id", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Sort by status priority (Missing > Pending > OK) then by ID
    status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
    units_list.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))

    return templates.TemplateResponse("partials/roster_table.html", {
        "request": request,
        "units": units_list,
        "timestamp": datetime.now().strftime("%H:%M:%S")
    })


@app.get("/partials/roster-benched", response_class=HTMLResponse)
async def roster_benched_partial(request: Request):
    """Partial template for benched units tab"""
    from datetime import datetime
    snapshot = emit_status_snapshot()

    units_list = []
    for unit_id, unit_data in snapshot["benched"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data.get("status", "N/A"),
            "age": unit_data.get("age", "N/A"),
            "last_seen": unit_data.get("last", "Never"),
            "deployed": unit_data.get("deployed", False),
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "project_id": unit_data.get("project_id", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Sort by ID
    units_list.sort(key=lambda x: x["id"])

    return templates.TemplateResponse("partials/roster_table.html", {
        "request": request,
        "units": units_list,
        "timestamp": datetime.now().strftime("%H:%M:%S")
    })


@app.get("/partials/roster-retired", response_class=HTMLResponse)
async def roster_retired_partial(request: Request):
    """Partial template for retired units tab"""
    from datetime import datetime
    snapshot = emit_status_snapshot()

    units_list = []
    for unit_id, unit_data in snapshot["retired"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "last_seen": unit_data["last"],
            "deployed": unit_data["deployed"],
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Sort by ID
    units_list.sort(key=lambda x: x["id"])

    return templates.TemplateResponse("partials/retired_table.html", {
        "request": request,
        "units": units_list,
        "timestamp": datetime.now().strftime("%H:%M:%S")
    })


@app.get("/partials/roster-ignored", response_class=HTMLResponse)
async def roster_ignored_partial(request: Request, db: Session = Depends(get_db)):
    """Partial template for ignored units tab"""
    from datetime import datetime

    ignored = db.query(IgnoredUnit).all()
    ignored_list = []
    for unit in ignored:
        ignored_list.append({
            "id": unit.id,
            "reason": unit.reason or "",
            "ignored_at": unit.ignored_at.strftime("%Y-%m-%d %H:%M:%S") if unit.ignored_at else "Unknown"
        })

    # Sort by ID
    ignored_list.sort(key=lambda x: x["id"])

    return templates.TemplateResponse("partials/ignored_table.html", {
        "request": request,
        "ignored_units": ignored_list,
        "timestamp": datetime.now().strftime("%H:%M:%S")
    })


@app.get("/partials/unknown-emitters", response_class=HTMLResponse)
async def unknown_emitters_partial(request: Request):
    """Partial template for unknown emitters (HTMX)"""
    snapshot = emit_status_snapshot()

    unknown_list = []
    for unit_id, unit_data in snapshot.get("unknown", {}).items():
        unknown_list.append({
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "fname": unit_data.get("fname", ""),
        })

    # Sort by ID
    unknown_list.sort(key=lambda x: x["id"])

    return templates.TemplateResponse("partials/unknown_emitters.html", {
        "request": request,
        "unknown_units": unknown_list
    })


@app.get("/partials/devices-all", response_class=HTMLResponse)
async def devices_all_partial(request: Request):
    """Unified partial template for ALL devices with comprehensive filtering support"""
    from datetime import datetime
    snapshot = emit_status_snapshot()

    units_list = []

    # Add deployed/active units
    for unit_id, unit_data in snapshot["active"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data.get("status", "Unknown"),
            "age": unit_data.get("age", "N/A"),
            "last_seen": unit_data.get("last", "Never"),
            "deployed": True,
            "retired": False,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "project_id": unit_data.get("project_id", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Add benched units
    for unit_id, unit_data in snapshot["benched"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data.get("status", "N/A"),
            "age": unit_data.get("age", "N/A"),
            "last_seen": unit_data.get("last", "Never"),
            "deployed": False,
            "retired": False,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "project_id": unit_data.get("project_id", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Add retired units
    for unit_id, unit_data in snapshot["retired"].items():
        units_list.append({
            "id": unit_id,
            "status": "Retired",
            "age": "N/A",
            "last_seen": "N/A",
            "deployed": False,
            "retired": True,
            "ignored": False,
            "note": unit_data.get("note", ""),
            "device_type": unit_data.get("device_type", "seismograph"),
            "address": unit_data.get("address", ""),
            "coordinates": unit_data.get("coordinates", ""),
            "project_id": unit_data.get("project_id", ""),
            "last_calibrated": unit_data.get("last_calibrated"),
            "next_calibration_due": unit_data.get("next_calibration_due"),
            "deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
            "deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
            "ip_address": unit_data.get("ip_address"),
            "phone_number": unit_data.get("phone_number"),
            "hardware_model": unit_data.get("hardware_model"),
        })

    # Add ignored units
    for unit_id, unit_data in snapshot.get("ignored", {}).items():
        units_list.append({
            "id": unit_id,
            "status": "Ignored",
            "age": "N/A",
            "last_seen": "N/A",
            "deployed": False,
            "retired": False,
            "ignored": True,
            "note": unit_data.get("note", unit_data.get("reason", "")),
            "device_type": unit_data.get("device_type", "unknown"),
            "address": "",
            "coordinates": "",
            "project_id": "",
            "last_calibrated": None,
            "next_calibration_due": None,
            "deployed_with_modem_id": None,
            "deployed_with_unit_id": None,
            "ip_address": None,
            "phone_number": None,
            "hardware_model": None,
        })

    # Sort by status category, then by ID
    def sort_key(unit):
        # Priority: deployed (active) -> benched -> retired -> ignored
        if unit["deployed"]:
            return (0, unit["id"])
        elif not unit["retired"] and not unit["ignored"]:
            return (1, unit["id"])
        elif unit["retired"]:
            return (2, unit["id"])
        else:
            return (3, unit["id"])

    units_list.sort(key=sort_key)

    return templates.TemplateResponse("partials/devices_table.html", {
        "request": request,
        "units": units_list,
        "timestamp": datetime.now().strftime("%H:%M:%S"),
        "user_timezone": get_user_timezone()
    })
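
To make the ordering concrete, here is a small standalone sketch (illustrative only, toy data) of how sort_key groups a mixed list: deployed first, then benched, then retired, then ignored, each sub-sorted by ID:

    # Minimal reproduction of the sort used above.
    units = [
        {"id": "B2", "deployed": False, "retired": True,  "ignored": False},
        {"id": "A1", "deployed": True,  "retired": False, "ignored": False},
        {"id": "C3", "deployed": False, "retired": False, "ignored": True},
        {"id": "D4", "deployed": False, "retired": False, "ignored": False},  # benched
    ]

    def sort_key(unit):
        if unit["deployed"]:
            return (0, unit["id"])
        elif not unit["retired"] and not unit["ignored"]:
            return (1, unit["id"])
        elif unit["retired"]:
            return (2, unit["id"])
        return (3, unit["id"])

    print([u["id"] for u in sorted(units, key=sort_key)])  # ['A1', 'D4', 'B2', 'C3']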


@app.get("/health")
def health_check():
    """Health check endpoint"""
    return {
        "message": f"Seismo Fleet Manager v{VERSION}",
        "status": "running",
        "version": VERSION
    }


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)
67
backend/migrate_add_auto_increment_index.py
Normal file
@@ -0,0 +1,67 @@
"""
Migration: Add auto_increment_index column to recurring_schedules table

This migration adds the auto_increment_index column that controls whether
the scheduler should automatically find an unused store index before starting
a new measurement.

Run this script once to update existing databases:
    python -m backend.migrate_add_auto_increment_index
"""

import sqlite3
import os

DB_PATH = "data/seismo_fleet.db"


def migrate():
    """Add auto_increment_index column to recurring_schedules table."""
    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        return False

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    try:
        # Check if recurring_schedules table exists
        cursor.execute("""
            SELECT name FROM sqlite_master
            WHERE type='table' AND name='recurring_schedules'
        """)
        if not cursor.fetchone():
            print("recurring_schedules table does not exist yet. Will be created on app startup.")
            conn.close()
            return True

        # Check if auto_increment_index column already exists
        cursor.execute("PRAGMA table_info(recurring_schedules)")
        columns = [row[1] for row in cursor.fetchall()]

        if "auto_increment_index" in columns:
            print("auto_increment_index column already exists in recurring_schedules table.")
            conn.close()
            return True

        # Add the column
        print("Adding auto_increment_index column to recurring_schedules table...")
        cursor.execute("""
            ALTER TABLE recurring_schedules
            ADD COLUMN auto_increment_index BOOLEAN DEFAULT 1
        """)
        conn.commit()
        print("Successfully added auto_increment_index column.")

        conn.close()
        return True

    except Exception as e:
        print(f"Migration failed: {e}")
        conn.close()
        return False


if __name__ == "__main__":
    success = migrate()
    exit(0 if success else 1)
84
backend/migrate_add_deployment_type.py
Normal file
@@ -0,0 +1,84 @@
"""
Migration script to add deployment_type and deployed_with_unit_id fields to roster table.

deployment_type: tracks what type of device a modem is deployed with:
- "seismograph" - Modem is connected to a seismograph
- "slm" - Modem is connected to a sound level meter
- NULL/empty - Not assigned or unknown

deployed_with_unit_id: stores the ID of the seismograph/SLM this modem is deployed with
(reverse relationship of deployed_with_modem_id)

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"


def migrate_database():
    """Add deployment_type and deployed_with_unit_id columns to roster table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if roster table exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='roster'")
    table_exists = cursor.fetchone()

    if not table_exists:
        print("Roster table does not exist yet - will be created when app runs")
        conn.close()
        return

    # Check existing columns
    cursor.execute("PRAGMA table_info(roster)")
    columns = [col[1] for col in cursor.fetchall()]

    try:
        # Add deployment_type if not exists
        if 'deployment_type' not in columns:
            print("Adding deployment_type column to roster table...")
            cursor.execute("ALTER TABLE roster ADD COLUMN deployment_type TEXT")
            print("  Added deployment_type column")

            cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployment_type ON roster(deployment_type)")
            print("  Created index on deployment_type")
        else:
            print("deployment_type column already exists")

        # Add deployed_with_unit_id if not exists
        if 'deployed_with_unit_id' not in columns:
            print("Adding deployed_with_unit_id column to roster table...")
            cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_unit_id TEXT")
            print("  Added deployed_with_unit_id column")

            cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployed_with_unit_id ON roster(deployed_with_unit_id)")
            print("  Created index on deployed_with_unit_id")
        else:
            print("deployed_with_unit_id column already exists")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
84
backend/migrate_add_device_types.py
Normal file
@@ -0,0 +1,84 @@
"""
Migration script to add device type support to the roster table.

This adds columns for:
- device_type (seismograph/modem discriminator)
- Seismograph-specific fields (calibration dates, modem pairing)
- Modem-specific fields (IP address, phone number, hardware model)

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"

def migrate_database():
    """Add new columns to the roster table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if device_type column already exists
    cursor.execute("PRAGMA table_info(roster)")
    columns = [col[1] for col in cursor.fetchall()]

    if "device_type" in columns:
        print("Migration already applied - device_type column exists")
        conn.close()
        return

    print("Adding new columns to roster table...")

    try:
        # Add device type discriminator
        cursor.execute("ALTER TABLE roster ADD COLUMN device_type TEXT DEFAULT 'seismograph'")
        print("  ✓ Added device_type column")

        # Add seismograph-specific fields
        cursor.execute("ALTER TABLE roster ADD COLUMN last_calibrated DATE")
        print("  ✓ Added last_calibrated column")

        cursor.execute("ALTER TABLE roster ADD COLUMN next_calibration_due DATE")
        print("  ✓ Added next_calibration_due column")

        cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_modem_id TEXT")
        print("  ✓ Added deployed_with_modem_id column")

        # Add modem-specific fields
        cursor.execute("ALTER TABLE roster ADD COLUMN ip_address TEXT")
        print("  ✓ Added ip_address column")

        cursor.execute("ALTER TABLE roster ADD COLUMN phone_number TEXT")
        print("  ✓ Added phone_number column")

        cursor.execute("ALTER TABLE roster ADD COLUMN hardware_model TEXT")
        print("  ✓ Added hardware_model column")

        # Set all existing units to seismograph type
        cursor.execute("UPDATE roster SET device_type = 'seismograph' WHERE device_type IS NULL")
        print("  ✓ Set existing units to seismograph type")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
80
backend/migrate_add_project_number.py
Normal file
@@ -0,0 +1,80 @@
"""
Migration script to add project_number field to projects table.

This adds a new column for TMI internal project numbering:
- Format: xxxx-YY (e.g., "2567-23")
- xxxx = incremental project number
- YY = year project was started

Combined with client_name and name (project/site name), this enables
smart searching across all project identifiers.

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"


def migrate_database():
    """Add project_number column to projects table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if projects table exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='projects'")
    table_exists = cursor.fetchone()

    if not table_exists:
        print("Projects table does not exist yet - will be created when app runs")
        conn.close()
        return

    # Check if project_number column already exists
    cursor.execute("PRAGMA table_info(projects)")
    columns = [col[1] for col in cursor.fetchall()]

    if 'project_number' in columns:
        print("Migration already applied - project_number column exists")
        conn.close()
        return

    print("Adding project_number column to projects table...")

    try:
        cursor.execute("ALTER TABLE projects ADD COLUMN project_number TEXT")
        print("  Added project_number column")

        # Create index for faster searching
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_project_number ON projects(project_number)")
        print("  Created index on project_number")

        # Also add index on client_name if it doesn't exist
        cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_client_name ON projects(client_name)")
        print("  Created index on client_name")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
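
Since the xxxx-YY format doubles as a search key, a validation sketch may help (illustrative only; the exact digit widths are an assumption based on the "2567-23" example, not something this migration enforces):

    import re

    # "xxxx-YY": incremental project number, dash, two-digit start year (assumed widths).
    PROJECT_NUMBER_RE = re.compile(r"^\d{4}-\d{2}$")

    assert PROJECT_NUMBER_RE.match("2567-23")
    assert not PROJECT_NUMBER_RE.match("2567-2023")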
88
backend/migrate_add_report_templates.py
Normal file
@@ -0,0 +1,88 @@
"""
Migration script to add report_templates table.

This creates a new table for storing report generation configurations:
- Template name and project association
- Time filtering settings (start/end time)
- Date range filtering (optional)
- Report title defaults

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"

def migrate_database():
    """Create report_templates table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if report_templates table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='report_templates'")
    table_exists = cursor.fetchone()

    if table_exists:
        print("Migration already applied - report_templates table exists")
        conn.close()
        return

    print("Creating report_templates table...")

    try:
        cursor.execute("""
            CREATE TABLE report_templates (
                id TEXT PRIMARY KEY,
                name TEXT NOT NULL,
                project_id TEXT,
                report_title TEXT DEFAULT 'Background Noise Study',
                start_time TEXT,
                end_time TEXT,
                start_date TEXT,
                end_date TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        print("  ✓ Created report_templates table")

        # Insert default templates
        import uuid

        default_templates = [
            (str(uuid.uuid4()), "Nighttime (7PM-7AM)", None, "Background Noise Study", "19:00", "07:00", None, None),
            (str(uuid.uuid4()), "Daytime (7AM-7PM)", None, "Background Noise Study", "07:00", "19:00", None, None),
            (str(uuid.uuid4()), "Full Day (All Data)", None, "Background Noise Study", None, None, None, None),
        ]

        cursor.executemany("""
            INSERT INTO report_templates (id, name, project_id, report_title, start_time, end_time, start_date, end_date)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """, default_templates)
        print("  ✓ Inserted default templates (Nighttime, Daytime, Full Day)")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
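
One subtlety any consumer of these templates has to handle: the Nighttime preset (19:00 to 07:00) wraps past midnight, so a naive start <= t <= end check fails. A minimal sketch of the filter logic, assuming the HH:MM strings have been parsed to datetime.time (illustrative, not code from this commit):

    from datetime import time

    def in_window(t: time, start: time, end: time) -> bool:
        """True if t falls in [start, end), wrapping past midnight when start > end."""
        if start <= end:
            return start <= t < end
        return t >= start or t < end  # overnight window, e.g. 19:00-07:00

    assert in_window(time(23, 0), time(19, 0), time(7, 0))      # nighttime hit
    assert not in_window(time(12, 0), time(19, 0), time(7, 0))  # midday excluded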
78
backend/migrate_add_slm_fields.py
Normal file
@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Database migration: Add sound level meter fields to roster table.

Adds columns for sound_level_meter device type support.
"""

import sqlite3
from pathlib import Path

def migrate():
    """Add SLM fields to roster table if they don't exist."""

    # Try multiple possible database locations
    possible_paths = [
        Path("data/seismo_fleet.db"),
        Path("data/sfm.db"),
        Path("data/seismo.db"),
    ]

    db_path = None
    for path in possible_paths:
        if path.exists():
            db_path = path
            break

    if db_path is None:
        print(f"Database not found in any of: {[str(p) for p in possible_paths]}")
        print("A database created from models.py will include the new fields automatically.")
        return

    print(f"Using database: {db_path}")

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Check if columns already exist
    cursor.execute("PRAGMA table_info(roster)")
    existing_columns = {row[1] for row in cursor.fetchall()}

    new_columns = {
        "slm_host": "TEXT",
        "slm_tcp_port": "INTEGER",
        "slm_model": "TEXT",
        "slm_serial_number": "TEXT",
        "slm_frequency_weighting": "TEXT",
        "slm_time_weighting": "TEXT",
        "slm_measurement_range": "TEXT",
        "slm_last_check": "DATETIME",
    }

    migrations_applied = []

    for column_name, column_type in new_columns.items():
        if column_name not in existing_columns:
            try:
                cursor.execute(f"ALTER TABLE roster ADD COLUMN {column_name} {column_type}")
                migrations_applied.append(column_name)
                print(f"✓ Added column: {column_name} ({column_type})")
            except sqlite3.OperationalError as e:
                print(f"✗ Failed to add column {column_name}: {e}")
        else:
            print(f"○ Column already exists: {column_name}")

    conn.commit()
    conn.close()

    if migrations_applied:
        print(f"\n✓ Migration complete! Added {len(migrations_applied)} new columns.")
    else:
        print("\n○ No migration needed - all columns already exist.")

    print("\nSound level meter fields are now available in the roster table.")
    print("Note: Use device_type='slm' for Sound Level Meters. Legacy 'sound_level_meter' has been deprecated.")


if __name__ == "__main__":
    migrate()
78
backend/migrate_add_unit_history.py
Normal file
@@ -0,0 +1,78 @@
"""
Migration script to add unit history timeline support.

This creates the unit_history table to track all changes to units:
- Note changes (archived old notes, new notes)
- Deployment status changes (deployed/benched)
- Retired status changes
- Other field changes

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"

def migrate_database():
    """Create the unit_history table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if unit_history table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='unit_history'")
    if cursor.fetchone():
        print("Migration already applied - unit_history table exists")
        conn.close()
        return

    print("Creating unit_history table...")

    try:
        cursor.execute("""
            CREATE TABLE unit_history (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                unit_id TEXT NOT NULL,
                change_type TEXT NOT NULL,
                field_name TEXT,
                old_value TEXT,
                new_value TEXT,
                changed_at TIMESTAMP NOT NULL,
                source TEXT DEFAULT 'manual',
                notes TEXT
            )
        """)
        print("  ✓ Created unit_history table")

        # Create indexes for better query performance
        cursor.execute("CREATE INDEX idx_unit_history_unit_id ON unit_history(unit_id)")
        print("  ✓ Created index on unit_id")

        cursor.execute("CREATE INDEX idx_unit_history_changed_at ON unit_history(changed_at)")
        print("  ✓ Created index on changed_at")

        conn.commit()
        print("\nMigration completed successfully!")
        print("Units will now track their complete history of changes.")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
80
backend/migrate_add_user_preferences.py
Normal file
@@ -0,0 +1,80 @@
"""
Migration script to add user_preferences table.

This creates a new table for storing persistent user preferences:
- Display settings (timezone, theme, date format)
- Auto-refresh configuration
- Calibration defaults
- Status threshold customization

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"

def migrate_database():
    """Create user_preferences table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if user_preferences table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='user_preferences'")
    table_exists = cursor.fetchone()

    if table_exists:
        print("Migration already applied - user_preferences table exists")
        conn.close()
        return

    print("Creating user_preferences table...")

    try:
        cursor.execute("""
            CREATE TABLE user_preferences (
                id INTEGER PRIMARY KEY DEFAULT 1,
                timezone TEXT DEFAULT 'America/New_York',
                theme TEXT DEFAULT 'auto',
                auto_refresh_interval INTEGER DEFAULT 10,
                date_format TEXT DEFAULT 'MM/DD/YYYY',
                table_rows_per_page INTEGER DEFAULT 25,
                calibration_interval_days INTEGER DEFAULT 365,
                calibration_warning_days INTEGER DEFAULT 30,
                status_ok_threshold_hours INTEGER DEFAULT 12,
                status_pending_threshold_hours INTEGER DEFAULT 24,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        print("  ✓ Created user_preferences table")

        # Insert default preferences
        cursor.execute("""
            INSERT INTO user_preferences (id) VALUES (1)
        """)
        print("  ✓ Inserted default preferences")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
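
Because user_preferences is a single-row table keyed at id=1, callers typically read it through a get-or-create helper. A minimal sketch (illustrative; get_or_create_preferences is a hypothetical helper name, and it assumes a SessionLocal factory like the one the standardize-device-types migration below builds for itself):

    from backend.models import UserPreferences

    def get_or_create_preferences(db):
        """Return the singleton preferences row, creating it with defaults if absent."""
        prefs = db.query(UserPreferences).filter_by(id=1).first()
        if prefs is None:
            prefs = UserPreferences(id=1)  # column defaults fill in the rest
            db.add(prefs)
            db.commit()
        return prefs

    # Usage: prefs = get_or_create_preferences(db); prefs.timezone -> "America/New_York"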
106
backend/migrate_standardize_device_types.py
Normal file
@@ -0,0 +1,106 @@
"""
Database Migration: Standardize device_type values

This migration ensures all device_type values follow the official schema:
- "seismograph" - Seismic monitoring devices
- "modem" - Field modems and network equipment
- "slm" - Sound level meters (NL-43/NL-53)

Changes:
- Converts "sound_level_meter" → "slm"
- Safe to run multiple times (idempotent)
- No data loss

Usage:
    python backend/migrate_standardize_device_types.py
"""

import sys
import os

# Add parent directory to path so we can import backend modules
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# Database configuration
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)


def migrate():
    """Standardize device_type values in the database"""
    db = SessionLocal()

    try:
        print("=" * 70)
        print("Database Migration: Standardize device_type values")
        print("=" * 70)
        print()

        # Check for existing "sound_level_meter" values
        result = db.execute(
            text("SELECT COUNT(*) as count FROM roster WHERE device_type = 'sound_level_meter'")
        ).fetchone()

        count_to_migrate = result[0] if result else 0

        if count_to_migrate == 0:
            print("✓ No records need migration - all device_type values are already standardized")
            print()
            print("Current device_type distribution:")

            # Show distribution
            distribution = db.execute(
                text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
            ).fetchall()

            for row in distribution:
                device_type, count = row
                print(f"  - {device_type}: {count} units")

            print()
            print("Migration not needed.")
            return

        print(f"Found {count_to_migrate} record(s) with device_type='sound_level_meter'")
        print()
        print("Converting 'sound_level_meter' → 'slm'...")

        # Perform the migration
        db.execute(
            text("UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'")
        )
        db.commit()

        print(f"✓ Successfully migrated {count_to_migrate} record(s)")
        print()

        # Show final distribution
        print("Updated device_type distribution:")
        distribution = db.execute(
            text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
        ).fetchall()

        for row in distribution:
            device_type, count = row
            print(f"  - {device_type}: {count} units")

        print()
        print("=" * 70)
        print("Migration completed successfully!")
        print("=" * 70)

    except Exception as e:
        db.rollback()
        print(f"\n❌ Error during migration: {e}")
        print("\nRolling back changes...")
        raise
    finally:
        db.close()


if __name__ == "__main__":
    migrate()
404
backend/models.py
Normal file
@@ -0,0 +1,404 @@
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer
from datetime import datetime
from backend.database import Base


class Emitter(Base):
    __tablename__ = "emitters"

    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, nullable=False)
    last_seen = Column(DateTime, default=datetime.utcnow)
    last_file = Column(String, nullable=False)
    status = Column(String, nullable=False)
    notes = Column(String, nullable=True)


class RosterUnit(Base):
    """
    Roster table: represents our *intended assignment* of a unit.
    This is editable from the GUI.

    Supports multiple device types with type-specific fields:
    - "seismograph" - Seismic monitoring devices (default)
    - "modem" - Field modems and network equipment
    - "slm" - Sound level meters (NL-43/NL-53)
    """
    __tablename__ = "roster"

    # Core fields (all device types)
    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, default="series3")  # Backward compatibility
    device_type = Column(String, default="seismograph")  # "seismograph" | "modem" | "slm"
    deployed = Column(Boolean, default=True)
    retired = Column(Boolean, default=False)
    note = Column(String, nullable=True)
    project_id = Column(String, nullable=True)
    location = Column(String, nullable=True)  # Legacy field - use address/coordinates instead
    address = Column(String, nullable=True)  # Human-readable address
    coordinates = Column(String, nullable=True)  # Lat,Lon format: "34.0522,-118.2437"
    last_updated = Column(DateTime, default=datetime.utcnow)

    # Seismograph-specific fields (nullable for modems and SLMs)
    last_calibrated = Column(Date, nullable=True)
    next_calibration_due = Column(Date, nullable=True)

    # Modem assignment (shared by seismographs and SLMs)
    deployed_with_modem_id = Column(String, nullable=True)  # FK to another RosterUnit (device_type=modem)

    # Modem-specific fields (nullable for seismographs and SLMs)
    ip_address = Column(String, nullable=True)
    phone_number = Column(String, nullable=True)
    hardware_model = Column(String, nullable=True)
    deployment_type = Column(String, nullable=True)  # "seismograph" | "slm" - what type of device this modem is deployed with
    deployed_with_unit_id = Column(String, nullable=True)  # ID of seismograph/SLM this modem is deployed with

    # Sound Level Meter-specific fields (nullable for seismographs and modems)
    slm_host = Column(String, nullable=True)  # Device IP or hostname
    slm_tcp_port = Column(Integer, nullable=True)  # TCP control port (default 2255)
    slm_ftp_port = Column(Integer, nullable=True)  # FTP data retrieval port (default 21)
    slm_model = Column(String, nullable=True)  # NL-43, NL-53, etc.
    slm_serial_number = Column(String, nullable=True)  # Device serial number
    slm_frequency_weighting = Column(String, nullable=True)  # A, C, Z
    slm_time_weighting = Column(String, nullable=True)  # F (Fast), S (Slow), I (Impulse)
    slm_measurement_range = Column(String, nullable=True)  # e.g., "30-130 dB"
    slm_last_check = Column(DateTime, nullable=True)  # Last communication check


class IgnoredUnit(Base):
    """
    Ignored units: units that report but should be filtered out from unknown emitters.
    Used to suppress noise from old projects.
    """
    __tablename__ = "ignored_units"

    id = Column(String, primary_key=True, index=True)
    reason = Column(String, nullable=True)
    ignored_at = Column(DateTime, default=datetime.utcnow)


class UnitHistory(Base):
    """
    Unit history: complete timeline of changes to each unit.
    Tracks note changes, status changes, deployment/benched events, and more.
    """
    __tablename__ = "unit_history"

    id = Column(Integer, primary_key=True, autoincrement=True)
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id
    change_type = Column(String, nullable=False)  # note_change, deployed_change, retired_change, etc.
    field_name = Column(String, nullable=True)  # Which field changed
    old_value = Column(Text, nullable=True)  # Previous value
    new_value = Column(Text, nullable=True)  # New value
    changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
    source = Column(String, default="manual")  # manual, csv_import, telemetry, offline_sync
    notes = Column(Text, nullable=True)  # Optional reason/context for the change


class UserPreferences(Base):
    """
    User preferences: persistent storage for application settings.
    Single-row table (id=1) to store global user preferences.
    """
    __tablename__ = "user_preferences"

    id = Column(Integer, primary_key=True, default=1)
    timezone = Column(String, default="America/New_York")
    theme = Column(String, default="auto")  # auto, light, dark
    auto_refresh_interval = Column(Integer, default=10)  # seconds
    date_format = Column(String, default="MM/DD/YYYY")
    table_rows_per_page = Column(Integer, default=25)
    calibration_interval_days = Column(Integer, default=365)
    calibration_warning_days = Column(Integer, default=30)
    status_ok_threshold_hours = Column(Integer, default=12)
    status_pending_threshold_hours = Column(Integer, default=24)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


# ============================================================================
# Project Management System
# ============================================================================

class ProjectType(Base):
    """
    Project type templates: defines available project types and their capabilities.
    Pre-populated with: sound_monitoring, vibration_monitoring, combined.
    """
    __tablename__ = "project_types"

    id = Column(String, primary_key=True)  # sound_monitoring, vibration_monitoring, combined
    name = Column(String, nullable=False, unique=True)  # "Sound Monitoring", "Vibration Monitoring"
    description = Column(Text, nullable=True)
    icon = Column(String, nullable=True)  # Icon identifier for UI
    supports_sound = Column(Boolean, default=False)  # Enables SLM features
    supports_vibration = Column(Boolean, default=False)  # Enables seismograph features
    created_at = Column(DateTime, default=datetime.utcnow)


class Project(Base):
    """
    Projects: top-level organization for monitoring work.
    Type-aware to enable/disable features based on project_type_id.

    Project naming convention:
    - project_number: TMI internal ID format xxxx-YY (e.g., "2567-23")
    - client_name: Client/contractor name (e.g., "PJ Dick")
    - name: Project/site name (e.g., "RKM Hall", "CMU Campus")

    Display format: "2567-23 - PJ Dick - RKM Hall"
    Users can search by any of these fields.
    """
    __tablename__ = "projects"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_number = Column(String, nullable=True, index=True)  # TMI ID: xxxx-YY format (e.g., "2567-23")
    name = Column(String, nullable=False, unique=True)  # Project/site name (e.g., "RKM Hall")
    description = Column(Text, nullable=True)
    project_type_id = Column(String, nullable=False)  # FK to ProjectType.id
    status = Column(String, default="active")  # active, completed, archived

    # Project metadata
    client_name = Column(String, nullable=True, index=True)  # Client name (e.g., "PJ Dick")
    site_address = Column(String, nullable=True)
    site_coordinates = Column(String, nullable=True)  # "lat,lon"
    start_date = Column(Date, nullable=True)
    end_date = Column(Date, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class MonitoringLocation(Base):
    """
    Monitoring locations: generic location for monitoring activities.
    Can be NRL (Noise Recording Location) for sound projects,
    or monitoring point for vibration projects.
    """
    __tablename__ = "monitoring_locations"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_type = Column(String, nullable=False)  # "sound" | "vibration"

    name = Column(String, nullable=False)  # NRL-001, VP-North, etc.
    description = Column(Text, nullable=True)
    coordinates = Column(String, nullable=True)  # "lat,lon"
    address = Column(String, nullable=True)

    # Type-specific metadata stored as JSON
    # For sound: {"ambient_conditions": "urban", "expected_sources": ["traffic"]}
    # For vibration: {"ground_type": "bedrock", "depth": "10m"}
    location_metadata = Column(Text, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class UnitAssignment(Base):
    """
    Unit assignments: links devices (SLMs or seismographs) to monitoring locations.
    Supports temporary assignments with assigned_until.
    """
    __tablename__ = "unit_assignments"

    id = Column(String, primary_key=True, index=True)  # UUID
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id

    assigned_at = Column(DateTime, default=datetime.utcnow)
    assigned_until = Column(DateTime, nullable=True)  # Null = indefinite
    status = Column(String, default="active")  # active, completed, cancelled
    notes = Column(Text, nullable=True)

    # Denormalized for efficient queries
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id

    created_at = Column(DateTime, default=datetime.utcnow)


class ScheduledAction(Base):
    """
    Scheduled actions: automation for recording start/stop/download.
    Terra-View executes these by calling SLMM or SFM endpoints.
    """
    __tablename__ = "scheduled_actions"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=True, index=True)  # FK to RosterUnit.id (nullable if location-based)

    action_type = Column(String, nullable=False)  # start, stop, download, calibrate
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"

    scheduled_time = Column(DateTime, nullable=False, index=True)
    executed_at = Column(DateTime, nullable=True)
    execution_status = Column(String, default="pending")  # pending, completed, failed, cancelled

    # Response from device module (SLMM or SFM)
    module_response = Column(Text, nullable=True)  # JSON
    error_message = Column(Text, nullable=True)

    notes = Column(Text, nullable=True)
    created_at = Column(DateTime, default=datetime.utcnow)


class RecordingSession(Base):
    """
    Recording sessions: tracks actual monitoring sessions.
    Created when recording starts, updated when it stops.
    """
    __tablename__ = "recording_sessions"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id

    session_type = Column(String, nullable=False)  # sound | vibration
    started_at = Column(DateTime, nullable=False)
    stopped_at = Column(DateTime, nullable=True)
    duration_seconds = Column(Integer, nullable=True)
    status = Column(String, default="recording")  # recording, completed, failed

    # Snapshot of device configuration at recording time
    session_metadata = Column(Text, nullable=True)  # JSON

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class DataFile(Base):
    """
    Data files: references to recorded data files.
    Terra-View tracks file metadata; actual files stored in data/Projects/ directory.
    """
    __tablename__ = "data_files"

    id = Column(String, primary_key=True, index=True)  # UUID
    session_id = Column(String, nullable=False, index=True)  # FK to RecordingSession.id

    file_path = Column(String, nullable=False)  # Relative to data/Projects/
    file_type = Column(String, nullable=False)  # wav, csv, mseed, json
    file_size_bytes = Column(Integer, nullable=True)
    downloaded_at = Column(DateTime, nullable=True)
    checksum = Column(String, nullable=True)  # SHA256 or MD5

    # Additional file metadata
    file_metadata = Column(Text, nullable=True)  # JSON

    created_at = Column(DateTime, default=datetime.utcnow)


class ReportTemplate(Base):
    """
    Report templates: saved configurations for generating Excel reports.
    Allows users to save time filter presets, titles, etc. for reuse.
    """
    __tablename__ = "report_templates"

    id = Column(String, primary_key=True, index=True)  # UUID
    name = Column(String, nullable=False)  # "Nighttime Report", "Full Day Report"
    project_id = Column(String, nullable=True)  # Optional: project-specific template

    # Template settings
    report_title = Column(String, default="Background Noise Study")
    start_time = Column(String, nullable=True)  # "19:00" format
    end_time = Column(String, nullable=True)  # "07:00" format
    start_date = Column(String, nullable=True)  # "2025-01-15" format (optional)
    end_date = Column(String, nullable=True)  # "2025-01-20" format (optional)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


# ============================================================================
# Sound Monitoring Scheduler
# ============================================================================

class RecurringSchedule(Base):
    """
    Recurring schedule definitions for automated sound monitoring.

    Supports two schedule types:
    - "weekly_calendar": Select specific days with start/end times (e.g., Mon/Wed/Fri 7pm-7am)
    - "simple_interval": For 24/7 monitoring with daily stop/download/restart cycles
    """
    __tablename__ = "recurring_schedules"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=True, index=True)  # FK to RosterUnit.id (optional, can use assignment)

    name = Column(String, nullable=False)  # "Weeknight Monitoring", "24/7 Continuous"
    schedule_type = Column(String, nullable=False)  # "weekly_calendar" | "simple_interval"
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"

    # Weekly Calendar fields (schedule_type = "weekly_calendar")
    # JSON format: {
    #     "monday": {"enabled": true, "start": "19:00", "end": "07:00"},
    #     "tuesday": {"enabled": false},
    #     ...
    # }
    weekly_pattern = Column(Text, nullable=True)

    # Simple Interval fields (schedule_type = "simple_interval")
    interval_type = Column(String, nullable=True)  # "daily" | "hourly"
    cycle_time = Column(String, nullable=True)  # "00:00" - time to run stop/download/restart
    include_download = Column(Boolean, default=True)  # Download data before restart

    # Automation options (applies to both schedule types)
    auto_increment_index = Column(Boolean, default=True)  # Auto-increment store/index number before start
    # When True: prevents "overwrite data?" prompts by using a new index each time

    # Shared configuration
    enabled = Column(Boolean, default=True)
    timezone = Column(String, default="America/New_York")

    # Tracking
    last_generated_at = Column(DateTime, nullable=True)  # When actions were last generated
    next_occurrence = Column(DateTime, nullable=True)  # Computed next action time

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class Alert(Base):
    """
    In-app alerts for device status changes and system events.

    Designed for future expansion to email/webhook notifications.
    Currently supports:
    - device_offline: Device became unreachable
    - device_online: Device came back online
    - schedule_failed: Scheduled action failed to execute
    - schedule_completed: Scheduled action completed successfully
    """
    __tablename__ = "alerts"

    id = Column(String, primary_key=True, index=True)  # UUID

    # Alert classification
    alert_type = Column(String, nullable=False)  # "device_offline" | "device_online" | "schedule_failed" | "schedule_completed"
    severity = Column(String, default="warning")  # "info" | "warning" | "critical"

    # Related entities (nullable - may not all apply)
    project_id = Column(String, nullable=True, index=True)
    location_id = Column(String, nullable=True, index=True)
    unit_id = Column(String, nullable=True, index=True)
    schedule_id = Column(String, nullable=True)  # RecurringSchedule or ScheduledAction id

    # Alert content
    title = Column(String, nullable=False)  # "NRL-001 Device Offline"
    message = Column(Text, nullable=True)  # Detailed description
    alert_metadata = Column(Text, nullable=True)  # JSON: additional context data

    # Status tracking
    status = Column(String, default="active")  # "active" | "acknowledged" | "resolved" | "dismissed"
    acknowledged_at = Column(DateTime, nullable=True)
    resolved_at = Column(DateTime, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    expires_at = Column(DateTime, nullable=True)  # Auto-dismiss after this time
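
The weekly_pattern JSON documented above is enough to decide whether a given moment has an enabled monitoring window. A minimal sketch (illustrative only, not the scheduler's actual code; note that overnight windows such as 19:00-07:00 need the same wrap-around handling as the report templates):

    import json
    from datetime import datetime

    pattern = json.loads(
        '{"monday": {"enabled": true, "start": "19:00", "end": "07:00"},'
        ' "tuesday": {"enabled": false}}'
    )

    def day_window(pattern: dict, dt: datetime):
        """Return (start, end) strings for dt's weekday, or None if disabled/absent."""
        day = dt.strftime("%A").lower()  # "monday", "tuesday", ...
        cfg = pattern.get(day, {})
        if not cfg.get("enabled"):
            return None
        return cfg["start"], cfg["end"]

    print(day_window(pattern, datetime(2026, 1, 26)))  # a Monday -> ('19:00', '07:00')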
@@ -4,8 +4,8 @@ from sqlalchemy import desc
 from pathlib import Path
 from datetime import datetime, timedelta, timezone
 from typing import List, Dict, Any
-from app.seismo.database import get_db
-from app.seismo.models import UnitHistory, Emitter, RosterUnit
+from backend.database import get_db
+from backend.models import UnitHistory, Emitter, RosterUnit

 router = APIRouter(prefix="/api", tags=["activity"])

326
backend/routers/alerts.py
Normal file
@@ -0,0 +1,326 @@
"""
Alerts Router

API endpoints for managing in-app alerts.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from typing import Optional
from datetime import datetime, timedelta

from backend.database import get_db
from backend.models import Alert, RosterUnit
from backend.services.alert_service import get_alert_service
from backend.templates_config import templates

router = APIRouter(prefix="/api/alerts", tags=["alerts"])


# ============================================================================
# Alert List and Count
# ============================================================================

@router.get("/")
async def list_alerts(
    db: Session = Depends(get_db),
    status: Optional[str] = Query(None, description="Filter by status: active, acknowledged, resolved, dismissed"),
    project_id: Optional[str] = Query(None),
    unit_id: Optional[str] = Query(None),
    alert_type: Optional[str] = Query(None, description="Filter by type: device_offline, device_online, schedule_failed"),
    limit: int = Query(50, le=100),
    offset: int = Query(0, ge=0),
):
    """
    List alerts with optional filters.
    """
    alert_service = get_alert_service(db)

    alerts = alert_service.get_all_alerts(
        status=status,
        project_id=project_id,
        unit_id=unit_id,
        alert_type=alert_type,
        limit=limit,
        offset=offset,
    )

    return {
        "alerts": [
            {
                "id": a.id,
                "alert_type": a.alert_type,
                "severity": a.severity,
                "title": a.title,
                "message": a.message,
                "status": a.status,
                "unit_id": a.unit_id,
                "project_id": a.project_id,
                "location_id": a.location_id,
                "created_at": a.created_at.isoformat() if a.created_at else None,
                "acknowledged_at": a.acknowledged_at.isoformat() if a.acknowledged_at else None,
                "resolved_at": a.resolved_at.isoformat() if a.resolved_at else None,
            }
            for a in alerts
        ],
        "count": len(alerts),
        "limit": limit,
        "offset": offset,
    }


@router.get("/active")
async def list_active_alerts(
    db: Session = Depends(get_db),
    project_id: Optional[str] = Query(None),
    unit_id: Optional[str] = Query(None),
    alert_type: Optional[str] = Query(None),
    min_severity: Optional[str] = Query(None, description="Minimum severity: info, warning, critical"),
    limit: int = Query(50, le=100),
):
    """
    List only active alerts.
    """
    alert_service = get_alert_service(db)

    alerts = alert_service.get_active_alerts(
        project_id=project_id,
        unit_id=unit_id,
        alert_type=alert_type,
        min_severity=min_severity,
        limit=limit,
    )

    return {
        "alerts": [
            {
                "id": a.id,
                "alert_type": a.alert_type,
                "severity": a.severity,
                "title": a.title,
                "message": a.message,
                "unit_id": a.unit_id,
                "project_id": a.project_id,
                "created_at": a.created_at.isoformat() if a.created_at else None,
            }
            for a in alerts
        ],
        "count": len(alerts),
    }


@router.get("/active/count")
async def get_active_alert_count(db: Session = Depends(get_db)):
    """
    Get count of active alerts (for navbar badge).
    """
    alert_service = get_alert_service(db)
    count = alert_service.get_active_alert_count()
    return {"count": count}
|
||||
|
||||
# ============================================================================
|
||||
# Single Alert Operations
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/{alert_id}")
|
||||
async def get_alert(
|
||||
alert_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get a specific alert.
|
||||
"""
|
||||
alert = db.query(Alert).filter_by(id=alert_id).first()
|
||||
if not alert:
|
||||
raise HTTPException(status_code=404, detail="Alert not found")
|
||||
|
||||
# Get related unit info
|
||||
unit = None
|
||||
if alert.unit_id:
|
||||
unit = db.query(RosterUnit).filter_by(id=alert.unit_id).first()
|
||||
|
||||
return {
|
||||
"id": alert.id,
|
||||
"alert_type": alert.alert_type,
|
||||
"severity": alert.severity,
|
||||
"title": alert.title,
|
||||
"message": alert.message,
|
||||
"metadata": alert.alert_metadata,
|
||||
"status": alert.status,
|
||||
"unit_id": alert.unit_id,
|
||||
"unit_name": unit.id if unit else None,
|
||||
"project_id": alert.project_id,
|
||||
"location_id": alert.location_id,
|
||||
"schedule_id": alert.schedule_id,
|
||||
"created_at": alert.created_at.isoformat() if alert.created_at else None,
|
||||
"acknowledged_at": alert.acknowledged_at.isoformat() if alert.acknowledged_at else None,
|
||||
"resolved_at": alert.resolved_at.isoformat() if alert.resolved_at else None,
|
||||
"expires_at": alert.expires_at.isoformat() if alert.expires_at else None,
|
||||
}
|
||||
|
||||
|
||||
@router.post("/{alert_id}/acknowledge")
|
||||
async def acknowledge_alert(
|
||||
alert_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Mark alert as acknowledged.
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
alert = alert_service.acknowledge_alert(alert_id)
|
||||
|
||||
if not alert:
|
||||
raise HTTPException(status_code=404, detail="Alert not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"alert_id": alert.id,
|
||||
"status": alert.status,
|
||||
}
|
||||
|
||||
|
||||
@router.post("/{alert_id}/dismiss")
|
||||
async def dismiss_alert(
|
||||
alert_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Dismiss alert.
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
alert = alert_service.dismiss_alert(alert_id)
|
||||
|
||||
if not alert:
|
||||
raise HTTPException(status_code=404, detail="Alert not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"alert_id": alert.id,
|
||||
"status": alert.status,
|
||||
}
|
||||
|
||||
|
||||
@router.post("/{alert_id}/resolve")
|
||||
async def resolve_alert(
|
||||
alert_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Manually resolve an alert.
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
alert = alert_service.resolve_alert(alert_id)
|
||||
|
||||
if not alert:
|
||||
raise HTTPException(status_code=404, detail="Alert not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"alert_id": alert.id,
|
||||
"status": alert.status,
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# HTML Partials for HTMX
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/partials/dropdown", response_class=HTMLResponse)
|
||||
async def get_alert_dropdown(
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Return HTML partial for alert dropdown in navbar.
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
alerts = alert_service.get_active_alerts(limit=10)
|
||||
|
||||
# Calculate relative time for each alert
|
||||
now = datetime.utcnow()
|
||||
alerts_data = []
|
||||
for alert in alerts:
|
||||
delta = now - alert.created_at
|
||||
if delta.days > 0:
|
||||
time_ago = f"{delta.days}d ago"
|
||||
elif delta.seconds >= 3600:
|
||||
time_ago = f"{delta.seconds // 3600}h ago"
|
||||
elif delta.seconds >= 60:
|
||||
time_ago = f"{delta.seconds // 60}m ago"
|
||||
else:
|
||||
time_ago = "just now"
|
||||
|
||||
alerts_data.append({
|
||||
"alert": alert,
|
||||
"time_ago": time_ago,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/alerts/alert_dropdown.html", {
|
||||
"request": request,
|
||||
"alerts": alerts_data,
|
||||
"total_count": alert_service.get_active_alert_count(),
|
||||
})
|
||||
|
||||
|
||||
@router.get("/partials/list", response_class=HTMLResponse)
|
||||
async def get_alert_list(
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
status: Optional[str] = Query(None),
|
||||
limit: int = Query(20),
|
||||
):
|
||||
"""
|
||||
Return HTML partial for alert list page.
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
|
||||
if status:
|
||||
alerts = alert_service.get_all_alerts(status=status, limit=limit)
|
||||
else:
|
||||
alerts = alert_service.get_all_alerts(limit=limit)
|
||||
|
||||
# Calculate relative time for each alert
|
||||
now = datetime.utcnow()
|
||||
alerts_data = []
|
||||
for alert in alerts:
|
||||
delta = now - alert.created_at
|
||||
if delta.days > 0:
|
||||
time_ago = f"{delta.days}d ago"
|
||||
elif delta.seconds >= 3600:
|
||||
time_ago = f"{delta.seconds // 3600}h ago"
|
||||
elif delta.seconds >= 60:
|
||||
time_ago = f"{delta.seconds // 60}m ago"
|
||||
else:
|
||||
time_ago = "just now"
|
||||
|
||||
alerts_data.append({
|
||||
"alert": alert,
|
||||
"time_ago": time_ago,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/alerts/alert_list.html", {
|
||||
"request": request,
|
||||
"alerts": alerts_data,
|
||||
"status_filter": status,
|
||||
})
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Cleanup
|
||||
# ============================================================================
|
||||
|
||||
@router.post("/cleanup-expired")
|
||||
async def cleanup_expired_alerts(db: Session = Depends(get_db)):
|
||||
"""
|
||||
Cleanup expired alerts (admin/maintenance endpoint).
|
||||
"""
|
||||
alert_service = get_alert_service(db)
|
||||
count = alert_service.cleanup_expired_alerts()
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"cleaned_up": count,
|
||||
}
|
||||
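A hedged client sketch for the endpoints above (not part of the diff; the base URL is a placeholder and `requests` is an assumed dependency):

```python
# Hypothetical client walkthrough: badge count, then acknowledge warnings.
import requests

BASE = "http://localhost:8000/api/alerts"  # assumed deployment URL

# Badge count for the navbar
count = requests.get(f"{BASE}/active/count").json()["count"]
print(f"{count} active alerts")

# Page through active alerts and acknowledge each warning
page = requests.get(BASE + "/", params={"status": "active", "limit": 50}).json()
for alert in page["alerts"]:
    if alert["severity"] == "warning":
        requests.post(f"{BASE}/{alert['id']}/acknowledge").raise_for_status()
```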
97
backend/routers/dashboard.py
Normal file
@@ -0,0 +1,97 @@
from fastapi import APIRouter, Request, Depends
from sqlalchemy.orm import Session
from datetime import datetime, timedelta

from backend.database import get_db
from backend.models import ScheduledAction, MonitoringLocation, Project
from backend.services.snapshot import emit_status_snapshot
from backend.templates_config import templates
from backend.utils.timezone import utc_to_local, local_to_utc, get_user_timezone

router = APIRouter()


@router.get("/dashboard/active")
def dashboard_active(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/active_table.html",
        {"request": request, "units": snapshot["active"]}
    )


@router.get("/dashboard/benched")
def dashboard_benched(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/benched_table.html",
        {"request": request, "units": snapshot["benched"]}
    )


@router.get("/dashboard/todays-actions")
def dashboard_todays_actions(request: Request, db: Session = Depends(get_db)):
    """
    Get today's scheduled actions for the dashboard card.
    Shows upcoming, completed, and failed actions for today.
    """
    import json
    from zoneinfo import ZoneInfo

    # Get today's date range in local timezone
    tz = ZoneInfo(get_user_timezone())
    now_local = datetime.now(tz)
    today_start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
    today_end_local = today_start_local + timedelta(days=1)

    # Convert to UTC for database query
    today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
    today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)

    # Query today's actions
    actions = db.query(ScheduledAction).filter(
        ScheduledAction.scheduled_time >= today_start_utc,
        ScheduledAction.scheduled_time < today_end_utc,
    ).order_by(ScheduledAction.scheduled_time.asc()).all()

    # Enrich with location/project info and parse results
    enriched_actions = []
    for action in actions:
        location = None
        project = None
        if action.location_id:
            location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
        if action.project_id:
            project = db.query(Project).filter_by(id=action.project_id).first()

        # Parse module_response for result details
        result_data = None
        if action.module_response:
            try:
                result_data = json.loads(action.module_response)
            except json.JSONDecodeError:
                pass

        enriched_actions.append({
            "action": action,
            "location": location,
            "project": project,
            "result": result_data,
        })

    # Count by status
    pending_count = sum(1 for a in actions if a.execution_status == "pending")
    completed_count = sum(1 for a in actions if a.execution_status == "completed")
    failed_count = sum(1 for a in actions if a.execution_status == "failed")

    return templates.TemplateResponse(
        "partials/dashboard/todays_actions.html",
        {
            "request": request,
            "actions": enriched_actions,
            "pending_count": pending_count,
            "completed_count": completed_count,
            "failed_count": failed_count,
            "total_count": total_count if False else len(actions),
        }
    )
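The local-midnight-to-naive-UTC conversion is the subtle part of `dashboard_todays_actions`: `scheduled_time` is stored as naive UTC, so the local day window must be converted before comparison. A standalone illustration, assuming an `America/New_York` user timezone:

```python
# Standalone illustration of the day-window conversion used above.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")  # assumed user timezone
now_local = datetime.now(tz)
start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
end_local = start_local + timedelta(days=1)

# Naive UTC bounds, matching how scheduled_time is stored in the database
start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
end_utc = end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)

print(start_utc, "<= scheduled_time <", end_utc)
```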
@@ -2,8 +2,8 @@
 from fastapi import APIRouter, Depends
 from sqlalchemy.orm import Session

-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot

 router = APIRouter(prefix="/dashboard", tags=["dashboard-tabs"])
286
backend/routers/modem_dashboard.py
Normal file
@@ -0,0 +1,286 @@
"""
Modem Dashboard Router

Provides API endpoints for the Field Modems management page.
"""

from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from datetime import datetime
import subprocess
import time
import logging

from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/modem-dashboard", tags=["modem-dashboard"])


@router.get("/stats", response_class=HTMLResponse)
async def get_modem_stats(request: Request, db: Session = Depends(get_db)):
    """
    Get summary statistics for modem dashboard.
    Returns HTML partial with stat cards.
    """
    # Query all modems
    all_modems = db.query(RosterUnit).filter_by(device_type="modem").all()

    # Get IDs of modems that have devices paired to them
    paired_modem_ids = set()
    devices_with_modems = db.query(RosterUnit).filter(
        RosterUnit.deployed_with_modem_id.isnot(None),
        RosterUnit.retired == False
    ).all()
    for device in devices_with_modems:
        if device.deployed_with_modem_id:
            paired_modem_ids.add(device.deployed_with_modem_id)

    # Count categories
    total_count = len(all_modems)
    retired_count = sum(1 for m in all_modems if m.retired)

    # In use = deployed AND paired with a device
    in_use_count = sum(1 for m in all_modems
                       if m.deployed and not m.retired and m.id in paired_modem_ids)

    # Spare = deployed but NOT paired (available for assignment)
    spare_count = sum(1 for m in all_modems
                      if m.deployed and not m.retired and m.id not in paired_modem_ids)

    # Benched = not deployed and not retired
    benched_count = sum(1 for m in all_modems if not m.deployed and not m.retired)

    return templates.TemplateResponse("partials/modem_stats.html", {
        "request": request,
        "total_count": total_count,
        "in_use_count": in_use_count,
        "spare_count": spare_count,
        "benched_count": benched_count,
        "retired_count": retired_count
    })


@router.get("/units", response_class=HTMLResponse)
async def get_modem_units(
    request: Request,
    db: Session = Depends(get_db),
    search: str = Query(None),
    filter_status: str = Query(None),  # "in_use", "spare", "benched", "retired"
):
    """
    Get list of modem units for the dashboard.
    Returns HTML partial with modem cards.
    """
    query = db.query(RosterUnit).filter_by(device_type="modem")

    # Filter by search term if provided
    if search:
        search_term = f"%{search}%"
        query = query.filter(
            (RosterUnit.id.ilike(search_term)) |
            (RosterUnit.ip_address.ilike(search_term)) |
            (RosterUnit.hardware_model.ilike(search_term)) |
            (RosterUnit.phone_number.ilike(search_term)) |
            (RosterUnit.location.ilike(search_term))
        )

    modems = query.order_by(
        RosterUnit.retired.asc(),
        RosterUnit.deployed.desc(),
        RosterUnit.id.asc()
    ).all()

    # Get paired device info for each modem
    paired_devices = {}
    devices_with_modems = db.query(RosterUnit).filter(
        RosterUnit.deployed_with_modem_id.isnot(None),
        RosterUnit.retired == False
    ).all()
    for device in devices_with_modems:
        if device.deployed_with_modem_id:
            paired_devices[device.deployed_with_modem_id] = {
                "id": device.id,
                "device_type": device.device_type,
                "deployed": device.deployed
            }

    # Annotate modems with paired device info
    modem_list = []
    for modem in modems:
        paired = paired_devices.get(modem.id)

        # Determine status category
        if modem.retired:
            status = "retired"
        elif not modem.deployed:
            status = "benched"
        elif paired:
            status = "in_use"
        else:
            status = "spare"

        # Apply filter if specified
        if filter_status and status != filter_status:
            continue

        modem_list.append({
            "id": modem.id,
            "ip_address": modem.ip_address,
            "phone_number": modem.phone_number,
            "hardware_model": modem.hardware_model,
            "deployed": modem.deployed,
            "retired": modem.retired,
            "location": modem.location,
            "project_id": modem.project_id,
            "paired_device": paired,
            "status": status
        })

    return templates.TemplateResponse("partials/modem_list.html", {
        "request": request,
        "modems": modem_list
    })


@router.get("/{modem_id}/paired-device")
async def get_paired_device(modem_id: str, db: Session = Depends(get_db)):
    """
    Get the device (SLM/seismograph) that is paired with this modem.
    Returns JSON with device info or null if not paired.
    """
    # Check modem exists
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    # Find device paired with this modem
    device = db.query(RosterUnit).filter(
        RosterUnit.deployed_with_modem_id == modem_id,
        RosterUnit.retired == False
    ).first()

    if device:
        return {
            "paired": True,
            "device": {
                "id": device.id,
                "device_type": device.device_type,
                "deployed": device.deployed,
                "project_id": device.project_id,
                "location": device.location or device.address
            }
        }

    return {"paired": False, "device": None}


@router.get("/{modem_id}/paired-device-html", response_class=HTMLResponse)
async def get_paired_device_html(modem_id: str, request: Request, db: Session = Depends(get_db)):
    """
    Get HTML partial showing the device paired with this modem.
    Used by unit_detail.html for modems.
    """
    # Check modem exists
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
    if not modem:
        return HTMLResponse('<p class="text-red-500">Modem not found</p>')

    # Find device paired with this modem
    device = db.query(RosterUnit).filter(
        RosterUnit.deployed_with_modem_id == modem_id,
        RosterUnit.retired == False
    ).first()

    return templates.TemplateResponse("partials/modem_paired_device.html", {
        "request": request,
        "modem_id": modem_id,
        "device": device
    })


@router.get("/{modem_id}/ping")
async def ping_modem(modem_id: str, db: Session = Depends(get_db)):
    """
    Test modem connectivity with a simple ping.
    Returns response time and connection status.
    """
    # Get modem from database
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()

    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    if not modem.ip_address:
        return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}

    try:
        # Ping the modem (1 packet, 2 second timeout)
        start_time = time.time()
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", modem.ip_address],
            capture_output=True,
            text=True,
            timeout=3
        )
        response_time = int((time.time() - start_time) * 1000)  # Convert to milliseconds

        if result.returncode == 0:
            return {
                "status": "success",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "response_time_ms": response_time,
                "message": "Modem is responding"
            }
        else:
            return {
                "status": "error",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "detail": "Modem not responding to ping"
            }

    except subprocess.TimeoutExpired:
        return {
            "status": "error",
            "modem_id": modem_id,
            "ip_address": modem.ip_address,
            "detail": "Ping timeout"
        }
    except Exception as e:
        logger.error(f"Failed to ping modem {modem_id}: {e}")
        return {
            "status": "error",
            "modem_id": modem_id,
            "detail": str(e)
        }


@router.get("/{modem_id}/diagnostics")
async def get_modem_diagnostics(modem_id: str, db: Session = Depends(get_db)):
    """
    Get modem diagnostics (signal strength, data usage, uptime).

    Currently returns placeholders. When ModemManager is available,
    this endpoint will query it for real diagnostics.
    """
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    # TODO: Query ModemManager backend when available
    return {
        "status": "unavailable",
        "message": "ModemManager integration not yet available",
        "modem_id": modem_id,
        "signal_strength_dbm": None,
        "data_usage_mb": None,
        "uptime_seconds": None,
        "carrier": None,
        "connection_type": None  # LTE, 5G, etc.
    }
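A hedged sketch of sweeping the ping endpoint across a fleet (not part of the diff; the modem IDs and base URL are placeholders and `requests` is an assumed dependency):

```python
# Hypothetical connectivity sweep against /api/modem-dashboard/{id}/ping.
import requests

BASE = "http://localhost:8000/api/modem-dashboard"  # assumed deployment URL

for modem_id in ["MDM-001", "MDM-002"]:  # hypothetical modem IDs
    result = requests.get(f"{BASE}/{modem_id}/ping", timeout=10).json()
    if result["status"] == "success":
        print(f"{modem_id}: up ({result['response_time_ms']} ms)")
    else:
        print(f"{modem_id}: {result.get('detail', 'unknown error')}")
```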
@@ -8,8 +8,8 @@ import shutil
 from PIL import Image
 from PIL.ExifTags import TAGS, GPSTAGS
 from sqlalchemy.orm import Session
-from app.seismo.database import get_db
-from app.seismo.models import RosterUnit
+from backend.database import get_db
+from backend.models import RosterUnit

 router = APIRouter(prefix="/api", tags=["photos"])
521
backend/routers/project_locations.py
Normal file
@@ -0,0 +1,521 @@
"""
Project Locations Router

Handles monitoring locations (NRLs for sound, monitoring points for vibration)
and unit assignments within projects.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from sqlalchemy import and_, or_
from datetime import datetime
from typing import Optional
import uuid
import json

from backend.database import get_db
from backend.models import (
    Project,
    ProjectType,
    MonitoringLocation,
    UnitAssignment,
    RosterUnit,
    RecordingSession,
)
from backend.templates_config import templates

router = APIRouter(prefix="/api/projects/{project_id}", tags=["project-locations"])


# ============================================================================
# Monitoring Locations CRUD
# ============================================================================

@router.get("/locations", response_class=HTMLResponse)
async def get_project_locations(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
    location_type: Optional[str] = Query(None),
):
    """
    Get all monitoring locations for a project.
    Returns HTML partial with location list.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    query = db.query(MonitoringLocation).filter_by(project_id=project_id)

    # Filter by type if provided
    if location_type:
        query = query.filter_by(location_type=location_type)

    locations = query.order_by(MonitoringLocation.name).all()

    # Enrich with assignment info
    locations_data = []
    for location in locations:
        # Get active assignment
        assignment = db.query(UnitAssignment).filter(
            and_(
                UnitAssignment.location_id == location.id,
                UnitAssignment.status == "active",
            )
        ).first()

        assigned_unit = None
        if assignment:
            assigned_unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()

        # Count recording sessions
        session_count = db.query(RecordingSession).filter_by(
            location_id=location.id
        ).count()

        locations_data.append({
            "location": location,
            "assignment": assignment,
            "assigned_unit": assigned_unit,
            "session_count": session_count,
        })

    return templates.TemplateResponse("partials/projects/location_list.html", {
        "request": request,
        "project": project,
        "locations": locations_data,
    })


@router.get("/locations-json")
async def get_project_locations_json(
    project_id: str,
    db: Session = Depends(get_db),
    location_type: Optional[str] = Query(None),
):
    """
    Get all monitoring locations for a project as JSON.
    Used by the schedule modal to populate location dropdown.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    query = db.query(MonitoringLocation).filter_by(project_id=project_id)

    if location_type:
        query = query.filter_by(location_type=location_type)

    locations = query.order_by(MonitoringLocation.name).all()

    return [
        {
            "id": loc.id,
            "name": loc.name,
            "location_type": loc.location_type,
            "description": loc.description,
            "address": loc.address,
            "coordinates": loc.coordinates,
        }
        for loc in locations
    ]


@router.post("/locations/create")
async def create_location(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Create a new monitoring location within a project.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location = MonitoringLocation(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_type=form_data.get("location_type"),
        name=form_data.get("name"),
        description=form_data.get("description"),
        coordinates=form_data.get("coordinates"),
        address=form_data.get("address"),
        location_metadata=form_data.get("location_metadata"),  # JSON string
    )

    db.add(location)
    db.commit()
    db.refresh(location)

    return JSONResponse({
        "success": True,
        "location_id": location.id,
        "message": f"Location '{location.name}' created successfully",
    })


@router.put("/locations/{location_id}")
async def update_location(
    project_id: str,
    location_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Update a monitoring location.
    """
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    data = await request.json()

    # Update fields if provided
    if "name" in data:
        location.name = data["name"]
    if "description" in data:
        location.description = data["description"]
    if "location_type" in data:
        location.location_type = data["location_type"]
    if "coordinates" in data:
        location.coordinates = data["coordinates"]
    if "address" in data:
        location.address = data["address"]
    if "location_metadata" in data:
        location.location_metadata = data["location_metadata"]

    location.updated_at = datetime.utcnow()

    db.commit()

    return {"success": True, "message": "Location updated successfully"}


@router.delete("/locations/{location_id}")
async def delete_location(
    project_id: str,
    location_id: str,
    db: Session = Depends(get_db),
):
    """
    Delete a monitoring location.
    """
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    # Check if location has active assignments
    active_assignments = db.query(UnitAssignment).filter(
        and_(
            UnitAssignment.location_id == location_id,
            UnitAssignment.status == "active",
        )
    ).count()

    if active_assignments > 0:
        raise HTTPException(
            status_code=400,
            detail="Cannot delete location with active unit assignments. Unassign units first.",
        )

    db.delete(location)
    db.commit()

    return {"success": True, "message": "Location deleted successfully"}


# ============================================================================
# Unit Assignments
# ============================================================================

@router.get("/assignments", response_class=HTMLResponse)
async def get_project_assignments(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
    status: Optional[str] = Query("active"),
):
    """
    Get all unit assignments for a project.
    Returns HTML partial with assignment list.
    """
    query = db.query(UnitAssignment).filter_by(project_id=project_id)

    if status:
        query = query.filter_by(status=status)

    assignments = query.order_by(UnitAssignment.assigned_at.desc()).all()

    # Enrich with unit and location details
    assignments_data = []
    for assignment in assignments:
        unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
        location = db.query(MonitoringLocation).filter_by(id=assignment.location_id).first()

        assignments_data.append({
            "assignment": assignment,
            "unit": unit,
            "location": location,
        })

    return templates.TemplateResponse("partials/projects/assignment_list.html", {
        "request": request,
        "project_id": project_id,
        "assignments": assignments_data,
    })


@router.post("/locations/{location_id}/assign")
async def assign_unit_to_location(
    project_id: str,
    location_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Assign a unit to a monitoring location.
    """
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    form_data = await request.form()
    unit_id = form_data.get("unit_id")

    # Verify unit exists and matches location type
    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit:
        raise HTTPException(status_code=404, detail="Unit not found")

    # Check device type matches location type
    expected_device_type = "slm" if location.location_type == "sound" else "seismograph"
    if unit.device_type != expected_device_type:
        raise HTTPException(
            status_code=400,
            detail=f"Unit type '{unit.device_type}' does not match location type '{location.location_type}'",
        )

    # Check if location already has an active assignment
    existing_assignment = db.query(UnitAssignment).filter(
        and_(
            UnitAssignment.location_id == location_id,
            UnitAssignment.status == "active",
        )
    ).first()

    if existing_assignment:
        raise HTTPException(
            status_code=400,
            detail=f"Location already has an active unit assignment ({existing_assignment.unit_id}). Unassign first.",
        )

    # Create new assignment
    assigned_until_str = form_data.get("assigned_until")
    assigned_until = datetime.fromisoformat(assigned_until_str) if assigned_until_str else None

    assignment = UnitAssignment(
        id=str(uuid.uuid4()),
        unit_id=unit_id,
        location_id=location_id,
        project_id=project_id,
        device_type=unit.device_type,
        assigned_until=assigned_until,
        status="active",
        notes=form_data.get("notes"),
    )

    db.add(assignment)
    db.commit()
    db.refresh(assignment)

    return JSONResponse({
        "success": True,
        "assignment_id": assignment.id,
        "message": f"Unit '{unit_id}' assigned to '{location.name}'",
    })


@router.post("/assignments/{assignment_id}/unassign")
async def unassign_unit(
    project_id: str,
    assignment_id: str,
    db: Session = Depends(get_db),
):
    """
    Unassign a unit from a location.
    """
    assignment = db.query(UnitAssignment).filter_by(
        id=assignment_id,
        project_id=project_id,
    ).first()

    if not assignment:
        raise HTTPException(status_code=404, detail="Assignment not found")

    # Check if there are active recording sessions
    active_sessions = db.query(RecordingSession).filter(
        and_(
            RecordingSession.location_id == assignment.location_id,
            RecordingSession.unit_id == assignment.unit_id,
            RecordingSession.status == "recording",
        )
    ).count()

    if active_sessions > 0:
        raise HTTPException(
            status_code=400,
            detail="Cannot unassign unit with active recording sessions. Stop recording first.",
        )

    assignment.status = "completed"
    assignment.assigned_until = datetime.utcnow()

    db.commit()

    return {"success": True, "message": "Unit unassigned successfully"}


# ============================================================================
# Available Units for Assignment
# ============================================================================

@router.get("/available-units", response_class=JSONResponse)
async def get_available_units(
    project_id: str,
    location_type: str = Query(...),
    db: Session = Depends(get_db),
):
    """
    Get list of available units for assignment to a location.
    Filters by device type matching the location type.
    """
    # Determine required device type
    required_device_type = "slm" if location_type == "sound" else "seismograph"

    # Get all units of the required type that are deployed and not retired
    all_units = db.query(RosterUnit).filter(
        and_(
            RosterUnit.device_type == required_device_type,
            RosterUnit.deployed == True,
            RosterUnit.retired == False,
        )
    ).all()

    # Filter out units that already have active assignments
    assigned_unit_ids = db.query(UnitAssignment.unit_id).filter(
        UnitAssignment.status == "active"
    ).distinct().all()
    assigned_unit_ids = [uid[0] for uid in assigned_unit_ids]

    available_units = [
        {
            "id": unit.id,
            "device_type": unit.device_type,
            "location": unit.address or unit.location,
            "model": unit.slm_model if unit.device_type == "slm" else unit.unit_type,
        }
        for unit in all_units
        if unit.id not in assigned_unit_ids
    ]

    return available_units


# ============================================================================
# NRL-specific endpoints for detail page
# ============================================================================

@router.get("/nrl/{location_id}/sessions", response_class=HTMLResponse)
async def get_nrl_sessions(
    project_id: str,
    location_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Get recording sessions for a specific NRL.
    Returns HTML partial with session list.
    """
    from backend.models import RecordingSession, RosterUnit

    sessions = db.query(RecordingSession).filter_by(
        location_id=location_id
    ).order_by(RecordingSession.started_at.desc()).all()

    # Enrich with unit details
    sessions_data = []
    for session in sessions:
        unit = None
        if session.unit_id:
            unit = db.query(RosterUnit).filter_by(id=session.unit_id).first()

        sessions_data.append({
            "session": session,
            "unit": unit,
        })

    return templates.TemplateResponse("partials/projects/session_list.html", {
        "request": request,
        "project_id": project_id,
        "location_id": location_id,
        "sessions": sessions_data,
    })


@router.get("/nrl/{location_id}/files", response_class=HTMLResponse)
async def get_nrl_files(
    project_id: str,
    location_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Get data files for a specific NRL.
    Returns HTML partial with file list.
    """
    from backend.models import DataFile, RecordingSession

    # Join DataFile with RecordingSession to filter by location_id
    files = db.query(DataFile).join(
        RecordingSession,
        DataFile.session_id == RecordingSession.id
    ).filter(
        RecordingSession.location_id == location_id
    ).order_by(DataFile.created_at.desc()).all()

    # Enrich with session details
    files_data = []
    for file in files:
        session = None
        if file.session_id:
            session = db.query(RecordingSession).filter_by(id=file.session_id).first()

        files_data.append({
            "file": file,
            "session": session,
        })

    return templates.TemplateResponse("partials/projects/file_list.html", {
        "request": request,
        "project_id": project_id,
        "location_id": location_id,
        "files": files_data,
    })
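A hedged sketch of the assignment workflow these endpoints support (not part of the diff; the project and location UUIDs are placeholders, and `requests` is an assumed dependency -- note the assign endpoint takes form data, not JSON):

```python
# Hypothetical workflow: find an eligible SLM, then assign it to a sound NRL.
import requests

BASE = "http://localhost:8000/api/projects/PROJECT-UUID"  # placeholder IDs

# Deployed, unassigned SLMs eligible for a "sound" location
units = requests.get(f"{BASE}/available-units", params={"location_type": "sound"}).json()

if units:
    resp = requests.post(
        f"{BASE}/locations/LOCATION-UUID/assign",
        data={"unit_id": units[0]["id"], "notes": "Initial deployment"},  # form-encoded
    )
    print(resp.json()["message"])
```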
2620
backend/routers/projects.py
Normal file
465
backend/routers/recurring_schedules.py
Normal file
@@ -0,0 +1,465 @@
"""
Recurring Schedules Router

API endpoints for managing recurring monitoring schedules.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from typing import Optional
from datetime import datetime
import json

from backend.database import get_db
from backend.models import RecurringSchedule, MonitoringLocation, Project, RosterUnit
from backend.services.recurring_schedule_service import get_recurring_schedule_service
from backend.templates_config import templates

router = APIRouter(prefix="/api/projects/{project_id}/recurring-schedules", tags=["recurring-schedules"])


# ============================================================================
# List and Get
# ============================================================================

@router.get("/")
async def list_recurring_schedules(
    project_id: str,
    db: Session = Depends(get_db),
    enabled_only: bool = Query(False),
):
    """
    List all recurring schedules for a project.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    query = db.query(RecurringSchedule).filter_by(project_id=project_id)
    if enabled_only:
        query = query.filter_by(enabled=True)

    schedules = query.order_by(RecurringSchedule.created_at.desc()).all()

    return {
        "schedules": [
            {
                "id": s.id,
                "name": s.name,
                "schedule_type": s.schedule_type,
                "device_type": s.device_type,
                "location_id": s.location_id,
                "unit_id": s.unit_id,
                "enabled": s.enabled,
                "weekly_pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
                "interval_type": s.interval_type,
                "cycle_time": s.cycle_time,
                "include_download": s.include_download,
                "timezone": s.timezone,
                "next_occurrence": s.next_occurrence.isoformat() if s.next_occurrence else None,
                "last_generated_at": s.last_generated_at.isoformat() if s.last_generated_at else None,
                "created_at": s.created_at.isoformat() if s.created_at else None,
            }
            for s in schedules
        ],
        "count": len(schedules),
    }


@router.get("/{schedule_id}")
async def get_recurring_schedule(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
):
    """
    Get a specific recurring schedule.
    """
    schedule = db.query(RecurringSchedule).filter_by(
        id=schedule_id,
        project_id=project_id,
    ).first()

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    # Get related location and unit info
    location = db.query(MonitoringLocation).filter_by(id=schedule.location_id).first()
    unit = None
    if schedule.unit_id:
        unit = db.query(RosterUnit).filter_by(id=schedule.unit_id).first()

    return {
        "id": schedule.id,
        "name": schedule.name,
        "schedule_type": schedule.schedule_type,
        "device_type": schedule.device_type,
        "location_id": schedule.location_id,
        "location_name": location.name if location else None,
        "unit_id": schedule.unit_id,
        "unit_name": unit.id if unit else None,
        "enabled": schedule.enabled,
        "weekly_pattern": json.loads(schedule.weekly_pattern) if schedule.weekly_pattern else None,
        "interval_type": schedule.interval_type,
        "cycle_time": schedule.cycle_time,
        "include_download": schedule.include_download,
        "timezone": schedule.timezone,
        "next_occurrence": schedule.next_occurrence.isoformat() if schedule.next_occurrence else None,
        "last_generated_at": schedule.last_generated_at.isoformat() if schedule.last_generated_at else None,
        "created_at": schedule.created_at.isoformat() if schedule.created_at else None,
        "updated_at": schedule.updated_at.isoformat() if schedule.updated_at else None,
    }


# ============================================================================
# Create
# ============================================================================

@router.post("/")
async def create_recurring_schedule(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Create recurring schedules for one or more locations.

    Body for weekly_calendar (supports multiple locations):
    {
        "name": "Weeknight Monitoring",
        "schedule_type": "weekly_calendar",
        "location_ids": ["uuid1", "uuid2"],  // Array of location IDs
        "weekly_pattern": {
            "monday": {"enabled": true, "start": "19:00", "end": "07:00"},
            "tuesday": {"enabled": false},
            ...
        },
        "include_download": true,
        "auto_increment_index": true,
        "timezone": "America/New_York"
    }

    Body for simple_interval (supports multiple locations):
    {
        "name": "24/7 Continuous",
        "schedule_type": "simple_interval",
        "location_ids": ["uuid1", "uuid2"],  // Array of location IDs
        "interval_type": "daily",
        "cycle_time": "00:00",
        "include_download": true,
        "auto_increment_index": true,
        "timezone": "America/New_York"
    }

    Legacy single location support (backwards compatible):
    {
        "name": "...",
        "location_id": "uuid",  // Single location ID
        ...
    }
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    data = await request.json()

    # Support both location_ids (array) and location_id (single) for backwards compatibility
    location_ids = data.get("location_ids", [])
    if not location_ids and data.get("location_id"):
        location_ids = [data.get("location_id")]

    if not location_ids:
        raise HTTPException(status_code=400, detail="At least one location is required")

    # Validate all locations exist
    locations = db.query(MonitoringLocation).filter(
        MonitoringLocation.id.in_(location_ids),
        MonitoringLocation.project_id == project_id,
    ).all()

    if len(locations) != len(location_ids):
        raise HTTPException(status_code=404, detail="One or more locations not found")

    service = get_recurring_schedule_service(db)
    created_schedules = []
    base_name = data.get("name", "Unnamed Schedule")

    # Create a schedule for each location
    for location in locations:
        # Determine device type from location
        device_type = "slm" if location.location_type == "sound" else "seismograph"

        # Append location name if multiple locations
        schedule_name = f"{base_name} - {location.name}" if len(locations) > 1 else base_name

        schedule = service.create_schedule(
            project_id=project_id,
            location_id=location.id,
            name=schedule_name,
            schedule_type=data.get("schedule_type", "weekly_calendar"),
            device_type=device_type,
            unit_id=data.get("unit_id"),
            weekly_pattern=data.get("weekly_pattern"),
            interval_type=data.get("interval_type"),
            cycle_time=data.get("cycle_time"),
            include_download=data.get("include_download", True),
            auto_increment_index=data.get("auto_increment_index", True),
            timezone=data.get("timezone", "America/New_York"),
        )

        # Generate actions immediately so they appear right away
        generated_actions = service.generate_actions_for_schedule(schedule, horizon_days=7)

        created_schedules.append({
            "schedule_id": schedule.id,
            "location_id": location.id,
            "location_name": location.name,
            "actions_generated": len(generated_actions),
        })

    total_actions = sum(s.get("actions_generated", 0) for s in created_schedules)

    return JSONResponse({
        "success": True,
        "schedules": created_schedules,
        "count": len(created_schedules),
        "actions_generated": total_actions,
        "message": f"Created {len(created_schedules)} recurring schedule(s) with {total_actions} upcoming actions",
    })


# ============================================================================
# Update
# ============================================================================

@router.put("/{schedule_id}")
async def update_recurring_schedule(
    project_id: str,
    schedule_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Update a recurring schedule.
    """
    schedule = db.query(RecurringSchedule).filter_by(
        id=schedule_id,
        project_id=project_id,
    ).first()

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    data = await request.json()
    service = get_recurring_schedule_service(db)

    # Build update kwargs
    update_kwargs = {}
    for field in ["name", "weekly_pattern", "interval_type", "cycle_time",
                  "include_download", "auto_increment_index", "timezone", "unit_id"]:
        if field in data:
            update_kwargs[field] = data[field]

    updated = service.update_schedule(schedule_id, **update_kwargs)

    return {
        "success": True,
        "schedule_id": updated.id,
        "message": "Schedule updated successfully",
    }


# ============================================================================
# Delete
# ============================================================================

@router.delete("/{schedule_id}")
async def delete_recurring_schedule(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
):
    """
    Delete a recurring schedule.
    """
    service = get_recurring_schedule_service(db)
    deleted = service.delete_schedule(schedule_id)

    if not deleted:
        raise HTTPException(status_code=404, detail="Schedule not found")

    return {
        "success": True,
        "message": "Schedule deleted successfully",
    }


# ============================================================================
# Enable/Disable
# ============================================================================

@router.post("/{schedule_id}/enable")
async def enable_schedule(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
):
    """
    Enable a disabled schedule.
    """
    service = get_recurring_schedule_service(db)
    schedule = service.enable_schedule(schedule_id)

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    return {
        "success": True,
        "schedule_id": schedule.id,
        "enabled": schedule.enabled,
        "message": "Schedule enabled",
    }


@router.post("/{schedule_id}/disable")
async def disable_schedule(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
):
    """
    Disable a schedule.
    """
    service = get_recurring_schedule_service(db)
    schedule = service.disable_schedule(schedule_id)

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    return {
        "success": True,
        "schedule_id": schedule.id,
        "enabled": schedule.enabled,
        "message": "Schedule disabled",
    }


# ============================================================================
# Preview Generated Actions
# ============================================================================

@router.post("/{schedule_id}/generate-preview")
async def preview_generated_actions(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
    days: int = Query(7, ge=1, le=30),
):
    """
    Preview what actions would be generated without saving them.
    """
    schedule = db.query(RecurringSchedule).filter_by(
        id=schedule_id,
        project_id=project_id,
    ).first()

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    service = get_recurring_schedule_service(db)
    actions = service.generate_actions_for_schedule(
        schedule,
        horizon_days=days,
        preview_only=True,
    )

    return {
        "schedule_id": schedule_id,
        "schedule_name": schedule.name,
        "preview_days": days,
        "actions": [
            {
                "action_type": a.action_type,
                "scheduled_time": a.scheduled_time.isoformat(),
                "notes": a.notes,
            }
            for a in actions
        ],
        "action_count": len(actions),
    }


# ============================================================================
# Manual Generation Trigger
# ============================================================================

@router.post("/{schedule_id}/generate")
async def generate_actions_now(
    project_id: str,
    schedule_id: str,
    db: Session = Depends(get_db),
    days: int = Query(7, ge=1, le=30),
):
    """
    Manually trigger action generation for a schedule.
    """
    schedule = db.query(RecurringSchedule).filter_by(
        id=schedule_id,
        project_id=project_id,
    ).first()

    if not schedule:
        raise HTTPException(status_code=404, detail="Schedule not found")

    if not schedule.enabled:
        raise HTTPException(status_code=400, detail="Schedule is disabled")

    service = get_recurring_schedule_service(db)
    actions = service.generate_actions_for_schedule(
        schedule,
        horizon_days=days,
        preview_only=False,
    )

    return {
        "success": True,
        "schedule_id": schedule_id,
        "generated_count": len(actions),
        "message": f"Generated {len(actions)} scheduled actions",
    }


# ============================================================================
# HTML Partials
# ============================================================================

@router.get("/partials/list", response_class=HTMLResponse)
async def get_schedule_list_partial(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Return HTML partial for schedule list.
    """
    schedules = db.query(RecurringSchedule).filter_by(
        project_id=project_id
    ).order_by(RecurringSchedule.created_at.desc()).all()

    # Enrich with location info
    schedule_data = []
    for s in schedules:
        location = db.query(MonitoringLocation).filter_by(id=s.location_id).first()
        schedule_data.append({
            "schedule": s,
            "location": location,
            "pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
        })

    return templates.TemplateResponse("partials/projects/recurring_schedule_list.html", {
        "request": request,
        "project_id": project_id,
        "schedules": schedule_data,
    })
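A hedged sketch of creating a weekly schedule through the endpoint above, matching the `weekly_calendar` body documented in its docstring (not part of the diff; UUIDs and base URL are placeholders, `requests` is an assumed dependency):

```python
# Hypothetical create request for a weekly_calendar recurring schedule.
import requests

BASE = "http://localhost:8000/api/projects/PROJECT-UUID/recurring-schedules"

body = {
    "name": "Weeknight Monitoring",
    "schedule_type": "weekly_calendar",
    "location_ids": ["LOCATION-UUID-1", "LOCATION-UUID-2"],  # placeholders
    "weekly_pattern": {
        "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
        "tuesday": {"enabled": False},
    },
    "include_download": True,
    "auto_increment_index": True,
    "timezone": "America/New_York",
}

resp = requests.post(BASE + "/", json=body)
print(resp.json()["message"])  # e.g. "Created 2 recurring schedule(s) ..."
```

One schedule is created per location, and actions for the next seven days are generated immediately, so the dashboard reflects the new schedule without waiting for the background generator.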
187
backend/routers/report_templates.py
Normal file
@@ -0,0 +1,187 @@
"""
Report Templates Router

CRUD operations for report template management.
Templates store time filter presets and report configuration for reuse.
"""

from fastapi import APIRouter, Depends, HTTPException
from fastapi.responses import JSONResponse
from sqlalchemy.orm import Session
from datetime import datetime
from typing import Optional
import uuid

from backend.database import get_db
from backend.models import ReportTemplate

router = APIRouter(prefix="/api/report-templates", tags=["report-templates"])


@router.get("")
async def list_templates(
    project_id: Optional[str] = None,
    db: Session = Depends(get_db),
):
    """
    List all report templates.
    Optionally filter by project_id (includes global templates with project_id=None).
    """
    query = db.query(ReportTemplate)

    if project_id:
        # Include global templates (project_id=None) AND project-specific templates
        query = query.filter(
            (ReportTemplate.project_id == None) | (ReportTemplate.project_id == project_id)
        )

    templates = query.order_by(ReportTemplate.name).all()

    return [
        {
            "id": t.id,
            "name": t.name,
            "project_id": t.project_id,
            "report_title": t.report_title,
            "start_time": t.start_time,
            "end_time": t.end_time,
            "start_date": t.start_date,
            "end_date": t.end_date,
            "created_at": t.created_at.isoformat() if t.created_at else None,
            "updated_at": t.updated_at.isoformat() if t.updated_at else None,
        }
        for t in templates
    ]


@router.post("")
async def create_template(
    data: dict,
    db: Session = Depends(get_db),
):
    """
    Create a new report template.

    Request body:
    - name: Template name (required)
    - project_id: Optional project ID for project-specific template
    - report_title: Default report title
    - start_time: Start time filter (HH:MM format)
    - end_time: End time filter (HH:MM format)
    - start_date: Start date filter (YYYY-MM-DD format)
    - end_date: End date filter (YYYY-MM-DD format)
    """
    name = data.get("name")
    if not name:
        raise HTTPException(status_code=400, detail="Template name is required")

    template = ReportTemplate(
        id=str(uuid.uuid4()),
        name=name,
        project_id=data.get("project_id"),
        report_title=data.get("report_title", "Background Noise Study"),
        start_time=data.get("start_time"),
        end_time=data.get("end_time"),
        start_date=data.get("start_date"),
        end_date=data.get("end_date"),
    )

    db.add(template)
    db.commit()
    db.refresh(template)

    return {
        "id": template.id,
        "name": template.name,
        "project_id": template.project_id,
        "report_title": template.report_title,
        "start_time": template.start_time,
        "end_time": template.end_time,
        "start_date": template.start_date,
        "end_date": template.end_date,
        "created_at": template.created_at.isoformat() if template.created_at else None,
    }


@router.get("/{template_id}")
async def get_template(
    template_id: str,
    db: Session = Depends(get_db),
):
    """Get a specific report template by ID."""
    template = db.query(ReportTemplate).filter_by(id=template_id).first()
    if not template:
        raise HTTPException(status_code=404, detail="Template not found")

    return {
        "id": template.id,
        "name": template.name,
        "project_id": template.project_id,
        "report_title": template.report_title,
        "start_time": template.start_time,
        "end_time": template.end_time,
        "start_date": template.start_date,
        "end_date": template.end_date,
        "created_at": template.created_at.isoformat() if template.created_at else None,
        "updated_at": template.updated_at.isoformat() if template.updated_at else None,
    }


@router.put("/{template_id}")
async def update_template(
    template_id: str,
    data: dict,
    db: Session = Depends(get_db),
):
    """Update an existing report template."""
    template = db.query(ReportTemplate).filter_by(id=template_id).first()
    if not template:
        raise HTTPException(status_code=404, detail="Template not found")

    # Update fields if provided
    if "name" in data:
        template.name = data["name"]
    if "project_id" in data:
        template.project_id = data["project_id"]
    if "report_title" in data:
        template.report_title = data["report_title"]
    if "start_time" in data:
        template.start_time = data["start_time"]
    if "end_time" in data:
        template.end_time = data["end_time"]
    if "start_date" in data:
        template.start_date = data["start_date"]
    if "end_date" in data:
        template.end_date = data["end_date"]

    template.updated_at = datetime.utcnow()
    db.commit()
    db.refresh(template)

    return {
        "id": template.id,
        "name": template.name,
        "project_id": template.project_id,
        "report_title": template.report_title,
        "start_time": template.start_time,
        "end_time": template.end_time,
        "start_date": template.start_date,
        "end_date": template.end_date,
        "updated_at": template.updated_at.isoformat() if template.updated_at else None,
    }


@router.delete("/{template_id}")
async def delete_template(
    template_id: str,
    db: Session = Depends(get_db),
):
    """Delete a report template."""
    template = db.query(ReportTemplate).filter_by(id=template_id).first()
    if not template:
        raise HTTPException(status_code=404, detail="Template not found")

    db.delete(template)
    db.commit()

    return JSONResponse({"status": "success", "message": "Template deleted"})
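A quick usage sketch for this router, as hypothetical client code (the base URL, port, and field values are illustrative assumptions, not part of the diff):

# Hypothetical client sketch for the report-templates API (host/port assumed).
import httpx

base = "http://localhost:8000/api/report-templates"

# Create a template, then fetch it back by ID.
created = httpx.post(base, json={
    "name": "Night Survey",              # required field
    "report_title": "Background Noise Study",
    "start_time": "22:00",
    "end_time": "07:00",
}).json()

fetched = httpx.get(f"{base}/{created['id']}").json()
print(fetched["name"], fetched["start_time"], fetched["end_time"])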
@@ -2,20 +2,32 @@ from fastapi import APIRouter, Depends
 from sqlalchemy.orm import Session
 from datetime import datetime, timedelta
 from typing import Dict, Any
+import asyncio
+import logging
 import random

-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot
+from backend.services.slm_status_sync import sync_slm_status_to_emitters

 router = APIRouter(prefix="/api", tags=["roster"])
+logger = logging.getLogger(__name__)


 @router.get("/status-snapshot")
-def get_status_snapshot(db: Session = Depends(get_db)):
+async def get_status_snapshot(db: Session = Depends(get_db)):
     """
     Calls emit_status_snapshot() to get current fleet status.
     This will be replaced with real Series3 emitter logic later.
+    Syncs SLM status from SLMM before generating snapshot.
     """
+    # Sync SLM status from SLMM (with timeout to prevent blocking)
+    try:
+        await asyncio.wait_for(sync_slm_status_to_emitters(), timeout=2.0)
+    except asyncio.TimeoutError:
+        logger.warning("SLM status sync timed out, using cached data")
+    except Exception as e:
+        logger.warning(f"SLM status sync failed: {e}")
+
     return emit_status_snapshot()
1331
backend/routers/roster_edit.py
Normal file
139
backend/routers/roster_rename.py
Normal file
@@ -0,0 +1,139 @@
"""
Roster Unit Rename Router

Provides endpoint for safely renaming unit IDs across all database tables.
"""

from fastapi import APIRouter, Depends, HTTPException, Form
from sqlalchemy.orm import Session
from datetime import datetime
import logging

from backend.database import get_db
from backend.models import RosterUnit, Emitter, UnitHistory
from backend.routers.roster_edit import record_history, sync_slm_to_slmm_cache

router = APIRouter(prefix="/api/roster", tags=["roster-rename"])
logger = logging.getLogger(__name__)


@router.post("/rename")
async def rename_unit(
    old_id: str = Form(...),
    new_id: str = Form(...),
    db: Session = Depends(get_db)
):
    """
    Rename a unit ID across all tables.
    Updates the unit ID in roster, emitters, unit_history, and all foreign key references.

    IMPORTANT: This operation updates the primary key, which affects all relationships.
    """
    # Validate input
    if not old_id or not new_id:
        raise HTTPException(status_code=400, detail="Both old_id and new_id are required")

    if old_id == new_id:
        raise HTTPException(status_code=400, detail="New ID must be different from old ID")

    # Check if old unit exists
    old_unit = db.query(RosterUnit).filter(RosterUnit.id == old_id).first()
    if not old_unit:
        raise HTTPException(status_code=404, detail=f"Unit '{old_id}' not found")

    # Check if new ID already exists
    existing_unit = db.query(RosterUnit).filter(RosterUnit.id == new_id).first()
    if existing_unit:
        raise HTTPException(status_code=409, detail=f"Unit ID '{new_id}' already exists")

    device_type = old_unit.device_type

    try:
        # Record history for the rename operation (using old_id since that's still valid)
        record_history(
            db=db,
            unit_id=old_id,
            change_type="id_change",
            field_name="id",
            old_value=old_id,
            new_value=new_id,
            source="manual",
            notes=f"Unit renamed from '{old_id}' to '{new_id}'"
        )

        # Update roster table (primary)
        old_unit.id = new_id
        old_unit.last_updated = datetime.utcnow()

        # Update emitters table
        emitter = db.query(Emitter).filter(Emitter.id == old_id).first()
        if emitter:
            emitter.id = new_id

        # Update unit_history table (all entries for this unit)
        db.query(UnitHistory).filter(UnitHistory.unit_id == old_id).update(
            {"unit_id": new_id},
            synchronize_session=False
        )

        # Update deployed_with_modem_id references (units that reference this as modem)
        db.query(RosterUnit).filter(RosterUnit.deployed_with_modem_id == old_id).update(
            {"deployed_with_modem_id": new_id},
            synchronize_session=False
        )

        # Update unit_assignments table (if exists)
        try:
            from backend.models import UnitAssignment
            db.query(UnitAssignment).filter(UnitAssignment.unit_id == old_id).update(
                {"unit_id": new_id},
                synchronize_session=False
            )
        except Exception as e:
            logger.warning(f"Could not update unit_assignments: {e}")

        # Update recording_sessions table (if exists)
        try:
            from backend.models import RecordingSession
            db.query(RecordingSession).filter(RecordingSession.unit_id == old_id).update(
                {"unit_id": new_id},
                synchronize_session=False
            )
        except Exception as e:
            logger.warning(f"Could not update recording_sessions: {e}")

        # Commit all changes
        db.commit()

        # If sound level meter, sync updated config to SLMM cache
        if device_type == "slm":
            logger.info(f"Syncing renamed SLM {new_id} (was {old_id}) config to SLMM cache...")
            result = await sync_slm_to_slmm_cache(
                unit_id=new_id,
                host=old_unit.slm_host,
                tcp_port=old_unit.slm_tcp_port,
                ftp_port=old_unit.slm_ftp_port,
                deployed_with_modem_id=old_unit.deployed_with_modem_id,
                db=db
            )

            if not result["success"]:
                logger.warning(f"SLMM cache sync warning for renamed unit {new_id}: {result['message']}")

        logger.info(f"Successfully renamed unit '{old_id}' to '{new_id}'")

        return {
            "success": True,
            "message": f"Successfully renamed unit from '{old_id}' to '{new_id}'",
            "old_id": old_id,
            "new_id": new_id,
            "device_type": device_type
        }

    except Exception as e:
        db.rollback()
        logger.error(f"Error renaming unit '{old_id}' to '{new_id}': {e}")
        raise HTTPException(
            status_code=500,
            detail=f"Failed to rename unit: {str(e)}"
        )
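Since the endpoint binds `old_id` and `new_id` with `Form(...)`, callers must send form fields rather than JSON. A minimal hypothetical sketch (base URL and unit IDs are illustrative):

# Hypothetical client sketch for the rename endpoint (base URL and IDs assumed).
import httpx

resp = httpx.post(
    "http://localhost:8000/api/roster/rename",
    data={"old_id": "BE1234", "new_id": "BE5678"},  # form-encoded, matching Form(...)
)
resp.raise_for_status()
print(resp.json()["message"])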
408
backend/routers/scheduler.py
Normal file
@@ -0,0 +1,408 @@
"""
Scheduler Router

Handles scheduled actions for automated recording control.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from sqlalchemy import and_, or_
from datetime import datetime, timedelta
from typing import Optional
import uuid
import json

from backend.database import get_db
from backend.models import (
    Project,
    ScheduledAction,
    MonitoringLocation,
    UnitAssignment,
    RosterUnit,
)
from backend.services.scheduler import get_scheduler
from backend.templates_config import templates

router = APIRouter(prefix="/api/projects/{project_id}/scheduler", tags=["scheduler"])


# ============================================================================
# Scheduled Actions List
# ============================================================================

@router.get("/actions", response_class=HTMLResponse)
async def get_scheduled_actions(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
    status: Optional[str] = Query(None),
    start_date: Optional[str] = Query(None),
    end_date: Optional[str] = Query(None),
):
    """
    Get scheduled actions for a project.
    Returns HTML partial with agenda/calendar view.
    """
    query = db.query(ScheduledAction).filter_by(project_id=project_id)

    # Filter by status
    if status:
        query = query.filter_by(execution_status=status)
    else:
        # By default, show pending actions plus completed/failed ones from the last 7 days
        query = query.filter(
            or_(
                ScheduledAction.execution_status == "pending",
                and_(
                    ScheduledAction.execution_status.in_(["completed", "failed"]),
                    ScheduledAction.scheduled_time >= datetime.utcnow() - timedelta(days=7),
                ),
            )
        )

    # Filter by date range
    if start_date:
        query = query.filter(ScheduledAction.scheduled_time >= datetime.fromisoformat(start_date))
    if end_date:
        query = query.filter(ScheduledAction.scheduled_time <= datetime.fromisoformat(end_date))

    actions = query.order_by(ScheduledAction.scheduled_time).all()

    # Enrich with location and unit details
    actions_data = []
    for action in actions:
        location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()

        unit = None
        if action.unit_id:
            unit = db.query(RosterUnit).filter_by(id=action.unit_id).first()
        else:
            # Get from assignment
            assignment = db.query(UnitAssignment).filter(
                and_(
                    UnitAssignment.location_id == action.location_id,
                    UnitAssignment.status == "active",
                )
            ).first()
            if assignment:
                unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()

        actions_data.append({
            "action": action,
            "location": location,
            "unit": unit,
        })

    return templates.TemplateResponse("partials/projects/scheduler_agenda.html", {
        "request": request,
        "project_id": project_id,
        "actions": actions_data,
    })


# ============================================================================
# Create Scheduled Action
# ============================================================================

@router.post("/actions/create")
async def create_scheduled_action(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Create a new scheduled action.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location_id = form_data.get("location_id")
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    # Determine device type from location
    device_type = "slm" if location.location_type == "sound" else "seismograph"

    # Get unit_id (optional - can be determined from assignment at execution time)
    unit_id = form_data.get("unit_id")

    action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type=form_data.get("action_type"),
        device_type=device_type,
        scheduled_time=datetime.fromisoformat(form_data.get("scheduled_time")),
        execution_status="pending",
        notes=form_data.get("notes"),
    )

    db.add(action)
    db.commit()
    db.refresh(action)

    return JSONResponse({
        "success": True,
        "action_id": action.id,
        "message": f"Scheduled action '{action.action_type}' created for {action.scheduled_time}",
    })


# ============================================================================
# Schedule Recording Session
# ============================================================================

@router.post("/schedule-session")
async def schedule_recording_session(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Schedule a complete recording session (start + stop).
    Creates two scheduled actions: start and stop.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location_id = form_data.get("location_id")
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    device_type = "slm" if location.location_type == "sound" else "seismograph"
    unit_id = form_data.get("unit_id")

    start_time = datetime.fromisoformat(form_data.get("start_time"))
    duration_minutes = int(form_data.get("duration_minutes", 60))
    stop_time = start_time + timedelta(minutes=duration_minutes)

    # Create START action
    start_action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type="start",
        device_type=device_type,
        scheduled_time=start_time,
        execution_status="pending",
        notes=form_data.get("notes"),
    )

    # Create STOP action
    stop_action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type="stop",
        device_type=device_type,
        scheduled_time=stop_time,
        execution_status="pending",
        notes=f"Auto-stop after {duration_minutes} minutes",
    )

    db.add(start_action)
    db.add(stop_action)
    db.commit()

    return JSONResponse({
        "success": True,
        "start_action_id": start_action.id,
        "stop_action_id": stop_action.id,
        "message": f"Recording session scheduled from {start_time} to {stop_time}",
    })


# ============================================================================
# Update/Cancel Scheduled Action
# ============================================================================

@router.put("/actions/{action_id}")
async def update_scheduled_action(
    project_id: str,
    action_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Update a scheduled action (only if not yet executed).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Cannot update action that has already been executed",
        )

    data = await request.json()

    if "scheduled_time" in data:
        action.scheduled_time = datetime.fromisoformat(data["scheduled_time"])
    if "notes" in data:
        action.notes = data["notes"]

    db.commit()

    return {"success": True, "message": "Action updated successfully"}


@router.post("/actions/{action_id}/cancel")
async def cancel_scheduled_action(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Cancel a pending scheduled action.
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Can only cancel pending actions",
        )

    action.execution_status = "cancelled"
    db.commit()

    return {"success": True, "message": "Action cancelled successfully"}


@router.delete("/actions/{action_id}")
async def delete_scheduled_action(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Delete a scheduled action (only if pending or cancelled).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status not in ["pending", "cancelled"]:
        raise HTTPException(
            status_code=400,
            detail="Cannot delete action that has been executed",
        )

    db.delete(action)
    db.commit()

    return {"success": True, "message": "Action deleted successfully"}


# ============================================================================
# Manual Execution
# ============================================================================

@router.post("/actions/{action_id}/execute")
async def execute_action_now(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Manually trigger execution of a scheduled action (for testing/debugging).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Action is not pending",
        )

    # Execute via scheduler service
    scheduler = get_scheduler()
    result = await scheduler.execute_action_by_id(action_id)

    # Refresh from DB to get updated status
    db.refresh(action)

    return JSONResponse({
        "success": result.get("success", False),
        "result": result,
        "action": {
            "id": action.id,
            "execution_status": action.execution_status,
            "executed_at": action.executed_at.isoformat() if action.executed_at else None,
            "error_message": action.error_message,
        },
    })


# ============================================================================
# Scheduler Status
# ============================================================================

@router.get("/status")
async def get_scheduler_status():
    """
    Get scheduler service status.
    """
    scheduler = get_scheduler()

    return {
        "running": scheduler.running,
        "check_interval": scheduler.check_interval,
    }


@router.post("/execute-pending")
async def trigger_pending_execution():
    """
    Manually trigger execution of all pending actions (for testing).
    """
    scheduler = get_scheduler()
    results = await scheduler.execute_pending_actions()

    return {
        "success": True,
        "executed_count": len(results),
        "results": results,
    }
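A hypothetical sketch of scheduling a session through this router; the start/stop pairing means one call yields two pending actions (base URL, project ID, and location ID are illustrative assumptions):

# Hypothetical client sketch for /schedule-session (IDs and base URL assumed).
import httpx

project_id = "proj-123"  # illustrative
resp = httpx.post(
    f"http://localhost:8000/api/projects/{project_id}/scheduler/schedule-session",
    data={
        "location_id": "loc-456",             # illustrative
        "start_time": "2026-02-01T09:00:00",  # ISO 8601, parsed via datetime.fromisoformat
        "duration_minutes": "120",            # the stop action lands two hours later
    },
)
print(resp.json())  # contains start_action_id and stop_action_id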
@@ -5,13 +5,12 @@ Provides endpoints for the seismograph-specific dashboard
 from fastapi import APIRouter, Request, Depends, Query
 from fastapi.responses import HTMLResponse
-from fastapi.templating import Jinja2Templates
 from sqlalchemy.orm import Session
-from app.seismo.database import get_db
-from app.seismo.models import RosterUnit
+from backend.database import get_db
+from backend.models import RosterUnit
+from backend.templates_config import templates

 router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"])
-templates = Jinja2Templates(directory="app/ui/templates")


 @router.get("/stats", response_class=HTMLResponse)
@@ -9,9 +9,9 @@ import io
 import shutil
 from pathlib import Path

-from app.seismo.database import get_db
-from app.seismo.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
-from app.seismo.services.database_backup import DatabaseBackupService
+from backend.database import get_db
+from backend.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
+from backend.services.database_backup import DatabaseBackupService

 router = APIRouter(prefix="/api/settings", tags=["settings"])
@@ -477,3 +477,75 @@ async def upload_snapshot(file: UploadFile = File(...)):

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Upload failed: {str(e)}")


# ============================================================================
# SLMM SYNC ENDPOINTS
# ============================================================================

@router.post("/slmm/sync-all")
async def sync_all_slms(db: Session = Depends(get_db)):
    """
    Manually trigger full sync of all SLM devices from Terra-View roster to SLMM.

    This ensures the SLMM database matches the Terra-View roster (source of truth).
    Also cleans up orphaned devices in SLMM that are not in Terra-View.
    """
    from backend.services.slmm_sync import sync_all_slms_to_slmm, cleanup_orphaned_slmm_devices

    try:
        # Sync all SLMs
        sync_results = await sync_all_slms_to_slmm(db)

        # Clean up orphaned devices
        cleanup_results = await cleanup_orphaned_slmm_devices(db)

        return {
            "status": "ok",
            "sync": sync_results,
            "cleanup": cleanup_results
        }

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Sync failed: {str(e)}")


@router.get("/slmm/status")
async def get_slmm_sync_status(db: Session = Depends(get_db)):
    """
    Get status of SLMM synchronization.

    Shows which devices are in the Terra-View roster vs the SLMM database.
    """
    from backend.services.slmm_sync import get_slmm_devices

    try:
        # Get devices from both systems
        roster_slms = db.query(RosterUnit).filter_by(device_type="slm").all()
        slmm_devices = await get_slmm_devices()

        if slmm_devices is None:
            raise HTTPException(status_code=503, detail="SLMM service unavailable")

        # Compare by unit ID (was unit.unit_type, which would never match SLMM device IDs)
        roster_unit_ids = {unit.id for unit in roster_slms}
        slmm_unit_ids = set(slmm_devices)

        # Find differences
        in_roster_only = roster_unit_ids - slmm_unit_ids
        in_slmm_only = slmm_unit_ids - roster_unit_ids
        in_both = roster_unit_ids & slmm_unit_ids

        return {
            "status": "ok",
            "terra_view_total": len(roster_unit_ids),
            "slmm_total": len(slmm_unit_ids),
            "synced": len(in_both),
            "missing_from_slmm": list(in_roster_only),
            "orphaned_in_slmm": list(in_slmm_only),
            "in_sync": len(in_roster_only) == 0 and len(in_slmm_only) == 0
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Status check failed: {str(e)}")
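Together these two endpoints support a simple check-then-repair loop. A hypothetical ops sketch (base URL assumed):

# Hypothetical ops sketch: detect roster/SLMM drift and resync (base URL assumed).
import httpx

base = "http://localhost:8000/api/settings"
status = httpx.get(f"{base}/slmm/status").json()

if not status["in_sync"]:
    print("Missing from SLMM:", status["missing_from_slmm"])
    print("Orphaned in SLMM:", status["orphaned_in_slmm"])
    result = httpx.post(f"{base}/slmm/sync-all").json()
    print("Resync:", result["status"])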
363
backend/routers/slm_dashboard.py
Normal file
@@ -0,0 +1,363 @@
"""
SLM Dashboard Router

Provides API endpoints for the Sound Level Meters dashboard page.
"""

from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from sqlalchemy import func
from datetime import datetime, timedelta
import asyncio
import httpx
import logging
import os

from backend.database import get_db
from backend.models import RosterUnit
from backend.routers.roster_edit import sync_slm_to_slmm_cache
from backend.templates_config import templates

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])

# SLMM backend URL - configurable via environment variable
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")


@router.get("/stats", response_class=HTMLResponse)
async def get_slm_stats(request: Request, db: Session = Depends(get_db)):
    """
    Get summary statistics for SLM dashboard.
    Returns HTML partial with stat cards.
    """
    # Query all SLMs
    all_slms = db.query(RosterUnit).filter_by(device_type="slm").all()

    # Count deployed vs benched
    deployed_count = sum(1 for slm in all_slms if slm.deployed and not slm.retired)
    benched_count = sum(1 for slm in all_slms if not slm.deployed and not slm.retired)
    retired_count = sum(1 for slm in all_slms if slm.retired)

    # Count recently active (checked in last hour)
    one_hour_ago = datetime.utcnow() - timedelta(hours=1)
    active_count = sum(1 for slm in all_slms
                       if slm.slm_last_check and slm.slm_last_check > one_hour_ago)

    return templates.TemplateResponse("partials/slm_stats.html", {
        "request": request,
        "total_count": len(all_slms),
        "deployed_count": deployed_count,
        "benched_count": benched_count,
        "active_count": active_count,
        "retired_count": retired_count
    })


@router.get("/units", response_class=HTMLResponse)
async def get_slm_units(
    request: Request,
    db: Session = Depends(get_db),
    search: str = Query(None),
    project: str = Query(None),
    include_measurement: bool = Query(False),
):
    """
    Get list of SLM units for the sidebar.
    Returns HTML partial with unit cards.
    """
    query = db.query(RosterUnit).filter_by(device_type="slm")

    # Filter by project if provided
    if project:
        query = query.filter(RosterUnit.project_id == project)

    # Filter by search term if provided
    if search:
        search_term = f"%{search}%"
        query = query.filter(
            (RosterUnit.id.like(search_term)) |
            (RosterUnit.slm_model.like(search_term)) |
            (RosterUnit.address.like(search_term))
        )

    units = query.order_by(
        RosterUnit.retired.asc(),
        RosterUnit.deployed.desc(),
        RosterUnit.id.asc()
    ).all()

    one_hour_ago = datetime.utcnow() - timedelta(hours=1)
    for unit in units:
        unit.is_recent = bool(unit.slm_last_check and unit.slm_last_check > one_hour_ago)

    if include_measurement:
        async def fetch_measurement_state(client: httpx.AsyncClient, unit_id: str) -> str | None:
            try:
                response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state")
                if response.status_code == 200:
                    return response.json().get("measurement_state")
            except Exception:
                return None
            return None

        deployed_units = [unit for unit in units if unit.deployed and not unit.retired]
        if deployed_units:
            async with httpx.AsyncClient(timeout=3.0) as client:
                tasks = [fetch_measurement_state(client, unit.id) for unit in deployed_units]
                results = await asyncio.gather(*tasks, return_exceptions=True)

            for unit, state in zip(deployed_units, results):
                if isinstance(state, Exception):
                    unit.measurement_state = None
                else:
                    unit.measurement_state = state

    return templates.TemplateResponse("partials/slm_device_list.html", {
        "request": request,
        "units": units
    })


@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
async def get_live_view(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Get live view panel for a specific SLM unit.
    Returns HTML partial with live metrics and chart.
    """
    # Get unit from database
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return templates.TemplateResponse("partials/slm_live_view_error.html", {
            "request": request,
            "error": f"Unit {unit_id} not found"
        })

    # Get modem information if assigned
    modem = None
    modem_ip = None
    if unit.deployed_with_modem_id:
        modem = db.query(RosterUnit).filter_by(id=unit.deployed_with_modem_id, device_type="modem").first()
        if modem:
            modem_ip = modem.ip_address
        else:
            logger.warning(f"SLM {unit_id} is assigned to modem {unit.deployed_with_modem_id} but modem not found")

    # Fallback to direct slm_host if no modem assigned (backward compatibility)
    if not modem_ip and unit.slm_host:
        modem_ip = unit.slm_host
        logger.info(f"Using legacy slm_host for {unit_id}: {modem_ip}")

    # Try to get current status from SLMM
    current_status = None
    measurement_state = None
    is_measuring = False

    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            # Get measurement state
            state_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
            )
            if state_response.status_code == 200:
                state_data = state_response.json()
                measurement_state = state_data.get("measurement_state", "Unknown")
                is_measuring = state_data.get("is_measuring", False)

            # Get live status (measurement_start_time is already stored in SLMM database)
            status_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/live"
            )
            if status_response.status_code == 200:
                status_data = status_response.json()
                current_status = status_data.get("data", {})
    except Exception as e:
        logger.error(f"Failed to get status for {unit_id}: {e}")

    return templates.TemplateResponse("partials/slm_live_view.html", {
        "request": request,
        "unit": unit,
        "modem": modem,
        "modem_ip": modem_ip,
        "current_status": current_status,
        "measurement_state": measurement_state,
        "is_measuring": is_measuring
    })


@router.post("/control/{unit_id}/{action}")
async def control_slm(unit_id: str, action: str):
    """
    Send control commands to SLM (start, stop, pause, resume, reset).
    Proxies to SLMM backend.
    """
    valid_actions = ["start", "stop", "pause", "resume", "reset"]

    if action not in valid_actions:
        return {"status": "error", "detail": f"Invalid action. Must be one of: {valid_actions}"}

    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            response = await client.post(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/{action}"
            )

            if response.status_code == 200:
                return response.json()
            else:
                return {
                    "status": "error",
                    "detail": f"SLMM returned status {response.status_code}"
                }
    except Exception as e:
        logger.error(f"Failed to control {unit_id}: {e}")
        return {
            "status": "error",
            "detail": str(e)
        }


@router.get("/config/{unit_id}", response_class=HTMLResponse)
async def get_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Get configuration form for a specific SLM unit.
    Returns HTML partial with configuration form.
    """
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return HTMLResponse(
            content=f'<div class="text-red-500">Unit {unit_id} not found</div>',
            status_code=404
        )

    return templates.TemplateResponse("partials/slm_config_form.html", {
        "request": request,
        "unit": unit
    })


@router.post("/config/{unit_id}")
async def save_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Save SLM configuration.
    Updates unit parameters in the database.
    """
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return {"status": "error", "detail": f"Unit {unit_id} not found"}

    try:
        # Get form data
        form_data = await request.form()

        # Update SLM-specific fields
        unit.slm_model = form_data.get("slm_model") or None
        unit.slm_serial_number = form_data.get("slm_serial_number") or None
        unit.slm_frequency_weighting = form_data.get("slm_frequency_weighting") or None
        unit.slm_time_weighting = form_data.get("slm_time_weighting") or None
        unit.slm_measurement_range = form_data.get("slm_measurement_range") or None

        # Update network configuration
        modem_id = form_data.get("deployed_with_modem_id")
        unit.deployed_with_modem_id = modem_id if modem_id else None

        # Always update TCP and FTP ports (used regardless of modem assignment)
        unit.slm_tcp_port = int(form_data.get("slm_tcp_port")) if form_data.get("slm_tcp_port") else None
        unit.slm_ftp_port = int(form_data.get("slm_ftp_port")) if form_data.get("slm_ftp_port") else None

        # Only update direct IP if no modem is assigned
        if not modem_id:
            unit.slm_host = form_data.get("slm_host") or None
        else:
            # Clear legacy direct IP field when modem is assigned
            unit.slm_host = None

        db.commit()
        logger.info(f"Updated configuration for SLM {unit_id}")

        # Sync updated configuration to SLMM cache
        logger.info(f"Syncing SLM {unit_id} config changes to SLMM cache...")
        result = await sync_slm_to_slmm_cache(
            unit_id=unit_id,
            host=unit.slm_host,  # Use the updated host from Terra-View
            tcp_port=unit.slm_tcp_port,
            ftp_port=unit.slm_ftp_port,
            deployed_with_modem_id=unit.deployed_with_modem_id,  # Resolve modem IP if assigned
            db=db
        )

        if not result["success"]:
            logger.warning(f"SLMM cache sync warning for {unit_id}: {result['message']}")
            # Config still saved in Terra-View (source of truth)

        return {"status": "success", "unit_id": unit_id}

    except Exception as e:
        db.rollback()
        logger.error(f"Failed to save config for {unit_id}: {e}")
        return {"status": "error", "detail": str(e)}


@router.get("/test-modem/{modem_id}")
async def test_modem_connection(modem_id: str, db: Session = Depends(get_db)):
    """
    Test modem connectivity with a simple ping/health check.
    Returns response time and connection status.
    """
    import subprocess
    import time

    # Get modem from database
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()

    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    if not modem.ip_address:
        return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}

    try:
        # Ping the modem (1 packet, 2 second timeout)
        start_time = time.time()
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", modem.ip_address],
            capture_output=True,
            text=True,
            timeout=3
        )
        response_time = int((time.time() - start_time) * 1000)  # Convert to milliseconds

        if result.returncode == 0:
            return {
                "status": "success",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "response_time": response_time,
                "message": "Modem is responding to ping"
            }
        else:
            return {
                "status": "error",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "detail": "Modem not responding to ping"
            }

    except subprocess.TimeoutExpired:
        return {
            "status": "error",
            "modem_id": modem_id,
            "ip_address": modem.ip_address,
            "detail": "Ping timeout (> 2 seconds)"
        }
    except Exception as e:
        logger.error(f"Failed to ping modem {modem_id}: {e}")
        return {
            "status": "error",
            "modem_id": modem_id,
            "detail": str(e)
        }
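A hypothetical control sketch against this router; note that control errors come back as {"status": "error", ...} payloads rather than HTTP error codes (base URL and unit ID assumed):

# Hypothetical control sketch: start a measurement via the dashboard proxy.
import httpx

base = "http://localhost:8000/api/slm-dashboard"
result = httpx.post(f"{base}/control/NL43-001/start").json()  # unit ID illustrative
if result.get("status") == "error":
    print("Start failed:", result["detail"])
else:
    print("Start accepted:", result)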
122
backend/routers/slm_ui.py
Normal file
@@ -0,0 +1,122 @@
"""
Sound Level Meter UI Router

Provides endpoints for SLM dashboard cards, detail pages, and real-time data.
"""

from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from datetime import datetime
import httpx
import logging
import os

from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/slm", tags=["slm-ui"])

SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://172.19.0.1:8100")


@router.get("/{unit_id}", response_class=HTMLResponse)
async def slm_detail_page(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Sound level meter detail page with controls."""

    # Get roster unit
    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    return templates.TemplateResponse("slm_detail.html", {
        "request": request,
        "unit": unit,
        "unit_id": unit_id
    })


@router.get("/api/{unit_id}/summary")
async def get_slm_summary(unit_id: str, db: Session = Depends(get_db)):
    """Get SLM summary data for dashboard card."""

    # Get roster unit
    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    # Try to get live status from SLMM
    status_data = None
    try:
        async with httpx.AsyncClient(timeout=3.0) as client:
            response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/status")
            if response.status_code == 200:
                status_data = response.json().get("data")
    except Exception as e:
        logger.warning(f"Failed to get SLM status for {unit_id}: {e}")

    return {
        "unit_id": unit_id,
        "device_type": "slm",
        "deployed": unit.deployed,
        "model": unit.slm_model or "NL-43",
        "location": unit.address or unit.location,
        "coordinates": unit.coordinates,
        "note": unit.note,
        "status": status_data,
        "last_check": unit.slm_last_check.isoformat() if unit.slm_last_check else None,
    }


@router.get("/partials/{unit_id}/card", response_class=HTMLResponse)
async def slm_dashboard_card(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Render SLM dashboard card partial."""

    summary = await get_slm_summary(unit_id, db)

    return templates.TemplateResponse("partials/slm_card.html", {
        "request": request,
        "slm": summary
    })


@router.get("/partials/{unit_id}/controls", response_class=HTMLResponse)
async def slm_controls_partial(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Render SLM control panel partial."""

    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    # Get current status from SLMM
    measurement_state = None
    battery_level = None
    try:
        async with httpx.AsyncClient(timeout=3.0) as client:
            # Get measurement state
            state_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
            )
            if state_response.status_code == 200:
                measurement_state = state_response.json().get("measurement_state")

            # Get battery level
            battery_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/battery"
            )
            if battery_response.status_code == 200:
                battery_level = battery_response.json().get("battery_level")
    except Exception as e:
        logger.warning(f"Failed to get SLM control data for {unit_id}: {e}")

    return templates.TemplateResponse("partials/slm_controls.html", {
        "request": request,
        "unit_id": unit_id,
        "unit": unit,
        "measurement_state": measurement_state,
        "battery_level": battery_level,
        "is_measuring": measurement_state == "Start"
    })
301
backend/routers/slmm.py
Normal file
@@ -0,0 +1,301 @@
|
||||
"""
|
||||
SLMM (Sound Level Meter Manager) Proxy Router
|
||||
|
||||
Proxies requests from SFM to the standalone SLMM backend service.
|
||||
SLMM runs on port 8100 and handles NL43/NL53 sound level meter communication.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, HTTPException, Request, Response, WebSocket, WebSocketDisconnect
|
||||
from fastapi.responses import StreamingResponse
|
||||
import httpx
|
||||
import websockets
|
||||
import asyncio
|
||||
import logging
|
||||
import os
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/api/slmm", tags=["slmm"])
|
||||
|
||||
# SLMM backend URL - configurable via environment variable
|
||||
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
|
||||
# WebSocket URL derived from HTTP URL
|
||||
SLMM_WS_BASE_URL = SLMM_BASE_URL.replace("http://", "ws://").replace("https://", "wss://")
|
||||
|
||||
|
||||
@router.get("/health")
|
||||
async def check_slmm_health():
|
||||
"""
|
||||
Check if the SLMM backend service is reachable and healthy.
|
||||
"""
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
response = await client.get(f"{SLMM_BASE_URL}/health")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
return {
|
||||
"status": "ok",
|
||||
"slmm_status": "connected",
|
||||
"slmm_url": SLMM_BASE_URL,
|
||||
"slmm_version": data.get("version", "unknown"),
|
||||
"slmm_response": data
|
||||
}
|
||||
else:
|
||||
return {
|
||||
"status": "degraded",
|
||||
"slmm_status": "error",
|
||||
"slmm_url": SLMM_BASE_URL,
|
||||
"detail": f"SLMM returned status {response.status_code}"
|
||||
}
|
||||
|
||||
except httpx.ConnectError:
|
||||
return {
|
||||
"status": "error",
|
||||
"slmm_status": "unreachable",
|
||||
"slmm_url": SLMM_BASE_URL,
|
||||
"detail": "Cannot connect to SLMM backend. Is it running?"
|
||||
}
|
||||
except Exception as e:
|
||||
return {
|
||||
"status": "error",
|
||||
"slmm_status": "error",
|
||||
"slmm_url": SLMM_BASE_URL,
|
||||
"detail": str(e)
|
||||
}
|
||||
|
||||
|
||||
# WebSocket routes MUST come before the catch-all route
|
||||
@router.websocket("/{unit_id}/stream")
|
||||
async def proxy_websocket_stream(websocket: WebSocket, unit_id: str):
|
||||
"""
|
||||
Proxy WebSocket connections to SLMM's /stream endpoint.
|
||||
|
||||
This allows real-time streaming of measurement data from NL43 devices
|
||||
through the SFM unified interface.
|
||||
"""
|
||||
await websocket.accept()
|
||||
logger.info(f"WebSocket connection accepted for SLMM unit {unit_id}")
|
||||
|
||||
# Build target WebSocket URL
|
||||
target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
|
||||
logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")
|
||||
|
||||
backend_ws = None
|
||||
|
||||
try:
|
||||
# Connect to SLMM backend WebSocket
|
||||
backend_ws = await websockets.connect(target_ws_url)
|
||||
logger.info(f"Connected to SLMM backend WebSocket for {unit_id}")
|
||||
|
||||
# Create tasks for bidirectional communication
|
||||
async def forward_to_backend():
|
||||
"""Forward messages from client to SLMM backend"""
|
||||
try:
|
||||
while True:
|
||||
data = await websocket.receive_text()
|
||||
await backend_ws.send(data)
|
||||
except WebSocketDisconnect:
|
||||
logger.info(f"Client WebSocket disconnected for {unit_id}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to backend: {e}")
|
||||
|
||||
async def forward_to_client():
|
||||
"""Forward messages from SLMM backend to client"""
|
||||
try:
|
||||
async for message in backend_ws:
|
||||
await websocket.send_text(message)
|
||||
except websockets.exceptions.ConnectionClosed:
|
||||
logger.info(f"Backend WebSocket closed for {unit_id}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to client: {e}")
|
||||
|
||||
# Run both forwarding tasks concurrently
|
||||
await asyncio.gather(
|
||||
forward_to_backend(),
|
||||
forward_to_client(),
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
except websockets.exceptions.WebSocketException as e:
|
||||
logger.error(f"WebSocket error connecting to SLMM backend: {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Failed to connect to SLMM backend",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
except Exception as e:
|
||||
logger.error(f"Unexpected error in WebSocket proxy for {unit_id}: {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Internal server error",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
finally:
|
||||
# Clean up connections
|
||||
if backend_ws:
|
||||
try:
|
||||
await backend_ws.close()
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
await websocket.close()
|
||||
except Exception:
|
||||
pass
|
||||
logger.info(f"WebSocket proxy closed for {unit_id}")
|
||||
|
||||
|
||||
@router.websocket("/{unit_id}/live")
|
||||
async def proxy_websocket_live(websocket: WebSocket, unit_id: str):
|
||||
"""
|
||||
Proxy WebSocket connections to SLMM's /live endpoint.
|
||||
|
||||
Alternative WebSocket endpoint that may be used by some frontend components.
|
||||
"""
|
||||
await websocket.accept()
|
||||
logger.info(f"WebSocket connection accepted for SLMM unit {unit_id} (live endpoint)")
|
||||
|
||||
# Build target WebSocket URL - try /stream endpoint as SLMM uses that for WebSocket
|
||||
target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
|
||||
logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")
|
||||
|
||||
backend_ws = None
|
||||
|
||||
try:
|
||||
# Connect to SLMM backend WebSocket
|
||||
backend_ws = await websockets.connect(target_ws_url)
|
||||
logger.info(f"Connected to SLMM backend WebSocket for {unit_id} (live endpoint)")
|
||||
|
||||
# Create tasks for bidirectional communication
|
||||
async def forward_to_backend():
|
||||
"""Forward messages from client to SLMM backend"""
|
||||
try:
|
||||
while True:
|
||||
data = await websocket.receive_text()
|
||||
await backend_ws.send(data)
|
||||
except WebSocketDisconnect:
|
||||
logger.info(f"Client WebSocket disconnected for {unit_id} (live)")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to backend (live): {e}")
|
||||
|
||||
async def forward_to_client():
|
||||
"""Forward messages from SLMM backend to client"""
|
||||
try:
|
||||
async for message in backend_ws:
|
||||
await websocket.send_text(message)
|
||||
            except websockets.exceptions.ConnectionClosed:
                logger.info(f"Backend WebSocket closed for {unit_id} (live)")
            except Exception as e:
                logger.error(f"Error forwarding to client (live): {e}")

        # Run both forwarding tasks concurrently
        await asyncio.gather(
            forward_to_backend(),
            forward_to_client(),
            return_exceptions=True
        )

    except websockets.exceptions.WebSocketException as e:
        logger.error(f"WebSocket error connecting to SLMM backend (live): {e}")
        try:
            await websocket.send_json({
                "error": "Failed to connect to SLMM backend",
                "detail": str(e)
            })
        except Exception:
            pass
    except Exception as e:
        logger.error(f"Unexpected error in WebSocket proxy for {unit_id} (live): {e}")
        try:
            await websocket.send_json({
                "error": "Internal server error",
                "detail": str(e)
            })
        except Exception:
            pass
    finally:
        # Clean up connections
        if backend_ws:
            try:
                await backend_ws.close()
            except Exception:
                pass
        try:
            await websocket.close()
        except Exception:
            pass
        logger.info(f"WebSocket proxy closed for {unit_id} (live)")


# HTTP catch-all route MUST come after specific routes (including WebSocket routes)
@router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
async def proxy_to_slmm(path: str, request: Request):
    """
    Proxy all requests to the SLMM backend service.

    This allows SFM to act as a unified frontend for all device types,
    while SLMM remains a standalone backend service.
    """
    # Build target URL
    target_url = f"{SLMM_BASE_URL}/api/nl43/{path}"

    # Get query parameters
    query_params = dict(request.query_params)

    # Get request body if present
    body = None
    if request.method in ["POST", "PUT", "PATCH"]:
        try:
            body = await request.body()
        except Exception as e:
            logger.error(f"Failed to read request body: {e}")
            body = None

    # Get headers (exclude host and other proxy-specific headers)
    headers = dict(request.headers)
    headers_to_exclude = ["host", "content-length", "transfer-encoding", "connection"]
    proxy_headers = {k: v for k, v in headers.items() if k.lower() not in headers_to_exclude}

    logger.info(f"Proxying {request.method} request to SLMM: {target_url}")

    try:
        async with httpx.AsyncClient(timeout=30.0) as client:
            # Forward the request to SLMM
            response = await client.request(
                method=request.method,
                url=target_url,
                params=query_params,
                headers=proxy_headers,
                content=body
            )

            # Return the response from SLMM
            return Response(
                content=response.content,
                status_code=response.status_code,
                headers=dict(response.headers),
                media_type=response.headers.get("content-type")
            )

    except httpx.ConnectError:
        logger.error(f"Failed to connect to SLMM backend at {SLMM_BASE_URL}")
        raise HTTPException(
            status_code=503,
            detail=f"SLMM backend service unavailable. Is SLMM running on {SLMM_BASE_URL}?"
        )
    except httpx.TimeoutException:
        logger.error(f"Timeout connecting to SLMM backend at {SLMM_BASE_URL}")
        raise HTTPException(
            status_code=504,
            detail="SLMM backend timeout"
        )
    except Exception as e:
        logger.error(f"Error proxying to SLMM: {e}")
        raise HTTPException(
            status_code=500,
            detail=f"Failed to proxy request to SLMM: {str(e)}"
        )
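
The route-ordering caveat above is worth making concrete: FastAPI/Starlette tries routes in registration order, so a `/{path:path}` route declared first would shadow every more specific path, including the WebSocket proxy endpoints. A minimal sketch, assuming a bare APIRouter (endpoint names and payloads here are illustrative, not from this commit):

```python
# Minimal sketch (illustrative, not part of this commit): route order matters
# because Starlette matches routes in the order they were registered.
from fastapi import APIRouter

router = APIRouter()

@router.get("/_polling/status")  # specific route: register before the catch-all
async def polling_status():
    return {"devices": []}

@router.api_route("/{path:path}", methods=["GET", "POST"])  # catch-all: register last
async def catch_all(path: str):
    return {"proxied": path}
```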
@@ -3,8 +3,9 @@ from sqlalchemy.orm import Session
 from datetime import datetime
 from typing import Dict, Any

-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot
+from backend.models import RosterUnit

 router = APIRouter(prefix="/api", tags=["units"])

@@ -42,3 +43,32 @@ def get_unit_detail(unit_id: str, db: Session = Depends(get_db)):
         "note": unit_data.get("note", ""),
         "coordinates": coords
     }
+
+
+@router.get("/units/{unit_id}")
+def get_unit_by_id(unit_id: str, db: Session = Depends(get_db)):
+    """
+    Get unit data directly from the roster (for settings/configuration).
+    """
+    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
+
+    if not unit:
+        raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
+
+    return {
+        "id": unit.id,
+        "unit_type": unit.unit_type,
+        "device_type": unit.device_type,
+        "deployed": unit.deployed,
+        "retired": unit.retired,
+        "note": unit.note,
+        "location": unit.location,
+        "address": unit.address,
+        "coordinates": unit.coordinates,
+        "slm_host": unit.slm_host,
+        "slm_tcp_port": unit.slm_tcp_port,
+        "slm_ftp_port": unit.slm_ftp_port,
+        "slm_model": unit.slm_model,
+        "slm_serial_number": unit.slm_serial_number,
+        "deployed_with_modem_id": unit.deployed_with_modem_id
+    }
@@ -4,8 +4,8 @@ from pydantic import BaseModel
 from datetime import datetime
 from typing import Optional, List

-from app.seismo.database import get_db
-from app.seismo.models import Emitter
+from backend.database import get_db
+from backend.models import Emitter

 router = APIRouter()

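
A quick sketch of calling the new roster endpoint added above, assuming the default local deployment (the host, port, and unit ID are placeholders, not from this diff):

```python
# Hypothetical client call against the new GET /api/units/{unit_id} endpoint.
import httpx

resp = httpx.get("http://localhost:8000/api/units/nl43-001")
resp.raise_for_status()
unit = resp.json()
# Connection settings used by the SLM module live on the roster record.
print(unit["slm_host"], unit["slm_tcp_port"], unit["slm_ftp_port"])
```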
462  backend/services/alert_service.py  Normal file
@@ -0,0 +1,462 @@
"""
Alert Service

Manages in-app alerts for device status changes and system events.
Provides foundation for future notification channels (email, webhook).
"""

import json
import uuid
import logging
from datetime import datetime, timedelta
from typing import Optional, List, Dict, Any

from sqlalchemy.orm import Session
from sqlalchemy import and_, or_

from backend.models import Alert, RosterUnit

logger = logging.getLogger(__name__)


class AlertService:
    """
    Service for managing alerts.

    Handles alert lifecycle:
    - Create alerts from various triggers
    - Query active alerts
    - Acknowledge/resolve/dismiss alerts
    - (Future) Dispatch to notification channels
    """

    def __init__(self, db: Session):
        self.db = db

    def create_alert(
        self,
        alert_type: str,
        title: str,
        message: str = None,
        severity: str = "warning",
        unit_id: str = None,
        project_id: str = None,
        location_id: str = None,
        schedule_id: str = None,
        metadata: dict = None,
        expires_hours: int = 24,
    ) -> Alert:
        """
        Create a new alert.

        Args:
            alert_type: Type of alert (device_offline, device_online, schedule_failed)
            title: Short alert title
            message: Detailed description
            severity: info, warning, or critical
            unit_id: Related unit ID (optional)
            project_id: Related project ID (optional)
            location_id: Related location ID (optional)
            schedule_id: Related schedule ID (optional)
            metadata: Additional JSON data
            expires_hours: Hours until auto-expiry (default 24)

        Returns:
            Created Alert instance
        """
        alert = Alert(
            id=str(uuid.uuid4()),
            alert_type=alert_type,
            title=title,
            message=message,
            severity=severity,
            unit_id=unit_id,
            project_id=project_id,
            location_id=location_id,
            schedule_id=schedule_id,
            alert_metadata=json.dumps(metadata) if metadata else None,
            status="active",
            expires_at=datetime.utcnow() + timedelta(hours=expires_hours),
        )

        self.db.add(alert)
        self.db.commit()
        self.db.refresh(alert)

        logger.info(f"Created alert: {alert.title} ({alert.alert_type})")
        return alert

    def create_device_offline_alert(
        self,
        unit_id: str,
        consecutive_failures: int = 0,
        last_error: str = None,
    ) -> Optional[Alert]:
        """
        Create alert when device becomes unreachable.

        Only creates if no active offline alert exists for this device.

        Args:
            unit_id: The unit that went offline
            consecutive_failures: Number of consecutive poll failures
            last_error: Last error message from polling

        Returns:
            Created Alert or None if alert already exists
        """
        # Check if active offline alert already exists
        existing = self.db.query(Alert).filter(
            and_(
                Alert.unit_id == unit_id,
                Alert.alert_type == "device_offline",
                Alert.status == "active",
            )
        ).first()

        if existing:
            logger.debug(f"Offline alert already exists for {unit_id}")
            return None

        # Get unit info for title
        unit = self.db.query(RosterUnit).filter_by(id=unit_id).first()
        unit_name = unit.id if unit else unit_id

        # Determine severity based on failure count
        severity = "critical" if consecutive_failures >= 5 else "warning"

        return self.create_alert(
            alert_type="device_offline",
            title=f"{unit_name} is offline",
            message=f"Device has been unreachable after {consecutive_failures} failed connection attempts."
            + (f" Last error: {last_error}" if last_error else ""),
            severity=severity,
            unit_id=unit_id,
            metadata={
                "consecutive_failures": consecutive_failures,
                "last_error": last_error,
            },
            expires_hours=48,  # Offline alerts stay longer
        )

    def resolve_device_offline_alert(self, unit_id: str) -> Optional[Alert]:
        """
        Auto-resolve offline alert when device comes back online.

        Also creates a "device_online" info alert to notify the user.

        Args:
            unit_id: The unit that came back online

        Returns:
            The resolved Alert or None if no alert existed
        """
        # Find active offline alert
        alert = self.db.query(Alert).filter(
            and_(
                Alert.unit_id == unit_id,
                Alert.alert_type == "device_offline",
                Alert.status == "active",
            )
        ).first()

        if not alert:
            return None

        # Resolve the offline alert
        alert.status = "resolved"
        alert.resolved_at = datetime.utcnow()
        self.db.commit()

        logger.info(f"Resolved offline alert for {unit_id}")

        # Create online notification
        unit = self.db.query(RosterUnit).filter_by(id=unit_id).first()
        unit_name = unit.id if unit else unit_id

        self.create_alert(
            alert_type="device_online",
            title=f"{unit_name} is back online",
            message="Device connection has been restored.",
            severity="info",
            unit_id=unit_id,
            expires_hours=6,  # Info alerts expire quickly
        )

        return alert

    def create_schedule_failed_alert(
        self,
        schedule_id: str,
        action_type: str,
        unit_id: str = None,
        error_message: str = None,
        project_id: str = None,
        location_id: str = None,
    ) -> Alert:
        """
        Create alert when a scheduled action fails.

        Args:
            schedule_id: The ScheduledAction or RecurringSchedule ID
            action_type: start, stop, download
            unit_id: Related unit
            error_message: Error from execution
            project_id: Related project
            location_id: Related location

        Returns:
            Created Alert
        """
        return self.create_alert(
            alert_type="schedule_failed",
            title=f"Scheduled {action_type} failed",
            message=error_message or f"The scheduled {action_type} action did not complete successfully.",
            severity="warning",
            unit_id=unit_id,
            project_id=project_id,
            location_id=location_id,
            schedule_id=schedule_id,
            metadata={"action_type": action_type},
            expires_hours=24,
        )

    def create_schedule_completed_alert(
        self,
        schedule_id: str,
        action_type: str,
        unit_id: str = None,
        project_id: str = None,
        location_id: str = None,
        metadata: dict = None,
    ) -> Alert:
        """
        Create alert when a scheduled action completes successfully.

        Args:
            schedule_id: The ScheduledAction ID
            action_type: start, stop, download
            unit_id: Related unit
            project_id: Related project
            location_id: Related location
            metadata: Additional info (e.g., downloaded folder, index numbers)

        Returns:
            Created Alert
        """
        # Build descriptive message based on action type and metadata
        if action_type == "stop" and metadata:
            download_folder = metadata.get("downloaded_folder")
            download_success = metadata.get("download_success", False)
            if download_success and download_folder:
                message = f"Measurement stopped and data downloaded ({download_folder})"
            elif download_success is False and metadata.get("download_attempted"):
                message = "Measurement stopped but download failed"
            else:
                message = "Measurement stopped successfully"
        elif action_type == "start" and metadata:
            new_index = metadata.get("new_index")
            if new_index is not None:
                message = f"Measurement started (index {new_index:04d})"
            else:
                message = "Measurement started successfully"
        else:
            message = f"Scheduled {action_type} completed successfully"

        return self.create_alert(
            alert_type="schedule_completed",
            title=f"Scheduled {action_type} completed",
            message=message,
            severity="info",
            unit_id=unit_id,
            project_id=project_id,
            location_id=location_id,
            schedule_id=schedule_id,
            metadata={"action_type": action_type, **(metadata or {})},
            expires_hours=12,  # Info alerts expire quickly
        )

    def get_active_alerts(
        self,
        project_id: str = None,
        unit_id: str = None,
        alert_type: str = None,
        min_severity: str = None,
        limit: int = 50,
    ) -> List[Alert]:
        """
        Query active alerts with optional filters.

        Args:
            project_id: Filter by project
            unit_id: Filter by unit
            alert_type: Filter by alert type
            min_severity: Minimum severity (info, warning, critical)
            limit: Maximum results

        Returns:
            List of matching alerts
        """
        query = self.db.query(Alert).filter(Alert.status == "active")

        if project_id:
            query = query.filter(Alert.project_id == project_id)

        if unit_id:
            query = query.filter(Alert.unit_id == unit_id)

        if alert_type:
            query = query.filter(Alert.alert_type == alert_type)

        if min_severity:
            # Map severity to numeric for comparison
            severity_levels = {"info": 1, "warning": 2, "critical": 3}
            min_level = severity_levels.get(min_severity, 1)

            if min_level == 2:
                query = query.filter(Alert.severity.in_(["warning", "critical"]))
            elif min_level == 3:
                query = query.filter(Alert.severity == "critical")

        return query.order_by(Alert.created_at.desc()).limit(limit).all()

    def get_all_alerts(
        self,
        status: str = None,
        project_id: str = None,
        unit_id: str = None,
        alert_type: str = None,
        limit: int = 50,
        offset: int = 0,
    ) -> List[Alert]:
        """
        Query all alerts with optional filters (includes non-active).

        Args:
            status: Filter by status (active, acknowledged, resolved, dismissed)
            project_id: Filter by project
            unit_id: Filter by unit
            alert_type: Filter by alert type
            limit: Maximum results
            offset: Pagination offset

        Returns:
            List of matching alerts
        """
        query = self.db.query(Alert)

        if status:
            query = query.filter(Alert.status == status)

        if project_id:
            query = query.filter(Alert.project_id == project_id)

        if unit_id:
            query = query.filter(Alert.unit_id == unit_id)

        if alert_type:
            query = query.filter(Alert.alert_type == alert_type)

        return (
            query.order_by(Alert.created_at.desc())
            .offset(offset)
            .limit(limit)
            .all()
        )

    def get_active_alert_count(self) -> int:
        """Get count of active alerts for badge display."""
        return self.db.query(Alert).filter(Alert.status == "active").count()

    def acknowledge_alert(self, alert_id: str) -> Optional[Alert]:
        """
        Mark alert as acknowledged.

        Args:
            alert_id: Alert to acknowledge

        Returns:
            Updated Alert or None if not found
        """
        alert = self.db.query(Alert).filter_by(id=alert_id).first()
        if not alert:
            return None

        alert.status = "acknowledged"
        alert.acknowledged_at = datetime.utcnow()
        self.db.commit()

        logger.info(f"Acknowledged alert: {alert.title}")
        return alert

    def dismiss_alert(self, alert_id: str) -> Optional[Alert]:
        """
        Dismiss alert (user chose to ignore).

        Args:
            alert_id: Alert to dismiss

        Returns:
            Updated Alert or None if not found
        """
        alert = self.db.query(Alert).filter_by(id=alert_id).first()
        if not alert:
            return None

        alert.status = "dismissed"
        self.db.commit()

        logger.info(f"Dismissed alert: {alert.title}")
        return alert

    def resolve_alert(self, alert_id: str) -> Optional[Alert]:
        """
        Manually resolve an alert.

        Args:
            alert_id: Alert to resolve

        Returns:
            Updated Alert or None if not found
        """
        alert = self.db.query(Alert).filter_by(id=alert_id).first()
        if not alert:
            return None

        alert.status = "resolved"
        alert.resolved_at = datetime.utcnow()
        self.db.commit()

        logger.info(f"Resolved alert: {alert.title}")
        return alert

    def cleanup_expired_alerts(self) -> int:
        """
        Remove alerts past their expiration time.

        Returns:
            Number of alerts cleaned up
        """
        now = datetime.utcnow()
        expired = self.db.query(Alert).filter(
            and_(
                Alert.expires_at.isnot(None),
                Alert.expires_at < now,
                Alert.status == "active",
            )
        ).all()

        count = len(expired)
        for alert in expired:
            alert.status = "dismissed"

        if count > 0:
            self.db.commit()
            logger.info(f"Cleaned up {count} expired alerts")

        return count


def get_alert_service(db: Session) -> AlertService:
    """Get an AlertService instance with the given database session."""
    return AlertService(db)
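
For orientation, a minimal usage sketch of the alert lifecycle above, assuming `backend.database.SessionLocal` as used elsewhere in this diff (the unit ID is a placeholder):

```python
# Sketch: create, inspect, and resolve an offline alert by hand.
from backend.database import SessionLocal
from backend.services.alert_service import get_alert_service

db = SessionLocal()
try:
    alerts = get_alert_service(db)
    # Severity becomes "critical" at 5+ consecutive failures.
    alerts.create_device_offline_alert("nl43-001", consecutive_failures=5)
    print(alerts.get_active_alert_count())
    # Resolving also emits a short-lived "device_online" info alert.
    alerts.resolve_device_offline_alert("nl43-001")
finally:
    db.close()
```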
@@ -10,7 +10,7 @@ from datetime import datetime
 from typing import Optional
 import logging

-from app.seismo.services.database_backup import DatabaseBackupService
+from backend.services.database_backup import DatabaseBackupService

 logger = logging.getLogger(__name__)

603  backend/services/device_controller.py  Normal file
@@ -0,0 +1,603 @@
"""
Device Controller Service

Routes device operations to the appropriate backend module:
- SLMM for sound level meters
- SFM for seismographs (future implementation)

This abstraction allows the Projects system to work with any device type
without knowing the underlying communication protocol.
"""

from typing import Dict, Any, Optional, List
from backend.services.slmm_client import get_slmm_client, SLMMClientError


class DeviceControllerError(Exception):
    """Base exception for device controller errors."""
    pass


class UnsupportedDeviceTypeError(DeviceControllerError):
    """Raised when device type is not supported."""
    pass


class DeviceController:
    """
    Unified interface for controlling all device types.

    Routes commands to appropriate backend module based on device_type.

    Usage:
        controller = DeviceController()
        await controller.start_recording("nl43-001", "slm", config={})
        await controller.stop_recording("seismo-042", "seismograph")
    """

    def __init__(self):
        self.slmm_client = get_slmm_client()

    # ========================================================================
    # Recording Control
    # ========================================================================

    async def start_recording(
        self,
        unit_id: str,
        device_type: str,
        config: Optional[Dict[str, Any]] = None,
    ) -> Dict[str, Any]:
        """
        Start recording on a device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"
            config: Device-specific recording configuration

        Returns:
            Response dict from device module

        Raises:
            UnsupportedDeviceTypeError: Device type not supported
            DeviceControllerError: Operation failed
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.start_recording(unit_id, config)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            # TODO: Implement SFM client for seismograph control
            # For now, return a placeholder response
            return {
                "status": "not_implemented",
                "message": "Seismograph recording control not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(
                f"Device type '{device_type}' is not supported. "
                f"Supported types: slm, seismograph"
            )

    async def stop_recording(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Stop recording on a device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.stop_recording(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            # TODO: Implement SFM client
            return {
                "status": "not_implemented",
                "message": "Seismograph recording control not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def pause_recording(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Pause recording on a device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.pause_recording(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph pause not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def resume_recording(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Resume paused recording on a device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.resume_recording(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph resume not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Status & Monitoring
    # ========================================================================

    async def get_device_status(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Get current device status.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Status dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.get_unit_status(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            # TODO: Implement SFM status check
            return {
                "status": "not_implemented",
                "message": "Seismograph status not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def get_live_data(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Get live data from device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Live data dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.get_live_data(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph live data not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Data Download
    # ========================================================================

    async def download_files(
        self,
        unit_id: str,
        device_type: str,
        destination_path: str,
        files: Optional[List[str]] = None,
    ) -> Dict[str, Any]:
        """
        Download data files from device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"
            destination_path: Local path to save files
            files: List of filenames, or None for all

        Returns:
            Download result with file list
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.download_files(
                    unit_id,
                    destination_path,
                    files,
                )
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            # TODO: Implement SFM file download
            return {
                "status": "not_implemented",
                "message": "Seismograph file download not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # FTP Control
    # ========================================================================

    async def enable_ftp(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Enable FTP server on device.

        Must be called before downloading files.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict with status
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.enable_ftp(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph FTP not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def disable_ftp(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Disable FTP server on device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict with status
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.disable_ftp(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph FTP not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Device Configuration
    # ========================================================================

    async def update_device_config(
        self,
        unit_id: str,
        device_type: str,
        config: Dict[str, Any],
    ) -> Dict[str, Any]:
        """
        Update device configuration.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"
            config: Configuration parameters

        Returns:
            Updated config from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.update_unit_config(
                    unit_id,
                    host=config.get("host"),
                    tcp_port=config.get("tcp_port"),
                    ftp_port=config.get("ftp_port"),
                    ftp_username=config.get("ftp_username"),
                    ftp_password=config.get("ftp_password"),
                )
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph config update not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Store/Index Management
    # ========================================================================

    async def increment_index(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Increment the store/index number on a device.

        For SLMs, this increments the store name to prevent "overwrite data?" prompts.
        Should be called before starting a new measurement if auto_increment_index is enabled.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict with old_index and new_index
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.increment_index(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            # Seismographs may not have the same concept of store index
            return {
                "status": "not_applicable",
                "message": "Index increment not applicable for seismographs",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def get_index_number(
        self,
        unit_id: str,
        device_type: str,
    ) -> Dict[str, Any]:
        """
        Get current store/index number from device.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            Response dict with current index_number
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.get_index_number(unit_id)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_applicable",
                "message": "Index number not applicable for seismographs",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Cycle Commands (for scheduled automation)
    # ========================================================================

    async def start_cycle(
        self,
        unit_id: str,
        device_type: str,
        sync_clock: bool = True,
    ) -> Dict[str, Any]:
        """
        Execute complete start cycle for scheduled automation.

        This handles the full pre-recording workflow:
        1. Sync device clock to server time
        2. Find next safe index (with overwrite protection)
        3. Start measurement

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"
            sync_clock: Whether to sync device clock to server time

        Returns:
            Response dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.start_cycle(unit_id, sync_clock)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph start cycle not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    async def stop_cycle(
        self,
        unit_id: str,
        device_type: str,
        download: bool = True,
    ) -> Dict[str, Any]:
        """
        Execute complete stop cycle for scheduled automation.

        This handles the full post-recording workflow:
        1. Stop measurement
        2. Enable FTP
        3. Download measurement folder
        4. Verify download

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"
            download: Whether to download measurement data

        Returns:
            Response dict from device module
        """
        if device_type == "slm":
            try:
                return await self.slmm_client.stop_cycle(unit_id, download)
            except SLMMClientError as e:
                raise DeviceControllerError(f"SLMM error: {str(e)}")

        elif device_type == "seismograph":
            return {
                "status": "not_implemented",
                "message": "Seismograph stop cycle not yet implemented",
                "unit_id": unit_id,
            }

        else:
            raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")

    # ========================================================================
    # Health Check
    # ========================================================================

    async def check_device_connectivity(
        self,
        unit_id: str,
        device_type: str,
    ) -> bool:
        """
        Check if device is reachable.

        Args:
            unit_id: Unit identifier
            device_type: "slm" | "seismograph"

        Returns:
            True if device is reachable, False otherwise
        """
        if device_type == "slm":
            try:
                status = await self.slmm_client.get_unit_status(unit_id)
                return status.get("last_seen") is not None
            except Exception:
                return False

        elif device_type == "seismograph":
            # TODO: Implement SFM connectivity check
            return False

        else:
            return False


# Singleton instance
_default_controller: Optional[DeviceController] = None


def get_device_controller() -> DeviceController:
    """
    Get the default device controller instance.

    Returns:
        DeviceController instance
    """
    global _default_controller
    if _default_controller is None:
        _default_controller = DeviceController()
    return _default_controller
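
A short sketch of the routing behavior described in the class docstring: only the SLM path does real work today, seismograph calls return the placeholder payload, and unknown types raise. The unit IDs are placeholders, and this assumes a configured SLMM client:

```python
import asyncio

from backend.services.device_controller import (
    UnsupportedDeviceTypeError,
    get_device_controller,
)

async def main():
    controller = get_device_controller()

    # Seismograph support is stubbed: a placeholder dict comes back.
    result = await controller.get_device_status("seismo-042", "seismograph")
    print(result["status"])  # "not_implemented"

    # Unknown device types fail loudly rather than silently no-op.
    try:
        await controller.start_recording("cam-007", "camera")
    except UnsupportedDeviceTypeError as e:
        print(e)

asyncio.run(main())
```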
184  backend/services/device_status_monitor.py  Normal file
@@ -0,0 +1,184 @@
"""
Device Status Monitor

Background task that monitors device reachability via SLMM polling status
and triggers alerts when devices go offline or come back online.

This service bridges SLMM's device polling with Terra-View's alert system.
"""

import asyncio
import logging
from datetime import datetime
from typing import Optional, Dict

from backend.database import SessionLocal
from backend.services.slmm_client import get_slmm_client, SLMMClientError
from backend.services.alert_service import get_alert_service

logger = logging.getLogger(__name__)


class DeviceStatusMonitor:
    """
    Monitors device reachability via SLMM's polling status endpoint.

    Detects state transitions (online→offline, offline→online) and
    triggers AlertService to create/resolve alerts.

    Usage:
        monitor = DeviceStatusMonitor()
        await monitor.start()  # Start background monitoring
        monitor.stop()         # Stop monitoring
    """

    def __init__(self, check_interval: int = 60):
        """
        Initialize the monitor.

        Args:
            check_interval: Seconds between status checks (default: 60)
        """
        self.check_interval = check_interval
        self.running = False
        self.task: Optional[asyncio.Task] = None
        self.slmm_client = get_slmm_client()

        # Track previous device states to detect transitions
        self._device_states: Dict[str, bool] = {}

    async def start(self):
        """Start the monitoring background task."""
        if self.running:
            logger.warning("DeviceStatusMonitor is already running")
            return

        self.running = True
        self.task = asyncio.create_task(self._monitor_loop())
        logger.info(f"DeviceStatusMonitor started (checking every {self.check_interval}s)")

    def stop(self):
        """Stop the monitoring background task."""
        self.running = False
        if self.task:
            self.task.cancel()
        logger.info("DeviceStatusMonitor stopped")

    async def _monitor_loop(self):
        """Main monitoring loop."""
        while self.running:
            try:
                await self._check_all_devices()
            except Exception as e:
                logger.error(f"Error in device status monitor: {e}", exc_info=True)

            # Sleep in small intervals for graceful shutdown
            for _ in range(self.check_interval):
                if not self.running:
                    break
                await asyncio.sleep(1)

        logger.info("DeviceStatusMonitor loop exited")

    async def _check_all_devices(self):
        """
        Fetch polling status from SLMM and detect state transitions.

        Uses GET /api/slmm/_polling/status (proxied to SLMM)
        """
        try:
            # Get status from SLMM
            status_response = await self.slmm_client.get_polling_status()
            devices = status_response.get("devices", [])

            if not devices:
                logger.debug("No devices in polling status response")
                return

            db = SessionLocal()
            try:
                alert_service = get_alert_service(db)

                for device in devices:
                    unit_id = device.get("unit_id")
                    if not unit_id:
                        continue

                    is_reachable = device.get("is_reachable", True)
                    previous_reachable = self._device_states.get(unit_id)

                    # Skip if this is the first check (no previous state)
                    if previous_reachable is None:
                        self._device_states[unit_id] = is_reachable
                        logger.debug(f"Initial state for {unit_id}: reachable={is_reachable}")
                        continue

                    # Detect offline transition (was online, now offline)
                    if previous_reachable and not is_reachable:
                        logger.warning(f"Device {unit_id} went OFFLINE")
                        alert_service.create_device_offline_alert(
                            unit_id=unit_id,
                            consecutive_failures=device.get("consecutive_failures", 0),
                            last_error=device.get("last_error"),
                        )

                    # Detect online transition (was offline, now online)
                    elif not previous_reachable and is_reachable:
                        logger.info(f"Device {unit_id} came back ONLINE")
                        alert_service.resolve_device_offline_alert(unit_id)

                    # Update tracked state
                    self._device_states[unit_id] = is_reachable

                # Cleanup expired alerts while we're here
                alert_service.cleanup_expired_alerts()

            finally:
                db.close()

        except SLMMClientError as e:
            logger.warning(f"Could not reach SLMM for status check: {e}")
        except Exception as e:
            logger.error(f"Error checking device status: {e}", exc_info=True)

    def get_tracked_devices(self) -> Dict[str, bool]:
        """
        Get the current tracked device states.

        Returns:
            Dict mapping unit_id to is_reachable status
        """
        return dict(self._device_states)

    def clear_tracked_devices(self):
        """Clear all tracked device states (useful for testing)."""
        self._device_states.clear()


# Singleton instance
_monitor_instance: Optional[DeviceStatusMonitor] = None


def get_device_status_monitor() -> DeviceStatusMonitor:
    """
    Get the device status monitor singleton instance.

    Returns:
        DeviceStatusMonitor instance
    """
    global _monitor_instance
    if _monitor_instance is None:
        _monitor_instance = DeviceStatusMonitor()
    return _monitor_instance


async def start_device_status_monitor():
    """Start the global device status monitor."""
    monitor = get_device_status_monitor()
    await monitor.start()


def stop_device_status_monitor():
    """Stop the global device status monitor."""
    monitor = get_device_status_monitor()
    monitor.stop()
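
One plausible way to wire the monitor into the application, sketched with FastAPI's lifespan hook; the actual startup wiring is not shown in this diff, so treat this as an assumption:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from backend.services.device_status_monitor import (
    start_device_status_monitor,
    stop_device_status_monitor,
)

@asynccontextmanager
async def lifespan(app: FastAPI):
    await start_device_status_monitor()  # spawn the polling task on startup
    yield
    stop_device_status_monitor()  # cancel it on shutdown

app = FastAPI(lifespan=lifespan)
```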
559
backend/services/recurring_schedule_service.py
Normal file
@@ -0,0 +1,559 @@
|
||||
"""
|
||||
Recurring Schedule Service
|
||||
|
||||
Manages recurring schedule definitions and generates ScheduledAction
|
||||
instances based on patterns (weekly calendar, simple interval).
|
||||
"""
|
||||
|
||||
import json
|
||||
import uuid
|
||||
import logging
|
||||
from datetime import datetime, timedelta, date, time
|
||||
from typing import Optional, List, Dict, Any, Tuple
|
||||
from zoneinfo import ZoneInfo
|
||||
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import and_
|
||||
|
||||
from backend.models import RecurringSchedule, ScheduledAction, MonitoringLocation, UnitAssignment
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Day name mapping
|
||||
DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]
|
||||
|
||||
|
||||
class RecurringScheduleService:
|
||||
"""
|
||||
Service for managing recurring schedules and generating ScheduledActions.
|
||||
|
||||
Supports two schedule types:
|
||||
- weekly_calendar: Specific days with start/end times
|
||||
- simple_interval: Daily stop/download/restart cycles for 24/7 monitoring
|
||||
"""
|
||||
|
||||
def __init__(self, db: Session):
|
||||
self.db = db
|
||||
|
||||
def create_schedule(
|
||||
self,
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
name: str,
|
||||
schedule_type: str,
|
||||
device_type: str = "slm",
|
||||
unit_id: str = None,
|
||||
weekly_pattern: dict = None,
|
||||
interval_type: str = None,
|
||||
cycle_time: str = None,
|
||||
include_download: bool = True,
|
||||
auto_increment_index: bool = True,
|
||||
timezone: str = "America/New_York",
|
||||
) -> RecurringSchedule:
|
||||
"""
|
||||
Create a new recurring schedule.
|
||||
|
||||
Args:
|
||||
project_id: Project ID
|
||||
location_id: Monitoring location ID
|
||||
name: Schedule name
|
||||
schedule_type: "weekly_calendar" or "simple_interval"
|
||||
device_type: "slm" or "seismograph"
|
||||
unit_id: Specific unit (optional, can use assignment)
|
||||
weekly_pattern: Dict of day patterns for weekly_calendar
|
||||
interval_type: "daily" or "hourly" for simple_interval
|
||||
cycle_time: Time string "HH:MM" for cycle
|
||||
include_download: Whether to download data on cycle
|
||||
auto_increment_index: Whether to auto-increment store index before start
|
||||
timezone: Timezone for schedule times
|
||||
|
||||
Returns:
|
||||
Created RecurringSchedule
|
||||
"""
|
||||
schedule = RecurringSchedule(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=project_id,
|
||||
location_id=location_id,
|
||||
unit_id=unit_id,
|
||||
name=name,
|
||||
schedule_type=schedule_type,
|
||||
device_type=device_type,
|
||||
weekly_pattern=json.dumps(weekly_pattern) if weekly_pattern else None,
|
||||
interval_type=interval_type,
|
||||
cycle_time=cycle_time,
|
||||
include_download=include_download,
|
||||
auto_increment_index=auto_increment_index,
|
||||
enabled=True,
|
||||
timezone=timezone,
|
||||
)
|
||||
|
||||
# Calculate next occurrence
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.add(schedule)
|
||||
self.db.commit()
|
||||
self.db.refresh(schedule)
|
||||
|
||||
logger.info(f"Created recurring schedule: {name} ({schedule_type})")
|
||||
return schedule
|
||||
|
||||
def update_schedule(
|
||||
self,
|
||||
schedule_id: str,
|
||||
**kwargs,
|
||||
) -> Optional[RecurringSchedule]:
|
||||
"""
|
||||
Update a recurring schedule.
|
||||
|
||||
Args:
|
||||
schedule_id: Schedule to update
|
||||
**kwargs: Fields to update
|
||||
|
||||
Returns:
|
||||
Updated schedule or None
|
||||
"""
|
||||
schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first()
|
||||
if not schedule:
|
||||
return None
|
||||
|
||||
for key, value in kwargs.items():
|
||||
if hasattr(schedule, key):
|
||||
if key == "weekly_pattern" and isinstance(value, dict):
|
||||
value = json.dumps(value)
|
||||
setattr(schedule, key, value)
|
||||
|
||||
# Recalculate next occurrence
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.commit()
|
||||
self.db.refresh(schedule)
|
||||
|
||||
logger.info(f"Updated recurring schedule: {schedule.name}")
|
||||
return schedule
|
||||
|
||||
def delete_schedule(self, schedule_id: str) -> bool:
|
||||
"""
|
||||
Delete a recurring schedule and its pending generated actions.
|
||||
|
||||
Args:
|
||||
schedule_id: Schedule to delete
|
||||
|
||||
Returns:
|
||||
True if deleted, False if not found
|
||||
"""
|
||||
schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first()
|
||||
if not schedule:
|
||||
return False
|
||||
|
||||
# Delete pending generated actions for this schedule
|
||||
# The schedule_id is stored in the notes field as JSON
|
||||
pending_actions = self.db.query(ScheduledAction).filter(
|
||||
and_(
|
||||
ScheduledAction.execution_status == "pending",
|
||||
ScheduledAction.notes.like(f'%"schedule_id": "{schedule_id}"%'),
|
||||
)
|
||||
).all()
|
||||
|
||||
deleted_count = len(pending_actions)
|
||||
for action in pending_actions:
|
||||
self.db.delete(action)
|
||||
|
||||
self.db.delete(schedule)
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Deleted recurring schedule: {schedule.name} (and {deleted_count} pending actions)")
|
||||
return True
|
||||
|
||||
def enable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]:
|
||||
"""Enable a disabled schedule."""
|
||||
return self.update_schedule(schedule_id, enabled=True)
|
||||
|
||||
def disable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]:
|
||||
"""Disable a schedule."""
|
||||
return self.update_schedule(schedule_id, enabled=False)
|
||||
|
||||
def generate_actions_for_schedule(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int = 7,
|
||||
preview_only: bool = False,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate ScheduledAction entries for the next N days based on pattern.
|
||||
|
||||
Args:
|
||||
schedule: The recurring schedule
|
||||
horizon_days: Days ahead to generate
|
||||
preview_only: If True, don't save to DB (for preview)
|
||||
|
||||
Returns:
|
||||
List of generated ScheduledAction instances
|
||||
"""
|
||||
if not schedule.enabled:
|
||||
return []
|
||||
|
||||
if schedule.schedule_type == "weekly_calendar":
|
||||
actions = self._generate_weekly_calendar_actions(schedule, horizon_days)
|
||||
elif schedule.schedule_type == "simple_interval":
|
||||
actions = self._generate_interval_actions(schedule, horizon_days)
|
||||
else:
|
||||
logger.warning(f"Unknown schedule type: {schedule.schedule_type}")
|
||||
return []
|
||||
|
||||
if not preview_only and actions:
|
||||
for action in actions:
|
||||
self.db.add(action)
|
||||
|
||||
schedule.last_generated_at = datetime.utcnow()
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.commit()
|
||||
logger.info(f"Generated {len(actions)} actions for schedule: {schedule.name}")
|
||||
|
||||
return actions
|
||||
|
||||
def _generate_weekly_calendar_actions(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate actions from weekly calendar pattern.
|
||||
|
||||
Pattern format:
|
||||
{
|
||||
"monday": {"enabled": true, "start": "19:00", "end": "07:00"},
|
||||
"tuesday": {"enabled": false},
|
||||
...
|
||||
}
|
||||
"""
|
||||
if not schedule.weekly_pattern:
|
||||
return []
|
||||
|
||||
try:
|
||||
pattern = json.loads(schedule.weekly_pattern)
|
||||
except json.JSONDecodeError:
|
||||
logger.error(f"Invalid weekly_pattern JSON for schedule {schedule.id}")
|
||||
return []
|
||||
|
||||
actions = []
|
||||
tz = ZoneInfo(schedule.timezone)
|
||||
now_utc = datetime.utcnow()
|
||||
now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)
|
||||
|
||||
# Get unit_id (from schedule or assignment)
|
||||
unit_id = self._resolve_unit_id(schedule)
|
||||
|
||||
for day_offset in range(horizon_days):
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
day_name = DAY_NAMES[check_date.weekday()]
|
||||
day_config = pattern.get(day_name, {})
|
||||
|
||||
if not day_config.get("enabled", False):
|
||||
continue
|
||||
|
||||
start_time_str = day_config.get("start")
|
||||
end_time_str = day_config.get("end")
|
||||
|
||||
if not start_time_str or not end_time_str:
|
||||
continue
|
||||
|
||||
# Parse times
|
||||
start_time = self._parse_time(start_time_str)
|
||||
end_time = self._parse_time(end_time_str)
|
||||
|
||||
if not start_time or not end_time:
|
||||
continue
|
||||
|
||||
# Create start datetime in local timezone
|
||||
start_local = datetime.combine(check_date, start_time, tzinfo=tz)
|
||||
start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Handle overnight schedules (end time is next day)
|
||||
if end_time <= start_time:
|
||||
end_date = check_date + timedelta(days=1)
|
||||
else:
|
||||
end_date = check_date
|
||||
|
||||
end_local = datetime.combine(end_date, end_time, tzinfo=tz)
|
||||
end_utc = end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Skip if start time has already passed
|
||||
if start_utc <= now_utc:
|
||||
continue
|
||||
|
||||
# Check if action already exists
|
||||
if self._action_exists(schedule.project_id, schedule.location_id, "start", start_utc):
|
||||
continue
|
||||
|
||||
# Build notes with automation metadata
|
||||
start_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"auto_increment_index": schedule.auto_increment_index,
|
||||
})
|
||||
|
||||
# Create START action
|
||||
start_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="start",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=start_utc,
|
||||
execution_status="pending",
|
||||
notes=start_notes,
|
||||
)
|
||||
actions.append(start_action)
|
||||
|
||||
# Create STOP action
|
||||
stop_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
})
|
||||
stop_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="stop",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=end_utc,
|
||||
execution_status="pending",
|
||||
notes=stop_notes,
|
||||
)
|
||||
actions.append(stop_action)
|
||||
|
||||
# Create DOWNLOAD action if enabled (1 minute after stop)
|
||||
if schedule.include_download:
|
||||
download_time = end_utc + timedelta(minutes=1)
|
||||
download_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"schedule_type": "weekly_calendar",
|
||||
})
|
||||
download_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="download",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=download_time,
|
||||
execution_status="pending",
|
||||
notes=download_notes,
|
||||
)
|
||||
actions.append(download_action)
|
||||
|
||||
return actions
|
||||
|
||||
def _generate_interval_actions(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate actions from simple interval pattern.
|
||||
|
||||
For daily cycles: stop, download (optional), start at cycle_time each day.
|
||||
"""
|
||||
if not schedule.cycle_time:
|
||||
return []
|
||||
|
||||
cycle_time = self._parse_time(schedule.cycle_time)
|
||||
if not cycle_time:
|
||||
return []
|
||||
|
||||
actions = []
|
||||
tz = ZoneInfo(schedule.timezone)
|
||||
now_utc = datetime.utcnow()
|
||||
now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)
|
||||
|
||||
# Get unit_id
|
||||
unit_id = self._resolve_unit_id(schedule)
|
||||
|
||||
for day_offset in range(horizon_days):
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
|
||||
# Create cycle datetime in local timezone
|
||||
cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz)
|
||||
cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Skip if time has passed
|
||||
if cycle_utc <= now_utc:
|
||||
continue
|
||||
|
||||
# Check if action already exists
|
||||
if self._action_exists(schedule.project_id, schedule.location_id, "stop", cycle_utc):
|
||||
continue
        # Build notes with metadata
        stop_notes = json.dumps({
            "schedule_name": schedule.name,
            "schedule_id": schedule.id,
            "cycle_type": "daily",
        })

        # Create STOP action
        stop_action = ScheduledAction(
            id=str(uuid.uuid4()),
            project_id=schedule.project_id,
            location_id=schedule.location_id,
            unit_id=unit_id,
            action_type="stop",
            device_type=schedule.device_type,
            scheduled_time=cycle_utc,
            execution_status="pending",
            notes=stop_notes,
        )
        actions.append(stop_action)

        # Create DOWNLOAD action if enabled (1 minute after stop)
        if schedule.include_download:
            download_time = cycle_utc + timedelta(minutes=1)
            download_notes = json.dumps({
                "schedule_name": schedule.name,
                "schedule_id": schedule.id,
                "cycle_type": "daily",
            })
            download_action = ScheduledAction(
                id=str(uuid.uuid4()),
                project_id=schedule.project_id,
                location_id=schedule.location_id,
                unit_id=unit_id,
                action_type="download",
                device_type=schedule.device_type,
                scheduled_time=download_time,
                execution_status="pending",
                notes=download_notes,
            )
            actions.append(download_action)

        # Create START action (2 minutes after stop, or 1 minute after download)
        start_offset = 2 if schedule.include_download else 1
        start_time = cycle_utc + timedelta(minutes=start_offset)
        start_notes = json.dumps({
            "schedule_name": schedule.name,
            "schedule_id": schedule.id,
            "cycle_type": "daily",
            "auto_increment_index": schedule.auto_increment_index,
        })
        start_action = ScheduledAction(
            id=str(uuid.uuid4()),
            project_id=schedule.project_id,
            location_id=schedule.location_id,
            unit_id=unit_id,
            action_type="start",
            device_type=schedule.device_type,
            scheduled_time=start_time,
            execution_status="pending",
            notes=start_notes,
        )
        actions.append(start_action)

        return actions

    def _calculate_next_occurrence(self, schedule: RecurringSchedule) -> Optional[datetime]:
        """Calculate when the next action should occur."""
        if not schedule.enabled:
            return None

        tz = ZoneInfo(schedule.timezone)
        now_utc = datetime.utcnow()
        now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)

        if schedule.schedule_type == "weekly_calendar" and schedule.weekly_pattern:
            try:
                pattern = json.loads(schedule.weekly_pattern)
            except (json.JSONDecodeError, TypeError):
                return None

            # Find the next enabled day
            for day_offset in range(8):  # Check up to a week ahead
                check_date = now_local.date() + timedelta(days=day_offset)
                day_name = DAY_NAMES[check_date.weekday()]
                day_config = pattern.get(day_name, {})

                if day_config.get("enabled") and day_config.get("start"):
                    start_time = self._parse_time(day_config["start"])
                    if start_time:
                        start_local = datetime.combine(check_date, start_time, tzinfo=tz)
                        start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
                        if start_utc > now_utc:
                            return start_utc

        elif schedule.schedule_type == "simple_interval" and schedule.cycle_time:
            cycle_time = self._parse_time(schedule.cycle_time)
            if cycle_time:
                # Find the next cycle time
                for day_offset in range(2):
                    check_date = now_local.date() + timedelta(days=day_offset)
                    cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz)
                    cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
                    if cycle_utc > now_utc:
                        return cycle_utc

        return None

    def _resolve_unit_id(self, schedule: RecurringSchedule) -> Optional[str]:
        """Get unit_id from the schedule or the active assignment."""
        if schedule.unit_id:
            return schedule.unit_id

        # Try to get it from the active assignment
        assignment = self.db.query(UnitAssignment).filter(
            and_(
                UnitAssignment.location_id == schedule.location_id,
                UnitAssignment.status == "active",
            )
        ).first()

        return assignment.unit_id if assignment else None

    def _action_exists(
        self,
        project_id: str,
        location_id: str,
        action_type: str,
        scheduled_time: datetime,
    ) -> bool:
        """Check if an action already exists for this time slot."""
        # Allow a 5-minute window for duplicate detection
        time_window_start = scheduled_time - timedelta(minutes=5)
        time_window_end = scheduled_time + timedelta(minutes=5)

        exists = self.db.query(ScheduledAction).filter(
            and_(
                ScheduledAction.project_id == project_id,
                ScheduledAction.location_id == location_id,
                ScheduledAction.action_type == action_type,
                ScheduledAction.scheduled_time >= time_window_start,
                ScheduledAction.scheduled_time <= time_window_end,
                ScheduledAction.execution_status == "pending",
            )
        ).first()

        return exists is not None

    @staticmethod
    def _parse_time(time_str: str) -> Optional[time]:
        """Parse a time string 'HH:MM' into a time object."""
        try:
            parts = time_str.split(":")
            return time(int(parts[0]), int(parts[1]))
        except (ValueError, IndexError):
            return None

    def get_schedules_for_project(self, project_id: str) -> List[RecurringSchedule]:
        """Get all recurring schedules for a project."""
        return self.db.query(RecurringSchedule).filter_by(project_id=project_id).all()

    def get_enabled_schedules(self) -> List[RecurringSchedule]:
        """Get all enabled recurring schedules."""
        return self.db.query(RecurringSchedule).filter_by(enabled=True).all()


def get_recurring_schedule_service(db: Session) -> RecurringScheduleService:
    """Get a RecurringScheduleService instance."""
    return RecurringScheduleService(db)
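Note: the weekly_calendar branch in `_calculate_next_occurrence` reads only two keys per day entry, `enabled` and `start`. A minimal sketch of a `weekly_pattern` value it would accept (illustrative only; the day-name keys must match whatever `DAY_NAMES` yields, which is defined earlier in this file and assumed lowercase here):

import json

# Illustrative payload - key casing must match DAY_NAMES, assumed lowercase here.
weekly_pattern = json.dumps({
    "monday":    {"enabled": True,  "start": "08:00"},
    "wednesday": {"enabled": True,  "start": "08:00"},
    "saturday":  {"enabled": False},
})
# e.g. schedule.weekly_pattern = weekly_pattern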
541 backend/services/scheduler.py Normal file
@@ -0,0 +1,541 @@
"""
Scheduler Service

Executes scheduled actions for the Projects system.
Monitors pending scheduled actions and executes them by calling device modules (SLMM/SFM).

Extended to support recurring schedules:
- Generates ScheduledActions from RecurringSchedule patterns
- Cleans up old completed/failed actions

This service runs as a background task in FastAPI, checking for pending actions
every minute and executing them when their scheduled time arrives.
"""

import asyncio
import json
import logging
from datetime import datetime, timedelta
from typing import Optional, List, Dict, Any
from sqlalchemy.orm import Session
from sqlalchemy import and_

from backend.database import SessionLocal
from backend.models import ScheduledAction, RecordingSession, MonitoringLocation, Project, RecurringSchedule
from backend.services.device_controller import get_device_controller, DeviceControllerError
from backend.services.alert_service import get_alert_service
import uuid

logger = logging.getLogger(__name__)


class SchedulerService:
    """
    Service for executing scheduled actions.

    Usage:
        scheduler = SchedulerService()
        await scheduler.start()  # Start background loop
        scheduler.stop()         # Stop background loop
    """

    def __init__(self, check_interval: int = 60):
        """
        Initialize the scheduler.

        Args:
            check_interval: Seconds between checks for pending actions (default: 60)
        """
        self.check_interval = check_interval
        self.running = False
        self.task: Optional[asyncio.Task] = None
        self.device_controller = get_device_controller()

    async def start(self):
        """Start the scheduler background task."""
        if self.running:
            logger.info("Scheduler is already running")
            return

        self.running = True
        self.task = asyncio.create_task(self._run_loop())
        logger.info(f"Scheduler started (checking every {self.check_interval}s)")

    def stop(self):
        """Stop the scheduler background task."""
        self.running = False
        if self.task:
            self.task.cancel()
        logger.info("Scheduler stopped")

    async def _run_loop(self):
        """Main scheduler loop."""
        # Track when we last generated recurring actions (done once per hour)
        last_generation_check = datetime.utcnow() - timedelta(hours=1)

        while self.running:
            try:
                # Execute pending actions
                await self.execute_pending_actions()

                # Generate actions from recurring schedules (every hour)
                now = datetime.utcnow()
                if (now - last_generation_check).total_seconds() >= 3600:
                    await self.generate_recurring_actions()
                    last_generation_check = now

                # Clean up old actions (also hourly, right after a generation cycle)
                if (now - last_generation_check).total_seconds() < 60:
                    await self.cleanup_old_actions()

            except Exception as e:
                logger.error(f"Scheduler error: {e}", exc_info=True)
                # Continue running even if there's an error

            await asyncio.sleep(self.check_interval)

    async def execute_pending_actions(self) -> List[Dict[str, Any]]:
        """
        Find and execute all pending scheduled actions that are due.

        Returns:
            List of execution results
        """
        db = SessionLocal()
        results = []

        try:
            # Find pending actions that are due
            now = datetime.utcnow()
            pending_actions = db.query(ScheduledAction).filter(
                and_(
                    ScheduledAction.execution_status == "pending",
                    ScheduledAction.scheduled_time <= now,
                )
            ).order_by(ScheduledAction.scheduled_time).all()

            if not pending_actions:
                return []

            logger.info(f"Found {len(pending_actions)} pending action(s) to execute")

            for action in pending_actions:
                result = await self._execute_action(action, db)
                results.append(result)

            db.commit()

        except Exception as e:
            logger.error(f"Error executing pending actions: {e}")
            db.rollback()
        finally:
            db.close()

        return results

    async def _execute_action(
        self,
        action: ScheduledAction,
        db: Session,
    ) -> Dict[str, Any]:
        """
        Execute a single scheduled action.

        Args:
            action: ScheduledAction to execute
            db: Database session

        Returns:
            Execution result dict
        """
        logger.info(f"Executing action {action.id}: {action.action_type} for unit {action.unit_id}")

        result = {
            "action_id": action.id,
            "action_type": action.action_type,
            "unit_id": action.unit_id,
            "scheduled_time": action.scheduled_time.isoformat(),
            "success": False,
            "error": None,
        }

        try:
            # Determine which unit to use: if unit_id is specified, use it;
            # otherwise get it from the location assignment.
            unit_id = action.unit_id
            if not unit_id:
                # Get the assigned unit from the location
                from backend.models import UnitAssignment
                assignment = db.query(UnitAssignment).filter(
                    and_(
                        UnitAssignment.location_id == action.location_id,
                        UnitAssignment.status == "active",
                    )
                ).first()

                if not assignment:
                    raise Exception(f"No active unit assigned to location {action.location_id}")

                unit_id = assignment.unit_id

            # Execute the action based on type
            if action.action_type == "start":
                response = await self._execute_start(action, unit_id, db)
            elif action.action_type == "stop":
                response = await self._execute_stop(action, unit_id, db)
            elif action.action_type == "download":
                response = await self._execute_download(action, unit_id, db)
            else:
                raise Exception(f"Unknown action type: {action.action_type}")

            # Mark the action as completed
            action.execution_status = "completed"
            action.executed_at = datetime.utcnow()
            action.module_response = json.dumps(response)

            result["success"] = True
            result["response"] = response

            logger.info(f"✓ Action {action.id} completed successfully")

            # Create a success alert
            try:
                alert_service = get_alert_service(db)
                alert_metadata = response.get("cycle_response", {}) if isinstance(response, dict) else {}
                alert_service.create_schedule_completed_alert(
                    schedule_id=action.id,
                    action_type=action.action_type,
                    unit_id=unit_id,
                    project_id=action.project_id,
                    location_id=action.location_id,
                    metadata=alert_metadata,
                )
            except Exception as alert_err:
                logger.warning(f"Failed to create success alert: {alert_err}")

        except Exception as e:
            # Mark the action as failed
            action.execution_status = "failed"
            action.executed_at = datetime.utcnow()
            action.error_message = str(e)

            result["error"] = str(e)

            logger.error(f"✗ Action {action.id} failed: {e}")

            # Create a failure alert
            try:
                alert_service = get_alert_service(db)
                alert_service.create_schedule_failed_alert(
                    schedule_id=action.id,
                    action_type=action.action_type,
                    unit_id=unit_id if "unit_id" in locals() else action.unit_id,
                    error_message=str(e),
                    project_id=action.project_id,
                    location_id=action.location_id,
                )
            except Exception as alert_err:
                logger.warning(f"Failed to create failure alert: {alert_err}")

        return result

    async def _execute_start(
        self,
        action: ScheduledAction,
        unit_id: str,
        db: Session,
    ) -> Dict[str, Any]:
        """Execute a 'start' action using the start_cycle command.

        start_cycle handles:
        1. Sync device clock to server time
        2. Find next safe index (with overwrite protection)
        3. Start measurement
        """
        # Execute the full start cycle via the device controller.
        # SLMM handles clock sync, index increment, and start.
        cycle_response = await self.device_controller.start_cycle(
            unit_id,
            action.device_type,
            sync_clock=True,
        )

        # Create a recording session
        session = RecordingSession(
            id=str(uuid.uuid4()),
            project_id=action.project_id,
            location_id=action.location_id,
            unit_id=unit_id,
            session_type="sound" if action.device_type == "slm" else "vibration",
            started_at=datetime.utcnow(),
            status="recording",
            session_metadata=json.dumps({
                "scheduled_action_id": action.id,
                "cycle_response": cycle_response,
            }),
        )
        db.add(session)

        return {
            "status": "started",
            "session_id": session.id,
            "cycle_response": cycle_response,
        }

    async def _execute_stop(
        self,
        action: ScheduledAction,
        unit_id: str,
        db: Session,
    ) -> Dict[str, Any]:
        """Execute a 'stop' action using the stop_cycle command.

        stop_cycle handles:
        1. Stop measurement
        2. Enable FTP
        3. Download measurement folder
        4. Verify download
        """
        # Parse notes for the download preference
        include_download = True
        try:
            if action.notes:
                notes_data = json.loads(action.notes)
                include_download = notes_data.get("include_download", True)
        except json.JSONDecodeError:
            pass  # Notes is plain text, not JSON

        # Execute the full stop cycle via the device controller.
        # SLMM handles stop, FTP enable, and download.
        cycle_response = await self.device_controller.stop_cycle(
            unit_id,
            action.device_type,
            download=include_download,
        )

        # Find and update the active recording session
        active_session = db.query(RecordingSession).filter(
            and_(
                RecordingSession.location_id == action.location_id,
                RecordingSession.unit_id == unit_id,
                RecordingSession.status == "recording",
            )
        ).first()

        if active_session:
            active_session.stopped_at = datetime.utcnow()
            active_session.status = "completed"
            active_session.duration_seconds = int(
                (active_session.stopped_at - active_session.started_at).total_seconds()
            )
            # Store download info in session metadata
            if cycle_response.get("download_success"):
                try:
                    metadata = json.loads(active_session.session_metadata or "{}")
                    metadata["downloaded_folder"] = cycle_response.get("downloaded_folder")
                    metadata["local_path"] = cycle_response.get("local_path")
                    active_session.session_metadata = json.dumps(metadata)
                except json.JSONDecodeError:
                    pass

        return {
            "status": "stopped",
            "session_id": active_session.id if active_session else None,
            "cycle_response": cycle_response,
        }

    async def _execute_download(
        self,
        action: ScheduledAction,
        unit_id: str,
        db: Session,
    ) -> Dict[str, Any]:
        """Execute a 'download' action.

        This handles standalone download actions (not part of stop_cycle).
        The workflow is:
        1. Enable FTP on the device
        2. Download the current measurement folder
        3. (Optionally disable FTP - left enabled for now)
        """
        # Get project and location info for the file path
        location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
        project = db.query(Project).filter_by(id=action.project_id).first()

        if not location or not project:
            raise Exception("Project or location not found")

        # Build the destination path (for logging/metadata reference).
        # The actual download location is managed by SLMM (data/downloads/{unit_id}/).
        session_timestamp = datetime.utcnow().strftime("%Y-%m-%d-%H%M")
        location_type_dir = "sound" if action.device_type == "slm" else "vibration"

        destination_path = (
            f"data/Projects/{project.id}/{location_type_dir}/"
            f"{location.name}/session-{session_timestamp}/"
        )

        # Step 1: Enable FTP on the device
        logger.info(f"Enabling FTP on {unit_id} for download")
        await self.device_controller.enable_ftp(unit_id, action.device_type)

        # Step 2: Download the current measurement folder.
        # slmm_client.download_files() automatically determines the correct
        # folder based on the device's current index number.
        response = await self.device_controller.download_files(
            unit_id,
            action.device_type,
            destination_path,
            files=None,  # Download all files in the current measurement folder
        )

        # TODO: Create DataFile records for downloaded files

        return {
            "status": "downloaded",
            "destination_path": destination_path,
            "device_response": response,
        }

    # ========================================================================
    # Recurring Schedule Generation
    # ========================================================================

    async def generate_recurring_actions(self) -> int:
        """
        Generate ScheduledActions from all enabled recurring schedules.

        Runs once per hour to generate actions for the next 7 days.

        Returns:
            Total number of actions generated
        """
        db = SessionLocal()
        total_generated = 0

        try:
            from backend.services.recurring_schedule_service import get_recurring_schedule_service

            service = get_recurring_schedule_service(db)
            schedules = service.get_enabled_schedules()

            if not schedules:
                logger.debug("No enabled recurring schedules found")
                return 0

            logger.info(f"Generating actions for {len(schedules)} recurring schedule(s)")

            for schedule in schedules:
                try:
                    actions = service.generate_actions_for_schedule(schedule, horizon_days=7)
                    total_generated += len(actions)
                except Exception as e:
                    logger.error(f"Error generating actions for schedule {schedule.id}: {e}")

            if total_generated > 0:
                logger.info(f"Generated {total_generated} scheduled actions from recurring schedules")

        except Exception as e:
            logger.error(f"Error in generate_recurring_actions: {e}", exc_info=True)
        finally:
            db.close()

        return total_generated

    async def cleanup_old_actions(self, retention_days: int = 30) -> int:
        """
        Remove old completed/failed actions to prevent database bloat.

        Args:
            retention_days: Keep actions newer than this many days

        Returns:
            Number of actions cleaned up
        """
        db = SessionLocal()
        cleaned = 0

        try:
            cutoff = datetime.utcnow() - timedelta(days=retention_days)

            old_actions = db.query(ScheduledAction).filter(
                and_(
                    ScheduledAction.execution_status.in_(["completed", "failed", "cancelled"]),
                    ScheduledAction.executed_at < cutoff,
                )
            ).all()

            cleaned = len(old_actions)
            for action in old_actions:
                db.delete(action)

            if cleaned > 0:
                db.commit()
                logger.info(f"Cleaned up {cleaned} old scheduled actions (>{retention_days} days)")

        except Exception as e:
            logger.error(f"Error cleaning up old actions: {e}")
            db.rollback()
        finally:
            db.close()

        return cleaned

    # ========================================================================
    # Manual Execution (for testing/debugging)
    # ========================================================================

    async def execute_action_by_id(self, action_id: str) -> Dict[str, Any]:
        """
        Manually execute a specific action by ID.

        Args:
            action_id: ScheduledAction ID

        Returns:
            Execution result
        """
        db = SessionLocal()
        try:
            action = db.query(ScheduledAction).filter_by(id=action_id).first()
            if not action:
                return {"success": False, "error": "Action not found"}

            result = await self._execute_action(action, db)
            db.commit()
            return result

        except Exception as e:
            db.rollback()
            return {"success": False, "error": str(e)}
        finally:
            db.close()


# Singleton instance
_scheduler_instance: Optional[SchedulerService] = None


def get_scheduler() -> SchedulerService:
    """
    Get the scheduler singleton instance.

    Returns:
        SchedulerService instance
    """
    global _scheduler_instance
    if _scheduler_instance is None:
        _scheduler_instance = SchedulerService()
    return _scheduler_instance


async def start_scheduler():
    """Start the global scheduler instance."""
    scheduler = get_scheduler()
    await scheduler.start()


def stop_scheduler():
    """Stop the global scheduler instance."""
    scheduler = get_scheduler()
    scheduler.stop()
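Note: the module exposes start_scheduler()/stop_scheduler(), but this diff does not show where they are wired in. A minimal sketch of hooking them into a FastAPI lifespan, under the assumption that the app entry point lives elsewhere in the codebase:

# Hypothetical wiring sketch - the actual app entry point is not part of this diff.
from contextlib import asynccontextmanager
from fastapi import FastAPI

from backend.services.scheduler import start_scheduler, stop_scheduler


@asynccontextmanager
async def lifespan(app: FastAPI):
    await start_scheduler()   # begins the 60s check loop as an asyncio task
    yield
    stop_scheduler()          # cancels the background task on shutdown


app = FastAPI(lifespan=lifespan)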
125 backend/services/slm_status_sync.py Normal file
@@ -0,0 +1,125 @@
"""
SLM Status Synchronization Service

Syncs SLM device status from the SLMM backend to Terra-View's Emitter table.
This bridges SLMM's polling data with Terra-View's status snapshot system.

SLMM tracks device reachability via background polling. This service
fetches that data and creates/updates Emitter records so SLMs appear
correctly in the dashboard status snapshot.
"""

import logging
from datetime import datetime, timezone
from typing import Dict, Any

from backend.database import get_db_session
from backend.models import Emitter
from backend.services.slmm_client import get_slmm_client, SLMMClientError

logger = logging.getLogger(__name__)


async def sync_slm_status_to_emitters() -> Dict[str, Any]:
    """
    Fetch SLM status from SLMM and sync it to Terra-View's Emitter table.

    For each device in SLMM's polling status:
    - If last_success exists, create/update the Emitter with that timestamp
    - If not reachable, update the Emitter with the last known timestamp (or None)

    Returns:
        Dict with synced_count, error_count, and an errors list
    """
    client = get_slmm_client()
    synced = 0
    errors = []

    try:
        # Get polling status from SLMM
        status_response = await client.get_polling_status()

        # Handle the nested response structure
        data = status_response.get("data", status_response)
        devices = data.get("devices", [])

        if not devices:
            logger.debug("No SLM devices in SLMM polling status")
            return {"synced_count": 0, "error_count": 0, "errors": []}

        db = get_db_session()
        try:
            for device in devices:
                unit_id = device.get("unit_id")
                if not unit_id:
                    continue

                try:
                    # Get or create the Emitter record
                    emitter = db.query(Emitter).filter(Emitter.id == unit_id).first()

                    # Determine last_seen from SLMM data
                    last_success_str = device.get("last_success")
                    is_reachable = device.get("is_reachable", False)

                    if last_success_str:
                        # Parse the ISO format timestamp
                        last_seen = datetime.fromisoformat(
                            last_success_str.replace("Z", "+00:00")
                        )
                        # Convert to naive UTC for consistency with existing code
                        if last_seen.tzinfo:
                            last_seen = last_seen.astimezone(timezone.utc).replace(tzinfo=None)
                    else:
                        last_seen = None

                    # Status will be recalculated by snapshot.py based on time
                    # thresholds; just store a provisional status here.
                    status = "OK" if is_reachable else "Missing"

                    # Store the last error message if available
                    last_error = device.get("last_error") or ""

                    if emitter:
                        # Update the existing record
                        emitter.last_seen = last_seen
                        emitter.status = status
                        emitter.unit_type = "slm"
                        emitter.last_file = last_error
                    else:
                        # Create a new record
                        emitter = Emitter(
                            id=unit_id,
                            unit_type="slm",
                            last_seen=last_seen,
                            last_file=last_error,
                            status=status,
                        )
                        db.add(emitter)

                    synced += 1

                except Exception as e:
                    errors.append(f"{unit_id}: {str(e)}")
                    logger.error(f"Error syncing SLM {unit_id}: {e}")

            db.commit()

        finally:
            db.close()

        if synced > 0:
            logger.info(f"Synced {synced} SLM device(s) to Emitter table")

    except SLMMClientError as e:
        logger.warning(f"Could not reach SLMM for status sync: {e}")
        errors.append(f"SLMM unreachable: {str(e)}")
    except Exception as e:
        logger.error(f"Error in SLM status sync: {e}", exc_info=True)
        errors.append(str(e))

    return {
        "synced_count": synced,
        "error_count": len(errors),
        "errors": errors,
    }
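For orientation, the sync above reads only unit_id, is_reachable, last_success, and last_error from each device entry. A sketch of a polling-status payload shape it can parse (values are illustrative, not taken from a real device):

# Illustrative payload accepted by sync_slm_status_to_emitters();
# keys follow the fields read above, values are made up.
example_status_response = {
    "data": {
        "devices": [
            {
                "unit_id": "nl43-001",
                "is_reachable": True,
                "last_success": "2026-01-26T14:05:00Z",
                "last_error": None,
            },
        ]
    }
}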
781 backend/services/slmm_client.py Normal file
@@ -0,0 +1,781 @@
"""
SLMM API Client Wrapper

Provides a clean interface for Terra-View to interact with the SLMM backend.
All SLM operations should go through this client instead of direct HTTP calls.

SLMM (Sound Level Meter Manager) is a separate service running on port 8100
that handles TCP/FTP communication with Rion NL-43/NL-53 devices.
"""

import httpx
import os
from typing import Optional, Dict, Any, List
from datetime import datetime
import json


# SLMM backend base URLs - use the environment variable if set (for Docker)
SLMM_BASE_URL = os.environ.get("SLMM_BASE_URL", "http://localhost:8100")
SLMM_API_BASE = f"{SLMM_BASE_URL}/api/nl43"


class SLMMClientError(Exception):
    """Base exception for SLMM client errors."""
    pass


class SLMMConnectionError(SLMMClientError):
    """Raised when the SLMM backend cannot be reached."""
    pass


class SLMMDeviceError(SLMMClientError):
    """Raised when a device operation fails."""
    pass


class SLMMClient:
    """
    Client for interacting with the SLMM backend.

    Usage:
        client = SLMMClient()
        units = await client.get_all_units()
        status = await client.get_unit_status("nl43-001")
        await client.start_recording("nl43-001", config={...})
    """

    def __init__(self, base_url: str = SLMM_BASE_URL, timeout: float = 30.0):
        self.base_url = base_url
        self.api_base = f"{base_url}/api/nl43"
        self.timeout = timeout

    async def _request(
        self,
        method: str,
        endpoint: str,
        data: Optional[Dict] = None,
        params: Optional[Dict] = None,
    ) -> Dict[str, Any]:
        """
        Make an HTTP request to the SLMM backend.

        Args:
            method: HTTP method (GET, POST, PUT, DELETE)
            endpoint: API endpoint (e.g., "/units", "/{unit_id}/status")
            data: JSON body for POST/PUT requests
            params: Query parameters

        Returns:
            Response JSON as a dict

        Raises:
            SLMMConnectionError: Cannot reach SLMM
            SLMMDeviceError: Device operation failed
        """
        url = f"{self.api_base}{endpoint}"

        try:
            async with httpx.AsyncClient(timeout=self.timeout) as client:
                response = await client.request(
                    method=method,
                    url=url,
                    json=data,
                    params=params,
                )
                response.raise_for_status()

                # Handle empty responses
                if not response.content:
                    return {}

                return response.json()

        except httpx.ConnectError as e:
            raise SLMMConnectionError(
                f"Cannot connect to SLMM backend at {self.base_url}. "
                f"Is SLMM running? Error: {str(e)}"
            )
        except httpx.HTTPStatusError as e:
            error_detail = "Unknown error"
            try:
                error_data = e.response.json()
                error_detail = error_data.get("detail", str(error_data))
            except Exception:
                error_detail = e.response.text or str(e)

            raise SLMMDeviceError(
                f"SLMM operation failed: {error_detail}"
            )
        except Exception as e:
            raise SLMMClientError(f"Unexpected error: {str(e)}")

    # ========================================================================
    # Unit Management
    # ========================================================================

    async def get_all_units(self) -> List[Dict[str, Any]]:
        """
        Get all configured SLM units from SLMM.

        Returns:
            List of unit dicts with id, config, and status
        """
        # SLMM doesn't have a /units endpoint yet, so this will need to be added.
        # For now, return an empty list until the SLMM endpoint is ready.
        try:
            response = await self._request("GET", "/units")
            return response.get("units", [])
        except SLMMClientError:
            # Endpoint may not exist yet
            return []

    async def get_unit_config(self, unit_id: str) -> Dict[str, Any]:
        """
        Get unit configuration from the SLMM cache.

        Args:
            unit_id: Unit identifier (e.g., "nl43-001")

        Returns:
            Config dict with host, tcp_port, ftp_port, etc.
        """
        return await self._request("GET", f"/{unit_id}/config")

    async def update_unit_config(
        self,
        unit_id: str,
        host: Optional[str] = None,
        tcp_port: Optional[int] = None,
        ftp_port: Optional[int] = None,
        ftp_username: Optional[str] = None,
        ftp_password: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Update unit configuration in the SLMM cache.

        Args:
            unit_id: Unit identifier
            host: Device IP address
            tcp_port: TCP control port (default: 2255)
            ftp_port: FTP data port (default: 21)
            ftp_username: FTP username
            ftp_password: FTP password

        Returns:
            Updated config
        """
        config = {}
        if host is not None:
            config["host"] = host
        if tcp_port is not None:
            config["tcp_port"] = tcp_port
        if ftp_port is not None:
            config["ftp_port"] = ftp_port
        if ftp_username is not None:
            config["ftp_username"] = ftp_username
        if ftp_password is not None:
            config["ftp_password"] = ftp_password

        return await self._request("PUT", f"/{unit_id}/config", data=config)

    # ========================================================================
    # Status & Monitoring
    # ========================================================================

    async def get_unit_status(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the cached status snapshot from SLMM.

        Args:
            unit_id: Unit identifier

        Returns:
            Status dict with measurement_state, lp, leq, battery, etc.
        """
        return await self._request("GET", f"/{unit_id}/status")

    async def get_live_data(self, unit_id: str) -> Dict[str, Any]:
        """
        Request fresh data from the device (DOD command).

        Args:
            unit_id: Unit identifier

        Returns:
            Live data snapshot
        """
        return await self._request("GET", f"/{unit_id}/live")

    # ========================================================================
    # Recording Control
    # ========================================================================

    async def start_recording(
        self,
        unit_id: str,
        config: Optional[Dict[str, Any]] = None,
    ) -> Dict[str, Any]:
        """
        Start recording on a unit.

        Args:
            unit_id: Unit identifier
            config: Optional recording config (interval, settings, etc.)

        Returns:
            Response from SLMM with success status
        """
        return await self._request("POST", f"/{unit_id}/start", data=config or {})

    async def stop_recording(self, unit_id: str) -> Dict[str, Any]:
        """
        Stop recording on a unit.

        Args:
            unit_id: Unit identifier

        Returns:
            Response from SLMM
        """
        return await self._request("POST", f"/{unit_id}/stop")

    async def pause_recording(self, unit_id: str) -> Dict[str, Any]:
        """
        Pause recording on a unit.

        Args:
            unit_id: Unit identifier

        Returns:
            Response from SLMM
        """
        return await self._request("POST", f"/{unit_id}/pause")

    async def resume_recording(self, unit_id: str) -> Dict[str, Any]:
        """
        Resume a paused recording on a unit.

        Args:
            unit_id: Unit identifier

        Returns:
            Response from SLMM
        """
        return await self._request("POST", f"/{unit_id}/resume")

    async def reset_data(self, unit_id: str) -> Dict[str, Any]:
        """
        Reset measurement data on a unit.

        Args:
            unit_id: Unit identifier

        Returns:
            Response from SLMM
        """
        return await self._request("POST", f"/{unit_id}/reset")

    # ========================================================================
    # Store/Index Management
    # ========================================================================

    async def get_index_number(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the current store/index number from the device.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with the current index_number (store name)
        """
        return await self._request("GET", f"/{unit_id}/index-number")

    async def set_index_number(
        self,
        unit_id: str,
        index_number: int,
    ) -> Dict[str, Any]:
        """
        Set the store/index number on the device.

        Args:
            unit_id: Unit identifier
            index_number: New index number to set

        Returns:
            Confirmation response
        """
        return await self._request(
            "PUT",
            f"/{unit_id}/index-number",
            data={"index_number": index_number},
        )

    async def check_overwrite_status(self, unit_id: str) -> Dict[str, Any]:
        """
        Check if data exists at the current store index.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with:
            - overwrite_status: "None" (safe) or "Exist" (would overwrite)
            - will_overwrite: bool
            - safe_to_store: bool
        """
        return await self._request("GET", f"/{unit_id}/overwrite-check")

    async def increment_index(self, unit_id: str, max_attempts: int = 100) -> Dict[str, Any]:
        """
        Find and set the next available (unused) store/index number.

        Checks the current index - if it would overwrite existing data,
        increments until an unused index number is found.

        Args:
            unit_id: Unit identifier
            max_attempts: Maximum number of indices to try before giving up

        Returns:
            Dict with old_index, new_index, and attempts_made
        """
        # Get the current index
        current = await self.get_index_number(unit_id)
        old_index = current.get("index_number", 0)

        # Check if the current index is safe
        overwrite_check = await self.check_overwrite_status(unit_id)
        if overwrite_check.get("safe_to_store", False):
            # Current index is safe, no need to increment
            return {
                "success": True,
                "old_index": old_index,
                "new_index": old_index,
                "unit_id": unit_id,
                "already_safe": True,
                "attempts_made": 0,
            }

        # Need to find an unused index
        attempts = 0
        test_index = old_index + 1

        while attempts < max_attempts:
            # Set the new index
            await self.set_index_number(unit_id, test_index)

            # Check if this index is safe
            overwrite_check = await self.check_overwrite_status(unit_id)
            attempts += 1

            if overwrite_check.get("safe_to_store", False):
                return {
                    "success": True,
                    "old_index": old_index,
                    "new_index": test_index,
                    "unit_id": unit_id,
                    "already_safe": False,
                    "attempts_made": attempts,
                }

            # Try the next index (wrap around at 9999)
            test_index = (test_index + 1) % 10000

            # Avoid infinite loops if we've wrapped around
            if test_index == old_index:
                break

        # Could not find a safe index
        raise SLMMDeviceError(
            f"Could not find an unused store index for {unit_id} after {attempts} attempts. "
            f"Consider downloading and clearing data from the device."
        )

    # ========================================================================
    # Device Settings
    # ========================================================================

    async def get_frequency_weighting(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the frequency weighting setting (A, C, or Z).

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with the current weighting
        """
        return await self._request("GET", f"/{unit_id}/frequency-weighting")

    async def set_frequency_weighting(
        self,
        unit_id: str,
        weighting: str,
    ) -> Dict[str, Any]:
        """
        Set the frequency weighting (A, C, or Z).

        Args:
            unit_id: Unit identifier
            weighting: "A", "C", or "Z"

        Returns:
            Confirmation response
        """
        return await self._request(
            "PUT",
            f"/{unit_id}/frequency-weighting",
            data={"weighting": weighting},
        )

    async def get_time_weighting(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the time weighting setting (F, S, or I).

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with the current time weighting
        """
        return await self._request("GET", f"/{unit_id}/time-weighting")

    async def set_time_weighting(
        self,
        unit_id: str,
        weighting: str,
    ) -> Dict[str, Any]:
        """
        Set the time weighting (F=Fast, S=Slow, I=Impulse).

        Args:
            unit_id: Unit identifier
            weighting: "F", "S", or "I"

        Returns:
            Confirmation response
        """
        return await self._request(
            "PUT",
            f"/{unit_id}/time-weighting",
            data={"weighting": weighting},
        )

    async def get_all_settings(self, unit_id: str) -> Dict[str, Any]:
        """
        Get all device settings.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with all settings
        """
        return await self._request("GET", f"/{unit_id}/settings")

    # ========================================================================
    # FTP Control
    # ========================================================================

    async def enable_ftp(self, unit_id: str) -> Dict[str, Any]:
        """
        Enable the FTP server on the device.

        Must be called before downloading files. FTP and TCP can work in tandem.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with a status message
        """
        return await self._request("POST", f"/{unit_id}/ftp/enable")

    async def disable_ftp(self, unit_id: str) -> Dict[str, Any]:
        """
        Disable the FTP server on the device.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with a status message
        """
        return await self._request("POST", f"/{unit_id}/ftp/disable")

    async def get_ftp_status(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the FTP server status on the device.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with ftp_enabled status
        """
        return await self._request("GET", f"/{unit_id}/ftp/status")

    # ========================================================================
    # Data Download
    # ========================================================================

    async def download_file(
        self,
        unit_id: str,
        remote_path: str,
    ) -> Dict[str, Any]:
        """
        Download a single file from a unit via FTP.

        Args:
            unit_id: Unit identifier
            remote_path: Path on the device to download (e.g., "/NL43_DATA/measurement.wav")

        Returns:
            Binary file content (as the response)
        """
        data = {"remote_path": remote_path}
        return await self._request("POST", f"/{unit_id}/ftp/download", data=data)

    async def download_folder(
        self,
        unit_id: str,
        remote_path: str,
    ) -> Dict[str, Any]:
        """
        Download an entire folder from a unit via FTP as a ZIP archive.

        Useful for downloading complete measurement sessions (e.g., Auto_0000 folders).

        Args:
            unit_id: Unit identifier
            remote_path: Folder path on the device to download (e.g., "/NL43_DATA/Auto_0000")

        Returns:
            Dict with local_path, folder_name, file_count, zip_size_bytes
        """
        data = {"remote_path": remote_path}
        return await self._request("POST", f"/{unit_id}/ftp/download-folder", data=data)

    async def download_current_measurement(
        self,
        unit_id: str,
    ) -> Dict[str, Any]:
        """
        Download the current measurement folder based on the device's index number.

        This is the recommended method for scheduled downloads - it automatically
        determines which folder to download based on the device's current store index.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with local_path, folder_name, file_count, zip_size_bytes, index_number
        """
        # Get the current index number from the device
        index_info = await self.get_index_number(unit_id)
        index_number = index_info.get("index_number", 0)

        # Format as an Auto_XXXX folder name
        folder_name = f"Auto_{index_number:04d}"
        remote_path = f"/NL43_DATA/{folder_name}"

        # Download the folder
        result = await self.download_folder(unit_id, remote_path)
        result["index_number"] = index_number
        return result

    async def download_files(
        self,
        unit_id: str,
        destination_path: str,
        files: Optional[List[str]] = None,
    ) -> Dict[str, Any]:
        """
        Download measurement files from a unit via FTP.

        This method automatically determines the current measurement folder and downloads it.
        The destination_path parameter is logged for reference, but the actual download
        location is managed by SLMM (data/downloads/{unit_id}/).

        Args:
            unit_id: Unit identifier
            destination_path: Reference path (for logging/metadata, not used by SLMM)
            files: Ignored - always downloads the current measurement folder

        Returns:
            Dict with the download result including local_path, folder_name, etc.
        """
        # Use the method that automatically determines what to download
        result = await self.download_current_measurement(unit_id)
        result["requested_destination"] = destination_path
        return result

    # ========================================================================
    # Cycle Commands (for scheduled automation)
    # ========================================================================

    async def start_cycle(
        self,
        unit_id: str,
        sync_clock: bool = True,
    ) -> Dict[str, Any]:
        """
        Execute a complete start cycle on the device via SLMM.

        This handles the full pre-recording workflow:
        1. Sync device clock to server time
        2. Find next safe index (with overwrite protection)
        3. Start measurement

        Args:
            unit_id: Unit identifier
            sync_clock: Whether to sync the device clock to server time

        Returns:
            Dict with clock_synced, old_index, new_index, started, etc.
        """
        return await self._request(
            "POST",
            f"/{unit_id}/start-cycle",
            data={"sync_clock": sync_clock},
        )

    async def stop_cycle(
        self,
        unit_id: str,
        download: bool = True,
        download_path: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Execute a complete stop cycle on the device via SLMM.

        This handles the full post-recording workflow:
        1. Stop measurement
        2. Enable FTP
        3. Download measurement folder (if download=True)
        4. Verify download

        Args:
            unit_id: Unit identifier
            download: Whether to download measurement data
            download_path: Custom path for the downloaded ZIP (optional)

        Returns:
            Dict with stopped, ftp_enabled, download_success, local_path, etc.
        """
        data = {"download": download}
        if download_path:
            data["download_path"] = download_path
        return await self._request(
            "POST",
            f"/{unit_id}/stop-cycle",
            data=data,
        )

    # ========================================================================
    # Polling Status (for device monitoring/alerts)
    # ========================================================================

    async def get_polling_status(self) -> Dict[str, Any]:
        """
        Get the global polling status from SLMM.

        Returns device reachability information for all polled devices.
        Used by DeviceStatusMonitor to detect offline/online transitions.

        Returns:
            Dict with a devices list containing:
            - unit_id
            - is_reachable
            - consecutive_failures
            - last_poll_attempt
            - last_success
            - last_error
        """
        try:
            async with httpx.AsyncClient(timeout=self.timeout) as client:
                response = await client.get(f"{self.base_url}/api/nl43/_polling/status")
                response.raise_for_status()
                return response.json()
        except httpx.ConnectError:
            raise SLMMConnectionError("Cannot connect to SLMM for polling status")
        except Exception as e:
            raise SLMMClientError(f"Failed to get polling status: {str(e)}")

    async def get_device_polling_config(self, unit_id: str) -> Dict[str, Any]:
        """
        Get the polling configuration for a specific device.

        Args:
            unit_id: Unit identifier

        Returns:
            Dict with poll_enabled and poll_interval_seconds
        """
        return await self._request("GET", f"/{unit_id}/polling/config")

    async def update_device_polling_config(
        self,
        unit_id: str,
        poll_enabled: Optional[bool] = None,
        poll_interval_seconds: Optional[int] = None,
    ) -> Dict[str, Any]:
        """
        Update the polling configuration for a device.

        Args:
            unit_id: Unit identifier
            poll_enabled: Enable/disable polling
            poll_interval_seconds: Polling interval (10-3600)

        Returns:
            Updated config
        """
        config = {}
        if poll_enabled is not None:
            config["poll_enabled"] = poll_enabled
        if poll_interval_seconds is not None:
            config["poll_interval_seconds"] = poll_interval_seconds

        return await self._request("PUT", f"/{unit_id}/polling/config", data=config)

    # ========================================================================
    # Health Check
    # ========================================================================

    async def health_check(self) -> bool:
        """
        Check if the SLMM backend is reachable.

        Returns:
            True if SLMM is responding, False otherwise
        """
        try:
            async with httpx.AsyncClient(timeout=5.0) as client:
                response = await client.get(f"{self.base_url}/health")
                return response.status_code == 200
        except Exception:
            return False


# Singleton instance for convenience
_default_client: Optional[SLMMClient] = None


def get_slmm_client() -> SLMMClient:
    """
    Get the default SLMM client instance.

    Returns:
        SLMMClient instance
    """
    global _default_client
    if _default_client is None:
        _default_client = SLMMClient()
    return _default_client
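Taken together, the cycle commands replace the manual start/index/stop/download sequence. A minimal sketch of driving one device through a full cycle with this client (the unit id and the top-level asyncio entry point are illustrative, not from the codebase):

import asyncio

from backend.services.slmm_client import get_slmm_client, SLMMClientError


async def demo_cycle(unit_id: str = "nl43-001"):  # hypothetical unit id
    client = get_slmm_client()
    if not await client.health_check():
        raise RuntimeError("SLMM backend is not reachable")
    try:
        started = await client.start_cycle(unit_id, sync_clock=True)
        print("start-cycle:", started)
        # ... measurement runs for some period ...
        stopped = await client.stop_cycle(unit_id, download=True)
        print("stop-cycle:", stopped)
    except SLMMClientError as e:
        print(f"SLM operation failed: {e}")


asyncio.run(demo_cycle())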
231 backend/services/slmm_sync.py Normal file
@@ -0,0 +1,231 @@
"""
SLMM Synchronization Service

This service ensures the Terra-View roster is the single source of truth for SLM device
configuration. When SLM devices are added, edited, or deleted in Terra-View, the changes
are automatically synced to SLMM.
"""

import logging
import httpx
import os
from typing import Optional
from sqlalchemy.orm import Session

from backend.models import RosterUnit

logger = logging.getLogger(__name__)

SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")


async def sync_slm_to_slmm(unit: RosterUnit) -> bool:
    """
    Sync a single SLM device from the Terra-View roster to SLMM.

    Args:
        unit: RosterUnit with device_type="slm"

    Returns:
        True if the sync succeeded, False otherwise
    """
    if unit.device_type != "slm":
        logger.warning(f"Attempted to sync non-SLM unit {unit.id} to SLMM")
        return False

    if not unit.slm_host:
        logger.warning(f"SLM {unit.id} has no host configured, skipping SLMM sync")
        return False

    # Disable polling if the unit is benched (deployed=False) or retired;
    # only actively deployed units should be polled.
    should_poll = unit.deployed and not unit.retired

    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.put(
                f"{SLMM_BASE_URL}/api/nl43/{unit.id}/config",
                json={
                    "host": unit.slm_host,
                    "tcp_port": unit.slm_tcp_port or 2255,
                    "tcp_enabled": True,
                    "ftp_enabled": True,
                    "ftp_username": "USER",  # Default NL43 credentials
                    "ftp_password": "0000",
                    "poll_enabled": should_poll,  # Disable polling for benched or retired units
                    "poll_interval_seconds": 3600,  # Default to 1-hour polling
                }
            )

            if response.status_code in [200, 201]:
                logger.info(f"✓ Synced SLM {unit.id} to SLMM at {unit.slm_host}:{unit.slm_tcp_port or 2255}")
                return True
            else:
                logger.error(f"Failed to sync SLM {unit.id} to SLMM: {response.status_code} {response.text}")
                return False

    except httpx.TimeoutException:
        logger.error(f"Timeout syncing SLM {unit.id} to SLMM")
        return False
    except Exception as e:
        logger.error(f"Error syncing SLM {unit.id} to SLMM: {e}")
        return False


async def delete_slm_from_slmm(unit_id: str) -> bool:
    """
    Delete a device from the SLMM database.

    Args:
        unit_id: The unit ID to delete

    Returns:
        True if the deletion succeeded or the device doesn't exist, False on error
    """
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.delete(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config"
            )

            if response.status_code == 200:
                logger.info(f"✓ Deleted SLM {unit_id} from SLMM")
                return True
            elif response.status_code == 404:
                logger.info(f"SLM {unit_id} not found in SLMM (already deleted)")
                return True
            else:
                logger.error(f"Failed to delete SLM {unit_id} from SLMM: {response.status_code} {response.text}")
                return False

    except httpx.TimeoutException:
        logger.error(f"Timeout deleting SLM {unit_id} from SLMM")
        return False
    except Exception as e:
        logger.error(f"Error deleting SLM {unit_id} from SLMM: {e}")
        return False


async def sync_all_slms_to_slmm(db: Session) -> dict:
    """
    Sync all SLM devices from the Terra-View roster to SLMM.

    This ensures the SLMM database matches the Terra-View roster as the source of truth.
    Should be called on Terra-View startup and optionally via an admin endpoint.

    Args:
        db: Database session

    Returns:
        Dictionary with sync results
    """
    logger.info("Starting full SLM sync to SLMM...")

    # Get all SLM units from the roster
    slm_units = db.query(RosterUnit).filter_by(device_type="slm").all()

    results = {
        "total": len(slm_units),
        "synced": 0,
        "skipped": 0,
        "failed": 0
    }

    for unit in slm_units:
        # Skip units without a host configured
        if not unit.slm_host:
            results["skipped"] += 1
            logger.debug(f"Skipped {unit.id} - no host configured")
            continue

        # Sync to SLMM
        success = await sync_slm_to_slmm(unit)
        if success:
            results["synced"] += 1
        else:
            results["failed"] += 1

    logger.info(
        f"SLM sync complete: {results['synced']} synced, "
        f"{results['skipped']} skipped, {results['failed']} failed"
    )

    return results


async def get_slmm_devices() -> Optional[list]:
    """
    Get a list of all devices currently in the SLMM database.

    Returns:
        List of device unit_ids, or None on error
    """
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.get(f"{SLMM_BASE_URL}/api/nl43/_polling/status")

            if response.status_code == 200:
                data = response.json()
                return [device["unit_id"] for device in data["data"]["devices"]]
            else:
                logger.error(f"Failed to get SLMM devices: {response.status_code}")
                return None

    except Exception as e:
        logger.error(f"Error getting SLMM devices: {e}")
        return None


async def cleanup_orphaned_slmm_devices(db: Session) -> dict:
    """
    Remove devices from SLMM that are not in the Terra-View roster.

    This cleans up orphaned test devices or devices that were manually added to SLMM.

    Args:
        db: Database session

    Returns:
        Dictionary with cleanup results
    """
    logger.info("Checking for orphaned devices in SLMM...")

    # Get all device IDs from SLMM
    slmm_devices = await get_slmm_devices()
    if slmm_devices is None:
        return {"error": "Failed to get SLMM device list"}

    # Get all SLM unit IDs from the Terra-View roster
    roster_units = db.query(RosterUnit.id).filter_by(device_type="slm").all()
    roster_unit_ids = {unit.id for unit in roster_units}

    # Find orphaned devices (in SLMM but not in the roster)
    orphaned = [uid for uid in slmm_devices if uid not in roster_unit_ids]

    results = {
        "total_in_slmm": len(slmm_devices),
        "total_in_roster": len(roster_unit_ids),
        "orphaned": len(orphaned),
        "deleted": 0,
        "failed": 0,
        "orphaned_devices": orphaned
    }

    if not orphaned:
        logger.info("No orphaned devices found in SLMM")
        return results

    logger.info(f"Found {len(orphaned)} orphaned devices in SLMM: {orphaned}")

    # Delete the orphaned devices
    for unit_id in orphaned:
        success = await delete_slm_from_slmm(unit_id)
        if success:
            results["deleted"] += 1
        else:
            results["failed"] += 1

    logger.info(
        f"Cleanup complete: {results['deleted']} deleted, {results['failed']} failed"
    )

    return results
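The docstrings above say the full sync should run on Terra-View startup, but the hook itself is not shown in this diff. A minimal sketch of one, assuming the get_db_session helper used by the other services in this change set:

# Hypothetical startup hook - where this is actually wired is not shown in this diff.
from backend.database import get_db_session
from backend.services.slmm_sync import sync_all_slms_to_slmm, cleanup_orphaned_slmm_devices


async def sync_roster_on_startup():
    db = get_db_session()
    try:
        results = await sync_all_slms_to_slmm(db)   # push roster config to SLMM
        await cleanup_orphaned_slmm_devices(db)     # drop devices SLMM knows but the roster doesn't
        return results
    finally:
        db.close()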
@@ -1,8 +1,8 @@
 from datetime import datetime, timezone
 from sqlalchemy.orm import Session

-from app.seismo.database import get_db_session
-from app.seismo.models import Emitter, RosterUnit, IgnoredUnit
+from backend.database import get_db_session
+from backend.models import Emitter, RosterUnit, IgnoredUnit


 def ensure_utc(dt):
@@ -60,7 +60,7 @@ def emit_status_snapshot():
     db = get_db_session()
     try:
         # Get user preferences for status thresholds
-        from app.seismo.models import UserPreferences
+        from backend.models import UserPreferences
         prefs = db.query(UserPreferences).filter_by(id=1).first()
         status_ok_threshold = prefs.status_ok_threshold_hours if prefs else 12
         status_pending_threshold = prefs.status_pending_threshold_hours if prefs else 24
@@ -108,6 +108,7 @@ def emit_status_snapshot():
             "last_calibrated": r.last_calibrated.isoformat() if r.last_calibrated else None,
             "next_calibration_due": r.next_calibration_due.isoformat() if r.next_calibration_due else None,
             "deployed_with_modem_id": r.deployed_with_modem_id,
+            "deployed_with_unit_id": r.deployed_with_unit_id,
             "ip_address": r.ip_address,
             "phone_number": r.phone_number,
             "hardware_model": r.hardware_model,
@@ -137,6 +138,7 @@ def emit_status_snapshot():
             "last_calibrated": None,
             "next_calibration_due": None,
             "deployed_with_modem_id": None,
+            "deployed_with_unit_id": None,
             "ip_address": None,
             "phone_number": None,
             "hardware_model": None,
@@ -146,6 +148,22 @@ def emit_status_snapshot():
             "coordinates": "",
         }

+    # --- Derive modem status from paired devices ---
+    # Modems don't have their own check-in system, so we inherit status
+    # from whatever device they're paired with (seismograph or SLM)
+    for unit_id, unit_data in units.items():
+        if unit_data.get("device_type") == "modem" and not unit_data.get("retired"):
+            roster_unit = roster.get(unit_id)
+            if roster_unit and roster_unit.deployed_with_unit_id:
+                paired_unit_id = roster_unit.deployed_with_unit_id
+                paired_unit = units.get(paired_unit_id)
+                if paired_unit:
+                    # Inherit status from paired device
+                    unit_data["status"] = paired_unit.get("status", "Missing")
+                    unit_data["age"] = paired_unit.get("age", "N/A")
+                    unit_data["last"] = paired_unit.get("last")
+                    unit_data["derived_from"] = paired_unit_id
+
     # Separate buckets for UI
     active_units = {
         uid: u for uid, u in units.items()
BIN backend/static/icons/favicon-16.png Normal file (424 B)
BIN backend/static/icons/favicon-32.png Normal file (1.1 KiB)
BIN backend/static/icons/icon-128.png Normal file (7.7 KiB)
BIN backend/static/icons/icon-144.png Normal file (9.2 KiB)
BIN backend/static/icons/icon-152.png Normal file (10 KiB)
BIN backend/static/icons/icon-192.png Normal file (15 KiB)