Compare commits: c1bdf17454...main (65 commits)

SHA1 (author, date, and message columns not shown):
7ce0f6115d, 6492fdff82, 44d7841852, 38c600aca3, eeda94926f, 57be9bf1f1, 8431784708, c771a86675, 65ea0920db, 1f3fa7a718, a9c9b1fd48, 4c213c96ee, ff38b74548, c8a030a3ba, d8a8330427, 1ef0557ccb, 6c7ce5aad0, 54754e2279, 8787a2dbb8, 7971092509, d349af9444, be83cb3fe7, e9216b9abc, d93785c230, 98ee9d7cea, 04c66bdf9c, 8a5fadb5df, 893cb96e8d, c30d7fac22, 6d34e543fe, 4d74eda65f, 96cb27ef83, 85b211e532, e16f61aca7, dba4ad168c, e78d252cf3, ab9c650d93, 2d22d0d329, 7d17d355a7, 7c89d203d7, 27f8719e33, d97999e26f, 191dceff2b, 6db958ffa6, 3a41b81bb6, 3aff0cb076, 7cadd972be, 274e390c3e, 195df967e4, 6fc8721830, 690669c697, 83593f7b33, 4cef580185, dc853806bb, 802601ae8d, e46f668c34, 90ecada35f, 938e950dd6, a6ad9fdecf, 02a99ea47d, 247405c361, e7e660a9c3, 36ce63feb1, 05c63367c8, f976e4e893
**.dockerignore** (new file, 41 lines added)

@@ -0,0 +1,41 @@

```
# Python cache / compiled
__pycache__
*.pyc
*.pyo
*.pyd
.Python

# Build artifacts
*.so
*.egg
*.egg-info
dist
build

# VCS
.git
.gitignore

# Databases (must live in volumes)
*.db
*.db-journal

# Environment / virtualenv
.env
.venv
venv/
ENV/

# Runtime data (mounted volumes)
data/

# Editors / OS junk
.vscode/
.idea/
.DS_Store
Thumbs.db
.claude
sfm.code-workspace

# Tests (optional)
tests/
```
**.gitignore** (vendored, 8 lines added)

@@ -205,3 +205,11 @@ cython_debug/

```
marimo/_static/
marimo/_lsp/
__marimo__/

# Seismo Fleet Manager
# SQLite database files
*.db
*.db-journal
data/
.aider*
.aider*
```
**CHANGELOG.md** (new file, 416 lines added)

@@ -0,0 +1,416 @@

# Changelog

All notable changes to Terra-View will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.5.1] - 2026-01-27

### Added
- **Dashboard Schedule View**: Today's scheduled actions now display directly on the main dashboard
  - New "Today's Actions" panel showing upcoming and past scheduled events
  - Schedule list partial for project-specific schedule views
  - API endpoint for fetching today's schedule data
- **New Branding Assets**: Complete logo rework for Terra-View
  - New Terra-View logos for light and dark themes
  - Retina-ready (@2x) logo variants
  - Updated favicons (16px and 32px)
  - Refreshed PWA icons (72px through 512px)

### Changed
- **Dashboard Layout**: Reorganized to include schedule information panel
- **Base Template**: Updated to use new Terra-View logos with theme-aware switching

## [0.5.0] - 2026-01-23

_Note: This version was not formally released; changes were included in v0.5.1._

## [0.4.4] - 2026-01-23

### Added
- **Recurring schedules**: New scheduler service, recurring schedule APIs, and schedule templates (calendar/interval/list).
- **Alerts UI + backend**: Alerting service plus dropdown/list templates for surfacing notifications.
- **Report templates + viewers**: CRUD API for report templates, report preview screen, and RND file viewer.
- **SLM tooling**: SLM settings modal and SLM project report generator workflow.

### Changed
- **Project data management**: Unified files view, refreshed FTP browser, and new project header/templates for file/session/unit/assignment lists.
- **Device/SLM sync**: Standardized SLM device types and tightened SLMM sync paths.
- **Docs/scripts**: Cleanup pass and expanded device-type documentation.

### Fixed
- **Scheduler actions**: Strict command definitions so actions run reliably.
- **Project view title**: Resolved JSON string rendering in project headers.

## [0.4.3] - 2026-01-14

### Added
- **Sound Level Meter roster tooling**: Roster manager surfaces SLM metadata, supports rename unit flows, and adds return-to-project navigation to keep SLM dashboard users oriented.
- **Project management templates**: New schedule and unit list templates plus file/session lists show what each project stores before teams dive into deployments.

### Changed
- **Project view refresh**: FTP browser now downloads folders locally, the countdown timer was rebuilt, and project/device templates gained edit modals for projects and locations so navigation feels smoother.
- **SLM control sync & accuracy**: Control center groundwork now runs inside the dev UI, configuration edits propagate to SLMM (which caches configs for faster responses), and the SLM live view reads the correct DRD fields after the refactor.

### Fixed
- **SLM UI syntax bug**: Resolved the unexpected token error that appeared in the refreshed SLM components.

## [0.4.2] - 2026-01-05

### Added
- **SLM Configuration Interface**: Sound Level Meters can now be configured directly from the SLM dashboard
  - Configuration modal with comprehensive SLM parameter editing
  - TCP port configuration for SLM control connections (default: 2255)
  - FTP port configuration for SLM data retrieval (default: 21)
  - Modem assignment for network access or direct IP connection support
  - Test Modem button with ping-based connectivity verification (shows IP and response time)
  - Test SLM Connection button for end-to-end connectivity validation
  - Dynamic form fields that hide/show based on modem selection
- **SLM Dashboard Endpoints**: New API routes for SLM management
  - `GET /api/slm-dashboard/config/{unit_id}` - Load SLM configuration form
  - `POST /api/slm-dashboard/config/{unit_id}` - Save SLM configuration
  - `GET /api/slm-dashboard/test-modem/{modem_id}` - Ping modem for connectivity test
- **Database Schema Updates**: Added `slm_ftp_port` column to roster table
  - Migration script: `scripts/add_slm_ftp_port.py`
  - Supports both TCP (control) and FTP (data) port configuration per SLM unit
- **Docker Environment Enhancements**:
  - Added `iputils-ping` and `curl` packages to Docker image for network diagnostics
  - Health check endpoint support via curl

### Fixed
- **Form Validation**: Fixed 400 Bad Request error when adding modem units
  - Form fields for device-specific parameters now properly disabled when hidden
  - Empty string values for integer fields no longer cause validation failures
  - JavaScript now disables hidden form sections to prevent unwanted data submission
- **Unit Status Accuracy**: Fixed issue where unit status was loading from a saved cache instead of actual last-heard time
  - Unit status now accurately reflects real-time connectivity
  - Status determination based on actual `slm_last_check` timestamp

### Changed
- **Roster Form Behavior**: Device-specific form fields are now disabled (not just hidden) when not applicable
  - Prevents SLM fields from submitting when adding modems
  - Prevents modem fields from submitting when adding SLMs
  - Cleaner form submissions with only relevant data
- **Port Field Handling**: Backend now accepts port fields as strings and converts to integers
  - Handles empty string values gracefully
  - Proper type conversion with None fallback for empty values

### Technical Details
- Added `setFieldsDisabled()` helper function for managing form field state
- Updated `toggleDeviceFields()` and `toggleEditDeviceFields()` to disable/enable fields
- Backend type conversion: `slm_tcp_port` and `slm_ftp_port` accept strings, convert to int with empty string handling
- Modem ping uses subprocess with 1 packet, 2-second timeout, returns response time in milliseconds
- Configuration form uses 3-column grid layout for TCP Port, FTP Port, and Direct IP fields
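The modem ping check noted in the Technical Details above can be reproduced in a few lines of Python. This is a hedged sketch only (the function name and return shape are assumptions, not the shipped endpoint); it follows the behavior described: one ICMP packet, a 2-second timeout, and a response time reported in milliseconds.

```python
import re
import subprocess

def ping_modem(ip: str, timeout_s: int = 2) -> dict:
    """Send a single ping and report reachability plus response time in ms.

    Illustrative sketch; the real check lives behind the SLM dashboard route.
    """
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return {"ok": False, "ip": ip}
    # Parse "time=12.3 ms" from iputils-ping output
    match = re.search(r"time=([\d.]+) ms", result.stdout)
    return {"ok": True, "ip": ip,
            "response_ms": float(match.group(1)) if match else None}
```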
## [0.4.1] - 2026-01-05
### Added
- **SLM Integration**: Sound Level Meters are now manageable in SFM

### Fixed
- Fixed an issue where unit status was loaded from a saved cache rather than from the time the unit was actually last heard from. Unit status is now accurate.


## [0.4.0] - 2025-12-16

### Added
- **Database Management System**: Comprehensive backup and restore capabilities
  - **Manual Snapshots**: Create on-demand backups of the entire database with optional descriptions
  - **Restore from Snapshot**: Restore database from any snapshot with automatic safety backup
  - **Upload/Download Snapshots**: Transfer database snapshots to/from the server
  - **Database Tab**: New dedicated tab in Settings for all database management operations
  - **Database Statistics**: View database size, row counts by table, and last modified time
  - **Snapshot Metadata**: Each snapshot includes creation time, description, size, and type (manual/automatic)
  - **Safety Backups**: Automatic backup created before any restore operation
- **Remote Database Cloning**: Dev tools for cloning production database to remote development servers
  - **Clone Script**: `scripts/clone_db_to_dev.py` for copying database over WAN
  - **Network Upload**: Upload snapshots via HTTP to remote servers
  - **Auto-restore**: Automatically restore uploaded database on target server
  - **Authentication Support**: Optional token-based authentication for secure transfers
- **Automatic Backup Scheduler**: Background service for automated database backups (see the sketch after this list)
  - **Configurable Intervals**: Set backup frequency (default: 24 hours)
  - **Retention Management**: Automatically delete old backups (configurable keep count)
  - **Manual Trigger**: Force immediate backup via API
  - **Status Monitoring**: Check scheduler status and next scheduled run time
  - **Background Thread**: Non-blocking operation using Python threading
- **Settings Reorganization**: Improved tab structure for better organization
  - Renamed "Data Management" tab to "Roster Management"
  - Moved CSV Replace Mode from Advanced tab to Roster Management tab
  - Created dedicated Database tab for all backup/restore operations
- **Comprehensive Documentation**: New `docs/DATABASE_MANAGEMENT.md` guide covering:
  - Manual snapshot creation and restoration workflows
  - Download/upload procedures for off-site backups
  - Remote database cloning setup and usage
  - Automatic backup configuration and integration
  - API reference for all database endpoints
  - Best practices and troubleshooting guide
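A minimal sketch of the kind of threaded scheduler described in the "Automatic Backup Scheduler" item above. Class and method names are illustrative assumptions; the shipped implementation lives in `backend/services/backup_scheduler.py`.

```python
import threading

class BackupScheduler:
    """Illustrative sketch of a threaded backup loop (not the shipped code)."""

    def __init__(self, backup_fn, interval_hours: float = 24, keep: int = 7):
        self.backup_fn = backup_fn          # callable that creates one snapshot
        self.interval_s = interval_hours * 3600
        self.keep = keep                    # retention: number of backups to keep
        self._stop = threading.Event()

    def start(self):
        # Daemon thread so the scheduler never blocks application shutdown
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()

    def _run(self):
        # wait() returns False on timeout (run a backup) and True when stopped
        while not self._stop.wait(self.interval_s):
            self.backup_fn(keep=self.keep)
```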
### Changed
- **Settings Tab Organization**: Restructured for better logical grouping
  - **General**: Display preferences (timezone, theme, auto-refresh)
  - **Roster Management**: CSV operations and roster table (now includes Replace Mode)
  - **Database**: All backup/restore operations (NEW)
  - **Advanced**: Power user settings (calibration, thresholds)
  - **Danger Zone**: Destructive operations
- CSV Replace Mode warnings enhanced and moved to Roster Management context

### Technical Details
- **SQLite Backup API**: Uses native SQLite backup API for concurrent-safe snapshots
- **Metadata Tracking**: JSON sidecar files store snapshot metadata alongside database files
- **Atomic Operations**: Database restoration is atomic with automatic rollback on failure
- **File Structure**: Snapshots stored in `./data/backups/` with timestamped filenames
- **API Endpoints**: 7 new endpoints for database management operations
- **Backup Service**: `backend/services/database_backup.py` - Core backup/restore logic
- **Scheduler Service**: `backend/services/backup_scheduler.py` - Automatic backup automation
- **Clone Utility**: `scripts/clone_db_to_dev.py` - Remote database synchronization tool
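The "SQLite Backup API" and "Metadata Tracking" items above can be illustrated with a short sketch. The paths, filename pattern, and function name are assumptions for illustration; the real logic lives in `backend/services/database_backup.py`.

```python
import json
import sqlite3
from datetime import datetime
from pathlib import Path

def create_snapshot(db_path: str = "data/seismo_fleet.db",
                    backup_dir: str = "data/backups",
                    description: str = "") -> Path:
    """Copy the live database with sqlite3's online backup API and write a JSON sidecar."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = Path(backup_dir) / f"snapshot_{stamp}.db"

    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest)
    with dst:
        src.backup(dst)          # concurrent-safe page-by-page copy
    src.close()
    dst.close()

    # Sidecar metadata stored alongside the snapshot file
    dest.with_suffix(".json").write_text(json.dumps({
        "created": stamp,
        "description": description,
        "size_bytes": dest.stat().st_size,
        "type": "manual",
    }, indent=2))
    return dest
```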
### Security Considerations
- Snapshots contain full database data and should be secured appropriately
- Remote cloning supports optional authentication tokens
- Restore operations require safety backup creation by default
- All destructive operations remain in Danger Zone with warnings

### Migration Notes
No database migration required for v0.4.0. All new features use existing database structure and add new backup management capabilities without modifying the core schema.

## [0.3.3] - 2025-12-12

### Changed
- **Mobile Navigation**: Moved hamburger menu button from floating top-right to bottom navigation bar
  - Bottom nav now shows: Menu (hamburger), Dashboard, Roster, Settings
  - Removed "Add Unit" from bottom nav (still accessible via sidebar menu)
  - Hamburger no longer floats over content on mobile
- **Status Dot Visibility**: Increased status dot size from 12px to 16px (w-3/h-3 → w-4/h-4) in dashboard fleet overview for better at-a-glance visibility
  - Affects both Active and Benched tabs in dashboard
  - Makes status colors (green/yellow/red) easier to spot during quick scroll

### Fixed
- **Location Navigation**: Moved tap-to-navigate functionality from roster card view to unit detail modal only
  - Roster cards now show simple location text with pin emoji
  - Navigation links (opening Maps app) only appear in the modal when tapping a unit
  - Reduces visual clutter and accidental navigation triggers

### Technical Details
- Bottom navigation remains at 4 buttons, first button now triggers sidebar menu
- Removed standalone hamburger button element and associated CSS
- Modal already had navigation links, no changes needed there

## [0.3.2] - 2025-12-12

### Added
- **Progressive Web App (PWA) Mobile Optimization**: Complete mobile-first redesign for field deployment usage
  - **Responsive Navigation**: Hamburger menu with slide-in sidebar for mobile, always-visible sidebar for desktop
  - **Bottom Navigation Bar**: Quick access to Dashboard, Roster, Add Unit, and Settings (mobile only)
  - **Mobile Card View**: Compact card layout for roster units with status dots, location, and project ID
  - **Tap-to-Navigate**: Location addresses and coordinates are clickable and open in user's default navigation app (Google Maps, Apple Maps, Waze, etc.)
  - **Unit Detail Modal**: Bottom sheet modal showing full unit details with edit capabilities (tap any unit card to open)
  - **Touch Optimization**: 44x44px minimum button targets following iOS/Android accessibility guidelines
  - **Service Worker**: Network-first caching strategy for offline-capable operation
  - **IndexedDB Storage**: Offline data persistence for unit information and pending edits
  - **Background Sync**: Queues edits made while offline and syncs automatically when connection returns
  - **Offline Indicator**: Visual banner showing offline status with manual sync button
  - **PWA Manifest**: Installable as a standalone app on mobile devices with custom icons
  - **Hard Reload Button**: "Clear Cache & Reload" utility in sidebar menu to force fresh JavaScript/CSS
- **Mobile-Specific Files**:
  - `backend/static/mobile.css` - Mobile UI styles, hamburger menu, bottom nav, cards, modals
  - `backend/static/mobile.js` - Mobile interactions, offline sync, modal management
  - `backend/static/sw.js` - Service worker for PWA functionality
  - `backend/static/offline-db.js` - IndexedDB wrapper for offline storage
  - `backend/static/manifest.json` - PWA configuration
  - `backend/static/icons/` - 8 PWA icon sizes (72px-512px)

### Changed
- **Dashboard Alerts**: Only show Missing units in notifications (Pending units no longer appear in alerts)
- **Roster Template**: Mobile card view shows status from server-side render instead of fetching separately
- **Mobile Status Display**: Benched units show "Benched" label instead of "Unknown" or "N/A"
- **Base Template**: Added cache-busting query parameters to JavaScript files (e.g., `mobile.js?v=0.3.2`)
- **Sidebar Menu**: Added utility section with "Toggle theme" and "Clear Cache & Reload" buttons

### Fixed
- **Modal Status Display**: Fixed unit detail modal showing "Unknown" status by passing status data from card to modal
- **Mobile Card Status**: Fixed grey dot with "Unknown" label for benched units - now properly shows deployment state
- **Status Data Passing**: Roster cards now pass status and age to modal via function parameters and global status map
- **Service Worker Caching**: Aggressive browser caching issue resolved with version query parameters and hard reload function

### Technical Details
- Mobile breakpoint at 768px (`md:` prefix in TailwindCSS)
- PWA installable via Add to Home Screen on iOS/Android
- Service worker caches all static assets with network-first strategy
- Google Maps search API used for universal navigation links (works across all map apps)
- Status map stored in `window.rosterStatusMap` from server-side rendered data
- Hard reload function clears service worker caches, unregisters workers, and deletes IndexedDB

## [0.3.1] - 2025-12-12

### Fixed
- **Dashboard Notifications**: Removed Pending units from alert list - only Missing units now trigger notifications
- **Status Dots**: Verified deployed units display correct status dots (OK=green, Pending=yellow, Missing=red) in both active and benched tables
- **Mobile Card View**: Fixed roster cards showing "Unknown" status by using `.get()` with defaults in backend routes
- **Backend Status Handling**: Added default values for status, age, last_seen fields to prevent KeyError exceptions

### Changed
- Backend roster partial routes (`/partials/roster-deployed`, `/partials/roster-benched`) now use `.get()` method with sensible defaults
  - Deployed units default to "Unknown" status when data unavailable
  - Benched units default to "N/A" status when data unavailable

## [0.3.0] - 2025-12-09

### Added
- **Series 4 (Micromate) Support**: New `/api/series4/heartbeat` endpoint for receiving telemetry from Series 4 Micromate units
  - Auto-detection of Series 4 units via UM##### ID pattern
  - Stores project hints from emitter payload in unit notes
  - Automatic unit type classification across both Series 3 and Series 4 endpoints
- **Development Environment Labels**: Visual indicators to distinguish dev from production deployments
  - Yellow "DEV" badge in sidebar navigation
  - "[DEV]" prefix in browser title
  - Yellow banner on dashboard when running in development mode
  - Environment variable support in docker-compose.yml (ENVIRONMENT=production|development)
- **Quality of Life Improvements**:
  - Human-readable relative timestamps (e.g., "2h 15m ago", "3d ago") with full date in tooltips
  - "Last Updated" timestamp indicator on dashboard
  - Status icons for colorblind accessibility (checkmark for OK, clock for Pending, X for Missing)
  - Breadcrumb navigation on unit detail pages
  - Copy-to-clipboard buttons for unit IDs
  - Search/filter functionality for fleet roster table
  - Improved empty state messages with icons
- **Timezone Support**: Comprehensive timezone handling across the application
  - Timezone selector in Settings (defaults to America/New_York EST)
  - Human-readable timestamp format (e.g., "9/10/2020 8:00 AM EST")
  - Timezone-aware display for all timestamps site-wide
  - Settings stored in localStorage for immediate effect
- **Settings Page Redesign**: Complete overhaul with tabbed interface and persistent preferences
  - **General Tab**: Display preferences (timezone, theme, auto-refresh interval)
  - **Data Management Tab**: Safe operations (CSV export, merge import, roster table)
  - **Advanced Tab**: Power user settings (replace mode import, calibration defaults, status thresholds)
  - **Danger Zone Tab**: Destructive operations isolated with enhanced warnings
  - Backend preferences storage via new UserPreferences model
  - Tab state persistence in localStorage
  - Smooth animations and consistent styling with existing pages
- **User Preferences API**: New backend endpoints for persistent settings storage
  - `GET /api/settings/preferences` - Retrieve all user preferences
  - `PUT /api/settings/preferences` - Update preferences (supports partial updates)
  - Database-backed storage for cross-device preference sync
  - Migration script: `backend/migrate_add_user_preferences.py`

### Changed
- Timestamps now display in user-selected timezone with human-readable format throughout the application
- Settings page reorganized from 644-line flat layout to clean 4-tab interface
- CSV Replace Mode moved from Data Management to Advanced tab with additional warnings
- Import operations separated: safe merge in Data Management tab, destructive replace in Advanced tab
- Page title changed from "Roster Manager" to "Settings" for better clarity
- All preferences now persist to backend database instead of relying solely on localStorage

### Fixed
- Unit type classification now consistent across Series 3 and Series 4 heartbeat endpoints
- Auto-correction of misclassified unit types when they report to wrong endpoint

### Technical Details
- New `detect_unit_type()` helper function for pattern-based unit classification
- UserPreferences model with single-row table pattern (id=1) for global settings
- Series 4 units identified by UM prefix followed by digits (e.g., UM11719)
- JavaScript Intl API used for client-side timezone conversion
- Pydantic schema for partial preference updates (PreferencesUpdate model)
- Environment context injection via custom FastAPI template response wrapper
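The pattern rule described above (UM prefix followed by digits marks a Series 4 Micromate) is easy to sketch. The exact signature and return values of the shipped `detect_unit_type()` helper are not documented here, so treat this as an illustrative assumption.

```python
import re

def detect_unit_type(unit_id: str) -> str:
    """Classify a unit from its ID pattern.

    Sketch of the helper described above; the shipped version may use
    different return values or cover more patterns.
    """
    if re.fullmatch(r"UM\d+", unit_id):   # Series 4 (Micromate), e.g. UM11719
        return "series4"
    return "series3"                      # default for other IDs, e.g. BE1234
```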
## [0.2.1] - 2025-12-03

### Added
- `/settings` roster manager page with CSV export/import, live stats, and danger-zone reset controls.
- `/api/settings` router that exposes `export-csv`, `stats`, `roster-units`, `import-csv-replace`, and the clear-* endpoints backing the UI.
- Dedicated HTMX partials/tabs for deployed, benched, retired, and ignored units plus new ignored-table UI to unignore or delete entries.

### Changed
- Roster and unit detail templates now display device-type specific metadata (calibration windows, modem pairings, IP/phone fields) alongside inline actions.
- Base navigation highlights the new settings workflow and routes retired/ignored buckets through dedicated endpoints + partials.

### Fixed
- Snapshot summary counts only consider deployed units, preventing dashboard alerts from including benched hardware.
- Snapshot payloads now include address/coordinate metadata so map widgets and CSV exports stay accurate.

## [0.2.0] - 2025-12-03

### Added
- Device-type aware roster schema (seismographs vs modems) with new metadata columns plus `backend/migrate_add_device_types.py` for upgrading existing SQLite files.
- `create_test_db.py` helper that generates a ready-to-use demo database with sample seismographs, modems, and emitter rows.
- Ignore list persistence/API so noisy legacy emitters can be quarantined via `/api/roster/ignore` and surfaced in the UI.
- Roster page enhancements: Add Unit modal, CSV import modal, and HTMX-powered table fed by `/partials/roster-table`.
- Unit detail view rewritten to fetch data via API, expose deployment status, and allow edits to all metadata.

### Changed
- Snapshot service now merges roster + emitter data into active/benched/retired/unknown buckets and includes device-specific metadata in each record.
- Roster edit endpoints parse date fields, manage modem/seismograph specific attributes, and guarantee records exist when toggling deployed/retired states.
- Dashboard partial endpoints are grouped under `/dashboard/*` so HTMX tabs stay in sync with the consolidated snapshot payload.

### Fixed
- Toggling deployed/retired flags no longer fails when a unit does not exist because the router now auto-creates placeholder roster rows.
- CSV import applies address/coordinate updates instead of silently dropping unknown columns.

## [0.1.1] - 2025-12-02

### Added
- **Roster Editing API**: Full CRUD operations for roster management
  - `POST /api/roster/add` - Add new units to roster
  - `POST /api/roster/set-deployed/{unit_id}` - Toggle deployment status
  - `POST /api/roster/set-retired/{unit_id}` - Toggle retired status
  - `POST /api/roster/set-note/{unit_id}` - Update unit notes
- **CSV Import**: Bulk roster import functionality
  - `POST /api/roster/import-csv` - Import units from CSV file
  - Support for all roster fields: unit_id, unit_type, deployed, retired, note, project_id, location
  - Optional update_existing parameter to control duplicate handling
  - Detailed import summary with added/updated/skipped/error counts
- **Enhanced Database Models**:
  - Added `project_id` field to RosterUnit model
  - Added `location` field to RosterUnit model
  - Added `last_updated` timestamp tracking
- **Dashboard Enhancements**:
  - Separate views for Active, Benched, and Retired units
  - New endpoints: `/dashboard/active` and `/dashboard/benched`

### Fixed
- Database session management bug in `emit_status_snapshot()`
  - Added `get_db_session()` helper function for direct session access
  - Implemented proper session cleanup with try/finally blocks
- Database schema synchronization issues
  - Database now properly recreates when model changes are detected

### Changed
- Updated RosterUnit model to include additional metadata fields
- Improved error handling in CSV import with row-level error reporting
- Enhanced snapshot service to properly manage database connections

### Technical Details
- All roster editing endpoints use Form data for better HTML form compatibility
- CSV import uses multipart/form-data for file uploads
- Boolean fields in CSV accept: 'true', '1', 'yes' (case-insensitive)
- Database sessions now properly closed to prevent connection leaks
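The boolean-parsing rule stated in the Technical Details above ('true', '1', 'yes', case-insensitive) comes down to a one-line check. The helper name is illustrative, not the one used in the codebase.

```python
def parse_csv_bool(value: str) -> bool:
    """Interpret a CSV boolean cell the way the import described above does.

    Anything other than 'true', '1', or 'yes' (case-insensitive), including an
    empty cell, is treated as False.
    """
    return value.strip().lower() in {"true", "1", "yes"}
```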
## [0.1.0] - 2024-11-20

### Added
- Initial release of Seismo Fleet Manager
- FastAPI-based REST API for fleet management
- SQLite database with SQLAlchemy ORM
- Emitter reporting endpoints
- Basic fleet status monitoring
- Docker and Docker Compose support
- Web-based dashboard with HTMX
- Dark/light mode toggle
- Interactive maps with Leaflet
- Photo management per unit
- Automated status categorization (OK/Pending/Missing)

[0.5.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.4...v0.5.0
[0.4.4]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.3...v0.4.4
[0.4.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.2...v0.4.3
[0.4.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.1...v0.4.2
[0.4.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.3...v0.4.0
[0.3.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.2...v0.3.3
[0.3.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.1...v0.3.2
[0.3.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.0...v0.3.1
[0.3.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.2.1...v0.3.0
[0.2.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.1.1...v0.2.0
[0.1.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.1.0...v0.1.1
[0.1.0]: https://github.com/serversdwn/seismo-fleet-manager/releases/tag/v0.1.0
**Dockerfile** (new file, 24 lines added)

@@ -0,0 +1,24 @@

```dockerfile
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies (ping for network diagnostics)
RUN apt-get update && \
    apt-get install -y --no-install-recommends iputils-ping curl && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port
EXPOSE 8001

# Run the application using the new backend structure
CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "8001"]
```
**FRONTEND_README.md** (new file, 303 lines added)

@@ -0,0 +1,303 @@

# Seismo Fleet Manager - Frontend Documentation

## Overview

This is the MVP frontend scaffold for **Seismo Fleet Manager**, built with:
- **FastAPI** (backend framework)
- **HTMX** (dynamic updates without JavaScript frameworks)
- **TailwindCSS** (utility-first styling)
- **Jinja2** (server-side templating)
- **Leaflet** (interactive maps)

No React, Vue, or other frontend frameworks are used.

## Project Structure

```
seismo-fleet-manager/
├── backend/
│   ├── main.py              # FastAPI app entry point
│   ├── routers/
│   │   ├── roster.py        # Fleet roster endpoints
│   │   ├── units.py         # Individual unit endpoints
│   │   └── photos.py        # Photo management endpoints
│   ├── services/
│   │   └── snapshot.py      # Mock status snapshot (replace with real logic)
│   ├── static/
│   │   └── style.css        # Custom CSS
│   ├── database.py          # SQLAlchemy database setup
│   ├── models.py            # Database models
│   └── routes.py            # Legacy API routes
├── templates/
│   ├── base.html            # Base layout with sidebar & dark mode
│   ├── dashboard.html       # Main dashboard page
│   ├── roster.html          # Fleet roster page
│   ├── unit_detail.html     # Unit detail page
│   └── partials/
│       └── roster_table.html  # HTMX partial for roster table
├── data/
│   └── photos/              # Photo storage (organized by unit_id)
└── requirements.txt
```

## Running the Application

### Install Dependencies

```bash
pip install -r requirements.txt
```

### Run the Server

```bash
uvicorn backend.main:app --host 0.0.0.0 --port 8001 --reload
```

The application will be available at:
- **Web UI**: http://localhost:8001/
- **API Docs**: http://localhost:8001/docs
- **Health Check**: http://localhost:8001/health

## Features

### 1. Dashboard (`/`)

The main dashboard provides an at-a-glance view of the fleet:

- **Fleet Summary Card**: Total units, deployed units, status breakdown
- **Recent Alerts Card**: Shows units with Missing or Pending status
- **Recent Photos Card**: Placeholder for photo gallery
- **Fleet Status Preview**: Quick view of first 5 units

**Auto-refresh**: Dashboard updates every 10 seconds via HTMX

### 2. Fleet Roster (`/roster`)

A comprehensive table view of all seismograph units:

**Columns**:
- Status indicator (colored dot: green=OK, yellow=Pending, red=Missing)
- Deployment indicator (blue dot if deployed)
- Unit ID
- Last seen timestamp
- Age since last contact
- Notes
- Actions (View detail button)

**Features**:
- Auto-refresh every 10 seconds
- Sorted by priority (Missing > Pending > OK)
- Click any row to view unit details

### 3. Unit Detail Page (`/unit/{unit_id}`)

Split-screen layout with detailed information:

**Left Column**:
- Status card with real-time updates
- Deployment status
- Last contact time and file
- Notes section
- Editable metadata (mock form)

**Right Column - Tabbed Interface**:
- **Photos Tab**: Primary photo with thumbnail gallery
- **Map Tab**: Interactive Leaflet map showing unit location
- **History Tab**: Placeholder for event history

**Auto-refresh**: Unit data updates every 10 seconds

### 4. Dark/Light Mode

Toggle button in sidebar switches between themes:
- Uses Tailwind's `dark:` classes
- Preference saved to localStorage
- Smooth transitions on theme change

## API Endpoints

### Status & Fleet Data

```http
GET /api/status-snapshot
```
Returns complete fleet status snapshot with statistics.

```http
GET /api/roster
```
Returns sorted list of all units for roster table.

```http
GET /api/unit/{unit_id}
```
Returns detailed information for a single unit including coordinates.

### Photo Management

```http
GET /api/unit/{unit_id}/photos
```
Returns list of photos for a unit, sorted by recency.

```http
GET /api/unit/{unit_id}/photo/{filename}
```
Serves a specific photo file.

### Legacy Endpoints

```http
POST /emitters/report
```
Endpoint for emitters to report status (from original backend).

```http
GET /fleet/status
```
Returns database-backed fleet status (from original backend).

## Mock Data

### Location: `backend/services/snapshot.py`

The `emit_status_snapshot()` function currently returns mock data with 8 units:

- **BE1234**: OK, deployed (San Francisco)
- **BE5678**: Pending, deployed (Los Angeles)
- **BE9012**: Missing, deployed (New York)
- **BE3456**: OK, benched (Chicago)
- **BE7890**: OK, deployed (Houston)
- **BE2468**: Pending, deployed
- **BE1357**: OK, benched
- **BE8642**: Missing, deployed

**To replace with real data**: Update the `emit_status_snapshot()` function to call your Series3 emitter logic.
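A minimal sketch of what that replacement might look like. The snapshot payload schema is not documented here, so the field names below are assumptions based on the mock units listed above; the placeholder list stands in for a real query against your Series3 emitter data.

```python
from datetime import datetime, timezone

def emit_status_snapshot() -> dict:
    """Sketch only: swap the placeholder list for a query of real emitter records."""
    # Placeholder shaped like the mock data; replace with your Series3 emitter logic.
    units = [
        {"unit_id": "BE1234", "status": "OK", "deployed": True, "location": "San Francisco"},
    ]
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "units": [
            {
                "unit_id": u["unit_id"],
                "status": u["status"],        # OK / Pending / Missing
                "deployed": u["deployed"],
                "location": u.get("location"),
            }
            for u in units
        ],
    }
```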
## Styling

### Color Palette

The application uses your brand colors:

```css
orange: #f48b1c
navy: #142a66
burgundy: #7d234d
```

These are configured in the Tailwind config as `seismo-orange`, `seismo-navy`, `seismo-burgundy`.

### Cards

All cards use consistent styling:
```html
<div class="rounded-xl shadow-lg bg-white dark:bg-slate-800 p-6">
```

### Status Indicators

- Green dot: OK status
- Yellow dot: Pending status
- Red dot: Missing status
- Blue dot: Deployed
- Gray dot: Benched

## HTMX Usage

HTMX enables dynamic updates without writing JavaScript:

### Auto-refresh Example (Dashboard)

```html
<div hx-get="/api/status-snapshot"
     hx-trigger="load, every 10s"
     hx-swap="none"
     hx-on::after-request="updateDashboard(event)">
```

This fetches the snapshot on page load and every 10 seconds, then calls a JavaScript function to update the DOM.

### Partial Template Loading (Roster)

```html
<div hx-get="/partials/roster-table"
     hx-trigger="load, every 10s"
     hx-swap="innerHTML">
```

This replaces the entire inner HTML with the server-rendered roster table every 10 seconds.

## Adding Photos

To add photos for a unit:

1. Create a directory: `data/photos/{unit_id}/`
2. Add image files (jpg, jpeg, png, gif, webp)
3. Photos will automatically appear on the unit detail page
4. Most recent file becomes the primary photo

Example:
```bash
mkdir -p data/photos/BE1234
cp my-photo.jpg data/photos/BE1234/deployment-site.jpg
```

## Customization

### Adding New Pages

1. Create a template in `templates/`
2. Add a route in `backend/main.py`:

```python
@app.get("/my-page", response_class=HTMLResponse)
async def my_page(request: Request):
    return templates.TemplateResponse("my_page.html", {"request": request})
```

3. Add a navigation link in `templates/base.html`

### Adding New API Endpoints

1. Create a router file in `backend/routers/`
2. Include the router in `backend/main.py`:

```python
from backend.routers import my_router
app.include_router(my_router.router)
```

## Docker Deployment

The project includes Docker configuration:

```bash
docker-compose up
```

This will start the application on port 8001 (configured to avoid conflicts with port 8000).

## Next Steps

1. **Replace Mock Data**: Update `backend/services/snapshot.py` with real Series3 emitter logic
2. **Database Integration**: The existing SQLAlchemy models can store historical data
3. **Photo Upload**: Add a form to upload photos from the UI
4. **Projects Management**: Implement the "Projects" page
5. **Settings**: Add user preferences and configuration
6. **Event History**: Populate the History tab with real event data
7. **Authentication**: Add user login/authentication if needed
8. **Notifications**: Add real-time alerts for critical status changes

## Development Tips

- The `--reload` flag auto-reloads the server when code changes
- Use browser DevTools to debug HTMX requests (look for `HX-Request` headers)
- Check `/docs` for interactive API documentation (Swagger UI)
- Dark mode state persists in browser localStorage
- All timestamps are currently mock data - replace with real values

## License

See main README.md for license information.
**README.md** (602 lines changed)

@@ -1,2 +1,600 @@

Removed (previous two-line README):

> # seismo-fleet-manager
> Web app and backend for tracking deployed units.

Added:

# Terra-View v0.5.1

Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard.

## Features

- **Progressive Web App (PWA)**: Mobile-first responsive design optimized for field deployment operations
  - **Install as App**: Add to home screen on iOS/Android for native app experience
  - **Offline Capable**: Service worker caching with IndexedDB storage for offline operation
  - **Touch Optimized**: 44x44px minimum touch targets, hamburger menu, bottom navigation bar
  - **Mobile Card View**: Compact unit cards with status dots, tap-to-navigate locations, and detail modals
  - **Background Sync**: Queue edits while offline and automatically sync when connection returns
- **Web Dashboard**: Modern, responsive UI with dark/light mode, live HTMX updates, and integrated fleet map
- **Fleet Monitoring**: Track deployed, benched, retired, and ignored units in separate buckets with unknown-emitter triage
- **Roster Management**: Full CRUD + CSV import/export, device-type aware metadata, and inline actions from the roster tables
- **Settings & Safeguards**: `/settings` page exposes roster stats, exports, replace-all imports, and danger-zone reset tools
- **Device & Modem Metadata**: Capture calibration windows, modem pairings, phone/IP details, and addresses per unit
- **Status Management**: Automatically mark deployed units as OK, Pending (>12h), or Missing (>24h) based on recent telemetry (see the sketch after this list)
- **Data Ingestion**: Accept reports from emitter scripts via REST API
- **Photo Management**: Upload and view photos for each unit
- **Interactive Maps**: Leaflet-based maps showing unit locations with tap-to-navigate for mobile
- **SQLite Storage**: Lightweight, file-based database for easy deployment
- **Database Management**: Comprehensive backup and restore system
  - **Manual Snapshots**: Create on-demand backups with descriptions
  - **Restore from Snapshot**: Restore database with automatic safety backups
  - **Upload/Download**: Transfer database snapshots for off-site storage
  - **Remote Cloning**: Copy production database to remote dev servers over WAN
  - **Automatic Backups**: Scheduled background backups with configurable retention
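The status rule in the feature list above reduces to two time thresholds. A minimal sketch of that classification, assuming the default 12 h / 24 h cutoffs; the shipped service may name things differently and read configurable thresholds from user preferences.

```python
from datetime import datetime, timedelta

def classify_status(last_seen: datetime | None,
                    now: datetime | None = None,
                    pending_after_hours: int = 12,
                    missing_after_hours: int = 24) -> str:
    """Classify a deployed unit from its most recent check-in time (sketch)."""
    now = now or datetime.utcnow()
    if last_seen is None or now - last_seen > timedelta(hours=missing_after_hours):
        return "Missing"
    if now - last_seen > timedelta(hours=pending_after_hours):
        return "Pending"
    return "OK"
```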
## Roster Manager & Settings

Visit [`/settings`](http://localhost:8001/settings) to perform bulk roster operations with guardrails:

- **CSV export/import**: Download the entire roster, merge updates, or replace all units in one transaction.
- **Live roster table**: Fetch every unit via HTMX, edit metadata, toggle deployed/retired states, move emitters to the ignore list, or delete records in-place.
- **Database backups**: Create snapshots, restore from backups, upload/download database files, view database statistics.
- **Remote cloning**: Clone production database to remote development servers over the network (see `scripts/clone_db_to_dev.py`).
- **Stats at a glance**: View counts for the roster, emitters, and ignored units to confirm import/cleanup operations worked.
- **Danger zone controls**: Clear specific tables or wipe all fleet data when resetting a lab/demo environment.

All UI actions call `GET/POST /api/settings/*` endpoints so you can automate the same workflows from scripts. See [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md) for comprehensive database backup and restore documentation.
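For example, a nightly export plus a quick count check can be scripted against the settings endpoints listed further below. Sketch only; the exact keys in the stats payload may differ.

```python
import requests

BASE = "http://localhost:8001"

# Download the full roster CSV, the same export the Settings UI produces.
csv_bytes = requests.get(f"{BASE}/api/settings/export-csv", timeout=30).content
with open("roster_backup.csv", "wb") as f:
    f.write(csv_bytes)

# Sanity-check the table counts after an import or cleanup.
stats = requests.get(f"{BASE}/api/settings/stats", timeout=30).json()
print(stats)  # counts for roster, emitters, and ignored units
```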
## Tech Stack

- **FastAPI**: Modern, fast web framework
- **SQLAlchemy**: SQL toolkit and ORM
- **SQLite**: Lightweight database
- **HTMX**: Dynamic updates without heavy JavaScript frameworks
- **TailwindCSS**: Utility-first CSS framework
- **Leaflet**: Interactive maps
- **Jinja2**: Server-side templating
- **uvicorn**: ASGI server
- **Docker**: Containerization for easy deployment

## Quick Start with Docker Compose (Recommended)

### Prerequisites
- Docker and Docker Compose installed

### Running the Application

1. **Start the service:**
```bash
docker compose up -d
```

2. **Check logs:**
```bash
docker compose logs -f
```

3. **Stop the service:**
```bash
docker compose down
```

The application will be available at:
- **Web Interface**: http://localhost:8001
- **API Documentation**: http://localhost:8001/docs
- **Health Check**: http://localhost:8001/health

### Data Persistence

The SQLite database and photos are stored in the `./data` directory, which is mounted as a volume. Your data will persist even if you restart or rebuild the container.

## Local Development (Without Docker)

### Prerequisites
- Python 3.11+
- pip

### Setup

1. **Install dependencies:**
```bash
pip install -r requirements.txt
```

2. **Run the server:**
```bash
uvicorn backend.main:app --host 0.0.0.0 --port 8001 --reload
```

The application will be available at http://localhost:8001

### Optional: Generate Sample Data

Need realistic data quickly? Run:

```bash
python create_test_db.py
cp /tmp/sfm_test.db data/seismo_fleet.db
```

The helper script creates a modem/seismograph mix so you can exercise the dashboard, roster tabs, and unit detail screens immediately.

## Upgrading from Previous Versions

### From v0.2.x to v0.3.0

Version 0.3.0 introduces user preferences storage. Run the migration once per database file:

```bash
python backend/migrate_add_user_preferences.py
```

This creates the `user_preferences` table for persistent settings storage (timezone, theme, auto-refresh interval, calibration defaults, status thresholds).

### From v0.1.x to v0.2.x or later

Versions ≥0.2 introduce new roster columns (device_type, calibration dates, modem metadata, addresses, etc.). Run the migration once per database file:

```bash
python backend/migrate_add_device_types.py
```

Both migration scripts are idempotent: if the columns/tables already exist, they simply exit.

## API Endpoints

### Web Pages
- **GET** `/` - Dashboard home page
- **GET** `/roster` - Fleet roster page
- **GET** `/unit/{unit_id}` - Unit detail page
- **GET** `/settings` - Roster manager, CSV import/export, and danger-zone utilities

### Fleet Status & Monitoring
- **GET** `/api/status-snapshot` - Complete fleet status snapshot
- **GET** `/api/roster` - List of all units with metadata
- **GET** `/api/unit/{unit_id}` - Detailed unit information
- **GET** `/health` - Health check endpoint

### Roster Management
- **POST** `/api/roster/add` - Add new unit to roster
  ```bash
  curl -X POST http://localhost:8001/api/roster/add \
    -F "id=BE1234" \
    -F "device_type=seismograph" \
    -F "unit_type=series3" \
    -F "project_id=PROJ-001" \
    -F "deployed=true" \
    -F "note=Main site sensor"
  ```
- **GET** `/api/roster/{unit_id}` - Fetch a single roster entry for editing
- **POST** `/api/roster/edit/{unit_id}` - Update all metadata (device type, calibration dates, modem fields, etc.)
- **POST** `/api/roster/set-deployed/{unit_id}` - Toggle deployment status
- **POST** `/api/roster/set-retired/{unit_id}` - Toggle retired status
- **POST** `/api/roster/set-note/{unit_id}` - Update unit notes
- **DELETE** `/api/roster/{unit_id}` - Remove a roster/emitter pair entirely
- **POST** `/api/roster/import-csv` - Bulk import from CSV (merge/update mode)
  ```bash
  curl -X POST http://localhost:8001/api/roster/import-csv \
    -F "file=@roster.csv" \
    -F "update_existing=true"
  ```
- **POST** `/api/roster/ignore/{unit_id}` - Move an unknown emitter to the ignore list
- **DELETE** `/api/roster/ignore/{unit_id}` - Remove a unit from the ignore list
- **GET** `/api/roster/ignored` - List all ignored units with reasons

### Settings & Data Management
- **GET** `/api/settings/export-csv` - Download the entire roster as CSV
- **GET** `/api/settings/stats` - Counts for roster, emitters, and ignored tables
- **GET** `/api/settings/roster-units` - Raw roster dump for the settings data grid
- **POST** `/api/settings/import-csv-replace` - Replace the entire roster in one atomic transaction
- **GET** `/api/settings/preferences` - Get user preferences (timezone, theme, calibration defaults, etc.)
- **PUT** `/api/settings/preferences` - Update user preferences (supports partial updates)
- **POST** `/api/settings/clear-all` - Danger-zone action that wipes roster, emitters, and ignored tables
- **POST** `/api/settings/clear-roster` - Delete only roster entries
- **POST** `/api/settings/clear-emitters` - Delete auto-discovered emitters
- **POST** `/api/settings/clear-ignored` - Reset ignore list

### Database Management
- **GET** `/api/settings/database/stats` - Database size, row counts, and last modified time
- **POST** `/api/settings/database/snapshot` - Create manual database snapshot with optional description
- **GET** `/api/settings/database/snapshots` - List all available snapshots with metadata
- **GET** `/api/settings/database/snapshot/{filename}` - Download a specific snapshot file
- **DELETE** `/api/settings/database/snapshot/{filename}` - Delete a snapshot
- **POST** `/api/settings/database/restore` - Restore database from snapshot (creates safety backup)
- **POST** `/api/settings/database/upload-snapshot` - Upload snapshot file to server

See [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md) for detailed documentation and examples.
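As a quick illustration of the database endpoints above, the following sketch creates a manual snapshot and then lists what is stored on the server. The description field name and encoding are assumptions; check `/docs` for the exact request schema.

```python
import requests

BASE = "http://localhost:8001"

# Create a manual snapshot before a risky change.
resp = requests.post(
    f"{BASE}/api/settings/database/snapshot",
    data={"description": "pre-upgrade backup"},
    timeout=60,
)
resp.raise_for_status()

# List available snapshots with their metadata.
snapshots = requests.get(f"{BASE}/api/settings/database/snapshots", timeout=30).json()
for snap in snapshots:
    print(snap)
```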
### CSV Import Format
Create a CSV file with the following columns (only `unit_id` is required, everything else is optional):

```
unit_id,unit_type,device_type,deployed,retired,note,project_id,location,address,coordinates,last_calibrated,next_calibration_due,deployed_with_modem_id,ip_address,phone_number,hardware_model
```

Boolean columns accept `true/false`, `1/0`, or `yes/no` (case-insensitive). Date columns expect `YYYY-MM-DD`. Use the same schema whether you merge via `/api/roster/import-csv` or replace everything with `/api/settings/import-csv-replace`.

#### Example

```csv
unit_id,unit_type,device_type,deployed,retired,note,project_id,location,address,coordinates,last_calibrated,next_calibration_due,deployed_with_modem_id,ip_address,phone_number,hardware_model
BE1234,series3,seismograph,true,false,Primary sensor,PROJ-001,"Station A","123 Market St, San Francisco, CA","37.7937,-122.3965",2025-01-15,2026-01-15,MDM001,,,
MDM001,modem,modem,true,false,Field modem,PROJ-001,"Station A","123 Market St, San Francisco, CA","37.7937,-122.3965",,,,"192.0.2.10","+1-555-0100","Raven XTV"
```

See [sample_roster.csv](sample_roster.csv) for a minimal working example.

### Emitter Reporting
- **POST** `/emitters/report` - Submit status report from a seismograph unit
- **POST** `/api/series3/heartbeat` - Series 3 multi-unit telemetry payload
- **POST** `/api/series4/heartbeat` - Series 4 (Micromate) multi-unit telemetry payload
- **GET** `/fleet/status` - Retrieve status of all seismograph units (legacy)

### Photo Management
- **GET** `/api/unit/{unit_id}/photos` - List photos for a unit
- **GET** `/api/unit/{unit_id}/photo/{filename}` - Serve specific photo file

## API Documentation

Once running, interactive API documentation is available at:
- **Swagger UI**: http://localhost:8001/docs
- **ReDoc**: http://localhost:8001/redoc

## Testing the API

### Using curl

**Submit a report:**
```bash
curl -X POST http://localhost:8001/emitters/report \
  -H "Content-Type: application/json" \
  -d '{
    "unit": "SEISMO-001",
    "unit_type": "series3",
    "timestamp": "2025-11-20T10:30:00",
    "file": "event_20251120_103000.dat",
    "status": "OK"
  }'
```

**Get fleet status:**
```bash
curl http://localhost:8001/api/roster
```

**Import roster from CSV:**
```bash
curl -X POST http://localhost:8001/api/roster/import-csv \
  -F "file=@sample_roster.csv" \
  -F "update_existing=true"
```

### Using Python

```python
import requests
from datetime import datetime

# Submit report
response = requests.post(
    "http://localhost:8001/emitters/report",
    json={
        "unit": "SEISMO-001",
        "unit_type": "series3",
        "timestamp": datetime.utcnow().isoformat(),
        "file": "event_20251120_103000.dat",
        "status": "OK"
    }
)
print(response.json())

# Get fleet status
response = requests.get("http://localhost:8001/api/roster")
print(response.json())

# Import CSV
with open('roster.csv', 'rb') as f:
    files = {'file': f}
    data = {'update_existing': 'true'}
    response = requests.post(
        "http://localhost:8001/api/roster/import-csv",
        files=files,
        data=data
    )
print(response.json())
```

## Data Model

### RosterUnit Table (Fleet Roster)

**Common fields**

| Field | Type | Description |
|-------|------|-------------|
| id | string | Unit identifier (primary key) |
| unit_type | string | Hardware model name (default: `series3`) |
| device_type | string | Device type: `"seismograph"`, `"modem"`, or `"slm"` (sound level meter) |
| deployed | boolean | Whether the unit is in the field |
| retired | boolean | Removes the unit from deployments but preserves history |
| note | string | Notes about the unit |
| project_id | string | Associated project identifier |
| location | string | Legacy location label |
| address | string | Human-readable address |
| coordinates | string | `lat,lon` pair used by the map |
| last_updated | datetime | Last modification timestamp |

**Seismograph-only fields**

| Field | Type | Description |
|-------|------|-------------|
| last_calibrated | date | Last calibration date |
| next_calibration_due | date | Next calibration date |
| deployed_with_modem_id | string | Which modem is paired during deployment |

**Modem-only fields**

| Field | Type | Description |
|-------|------|-------------|
| ip_address | string | Assigned IP (static or DHCP) |
| phone_number | string | Cellular number for the modem |
| hardware_model | string | Modem hardware reference |

**Sound Level Meter (SLM) fields**

| Field | Type | Description |
|-------|------|-------------|
| slm_host | string | Direct IP address for SLM (if not using modem) |
| slm_tcp_port | integer | TCP control port (default: 2255) |
| slm_ftp_port | integer | FTP file transfer port (default: 21) |
| slm_model | string | Device model (NL-43, NL-53) |
| slm_serial_number | string | Manufacturer serial number |
| slm_frequency_weighting | string | Frequency weighting setting (A, C, Z) |
| slm_time_weighting | string | Time weighting setting (F=Fast, S=Slow) |
| slm_measurement_range | string | Measurement range setting |
| slm_last_check | datetime | Last status check timestamp |
| deployed_with_modem_id | string | Modem pairing (shared with seismographs) |
### Device Type Schema

Terra-View supports three device types with the following standardized `device_type` values:

- **`"seismograph"`** (default) - Seismic monitoring devices (Series 3, Series 4, Micromate)
  - Uses: calibration dates, modem pairing
  - Examples: BE1234, UM12345 (Series 3/4 units)

- **`"modem"`** - Field modems and network equipment
  - Uses: IP address, phone number, hardware model
  - Examples: MDM001, MODEM-2025-01

- **`"slm"`** - Sound level meters (Rion NL-43/NL-53)
  - Uses: TCP/FTP configuration, measurement settings, modem pairing
  - Examples: SLM-43-01, NL43-001

**Important**: All `device_type` values must be lowercase. The legacy value `"sound_level_meter"` has been deprecated in favor of the shorter `"slm"`. Run `backend/migrate_standardize_device_types.py` to update existing databases.

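As a rough illustration of what that normalization involves, the sketch below lowercases every `device_type` on the `roster` table and collapses the legacy alias to `"slm"`. It is not the actual contents of `migrate_standardize_device_types.py`; the table name and database path are taken from the other migration scripts and `backend/database.py` in this repository.

```python
import sqlite3

DB_PATH = "data/seismo_fleet.db"  # default SQLite location used by Terra-View


def standardize_device_types(db_path: str = DB_PATH) -> None:
    """Sketch of device_type normalization: lowercase values, replace deprecated alias."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        # Normalize case first, then map the legacy long form onto the short form
        cur.execute("UPDATE roster SET device_type = LOWER(device_type) WHERE device_type IS NOT NULL")
        cur.execute("UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'")
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    standardize_device_types()
```
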
### Emitter Table (Device Check-ins)

| Field | Type | Description |
|-------|------|-------------|
| id | string | Unit identifier (primary key) |
| unit_type | string | Reported device type/model |
| last_seen | datetime | Last report timestamp |
| last_file | string | Last file processed |
| status | string | Current status: OK, Pending, Missing |
| notes | string | Optional notes (nullable) |

### IgnoredUnit Table (Noise Management)

| Field | Type | Description |
|-------|------|-------------|
| id | string | Unit identifier (primary key) |
| reason | string | Optional context for ignoring |
| ignored_at | datetime | When the ignore action occurred |

### UserPreferences Table (Settings Storage)

| Field | Type | Description |
|-------|------|-------------|
| id | integer | Always 1 (single-row table) |
| timezone | string | Display timezone (default: America/New_York) |
| theme | string | UI theme: auto, light, or dark |
| auto_refresh_interval | integer | Dashboard refresh interval in seconds |
| date_format | string | Date format preference |
| table_rows_per_page | integer | Default pagination size |
| calibration_interval_days | integer | Default days between calibrations |
| calibration_warning_days | integer | Warning threshold before calibration due |
| status_ok_threshold_hours | integer | Hours for OK status threshold |
| status_pending_threshold_hours | integer | Hours for Pending status threshold |
| updated_at | datetime | Last preference update timestamp |

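The two status thresholds above control how an emitter's `last_seen` age is translated into OK, Pending, or Missing. The authoritative logic lives in `backend/services/snapshot.py`; the sketch below only shows the presumed interpretation (a unit is OK within the OK threshold, Pending within the Pending threshold, Missing otherwise), and the default hour values are placeholders, not the application's stored defaults.

```python
from datetime import datetime, timedelta
from typing import Optional


def classify_status(last_seen: Optional[datetime],
                    ok_hours: int = 4,
                    pending_hours: int = 24,
                    now: Optional[datetime] = None) -> str:
    """Presumed mapping from check-in age to fleet status (illustrative sketch only)."""
    if last_seen is None:
        return "Missing"
    now = now or datetime.utcnow()
    age = now - last_seen
    if age <= timedelta(hours=ok_hours):
        return "OK"
    if age <= timedelta(hours=pending_hours):
        return "Pending"
    return "Missing"


# Example: a unit last heard from six hours ago falls in the Pending band
print(classify_status(datetime.utcnow() - timedelta(hours=6)))  # -> Pending
```
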
## Project Structure

```
seismo-fleet-manager/
├── backend/
│   ├── main.py                          # FastAPI app entry point
│   ├── database.py                      # SQLAlchemy database configuration
│   ├── models.py                        # Database models (RosterUnit, Emitter, IgnoredUnit, UserPreferences)
│   ├── routes.py                        # Legacy API endpoints + Series 3/4 heartbeat endpoints
│   ├── routers/                         # Modular API routers
│   │   ├── roster.py                    # Fleet status endpoints
│   │   ├── roster_edit.py               # Roster management & CSV import
│   │   ├── units.py                     # Unit detail endpoints
│   │   ├── photos.py                    # Photo management
│   │   ├── dashboard.py                 # Dashboard partials
│   │   ├── dashboard_tabs.py            # Dashboard tab endpoints
│   │   └── settings.py                  # Settings, preferences, and data management
│   ├── services/
│   │   ├── snapshot.py                  # Fleet status snapshot logic
│   │   ├── database_backup.py           # Database backup and restore service
│   │   └── backup_scheduler.py          # Automatic backup scheduler
│   ├── migrate_add_device_types.py      # SQLite migration for v0.2 schema
│   ├── migrate_add_user_preferences.py  # SQLite migration for v0.3 schema
│   └── static/                          # Static assets (CSS, etc.)
├── create_test_db.py                    # Generate a sample SQLite DB with mixed devices
├── templates/                           # Jinja2 HTML templates
│   ├── base.html                        # Base layout with sidebar
│   ├── dashboard.html                   # Main dashboard
│   ├── roster.html                      # Fleet roster table
│   ├── unit_detail.html                 # Unit detail page
│   ├── settings.html                    # Roster manager UI
│   └── partials/                        # HTMX partial templates
│       ├── roster_table.html
│       ├── retired_table.html
│       ├── ignored_table.html
│       └── unknown_emitters.html
├── data/                                # SQLite database & photos (persisted)
│   └── backups/                         # Database snapshots directory
├── scripts/
│   └── clone_db_to_dev.py               # Remote database cloning utility
├── docs/
│   └── DATABASE_MANAGEMENT.md           # Database backup/restore guide
├── requirements.txt                     # Python dependencies
├── Dockerfile                           # Docker container definition
├── docker-compose.yml                   # Docker Compose configuration
├── CHANGELOG.md                         # Version history
├── FRONTEND_README.md                   # Frontend documentation
└── README.md                            # This file
```

## Docker Commands

**Build the image:**
```bash
docker compose build
```

**Start in foreground:**
```bash
docker compose up
```

**Start in background:**
```bash
docker compose up -d
```

**View logs:**
```bash
docker compose logs -f seismo-backend
```

**Restart service:**
```bash
docker compose restart
```

**Rebuild and restart:**
```bash
docker compose up -d --build
```

**Stop and remove containers:**
```bash
docker compose down
```

**Remove containers and volumes:**
```bash
docker compose down -v
```

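For orientation, a minimal `docker-compose.yml` along the lines below would satisfy the commands above. Treat it as a sketch only: the service name, port, `ENVIRONMENT` variable, and `data/` volume match what this README and `backend/main.py` document, but the container-side path `/app/data` and the restart policy are assumptions; the checked-in compose file is the source of truth.

```yaml
services:
  seismo-backend:
    build: .
    ports:
      - "8001:8001"            # FastAPI app (Swagger UI at /docs)
    volumes:
      - ./data:/app/data       # persist the SQLite database, photos, and backups
    environment:
      - ENVIRONMENT=production # "development" enables the dev labels in the UI
    restart: unless-stopped
```
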
## Release Highlights

### v0.4.3 — 2026-01-14
- **Sound Level Meter workflow**: Roster manager surfaces SLM metadata, supports rename actions, and adds return-to-project navigation plus schedule/unit templates for project planning.
- **Project insight panels**: Project dashboards now expose file and session lists so teams can see what each project stores before diving into units.
- **Project view polish**: FTP browser supports folder downloads, the timer display was reimplemented, and the project/device templates gained edit modals for projects and locations to streamline navigation.
- **SLM sync & accuracy**: Configuration edits now propagate to SLMM (which caches configs for faster responses) and the live view uses the correct DRD fields so telemetry aligns with the control center.

### v0.4.0 — 2025-12-16
- **Database Management System**: Complete backup and restore functionality with manual snapshots, restore operations, and upload/download capabilities
- **Remote Database Cloning**: New `clone_db_to_dev.py` script for copying the production database to remote dev servers over WAN
- **Automatic Backup Scheduler**: Background service for scheduled backups with configurable retention management
- **Database Tab**: New dedicated tab in Settings for all database operations with real-time statistics
- **Settings Reorganization**: Improved tab structure - renamed "Data Management" to "Roster Management", moved CSV Replace Mode, created Database tab
- **Comprehensive Documentation**: New `docs/DATABASE_MANAGEMENT.md` with a complete guide to backup/restore workflows, API reference, and best practices

### v0.3.3 — 2025-12-12
- **Improved Mobile Navigation**: Hamburger menu moved to the bottom nav bar (no more floating button covering content)
- **Better Status Visibility**: Larger status dots (16px) in the dashboard fleet overview for easier at-a-glance status checks
- **Cleaner Roster Cards**: Location navigation links moved to the detail modal only, reducing clutter in card view

### v0.3.2 — 2025-12-12
- **Progressive Web App (PWA)**: Complete mobile optimization with offline support, installable as a standalone app
- **Mobile-First UI**: Hamburger menu, bottom navigation bar, card-based roster view optimized for touch
- **Tap-to-Navigate**: Location links open in the user's preferred navigation app (Google Maps, Apple Maps, Waze)
- **Offline Editing**: Service worker + IndexedDB for offline operation with automatic sync when online
- **Unit Detail Modals**: Bottom sheet modals for quick unit info access with full edit capabilities
- **Hard Reload Utility**: "Clear Cache & Reload" button to force fresh assets (helpful for development)

### v0.3.1 — 2025-12-12
- **Dashboard Alerts**: Only Missing units show in notifications (Pending units no longer alert)
- **Status Fixes**: Fixed "Unknown" status issues in mobile card views and detail modals
- **Backend Improvements**: Safer data access with `.get()` defaults to prevent errors

### v0.3.0 — 2025-12-09
- **Series 4 Support**: New `/api/series4/heartbeat` endpoint with auto-detection for Micromate units (UM##### pattern)
- **Settings Redesign**: Completely redesigned Settings page with a 4-tab interface (General, Data Management, Advanced, Danger Zone)
- **User Preferences**: Backend storage for timezone, theme, auto-refresh interval, calibration defaults, and status thresholds
- **Development Labels**: Visual indicators to distinguish dev from production environments
- **Timezone Support**: Comprehensive timezone handling with human-readable timestamps site-wide
- **Quality of Life**: Relative timestamps, status icons for accessibility, breadcrumb navigation, copy-to-clipboard, search functionality

### v0.2.1 — 2025-12-03
- Added the `/settings` roster manager with CSV export/import, live stats, and danger-zone table reset actions
- Deployed/Benched/Retired/Ignored tabs now have dedicated HTMX partials, sorting, and inline actions
- Unit detail pages expose device-type specific metadata (calibration windows, modem pairing, IP/phone fields)
- Snapshot summary and dashboard counts now focus on deployed units and include address/coordinate data

### v0.2.0 — 2025-12-03
- Introduced device-type aware roster schema (seismograph vs modem) plus migration + `create_test_db.py` helper
- Added Ignore list model/endpoints to quarantine noisy emitters directly from the roster
- Roster page gained Add Unit + CSV Import modals, HTMX-driven updates, and unknown emitter callouts
- Snapshot service now returns active/benched/retired/unknown buckets containing richer metadata

### v0.1.1 — 2025-12-02
- **Roster Editing API**: Full CRUD operations for managing your fleet roster
- **CSV Import**: Bulk upload roster data from CSV files
- **Enhanced Data Model**: Added project_id and location fields to the roster
- **Bug Fixes**: Improved database session management and error handling

See [CHANGELOG.md](CHANGELOG.md) for the full release notes.

## Future Enhancements

- Email/SMS alerts for missing units
- Historical data tracking and reporting
- Multi-user authentication
- PostgreSQL support for larger deployments
- Advanced filtering and search
- Export roster to various formats

## License

MIT

## Version

**Current: 0.5.1** — Dashboard schedule view with today's actions panel, new Terra-View branding and logo rework (2026-01-27)

Previous: 0.4.4 — Recurring schedules, alerting UI, report templates + RND viewer, and SLM workflow polish (2026-01-23)

0.4.3 — SLM roster/project view refresh, project insight panels, FTP browser folder downloads, and SLMM sync (2026-01-14)

0.4.2 — SLM configuration interface with TCP/FTP controls, modem diagnostics, and dashboard endpoints for Sound Level Meters (2026-01-05)

0.4.1 — Sound Level Meter integration with full management UI for SLM units (2026-01-05)

0.4.0 — Database management system with backup/restore and remote cloning (2025-12-16)

0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)

0.3.2 — Progressive Web App with mobile optimization (2025-12-12)

0.3.1 — Dashboard alerts and status fixes (2025-12-12)

0.3.0 — Series 4 support, settings redesign, user preferences (2025-12-09)

0.2.1 — Settings & roster manager refresh (2025-12-03)

0.2.0 — Device-type aware roster + ignore list (2025-12-03)

0.1.1 — Roster Management & CSV Import (2025-12-02)

0.1.0 — Initial Release (2024-11-20)

BIN
assets/terra-view-icon_large.png
Normal file
After | Size: 36 KiB
31
backend/database.py
Normal file
@@ -0,0 +1,31 @@
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os

# Ensure data directory exists
os.makedirs("data", exist_ok=True)

SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

Base = declarative_base()


def get_db():
    """Dependency for database sessions"""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


def get_db_session():
    """Get a database session directly (not as a dependency)"""
    return SessionLocal()
108
backend/init_projects_db.py
Normal file
@@ -0,0 +1,108 @@
#!/usr/bin/env python3
"""
Database initialization script for Projects system.

This script creates the new project management tables and populates
the project_types table with default templates.

Usage:
    python -m backend.init_projects_db
"""

from sqlalchemy.orm import Session
from backend.database import engine, SessionLocal
from backend.models import (
    Base,
    ProjectType,
    Project,
    MonitoringLocation,
    UnitAssignment,
    ScheduledAction,
    RecordingSession,
    DataFile,
)
from datetime import datetime


def init_project_types(db: Session):
    """Initialize default project types."""
    project_types = [
        {
            "id": "sound_monitoring",
            "name": "Sound Monitoring",
            "description": "Noise monitoring projects with sound level meters and NRLs (Noise Recording Locations)",
            "icon": "volume-2",  # Lucide icon name
            "supports_sound": True,
            "supports_vibration": False,
        },
        {
            "id": "vibration_monitoring",
            "name": "Vibration Monitoring",
            "description": "Seismic/vibration monitoring projects with seismographs and monitoring points",
            "icon": "activity",  # Lucide icon name
            "supports_sound": False,
            "supports_vibration": True,
        },
        {
            "id": "combined",
            "name": "Combined Monitoring",
            "description": "Full-spectrum monitoring with both sound and vibration capabilities",
            "icon": "layers",  # Lucide icon name
            "supports_sound": True,
            "supports_vibration": True,
        },
    ]

    for pt_data in project_types:
        existing = db.query(ProjectType).filter_by(id=pt_data["id"]).first()
        if not existing:
            pt = ProjectType(**pt_data)
            db.add(pt)
            print(f"✓ Created project type: {pt_data['name']}")
        else:
            print(f"  Project type already exists: {pt_data['name']}")

    db.commit()


def create_tables():
    """Create all tables defined in models."""
    print("Creating project management tables...")
    Base.metadata.create_all(bind=engine)
    print("✓ Tables created successfully")


def main():
    print("=" * 60)
    print("Terra-View Projects System - Database Initialization")
    print("=" * 60)
    print()

    # Create tables
    create_tables()
    print()

    # Initialize project types
    db = SessionLocal()
    try:
        print("Initializing project types...")
        init_project_types(db)
        print()
        print("=" * 60)
        print("✓ Database initialization complete!")
        print("=" * 60)
        print()
        print("Next steps:")
        print("  1. Restart Terra-View to load new routes")
        print("  2. Navigate to /projects to create your first project")
        print("  3. Check documentation for API endpoints")
    except Exception as e:
        print(f"✗ Error during initialization: {e}")
        db.rollback()
        raise
    finally:
        db.close()


if __name__ == "__main__":
    main()
697
backend/main.py
Normal file
@@ -0,0 +1,697 @@
|
||||
import os
|
||||
import logging
|
||||
from fastapi import FastAPI, Request, Depends, HTTPException
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from fastapi.responses import HTMLResponse, FileResponse, JSONResponse
|
||||
from fastapi.exceptions import RequestValidationError
|
||||
from sqlalchemy.orm import Session
|
||||
from typing import List, Dict, Optional
|
||||
from pydantic import BaseModel
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
from backend.database import engine, Base, get_db
|
||||
from backend.routers import roster, units, photos, roster_edit, roster_rename, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard, projects, project_locations, scheduler, modem_dashboard
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from backend.models import IgnoredUnit
|
||||
|
||||
# Create database tables
|
||||
Base.metadata.create_all(bind=engine)
|
||||
|
||||
# Read environment (development or production)
|
||||
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
|
||||
|
||||
# Initialize FastAPI app
|
||||
VERSION = "0.5.1"
|
||||
app = FastAPI(
|
||||
title="Seismo Fleet Manager",
|
||||
description="Backend API for managing seismograph fleet status",
|
||||
version=VERSION
|
||||
)
|
||||
|
||||
# Add validation error handler to log details
|
||||
@app.exception_handler(RequestValidationError)
|
||||
async def validation_exception_handler(request: Request, exc: RequestValidationError):
|
||||
logger.error(f"Validation error on {request.url}: {exc.errors()}")
|
||||
logger.error(f"Body: {await request.body()}")
|
||||
return JSONResponse(
|
||||
status_code=400,
|
||||
content={"detail": exc.errors()}
|
||||
)
|
||||
|
||||
# Configure CORS
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["*"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Mount static files
|
||||
app.mount("/static", StaticFiles(directory="backend/static"), name="static")
|
||||
|
||||
# Use shared templates configuration with timezone filters
|
||||
from backend.templates_config import templates
|
||||
|
||||
# Add custom context processor to inject environment variable into all templates
|
||||
@app.middleware("http")
|
||||
async def add_environment_to_context(request: Request, call_next):
|
||||
"""Middleware to add environment variable to request state"""
|
||||
request.state.environment = ENVIRONMENT
|
||||
response = await call_next(request)
|
||||
return response
|
||||
|
||||
# Override TemplateResponse to include environment and version in context
|
||||
original_template_response = templates.TemplateResponse
|
||||
def custom_template_response(name, context=None, *args, **kwargs):
|
||||
if context is None:
|
||||
context = {}
|
||||
context["environment"] = ENVIRONMENT
|
||||
context["version"] = VERSION
|
||||
return original_template_response(name, context, *args, **kwargs)
|
||||
templates.TemplateResponse = custom_template_response
|
||||
|
||||
# Include API routers
|
||||
app.include_router(roster.router)
|
||||
app.include_router(units.router)
|
||||
app.include_router(photos.router)
|
||||
app.include_router(roster_edit.router)
|
||||
app.include_router(roster_rename.router)
|
||||
app.include_router(dashboard.router)
|
||||
app.include_router(dashboard_tabs.router)
|
||||
app.include_router(activity.router)
|
||||
app.include_router(slmm.router)
|
||||
app.include_router(slm_ui.router)
|
||||
app.include_router(slm_dashboard.router)
|
||||
app.include_router(seismo_dashboard.router)
|
||||
app.include_router(modem_dashboard.router)
|
||||
|
||||
from backend.routers import settings
|
||||
app.include_router(settings.router)
|
||||
|
||||
# Projects system routers
|
||||
app.include_router(projects.router)
|
||||
app.include_router(project_locations.router)
|
||||
app.include_router(scheduler.router)
|
||||
|
||||
# Report templates router
|
||||
from backend.routers import report_templates
|
||||
app.include_router(report_templates.router)
|
||||
|
||||
# Alerts router
|
||||
from backend.routers import alerts
|
||||
app.include_router(alerts.router)
|
||||
|
||||
# Recurring schedules router
|
||||
from backend.routers import recurring_schedules
|
||||
app.include_router(recurring_schedules.router)
|
||||
|
||||
# Start scheduler service and device status monitor on application startup
|
||||
from backend.services.scheduler import start_scheduler, stop_scheduler
|
||||
from backend.services.device_status_monitor import start_device_status_monitor, stop_device_status_monitor
|
||||
|
||||
@app.on_event("startup")
|
||||
async def startup_event():
|
||||
"""Initialize services on app startup"""
|
||||
logger.info("Starting scheduler service...")
|
||||
await start_scheduler()
|
||||
logger.info("Scheduler service started")
|
||||
|
||||
logger.info("Starting device status monitor...")
|
||||
await start_device_status_monitor()
|
||||
logger.info("Device status monitor started")
|
||||
|
||||
@app.on_event("shutdown")
|
||||
def shutdown_event():
|
||||
"""Clean up services on app shutdown"""
|
||||
logger.info("Stopping device status monitor...")
|
||||
stop_device_status_monitor()
|
||||
logger.info("Device status monitor stopped")
|
||||
|
||||
logger.info("Stopping scheduler service...")
|
||||
stop_scheduler()
|
||||
logger.info("Scheduler service stopped")
|
||||
|
||||
|
||||
# Legacy routes from the original backend
|
||||
from backend import routes as legacy_routes
|
||||
app.include_router(legacy_routes.router)
|
||||
|
||||
|
||||
# HTML page routes
|
||||
@app.get("/", response_class=HTMLResponse)
|
||||
async def dashboard(request: Request):
|
||||
"""Dashboard home page"""
|
||||
return templates.TemplateResponse("dashboard.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/roster", response_class=HTMLResponse)
|
||||
async def roster_page(request: Request):
|
||||
"""Fleet roster page"""
|
||||
return templates.TemplateResponse("roster.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/unit/{unit_id}", response_class=HTMLResponse)
|
||||
async def unit_detail_page(request: Request, unit_id: str):
|
||||
"""Unit detail page"""
|
||||
return templates.TemplateResponse("unit_detail.html", {
|
||||
"request": request,
|
||||
"unit_id": unit_id
|
||||
})
|
||||
|
||||
|
||||
@app.get("/settings", response_class=HTMLResponse)
|
||||
async def settings_page(request: Request):
|
||||
"""Settings page for roster management"""
|
||||
return templates.TemplateResponse("settings.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/sound-level-meters", response_class=HTMLResponse)
|
||||
async def sound_level_meters_page(request: Request):
|
||||
"""Sound Level Meters management dashboard"""
|
||||
return templates.TemplateResponse("sound_level_meters.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/slm/{unit_id}", response_class=HTMLResponse)
|
||||
async def slm_legacy_dashboard(
|
||||
request: Request,
|
||||
unit_id: str,
|
||||
from_project: Optional[str] = None,
|
||||
from_nrl: Optional[str] = None,
|
||||
db: Session = Depends(get_db)
|
||||
):
|
||||
"""Legacy SLM control center dashboard for a specific unit"""
|
||||
# Get project details if from_project is provided
|
||||
project = None
|
||||
if from_project:
|
||||
from backend.models import Project
|
||||
project = db.query(Project).filter_by(id=from_project).first()
|
||||
|
||||
# Get NRL location details if from_nrl is provided
|
||||
nrl_location = None
|
||||
if from_nrl:
|
||||
from backend.models import NRLLocation
|
||||
nrl_location = db.query(NRLLocation).filter_by(id=from_nrl).first()
|
||||
|
||||
return templates.TemplateResponse("slm_legacy_dashboard.html", {
|
||||
"request": request,
|
||||
"unit_id": unit_id,
|
||||
"from_project": from_project,
|
||||
"from_nrl": from_nrl,
|
||||
"project": project,
|
||||
"nrl_location": nrl_location
|
||||
})
|
||||
|
||||
|
||||
@app.get("/seismographs", response_class=HTMLResponse)
|
||||
async def seismographs_page(request: Request):
|
||||
"""Seismographs management dashboard"""
|
||||
return templates.TemplateResponse("seismographs.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/modems", response_class=HTMLResponse)
|
||||
async def modems_page(request: Request):
|
||||
"""Field modems management dashboard"""
|
||||
return templates.TemplateResponse("modems.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/projects", response_class=HTMLResponse)
|
||||
async def projects_page(request: Request):
|
||||
"""Projects management and overview"""
|
||||
return templates.TemplateResponse("projects/overview.html", {"request": request})
|
||||
|
||||
|
||||
@app.get("/projects/{project_id}", response_class=HTMLResponse)
|
||||
async def project_detail_page(request: Request, project_id: str):
|
||||
"""Project detail dashboard"""
|
||||
return templates.TemplateResponse("projects/detail.html", {
|
||||
"request": request,
|
||||
"project_id": project_id
|
||||
})
|
||||
|
||||
|
||||
@app.get("/projects/{project_id}/nrl/{location_id}", response_class=HTMLResponse)
|
||||
async def nrl_detail_page(
|
||||
request: Request,
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
db: Session = Depends(get_db)
|
||||
):
|
||||
"""NRL (Noise Recording Location) detail page with tabs"""
|
||||
from backend.models import Project, MonitoringLocation, UnitAssignment, RosterUnit, RecordingSession, DataFile
|
||||
from sqlalchemy import and_
|
||||
|
||||
# Get project
|
||||
project = db.query(Project).filter_by(id=project_id).first()
|
||||
if not project:
|
||||
return templates.TemplateResponse("404.html", {
|
||||
"request": request,
|
||||
"message": "Project not found"
|
||||
}, status_code=404)
|
||||
|
||||
# Get location
|
||||
location = db.query(MonitoringLocation).filter_by(
|
||||
id=location_id,
|
||||
project_id=project_id
|
||||
).first()
|
||||
|
||||
if not location:
|
||||
return templates.TemplateResponse("404.html", {
|
||||
"request": request,
|
||||
"message": "Location not found"
|
||||
}, status_code=404)
|
||||
|
||||
# Get active assignment
|
||||
assignment = db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == location_id,
|
||||
UnitAssignment.status == "active"
|
||||
)
|
||||
).first()
|
||||
|
||||
assigned_unit = None
|
||||
if assignment:
|
||||
assigned_unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
|
||||
|
||||
# Get session count
|
||||
session_count = db.query(RecordingSession).filter_by(location_id=location_id).count()
|
||||
|
||||
# Get file count (DataFile links to session, not directly to location)
|
||||
file_count = db.query(DataFile).join(
|
||||
RecordingSession,
|
||||
DataFile.session_id == RecordingSession.id
|
||||
).filter(RecordingSession.location_id == location_id).count()
|
||||
|
||||
# Check for active session
|
||||
active_session = db.query(RecordingSession).filter(
|
||||
and_(
|
||||
RecordingSession.location_id == location_id,
|
||||
RecordingSession.status == "recording"
|
||||
)
|
||||
).first()
|
||||
|
||||
return templates.TemplateResponse("nrl_detail.html", {
|
||||
"request": request,
|
||||
"project_id": project_id,
|
||||
"location_id": location_id,
|
||||
"project": project,
|
||||
"location": location,
|
||||
"assignment": assignment,
|
||||
"assigned_unit": assigned_unit,
|
||||
"session_count": session_count,
|
||||
"file_count": file_count,
|
||||
"active_session": active_session,
|
||||
})
|
||||
|
||||
|
||||
# ===== PWA ROUTES =====
|
||||
|
||||
@app.get("/sw.js")
|
||||
async def service_worker():
|
||||
"""Serve service worker with proper headers for PWA"""
|
||||
return FileResponse(
|
||||
"backend/static/sw.js",
|
||||
media_type="application/javascript",
|
||||
headers={
|
||||
"Service-Worker-Allowed": "/",
|
||||
"Cache-Control": "no-cache"
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@app.get("/offline-db.js")
|
||||
async def offline_db_script():
|
||||
"""Serve offline database script"""
|
||||
return FileResponse(
|
||||
"backend/static/offline-db.js",
|
||||
media_type="application/javascript",
|
||||
headers={"Cache-Control": "no-cache"}
|
||||
)
|
||||
|
||||
|
||||
# Pydantic model for sync edits
|
||||
class EditItem(BaseModel):
|
||||
id: int
|
||||
unitId: str
|
||||
changes: Dict
|
||||
timestamp: int
|
||||
|
||||
|
||||
class SyncEditsRequest(BaseModel):
|
||||
edits: List[EditItem]
|
||||
|
||||
|
||||
@app.post("/api/sync-edits")
|
||||
async def sync_edits(request: SyncEditsRequest, db: Session = Depends(get_db)):
|
||||
"""Process offline edit queue and sync to database"""
|
||||
from backend.models import RosterUnit
|
||||
|
||||
results = []
|
||||
synced_ids = []
|
||||
|
||||
for edit in request.edits:
|
||||
try:
|
||||
# Find the unit
|
||||
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
|
||||
|
||||
if not unit:
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": f"Unit {edit.unitId} not found"
|
||||
})
|
||||
continue
|
||||
|
||||
# Apply changes
|
||||
for key, value in edit.changes.items():
|
||||
if hasattr(unit, key):
|
||||
# Handle boolean conversions
|
||||
if key in ['deployed', 'retired']:
|
||||
setattr(unit, key, value in ['true', True, 'True', '1', 1])
|
||||
else:
|
||||
setattr(unit, key, value if value != '' else None)
|
||||
|
||||
db.commit()
|
||||
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "success"
|
||||
})
|
||||
synced_ids.append(edit.id)
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
results.append({
|
||||
"id": edit.id,
|
||||
"status": "error",
|
||||
"reason": str(e)
|
||||
})
|
||||
|
||||
synced_count = len(synced_ids)
|
||||
|
||||
return JSONResponse({
|
||||
"synced": synced_count,
|
||||
"total": len(request.edits),
|
||||
"synced_ids": synced_ids,
|
||||
"results": results
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-deployed", response_class=HTMLResponse)
|
||||
async def roster_deployed_partial(request: Request):
|
||||
"""Partial template for deployed units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["active"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "Unknown"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": unit_data.get("deployed", False),
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by status priority (Missing > Pending > OK) then by ID
|
||||
status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
|
||||
units_list.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))
|
||||
|
||||
return templates.TemplateResponse("partials/roster_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-benched", response_class=HTMLResponse)
|
||||
async def roster_benched_partial(request: Request):
|
||||
"""Partial template for benched units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["benched"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "N/A"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": unit_data.get("deployed", False),
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
units_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/roster_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-retired", response_class=HTMLResponse)
|
||||
async def roster_retired_partial(request: Request):
|
||||
"""Partial template for retired units tab"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
for unit_id, unit_data in snapshot["retired"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data["last"],
|
||||
"deployed": unit_data["deployed"],
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
units_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/retired_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/roster-ignored", response_class=HTMLResponse)
|
||||
async def roster_ignored_partial(request: Request, db: Session = Depends(get_db)):
|
||||
"""Partial template for ignored units tab"""
|
||||
from datetime import datetime
|
||||
|
||||
ignored = db.query(IgnoredUnit).all()
|
||||
ignored_list = []
|
||||
for unit in ignored:
|
||||
ignored_list.append({
|
||||
"id": unit.id,
|
||||
"reason": unit.reason or "",
|
||||
"ignored_at": unit.ignored_at.strftime("%Y-%m-%d %H:%M:%S") if unit.ignored_at else "Unknown"
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
ignored_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/ignored_table.html", {
|
||||
"request": request,
|
||||
"ignored_units": ignored_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/unknown-emitters", response_class=HTMLResponse)
|
||||
async def unknown_emitters_partial(request: Request):
|
||||
"""Partial template for unknown emitters (HTMX)"""
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
unknown_list = []
|
||||
for unit_id, unit_data in snapshot.get("unknown", {}).items():
|
||||
unknown_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"fname": unit_data.get("fname", ""),
|
||||
})
|
||||
|
||||
# Sort by ID
|
||||
unknown_list.sort(key=lambda x: x["id"])
|
||||
|
||||
return templates.TemplateResponse("partials/unknown_emitters.html", {
|
||||
"request": request,
|
||||
"unknown_units": unknown_list
|
||||
})
|
||||
|
||||
|
||||
@app.get("/partials/devices-all", response_class=HTMLResponse)
|
||||
async def devices_all_partial(request: Request):
|
||||
"""Unified partial template for ALL devices with comprehensive filtering support"""
|
||||
from datetime import datetime
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
units_list = []
|
||||
|
||||
# Add deployed/active units
|
||||
for unit_id, unit_data in snapshot["active"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "Unknown"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": True,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add benched units
|
||||
for unit_id, unit_data in snapshot["benched"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": unit_data.get("status", "N/A"),
|
||||
"age": unit_data.get("age", "N/A"),
|
||||
"last_seen": unit_data.get("last", "Never"),
|
||||
"deployed": False,
|
||||
"retired": False,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add retired units
|
||||
for unit_id, unit_data in snapshot["retired"].items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": "Retired",
|
||||
"age": "N/A",
|
||||
"last_seen": "N/A",
|
||||
"deployed": False,
|
||||
"retired": True,
|
||||
"ignored": False,
|
||||
"note": unit_data.get("note", ""),
|
||||
"device_type": unit_data.get("device_type", "seismograph"),
|
||||
"address": unit_data.get("address", ""),
|
||||
"coordinates": unit_data.get("coordinates", ""),
|
||||
"project_id": unit_data.get("project_id", ""),
|
||||
"last_calibrated": unit_data.get("last_calibrated"),
|
||||
"next_calibration_due": unit_data.get("next_calibration_due"),
|
||||
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
|
||||
"ip_address": unit_data.get("ip_address"),
|
||||
"phone_number": unit_data.get("phone_number"),
|
||||
"hardware_model": unit_data.get("hardware_model"),
|
||||
})
|
||||
|
||||
# Add ignored units
|
||||
for unit_id, unit_data in snapshot.get("ignored", {}).items():
|
||||
units_list.append({
|
||||
"id": unit_id,
|
||||
"status": "Ignored",
|
||||
"age": "N/A",
|
||||
"last_seen": "N/A",
|
||||
"deployed": False,
|
||||
"retired": False,
|
||||
"ignored": True,
|
||||
"note": unit_data.get("note", unit_data.get("reason", "")),
|
||||
"device_type": unit_data.get("device_type", "unknown"),
|
||||
"address": "",
|
||||
"coordinates": "",
|
||||
"project_id": "",
|
||||
"last_calibrated": None,
|
||||
"next_calibration_due": None,
|
||||
"deployed_with_modem_id": None,
|
||||
"ip_address": None,
|
||||
"phone_number": None,
|
||||
"hardware_model": None,
|
||||
})
|
||||
|
||||
# Sort by status category, then by ID
|
||||
def sort_key(unit):
|
||||
# Priority: deployed (active) -> benched -> retired -> ignored
|
||||
if unit["deployed"]:
|
||||
return (0, unit["id"])
|
||||
elif not unit["retired"] and not unit["ignored"]:
|
||||
return (1, unit["id"])
|
||||
elif unit["retired"]:
|
||||
return (2, unit["id"])
|
||||
else:
|
||||
return (3, unit["id"])
|
||||
|
||||
units_list.sort(key=sort_key)
|
||||
|
||||
return templates.TemplateResponse("partials/devices_table.html", {
|
||||
"request": request,
|
||||
"units": units_list,
|
||||
"timestamp": datetime.now().strftime("%H:%M:%S")
|
||||
})
|
||||
|
||||
|
||||
@app.get("/health")
|
||||
def health_check():
|
||||
"""Health check endpoint"""
|
||||
return {
|
||||
"message": f"Seismo Fleet Manager v{VERSION}",
|
||||
"status": "running",
|
||||
"version": VERSION
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8001)
|
||||
67
backend/migrate_add_auto_increment_index.py
Normal file
@@ -0,0 +1,67 @@
|
||||
"""
|
||||
Migration: Add auto_increment_index column to recurring_schedules table
|
||||
|
||||
This migration adds the auto_increment_index column that controls whether
|
||||
the scheduler should automatically find an unused store index before starting
|
||||
a new measurement.
|
||||
|
||||
Run this script once to update existing databases:
|
||||
python -m backend.migrate_add_auto_increment_index
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
DB_PATH = "data/seismo_fleet.db"
|
||||
|
||||
|
||||
def migrate():
|
||||
"""Add auto_increment_index column to recurring_schedules table."""
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
return False
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
try:
|
||||
# Check if recurring_schedules table exists
|
||||
cursor.execute("""
|
||||
SELECT name FROM sqlite_master
|
||||
WHERE type='table' AND name='recurring_schedules'
|
||||
""")
|
||||
if not cursor.fetchone():
|
||||
print("recurring_schedules table does not exist yet. Will be created on app startup.")
|
||||
conn.close()
|
||||
return True
|
||||
|
||||
# Check if auto_increment_index column already exists
|
||||
cursor.execute("PRAGMA table_info(recurring_schedules)")
|
||||
columns = [row[1] for row in cursor.fetchall()]
|
||||
|
||||
if "auto_increment_index" in columns:
|
||||
print("auto_increment_index column already exists in recurring_schedules table.")
|
||||
conn.close()
|
||||
return True
|
||||
|
||||
# Add the column
|
||||
print("Adding auto_increment_index column to recurring_schedules table...")
|
||||
cursor.execute("""
|
||||
ALTER TABLE recurring_schedules
|
||||
ADD COLUMN auto_increment_index BOOLEAN DEFAULT 1
|
||||
""")
|
||||
conn.commit()
|
||||
print("Successfully added auto_increment_index column.")
|
||||
|
||||
conn.close()
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"Migration failed: {e}")
|
||||
conn.close()
|
||||
return False
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
success = migrate()
|
||||
exit(0 if success else 1)
|
||||
84
backend/migrate_add_deployment_type.py
Normal file
@@ -0,0 +1,84 @@
|
||||
"""
|
||||
Migration script to add deployment_type and deployed_with_unit_id fields to roster table.
|
||||
|
||||
deployment_type: tracks what type of device a modem is deployed with:
|
||||
- "seismograph" - Modem is connected to a seismograph
|
||||
- "slm" - Modem is connected to a sound level meter
|
||||
- NULL/empty - Not assigned or unknown
|
||||
|
||||
deployed_with_unit_id: stores the ID of the seismograph/SLM this modem is deployed with
|
||||
(reverse relationship of deployed_with_modem_id)
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
|
||||
def migrate_database():
|
||||
"""Add deployment_type and deployed_with_unit_id columns to roster table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if roster table exists
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='roster'")
|
||||
table_exists = cursor.fetchone()
|
||||
|
||||
if not table_exists:
|
||||
print("Roster table does not exist yet - will be created when app runs")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
# Check existing columns
|
||||
cursor.execute("PRAGMA table_info(roster)")
|
||||
columns = [col[1] for col in cursor.fetchall()]
|
||||
|
||||
try:
|
||||
# Add deployment_type if not exists
|
||||
if 'deployment_type' not in columns:
|
||||
print("Adding deployment_type column to roster table...")
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN deployment_type TEXT")
|
||||
print(" Added deployment_type column")
|
||||
|
||||
cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployment_type ON roster(deployment_type)")
|
||||
print(" Created index on deployment_type")
|
||||
else:
|
||||
print("deployment_type column already exists")
|
||||
|
||||
# Add deployed_with_unit_id if not exists
|
||||
if 'deployed_with_unit_id' not in columns:
|
||||
print("Adding deployed_with_unit_id column to roster table...")
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_unit_id TEXT")
|
||||
print(" Added deployed_with_unit_id column")
|
||||
|
||||
cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployed_with_unit_id ON roster(deployed_with_unit_id)")
|
||||
print(" Created index on deployed_with_unit_id")
|
||||
else:
|
||||
print("deployed_with_unit_id column already exists")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
84
backend/migrate_add_device_types.py
Normal file
@@ -0,0 +1,84 @@
|
||||
"""
|
||||
Migration script to add device type support to the roster table.
|
||||
|
||||
This adds columns for:
|
||||
- device_type (seismograph/modem discriminator)
|
||||
- Seismograph-specific fields (calibration dates, modem pairing)
|
||||
- Modem-specific fields (IP address, phone number, hardware model)
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
def migrate_database():
|
||||
"""Add new columns to the roster table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if device_type column already exists
|
||||
cursor.execute("PRAGMA table_info(roster)")
|
||||
columns = [col[1] for col in cursor.fetchall()]
|
||||
|
||||
if "device_type" in columns:
|
||||
print("Migration already applied - device_type column exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
print("Adding new columns to roster table...")
|
||||
|
||||
try:
|
||||
# Add device type discriminator
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN device_type TEXT DEFAULT 'seismograph'")
|
||||
print(" ✓ Added device_type column")
|
||||
|
||||
# Add seismograph-specific fields
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN last_calibrated DATE")
|
||||
print(" ✓ Added last_calibrated column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN next_calibration_due DATE")
|
||||
print(" ✓ Added next_calibration_due column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_modem_id TEXT")
|
||||
print(" ✓ Added deployed_with_modem_id column")
|
||||
|
||||
# Add modem-specific fields
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN ip_address TEXT")
|
||||
print(" ✓ Added ip_address column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN phone_number TEXT")
|
||||
print(" ✓ Added phone_number column")
|
||||
|
||||
cursor.execute("ALTER TABLE roster ADD COLUMN hardware_model TEXT")
|
||||
print(" ✓ Added hardware_model column")
|
||||
|
||||
# Set all existing units to seismograph type
|
||||
cursor.execute("UPDATE roster SET device_type = 'seismograph' WHERE device_type IS NULL")
|
||||
print(" ✓ Set existing units to seismograph type")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
80
backend/migrate_add_project_number.py
Normal file
@@ -0,0 +1,80 @@
|
||||
"""
|
||||
Migration script to add project_number field to projects table.
|
||||
|
||||
This adds a new column for TMI internal project numbering:
|
||||
- Format: xxxx-YY (e.g., "2567-23")
|
||||
- xxxx = incremental project number
|
||||
- YY = year project was started
|
||||
|
||||
Combined with client_name and name (project/site name), this enables
|
||||
smart searching across all project identifiers.
|
||||
|
||||
Run this script once to migrate an existing database.
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
import os
|
||||
|
||||
# Database path
|
||||
DB_PATH = "./data/seismo_fleet.db"
|
||||
|
||||
|
||||
def migrate_database():
|
||||
"""Add project_number column to projects table"""
|
||||
|
||||
if not os.path.exists(DB_PATH):
|
||||
print(f"Database not found at {DB_PATH}")
|
||||
print("The database will be created automatically when you run the application.")
|
||||
return
|
||||
|
||||
print(f"Migrating database: {DB_PATH}")
|
||||
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if projects table exists
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='projects'")
|
||||
table_exists = cursor.fetchone()
|
||||
|
||||
if not table_exists:
|
||||
print("Projects table does not exist yet - will be created when app runs")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
# Check if project_number column already exists
|
||||
cursor.execute("PRAGMA table_info(projects)")
|
||||
columns = [col[1] for col in cursor.fetchall()]
|
||||
|
||||
if 'project_number' in columns:
|
||||
print("Migration already applied - project_number column exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
print("Adding project_number column to projects table...")
|
||||
|
||||
try:
|
||||
cursor.execute("ALTER TABLE projects ADD COLUMN project_number TEXT")
|
||||
print(" Added project_number column")
|
||||
|
||||
# Create index for faster searching
|
||||
cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_project_number ON projects(project_number)")
|
||||
print(" Created index on project_number")
|
||||
|
||||
# Also add index on client_name if it doesn't exist
|
||||
cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_client_name ON projects(client_name)")
|
||||
print(" Created index on client_name")
|
||||
|
||||
conn.commit()
|
||||
print("\nMigration completed successfully!")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
print(f"\nError during migration: {e}")
|
||||
conn.rollback()
|
||||
raise
|
||||
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
migrate_database()
|
||||
88
backend/migrate_add_report_templates.py
Normal file
@@ -0,0 +1,88 @@
"""
Migration script to add report_templates table.

This creates a new table for storing report generation configurations:
- Template name and project association
- Time filtering settings (start/end time)
- Date range filtering (optional)
- Report title defaults

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"


def migrate_database():
    """Create report_templates table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if report_templates table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='report_templates'")
    table_exists = cursor.fetchone()

    if table_exists:
        print("Migration already applied - report_templates table exists")
        conn.close()
        return

    print("Creating report_templates table...")

    try:
        cursor.execute("""
            CREATE TABLE report_templates (
                id TEXT PRIMARY KEY,
                name TEXT NOT NULL,
                project_id TEXT,
                report_title TEXT DEFAULT 'Background Noise Study',
                start_time TEXT,
                end_time TEXT,
                start_date TEXT,
                end_date TEXT,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        print(" ✓ Created report_templates table")

        # Insert default templates
        import uuid

        default_templates = [
            (str(uuid.uuid4()), "Nighttime (7PM-7AM)", None, "Background Noise Study", "19:00", "07:00", None, None),
            (str(uuid.uuid4()), "Daytime (7AM-7PM)", None, "Background Noise Study", "07:00", "19:00", None, None),
            (str(uuid.uuid4()), "Full Day (All Data)", None, "Background Noise Study", None, None, None, None),
        ]

        cursor.executemany("""
            INSERT INTO report_templates (id, name, project_id, report_title, start_time, end_time, start_date, end_date)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """, default_templates)
        print(" ✓ Inserted default templates (Nighttime, Daytime, Full Day)")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
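As a quick illustration of how one of these saved templates might be applied when filtering measurement timestamps, here is a minimal sketch; the helper name in_template_window is an assumption for illustration and is not part of this changeset.

from datetime import datetime, time
from typing import Optional

def in_template_window(ts: datetime, start_time: Optional[str], end_time: Optional[str]) -> bool:
    """Return True if ts falls inside a template's [start_time, end_time) window.

    Handles overnight windows such as 19:00-07:00 by wrapping past midnight.
    A template with no times set (the "Full Day" preset) matches everything.
    """
    if not start_time or not end_time:
        return True
    start = time.fromisoformat(start_time)
    end = time.fromisoformat(end_time)
    t = ts.time()
    if start <= end:                    # same-day window, e.g. 07:00-19:00
        return start <= t < end
    return t >= start or t < end        # overnight window, e.g. 19:00-07:00

# Example: the default "Nighttime (7PM-7AM)" template
assert in_template_window(datetime(2025, 1, 15, 23, 30), "19:00", "07:00")
assert not in_template_window(datetime(2025, 1, 15, 12, 0), "19:00", "07:00")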
78
backend/migrate_add_slm_fields.py
Normal file
@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Database migration: Add sound level meter fields to roster table.

Adds columns for sound_level_meter device type support.
"""

import sqlite3
from pathlib import Path


def migrate():
    """Add SLM fields to roster table if they don't exist."""

    # Try multiple possible database locations
    possible_paths = [
        Path("data/seismo_fleet.db"),
        Path("data/sfm.db"),
        Path("data/seismo.db"),
    ]

    db_path = None
    for path in possible_paths:
        if path.exists():
            db_path = path
            break

    if db_path is None:
        print(f"Database not found in any of: {[str(p) for p in possible_paths]}")
        print("When the database is created from models.py, the new fields will be included automatically.")
        return

    print(f"Using database: {db_path}")

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Check if columns already exist
    cursor.execute("PRAGMA table_info(roster)")
    existing_columns = {row[1] for row in cursor.fetchall()}

    new_columns = {
        "slm_host": "TEXT",
        "slm_tcp_port": "INTEGER",
        "slm_model": "TEXT",
        "slm_serial_number": "TEXT",
        "slm_frequency_weighting": "TEXT",
        "slm_time_weighting": "TEXT",
        "slm_measurement_range": "TEXT",
        "slm_last_check": "DATETIME",
    }

    migrations_applied = []

    for column_name, column_type in new_columns.items():
        if column_name not in existing_columns:
            try:
                cursor.execute(f"ALTER TABLE roster ADD COLUMN {column_name} {column_type}")
                migrations_applied.append(column_name)
                print(f"✓ Added column: {column_name} ({column_type})")
            except sqlite3.OperationalError as e:
                print(f"✗ Failed to add column {column_name}: {e}")
        else:
            print(f"○ Column already exists: {column_name}")

    conn.commit()
    conn.close()

    if migrations_applied:
        print(f"\n✓ Migration complete! Added {len(migrations_applied)} new columns.")
    else:
        print("\n○ No migration needed - all columns already exist.")

    print("\nSound level meter fields are now available in the roster table.")
    print("Note: Use device_type='slm' for Sound Level Meters. Legacy 'sound_level_meter' has been deprecated.")


if __name__ == "__main__":
    migrate()
78
backend/migrate_add_unit_history.py
Normal file
@@ -0,0 +1,78 @@
"""
Migration script to add unit history timeline support.

This creates the unit_history table to track all changes to units:
- Note changes (archived old notes, new notes)
- Deployment status changes (deployed/benched)
- Retired status changes
- Other field changes

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"


def migrate_database():
    """Create the unit_history table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if unit_history table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='unit_history'")
    if cursor.fetchone():
        print("Migration already applied - unit_history table exists")
        conn.close()
        return

    print("Creating unit_history table...")

    try:
        cursor.execute("""
            CREATE TABLE unit_history (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                unit_id TEXT NOT NULL,
                change_type TEXT NOT NULL,
                field_name TEXT,
                old_value TEXT,
                new_value TEXT,
                changed_at TIMESTAMP NOT NULL,
                source TEXT DEFAULT 'manual',
                notes TEXT
            )
        """)
        print(" ✓ Created unit_history table")

        # Create indexes for better query performance
        cursor.execute("CREATE INDEX idx_unit_history_unit_id ON unit_history(unit_id)")
        print(" ✓ Created index on unit_id")

        cursor.execute("CREATE INDEX idx_unit_history_changed_at ON unit_history(changed_at)")
        print(" ✓ Created index on changed_at")

        conn.commit()
        print("\nMigration completed successfully!")
        print("Units will now track their complete history of changes.")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
80
backend/migrate_add_user_preferences.py
Normal file
@@ -0,0 +1,80 @@
"""
Migration script to add user_preferences table.

This creates a new table for storing persistent user preferences:
- Display settings (timezone, theme, date format)
- Auto-refresh configuration
- Calibration defaults
- Status threshold customization

Run this script once to migrate an existing database.
"""

import sqlite3
import os

# Database path
DB_PATH = "./data/seismo_fleet.db"


def migrate_database():
    """Create user_preferences table"""

    if not os.path.exists(DB_PATH):
        print(f"Database not found at {DB_PATH}")
        print("The database will be created automatically when you run the application.")
        return

    print(f"Migrating database: {DB_PATH}")

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    # Check if user_preferences table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='user_preferences'")
    table_exists = cursor.fetchone()

    if table_exists:
        print("Migration already applied - user_preferences table exists")
        conn.close()
        return

    print("Creating user_preferences table...")

    try:
        cursor.execute("""
            CREATE TABLE user_preferences (
                id INTEGER PRIMARY KEY DEFAULT 1,
                timezone TEXT DEFAULT 'America/New_York',
                theme TEXT DEFAULT 'auto',
                auto_refresh_interval INTEGER DEFAULT 10,
                date_format TEXT DEFAULT 'MM/DD/YYYY',
                table_rows_per_page INTEGER DEFAULT 25,
                calibration_interval_days INTEGER DEFAULT 365,
                calibration_warning_days INTEGER DEFAULT 30,
                status_ok_threshold_hours INTEGER DEFAULT 12,
                status_pending_threshold_hours INTEGER DEFAULT 24,
                updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        """)
        print(" ✓ Created user_preferences table")

        # Insert default preferences
        cursor.execute("""
            INSERT INTO user_preferences (id) VALUES (1)
        """)
        print(" ✓ Inserted default preferences")

        conn.commit()
        print("\nMigration completed successfully!")

    except sqlite3.Error as e:
        print(f"\nError during migration: {e}")
        conn.rollback()
        raise

    finally:
        conn.close()


if __name__ == "__main__":
    migrate_database()
106
backend/migrate_standardize_device_types.py
Normal file
@@ -0,0 +1,106 @@
"""
Database Migration: Standardize device_type values

This migration ensures all device_type values follow the official schema:
- "seismograph" - Seismic monitoring devices
- "modem" - Field modems and network equipment
- "slm" - Sound level meters (NL-43/NL-53)

Changes:
- Converts "sound_level_meter" → "slm"
- Safe to run multiple times (idempotent)
- No data loss

Usage:
    python backend/migrate_standardize_device_types.py
"""

import sys
import os

# Add parent directory to path so we can import backend modules
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# Database configuration
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)


def migrate():
    """Standardize device_type values in the database"""
    db = SessionLocal()

    try:
        print("=" * 70)
        print("Database Migration: Standardize device_type values")
        print("=" * 70)
        print()

        # Check for existing "sound_level_meter" values
        result = db.execute(
            text("SELECT COUNT(*) as count FROM roster WHERE device_type = 'sound_level_meter'")
        ).fetchone()

        count_to_migrate = result[0] if result else 0

        if count_to_migrate == 0:
            print("✓ No records need migration - all device_type values are already standardized")
            print()
            print("Current device_type distribution:")

            # Show distribution
            distribution = db.execute(
                text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
            ).fetchall()

            for row in distribution:
                device_type, count = row
                print(f" - {device_type}: {count} units")

            print()
            print("Migration not needed.")
            return

        print(f"Found {count_to_migrate} record(s) with device_type='sound_level_meter'")
        print()
        print("Converting 'sound_level_meter' → 'slm'...")

        # Perform the migration
        db.execute(
            text("UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'")
        )
        db.commit()

        print(f"✓ Successfully migrated {count_to_migrate} record(s)")
        print()

        # Show final distribution
        print("Updated device_type distribution:")
        distribution = db.execute(
            text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
        ).fetchall()

        for row in distribution:
            device_type, count = row
            print(f" - {device_type}: {count} units")

        print()
        print("=" * 70)
        print("Migration completed successfully!")
        print("=" * 70)

    except Exception as e:
        db.rollback()
        print(f"\n❌ Error during migration: {e}")
        print("\nRolling back changes...")
        raise
    finally:
        db.close()


if __name__ == "__main__":
    migrate()
404
backend/models.py
Normal file
@@ -0,0 +1,404 @@
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer
from datetime import datetime
from backend.database import Base


class Emitter(Base):
    __tablename__ = "emitters"

    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, nullable=False)
    last_seen = Column(DateTime, default=datetime.utcnow)
    last_file = Column(String, nullable=False)
    status = Column(String, nullable=False)
    notes = Column(String, nullable=True)


class RosterUnit(Base):
    """
    Roster table: represents our *intended assignment* of a unit.
    This is editable from the GUI.

    Supports multiple device types with type-specific fields:
    - "seismograph" - Seismic monitoring devices (default)
    - "modem" - Field modems and network equipment
    - "slm" - Sound level meters (NL-43/NL-53)
    """
    __tablename__ = "roster"

    # Core fields (all device types)
    id = Column(String, primary_key=True, index=True)
    unit_type = Column(String, default="series3")  # Backward compatibility
    device_type = Column(String, default="seismograph")  # "seismograph" | "modem" | "slm"
    deployed = Column(Boolean, default=True)
    retired = Column(Boolean, default=False)
    note = Column(String, nullable=True)
    project_id = Column(String, nullable=True)
    location = Column(String, nullable=True)  # Legacy field - use address/coordinates instead
    address = Column(String, nullable=True)  # Human-readable address
    coordinates = Column(String, nullable=True)  # Lat,Lon format: "34.0522,-118.2437"
    last_updated = Column(DateTime, default=datetime.utcnow)

    # Seismograph-specific fields (nullable for modems and SLMs)
    last_calibrated = Column(Date, nullable=True)
    next_calibration_due = Column(Date, nullable=True)

    # Modem assignment (shared by seismographs and SLMs)
    deployed_with_modem_id = Column(String, nullable=True)  # FK to another RosterUnit (device_type=modem)

    # Modem-specific fields (nullable for seismographs and SLMs)
    ip_address = Column(String, nullable=True)
    phone_number = Column(String, nullable=True)
    hardware_model = Column(String, nullable=True)
    deployment_type = Column(String, nullable=True)  # "seismograph" | "slm" - what type of device this modem is deployed with
    deployed_with_unit_id = Column(String, nullable=True)  # ID of seismograph/SLM this modem is deployed with

    # Sound Level Meter-specific fields (nullable for seismographs and modems)
    slm_host = Column(String, nullable=True)  # Device IP or hostname
    slm_tcp_port = Column(Integer, nullable=True)  # TCP control port (default 2255)
    slm_ftp_port = Column(Integer, nullable=True)  # FTP data retrieval port (default 21)
    slm_model = Column(String, nullable=True)  # NL-43, NL-53, etc.
    slm_serial_number = Column(String, nullable=True)  # Device serial number
    slm_frequency_weighting = Column(String, nullable=True)  # A, C, Z
    slm_time_weighting = Column(String, nullable=True)  # F (Fast), S (Slow), I (Impulse)
    slm_measurement_range = Column(String, nullable=True)  # e.g., "30-130 dB"
    slm_last_check = Column(DateTime, nullable=True)  # Last communication check


class IgnoredUnit(Base):
    """
    Ignored units: units that report but should be filtered out from unknown emitters.
    Used to suppress noise from old projects.
    """
    __tablename__ = "ignored_units"

    id = Column(String, primary_key=True, index=True)
    reason = Column(String, nullable=True)
    ignored_at = Column(DateTime, default=datetime.utcnow)


class UnitHistory(Base):
    """
    Unit history: complete timeline of changes to each unit.
    Tracks note changes, status changes, deployment/benched events, and more.
    """
    __tablename__ = "unit_history"

    id = Column(Integer, primary_key=True, autoincrement=True)
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id
    change_type = Column(String, nullable=False)  # note_change, deployed_change, retired_change, etc.
    field_name = Column(String, nullable=True)  # Which field changed
    old_value = Column(Text, nullable=True)  # Previous value
    new_value = Column(Text, nullable=True)  # New value
    changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
    source = Column(String, default="manual")  # manual, csv_import, telemetry, offline_sync
    notes = Column(Text, nullable=True)  # Optional reason/context for the change


class UserPreferences(Base):
    """
    User preferences: persistent storage for application settings.
    Single-row table (id=1) to store global user preferences.
    """
    __tablename__ = "user_preferences"

    id = Column(Integer, primary_key=True, default=1)
    timezone = Column(String, default="America/New_York")
    theme = Column(String, default="auto")  # auto, light, dark
    auto_refresh_interval = Column(Integer, default=10)  # seconds
    date_format = Column(String, default="MM/DD/YYYY")
    table_rows_per_page = Column(Integer, default=25)
    calibration_interval_days = Column(Integer, default=365)
    calibration_warning_days = Column(Integer, default=30)
    status_ok_threshold_hours = Column(Integer, default=12)
    status_pending_threshold_hours = Column(Integer, default=24)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


# ============================================================================
# Project Management System
# ============================================================================

class ProjectType(Base):
    """
    Project type templates: defines available project types and their capabilities.
    Pre-populated with: sound_monitoring, vibration_monitoring, combined.
    """
    __tablename__ = "project_types"

    id = Column(String, primary_key=True)  # sound_monitoring, vibration_monitoring, combined
    name = Column(String, nullable=False, unique=True)  # "Sound Monitoring", "Vibration Monitoring"
    description = Column(Text, nullable=True)
    icon = Column(String, nullable=True)  # Icon identifier for UI
    supports_sound = Column(Boolean, default=False)  # Enables SLM features
    supports_vibration = Column(Boolean, default=False)  # Enables seismograph features
    created_at = Column(DateTime, default=datetime.utcnow)


class Project(Base):
    """
    Projects: top-level organization for monitoring work.
    Type-aware to enable/disable features based on project_type_id.

    Project naming convention:
    - project_number: TMI internal ID format xxxx-YY (e.g., "2567-23")
    - client_name: Client/contractor name (e.g., "PJ Dick")
    - name: Project/site name (e.g., "RKM Hall", "CMU Campus")

    Display format: "2567-23 - PJ Dick - RKM Hall"
    Users can search by any of these fields.
    """
    __tablename__ = "projects"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_number = Column(String, nullable=True, index=True)  # TMI ID: xxxx-YY format (e.g., "2567-23")
    name = Column(String, nullable=False, unique=True)  # Project/site name (e.g., "RKM Hall")
    description = Column(Text, nullable=True)
    project_type_id = Column(String, nullable=False)  # FK to ProjectType.id
    status = Column(String, default="active")  # active, completed, archived

    # Project metadata
    client_name = Column(String, nullable=True, index=True)  # Client name (e.g., "PJ Dick")
    site_address = Column(String, nullable=True)
    site_coordinates = Column(String, nullable=True)  # "lat,lon"
    start_date = Column(Date, nullable=True)
    end_date = Column(Date, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class MonitoringLocation(Base):
    """
    Monitoring locations: generic location for monitoring activities.
    Can be NRL (Noise Recording Location) for sound projects,
    or monitoring point for vibration projects.
    """
    __tablename__ = "monitoring_locations"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_type = Column(String, nullable=False)  # "sound" | "vibration"

    name = Column(String, nullable=False)  # NRL-001, VP-North, etc.
    description = Column(Text, nullable=True)
    coordinates = Column(String, nullable=True)  # "lat,lon"
    address = Column(String, nullable=True)

    # Type-specific metadata stored as JSON
    # For sound: {"ambient_conditions": "urban", "expected_sources": ["traffic"]}
    # For vibration: {"ground_type": "bedrock", "depth": "10m"}
    location_metadata = Column(Text, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class UnitAssignment(Base):
    """
    Unit assignments: links devices (SLMs or seismographs) to monitoring locations.
    Supports temporary assignments with assigned_until.
    """
    __tablename__ = "unit_assignments"

    id = Column(String, primary_key=True, index=True)  # UUID
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id

    assigned_at = Column(DateTime, default=datetime.utcnow)
    assigned_until = Column(DateTime, nullable=True)  # Null = indefinite
    status = Column(String, default="active")  # active, completed, cancelled
    notes = Column(Text, nullable=True)

    # Denormalized for efficient queries
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id

    created_at = Column(DateTime, default=datetime.utcnow)


class ScheduledAction(Base):
    """
    Scheduled actions: automation for recording start/stop/download.
    Terra-View executes these by calling SLMM or SFM endpoints.
    """
    __tablename__ = "scheduled_actions"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=True, index=True)  # FK to RosterUnit.id (nullable if location-based)

    action_type = Column(String, nullable=False)  # start, stop, download, calibrate
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"

    scheduled_time = Column(DateTime, nullable=False, index=True)
    executed_at = Column(DateTime, nullable=True)
    execution_status = Column(String, default="pending")  # pending, completed, failed, cancelled

    # Response from device module (SLMM or SFM)
    module_response = Column(Text, nullable=True)  # JSON
    error_message = Column(Text, nullable=True)

    notes = Column(Text, nullable=True)
    created_at = Column(DateTime, default=datetime.utcnow)


class RecordingSession(Base):
    """
    Recording sessions: tracks actual monitoring sessions.
    Created when recording starts, updated when it stops.
    """
    __tablename__ = "recording_sessions"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=False, index=True)  # FK to RosterUnit.id

    session_type = Column(String, nullable=False)  # sound | vibration
    started_at = Column(DateTime, nullable=False)
    stopped_at = Column(DateTime, nullable=True)
    duration_seconds = Column(Integer, nullable=True)
    status = Column(String, default="recording")  # recording, completed, failed

    # Snapshot of device configuration at recording time
    session_metadata = Column(Text, nullable=True)  # JSON

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class DataFile(Base):
    """
    Data files: references to recorded data files.
    Terra-View tracks file metadata; actual files stored in data/Projects/ directory.
    """
    __tablename__ = "data_files"

    id = Column(String, primary_key=True, index=True)  # UUID
    session_id = Column(String, nullable=False, index=True)  # FK to RecordingSession.id

    file_path = Column(String, nullable=False)  # Relative to data/Projects/
    file_type = Column(String, nullable=False)  # wav, csv, mseed, json
    file_size_bytes = Column(Integer, nullable=True)
    downloaded_at = Column(DateTime, nullable=True)
    checksum = Column(String, nullable=True)  # SHA256 or MD5

    # Additional file metadata
    file_metadata = Column(Text, nullable=True)  # JSON

    created_at = Column(DateTime, default=datetime.utcnow)


class ReportTemplate(Base):
    """
    Report templates: saved configurations for generating Excel reports.
    Allows users to save time filter presets, titles, etc. for reuse.
    """
    __tablename__ = "report_templates"

    id = Column(String, primary_key=True, index=True)  # UUID
    name = Column(String, nullable=False)  # "Nighttime Report", "Full Day Report"
    project_id = Column(String, nullable=True)  # Optional: project-specific template

    # Template settings
    report_title = Column(String, default="Background Noise Study")
    start_time = Column(String, nullable=True)  # "19:00" format
    end_time = Column(String, nullable=True)  # "07:00" format
    start_date = Column(String, nullable=True)  # "2025-01-15" format (optional)
    end_date = Column(String, nullable=True)  # "2025-01-20" format (optional)

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


# ============================================================================
# Sound Monitoring Scheduler
# ============================================================================

class RecurringSchedule(Base):
    """
    Recurring schedule definitions for automated sound monitoring.

    Supports two schedule types:
    - "weekly_calendar": Select specific days with start/end times (e.g., Mon/Wed/Fri 7pm-7am)
    - "simple_interval": For 24/7 monitoring with daily stop/download/restart cycles
    """
    __tablename__ = "recurring_schedules"

    id = Column(String, primary_key=True, index=True)  # UUID
    project_id = Column(String, nullable=False, index=True)  # FK to Project.id
    location_id = Column(String, nullable=False, index=True)  # FK to MonitoringLocation.id
    unit_id = Column(String, nullable=True, index=True)  # FK to RosterUnit.id (optional, can use assignment)

    name = Column(String, nullable=False)  # "Weeknight Monitoring", "24/7 Continuous"
    schedule_type = Column(String, nullable=False)  # "weekly_calendar" | "simple_interval"
    device_type = Column(String, nullable=False)  # "slm" | "seismograph"

    # Weekly Calendar fields (schedule_type = "weekly_calendar")
    # JSON format: {
    #     "monday": {"enabled": true, "start": "19:00", "end": "07:00"},
    #     "tuesday": {"enabled": false},
    #     ...
    # }
    weekly_pattern = Column(Text, nullable=True)

    # Simple Interval fields (schedule_type = "simple_interval")
    interval_type = Column(String, nullable=True)  # "daily" | "hourly"
    cycle_time = Column(String, nullable=True)  # "00:00" - time to run stop/download/restart
    include_download = Column(Boolean, default=True)  # Download data before restart

    # Automation options (applies to both schedule types)
    auto_increment_index = Column(Boolean, default=True)  # Auto-increment store/index number before start
    # When True: prevents "overwrite data?" prompts by using a new index each time

    # Shared configuration
    enabled = Column(Boolean, default=True)
    timezone = Column(String, default="America/New_York")

    # Tracking
    last_generated_at = Column(DateTime, nullable=True)  # When actions were last generated
    next_occurrence = Column(DateTime, nullable=True)  # Computed next action time

    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)


class Alert(Base):
    """
    In-app alerts for device status changes and system events.

    Designed for future expansion to email/webhook notifications.
    Currently supports:
    - device_offline: Device became unreachable
    - device_online: Device came back online
    - schedule_failed: Scheduled action failed to execute
    - schedule_completed: Scheduled action completed successfully
    """
    __tablename__ = "alerts"

    id = Column(String, primary_key=True, index=True)  # UUID

    # Alert classification
    alert_type = Column(String, nullable=False)  # "device_offline" | "device_online" | "schedule_failed" | "schedule_completed"
    severity = Column(String, default="warning")  # "info" | "warning" | "critical"

    # Related entities (nullable - may not all apply)
    project_id = Column(String, nullable=True, index=True)
    location_id = Column(String, nullable=True, index=True)
    unit_id = Column(String, nullable=True, index=True)
    schedule_id = Column(String, nullable=True)  # RecurringSchedule or ScheduledAction id

    # Alert content
    title = Column(String, nullable=False)  # "NRL-001 Device Offline"
    message = Column(Text, nullable=True)  # Detailed description
    alert_metadata = Column(Text, nullable=True)  # JSON: additional context data

    # Status tracking
    status = Column(String, default="active")  # "active" | "acknowledged" | "resolved" | "dismissed"
    acknowledged_at = Column(DateTime, nullable=True)
    resolved_at = Column(DateTime, nullable=True)

    created_at = Column(DateTime, default=datetime.utcnow)
    expires_at = Column(DateTime, nullable=True)  # Auto-dismiss after this time
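To make the weekly_pattern format concrete, here is a minimal sketch of building and reading the JSON payload described in the RecurringSchedule docstring; the enabled_days helper is illustrative only, not part of this changeset.

import json

# Build a weekly_calendar pattern: Mon/Wed/Fri overnight monitoring (7pm-7am).
weekly_pattern = json.dumps({
    "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
    "tuesday": {"enabled": False},
    "wednesday": {"enabled": True, "start": "19:00", "end": "07:00"},
    "thursday": {"enabled": False},
    "friday": {"enabled": True, "start": "19:00", "end": "07:00"},
    "saturday": {"enabled": False},
    "sunday": {"enabled": False},
})

# Reading it back, e.g. when generating ScheduledAction rows for the week.
def enabled_days(pattern_json):
    pattern = json.loads(pattern_json)
    return [day for day, cfg in pattern.items() if cfg.get("enabled")]

print(enabled_days(weekly_pattern))  # ['monday', 'wednesday', 'friday']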
146
backend/routers/activity.py
Normal file
@@ -0,0 +1,146 @@
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from sqlalchemy import desc
from pathlib import Path
from datetime import datetime, timedelta, timezone
from typing import List, Dict, Any
from backend.database import get_db
from backend.models import UnitHistory, Emitter, RosterUnit

router = APIRouter(prefix="/api", tags=["activity"])

PHOTOS_BASE_DIR = Path("data/photos")


@router.get("/recent-activity")
def get_recent_activity(limit: int = 20, db: Session = Depends(get_db)):
    """
    Get recent activity feed combining unit history changes and photo uploads.
    Returns a unified timeline of events sorted by timestamp (newest first).
    """
    activities = []

    # Get recent history entries
    history_entries = db.query(UnitHistory)\
        .order_by(desc(UnitHistory.changed_at))\
        .limit(limit * 2)\
        .all()  # Get more than needed to mix with photos

    for entry in history_entries:
        activity = {
            "type": "history",
            "timestamp": entry.changed_at.isoformat(),
            "timestamp_unix": entry.changed_at.timestamp(),
            "unit_id": entry.unit_id,
            "change_type": entry.change_type,
            "field_name": entry.field_name,
            "old_value": entry.old_value,
            "new_value": entry.new_value,
            "source": entry.source,
            "notes": entry.notes
        }
        activities.append(activity)

    # Get recent photos
    if PHOTOS_BASE_DIR.exists():
        image_extensions = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
        photo_activities = []

        for unit_dir in PHOTOS_BASE_DIR.iterdir():
            if not unit_dir.is_dir():
                continue

            unit_id = unit_dir.name

            for file_path in unit_dir.iterdir():
                if file_path.is_file() and file_path.suffix.lower() in image_extensions:
                    modified_time = file_path.stat().st_mtime
                    photo_activities.append({
                        "type": "photo",
                        "timestamp": datetime.fromtimestamp(modified_time).isoformat(),
                        "timestamp_unix": modified_time,
                        "unit_id": unit_id,
                        "filename": file_path.name,
                        "photo_url": f"/api/unit/{unit_id}/photo/{file_path.name}"
                    })

        activities.extend(photo_activities)

    # Sort all activities by timestamp (newest first)
    activities.sort(key=lambda x: x["timestamp_unix"], reverse=True)

    # Limit to requested number
    activities = activities[:limit]

    return {
        "activities": activities,
        "total": len(activities)
    }


@router.get("/recent-callins")
def get_recent_callins(hours: int = 6, limit: int = None, db: Session = Depends(get_db)):
    """
    Get recent unit call-ins (units that have reported recently).
    Returns units sorted by most recent last_seen timestamp.

    Args:
        hours: Look back this many hours (default: 6)
        limit: Maximum number of results (default: None = all)
    """
    # Calculate the time threshold
    time_threshold = datetime.now(timezone.utc) - timedelta(hours=hours)

    # Query emitters with recent activity, joined with roster info
    recent_emitters = db.query(Emitter)\
        .filter(Emitter.last_seen >= time_threshold)\
        .order_by(desc(Emitter.last_seen))\
        .all()

    # Get roster info for all units
    roster_dict = {r.id: r for r in db.query(RosterUnit).all()}

    call_ins = []
    for emitter in recent_emitters:
        roster_unit = roster_dict.get(emitter.id)

        # Calculate time since last seen
        last_seen_utc = emitter.last_seen.replace(tzinfo=timezone.utc) if emitter.last_seen.tzinfo is None else emitter.last_seen
        time_diff = datetime.now(timezone.utc) - last_seen_utc

        # Format time ago
        if time_diff.total_seconds() < 60:
            time_ago = "just now"
        elif time_diff.total_seconds() < 3600:
            minutes = int(time_diff.total_seconds() / 60)
            time_ago = f"{minutes}m ago"
        else:
            hours_ago = time_diff.total_seconds() / 3600
            if hours_ago < 24:
                time_ago = f"{int(hours_ago)}h {int((hours_ago % 1) * 60)}m ago"
            else:
                days = int(hours_ago / 24)
                time_ago = f"{days}d ago"

        call_in = {
            "unit_id": emitter.id,
            "last_seen": emitter.last_seen.isoformat(),
            "time_ago": time_ago,
            "status": emitter.status,
            "device_type": roster_unit.device_type if roster_unit else "seismograph",
            "deployed": roster_unit.deployed if roster_unit else False,
            "note": roster_unit.note if roster_unit and roster_unit.note else "",
            "location": roster_unit.address if roster_unit and roster_unit.address else (roster_unit.location if roster_unit else "")
        }
        call_ins.append(call_in)

    # Apply limit if specified
    if limit:
        call_ins = call_ins[:limit]

    return {
        "call_ins": call_ins,
        "total": len(call_ins),
        "hours": hours,
        "time_threshold": time_threshold.isoformat()
    }
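A minimal client-side sketch of consuming the call-ins endpoint, assuming the app is served locally on port 8000 (the host and port are assumptions, not defined in this diff):

import httpx

# Fetch units that have reported in the last 12 hours, newest first.
resp = httpx.get(
    "http://localhost:8000/api/recent-callins",
    params={"hours": 12, "limit": 10},
    timeout=10.0,
)
resp.raise_for_status()
data = resp.json()
for call_in in data["call_ins"]:
    print(f'{call_in["unit_id"]}: {call_in["time_ago"]} ({call_in["status"]})')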
326
backend/routers/alerts.py
Normal file
@@ -0,0 +1,326 @@
"""
Alerts Router

API endpoints for managing in-app alerts.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from typing import Optional
from datetime import datetime, timedelta

from backend.database import get_db
from backend.models import Alert, RosterUnit
from backend.services.alert_service import get_alert_service
from backend.templates_config import templates

router = APIRouter(prefix="/api/alerts", tags=["alerts"])


# ============================================================================
# Alert List and Count
# ============================================================================

@router.get("/")
async def list_alerts(
    db: Session = Depends(get_db),
    status: Optional[str] = Query(None, description="Filter by status: active, acknowledged, resolved, dismissed"),
    project_id: Optional[str] = Query(None),
    unit_id: Optional[str] = Query(None),
    alert_type: Optional[str] = Query(None, description="Filter by type: device_offline, device_online, schedule_failed"),
    limit: int = Query(50, le=100),
    offset: int = Query(0, ge=0),
):
    """
    List alerts with optional filters.
    """
    alert_service = get_alert_service(db)

    alerts = alert_service.get_all_alerts(
        status=status,
        project_id=project_id,
        unit_id=unit_id,
        alert_type=alert_type,
        limit=limit,
        offset=offset,
    )

    return {
        "alerts": [
            {
                "id": a.id,
                "alert_type": a.alert_type,
                "severity": a.severity,
                "title": a.title,
                "message": a.message,
                "status": a.status,
                "unit_id": a.unit_id,
                "project_id": a.project_id,
                "location_id": a.location_id,
                "created_at": a.created_at.isoformat() if a.created_at else None,
                "acknowledged_at": a.acknowledged_at.isoformat() if a.acknowledged_at else None,
                "resolved_at": a.resolved_at.isoformat() if a.resolved_at else None,
            }
            for a in alerts
        ],
        "count": len(alerts),
        "limit": limit,
        "offset": offset,
    }


@router.get("/active")
async def list_active_alerts(
    db: Session = Depends(get_db),
    project_id: Optional[str] = Query(None),
    unit_id: Optional[str] = Query(None),
    alert_type: Optional[str] = Query(None),
    min_severity: Optional[str] = Query(None, description="Minimum severity: info, warning, critical"),
    limit: int = Query(50, le=100),
):
    """
    List only active alerts.
    """
    alert_service = get_alert_service(db)

    alerts = alert_service.get_active_alerts(
        project_id=project_id,
        unit_id=unit_id,
        alert_type=alert_type,
        min_severity=min_severity,
        limit=limit,
    )

    return {
        "alerts": [
            {
                "id": a.id,
                "alert_type": a.alert_type,
                "severity": a.severity,
                "title": a.title,
                "message": a.message,
                "unit_id": a.unit_id,
                "project_id": a.project_id,
                "created_at": a.created_at.isoformat() if a.created_at else None,
            }
            for a in alerts
        ],
        "count": len(alerts),
    }


@router.get("/active/count")
async def get_active_alert_count(db: Session = Depends(get_db)):
    """
    Get count of active alerts (for navbar badge).
    """
    alert_service = get_alert_service(db)
    count = alert_service.get_active_alert_count()
    return {"count": count}


# ============================================================================
# Single Alert Operations
# ============================================================================

@router.get("/{alert_id}")
async def get_alert(
    alert_id: str,
    db: Session = Depends(get_db),
):
    """
    Get a specific alert.
    """
    alert = db.query(Alert).filter_by(id=alert_id).first()
    if not alert:
        raise HTTPException(status_code=404, detail="Alert not found")

    # Get related unit info
    unit = None
    if alert.unit_id:
        unit = db.query(RosterUnit).filter_by(id=alert.unit_id).first()

    return {
        "id": alert.id,
        "alert_type": alert.alert_type,
        "severity": alert.severity,
        "title": alert.title,
        "message": alert.message,
        "metadata": alert.alert_metadata,
        "status": alert.status,
        "unit_id": alert.unit_id,
        "unit_name": unit.id if unit else None,
        "project_id": alert.project_id,
        "location_id": alert.location_id,
        "schedule_id": alert.schedule_id,
        "created_at": alert.created_at.isoformat() if alert.created_at else None,
        "acknowledged_at": alert.acknowledged_at.isoformat() if alert.acknowledged_at else None,
        "resolved_at": alert.resolved_at.isoformat() if alert.resolved_at else None,
        "expires_at": alert.expires_at.isoformat() if alert.expires_at else None,
    }


@router.post("/{alert_id}/acknowledge")
async def acknowledge_alert(
    alert_id: str,
    db: Session = Depends(get_db),
):
    """
    Mark alert as acknowledged.
    """
    alert_service = get_alert_service(db)
    alert = alert_service.acknowledge_alert(alert_id)

    if not alert:
        raise HTTPException(status_code=404, detail="Alert not found")

    return {
        "success": True,
        "alert_id": alert.id,
        "status": alert.status,
    }


@router.post("/{alert_id}/dismiss")
async def dismiss_alert(
    alert_id: str,
    db: Session = Depends(get_db),
):
    """
    Dismiss alert.
    """
    alert_service = get_alert_service(db)
    alert = alert_service.dismiss_alert(alert_id)

    if not alert:
        raise HTTPException(status_code=404, detail="Alert not found")

    return {
        "success": True,
        "alert_id": alert.id,
        "status": alert.status,
    }


@router.post("/{alert_id}/resolve")
async def resolve_alert(
    alert_id: str,
    db: Session = Depends(get_db),
):
    """
    Manually resolve an alert.
    """
    alert_service = get_alert_service(db)
    alert = alert_service.resolve_alert(alert_id)

    if not alert:
        raise HTTPException(status_code=404, detail="Alert not found")

    return {
        "success": True,
        "alert_id": alert.id,
        "status": alert.status,
    }


# ============================================================================
# HTML Partials for HTMX
# ============================================================================

@router.get("/partials/dropdown", response_class=HTMLResponse)
async def get_alert_dropdown(
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Return HTML partial for alert dropdown in navbar.
    """
    alert_service = get_alert_service(db)
    alerts = alert_service.get_active_alerts(limit=10)

    # Calculate relative time for each alert
    now = datetime.utcnow()
    alerts_data = []
    for alert in alerts:
        delta = now - alert.created_at
        if delta.days > 0:
            time_ago = f"{delta.days}d ago"
        elif delta.seconds >= 3600:
            time_ago = f"{delta.seconds // 3600}h ago"
        elif delta.seconds >= 60:
            time_ago = f"{delta.seconds // 60}m ago"
        else:
            time_ago = "just now"

        alerts_data.append({
            "alert": alert,
            "time_ago": time_ago,
        })

    return templates.TemplateResponse("partials/alerts/alert_dropdown.html", {
        "request": request,
        "alerts": alerts_data,
        "total_count": alert_service.get_active_alert_count(),
    })


@router.get("/partials/list", response_class=HTMLResponse)
async def get_alert_list(
    request: Request,
    db: Session = Depends(get_db),
    status: Optional[str] = Query(None),
    limit: int = Query(20),
):
    """
    Return HTML partial for alert list page.
    """
    alert_service = get_alert_service(db)

    if status:
        alerts = alert_service.get_all_alerts(status=status, limit=limit)
    else:
        alerts = alert_service.get_all_alerts(limit=limit)

    # Calculate relative time for each alert
    now = datetime.utcnow()
    alerts_data = []
    for alert in alerts:
        delta = now - alert.created_at
        if delta.days > 0:
            time_ago = f"{delta.days}d ago"
        elif delta.seconds >= 3600:
            time_ago = f"{delta.seconds // 3600}h ago"
        elif delta.seconds >= 60:
            time_ago = f"{delta.seconds // 60}m ago"
        else:
            time_ago = "just now"

        alerts_data.append({
            "alert": alert,
            "time_ago": time_ago,
        })

    return templates.TemplateResponse("partials/alerts/alert_list.html", {
        "request": request,
        "alerts": alerts_data,
        "status_filter": status,
    })


# ============================================================================
# Cleanup
# ============================================================================

@router.post("/cleanup-expired")
async def cleanup_expired_alerts(db: Session = Depends(get_db)):
    """
    Cleanup expired alerts (admin/maintenance endpoint).
    """
    alert_service = get_alert_service(db)
    count = alert_service.cleanup_expired_alerts()

    return {
        "success": True,
        "cleaned_up": count,
    }
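For reference, a hedged sketch of the alert lifecycle from a client's point of view; the base URL is a placeholder for a local deployment and the exact response payloads depend on alert_service, which is not shown in this diff:

import httpx

BASE = "http://localhost:8000/api/alerts"  # assumed local deployment

# Poll the navbar badge count, then acknowledge the first active alert, if any.
count = httpx.get(f"{BASE}/active/count").json()["count"]
if count:
    active = httpx.get(f"{BASE}/active", params={"limit": 1}).json()["alerts"]
    if active:
        alert_id = active[0]["id"]
        result = httpx.post(f"{BASE}/{alert_id}/acknowledge").json()
        print(result)  # e.g. {"success": True, "alert_id": "...", "status": "..."}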
97
backend/routers/dashboard.py
Normal file
@@ -0,0 +1,97 @@
from fastapi import APIRouter, Request, Depends
from sqlalchemy.orm import Session
from datetime import datetime, timedelta

from backend.database import get_db
from backend.models import ScheduledAction, MonitoringLocation, Project
from backend.services.snapshot import emit_status_snapshot
from backend.templates_config import templates
from backend.utils.timezone import utc_to_local, local_to_utc, get_user_timezone

router = APIRouter()


@router.get("/dashboard/active")
def dashboard_active(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/active_table.html",
        {"request": request, "units": snapshot["active"]}
    )


@router.get("/dashboard/benched")
def dashboard_benched(request: Request):
    snapshot = emit_status_snapshot()
    return templates.TemplateResponse(
        "partials/benched_table.html",
        {"request": request, "units": snapshot["benched"]}
    )


@router.get("/dashboard/todays-actions")
def dashboard_todays_actions(request: Request, db: Session = Depends(get_db)):
    """
    Get today's scheduled actions for the dashboard card.
    Shows upcoming, completed, and failed actions for today.
    """
    import json
    from zoneinfo import ZoneInfo

    # Get today's date range in local timezone
    tz = ZoneInfo(get_user_timezone())
    now_local = datetime.now(tz)
    today_start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
    today_end_local = today_start_local + timedelta(days=1)

    # Convert to UTC for database query
    today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
    today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)

    # Query today's actions
    actions = db.query(ScheduledAction).filter(
        ScheduledAction.scheduled_time >= today_start_utc,
        ScheduledAction.scheduled_time < today_end_utc,
    ).order_by(ScheduledAction.scheduled_time.asc()).all()

    # Enrich with location/project info and parse results
    enriched_actions = []
    for action in actions:
        location = None
        project = None
        if action.location_id:
            location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
        if action.project_id:
            project = db.query(Project).filter_by(id=action.project_id).first()

        # Parse module_response for result details
        result_data = None
        if action.module_response:
            try:
                result_data = json.loads(action.module_response)
            except json.JSONDecodeError:
                pass

        enriched_actions.append({
            "action": action,
            "location": location,
            "project": project,
            "result": result_data,
        })

    # Count by status
    pending_count = sum(1 for a in actions if a.execution_status == "pending")
    completed_count = sum(1 for a in actions if a.execution_status == "completed")
    failed_count = sum(1 for a in actions if a.execution_status == "failed")

    return templates.TemplateResponse(
        "partials/dashboard/todays_actions.html",
        {
            "request": request,
            "actions": enriched_actions,
            "pending_count": pending_count,
            "completed_count": completed_count,
            "failed_count": failed_count,
            "total_count": len(actions),
        }
    )
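The local-midnight-to-UTC conversion used by the today's-actions endpoint can be exercised on its own; a minimal sketch, with the timezone value given here only as an example (it normally comes from user preferences):

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")  # example timezone
today_start_local = datetime.now(tz).replace(hour=0, minute=0, second=0, microsecond=0)
today_end_local = today_start_local + timedelta(days=1)

# Naive UTC bounds, matching how scheduled_time is stored in the database.
today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
print(today_start_utc, today_end_utc)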
34
backend/routers/dashboard_tabs.py
Normal file
@@ -0,0 +1,34 @@
# backend/routers/dashboard_tabs.py
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from backend.database import get_db
from backend.services.snapshot import emit_status_snapshot

router = APIRouter(prefix="/dashboard", tags=["dashboard-tabs"])

@router.get("/active")
def get_active_units(db: Session = Depends(get_db)):
    """
    Return only ACTIVE (deployed) units for dashboard table swap.
    """
    snap = emit_status_snapshot()
    units = {
        uid: u
        for uid, u in snap["units"].items()
        if u["deployed"] is True
    }
    return {"units": units}

@router.get("/benched")
def get_benched_units(db: Session = Depends(get_db)):
    """
    Return only BENCHED (not deployed) units for dashboard table swap.
    """
    snap = emit_status_snapshot()
    units = {
        uid: u
        for uid, u in snap["units"].items()
        if u["deployed"] is False
    }
    return {"units": units}
286
backend/routers/modem_dashboard.py
Normal file
@@ -0,0 +1,286 @@
|
||||
"""
|
||||
Modem Dashboard Router
|
||||
|
||||
Provides API endpoints for the Field Modems management page.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Request, Depends, Query
|
||||
from fastapi.responses import HTMLResponse
|
||||
from sqlalchemy.orm import Session
|
||||
from datetime import datetime
|
||||
import subprocess
|
||||
import time
|
||||
import logging
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit
|
||||
from backend.templates_config import templates
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter(prefix="/api/modem-dashboard", tags=["modem-dashboard"])
|
||||
|
||||
|
||||
@router.get("/stats", response_class=HTMLResponse)
|
||||
async def get_modem_stats(request: Request, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get summary statistics for modem dashboard.
|
||||
Returns HTML partial with stat cards.
|
||||
"""
|
||||
# Query all modems
|
||||
all_modems = db.query(RosterUnit).filter_by(device_type="modem").all()
|
||||
|
||||
# Get IDs of modems that have devices paired to them
|
||||
paired_modem_ids = set()
|
||||
devices_with_modems = db.query(RosterUnit).filter(
|
||||
RosterUnit.deployed_with_modem_id.isnot(None),
|
||||
RosterUnit.retired == False
|
||||
).all()
|
||||
for device in devices_with_modems:
|
||||
if device.deployed_with_modem_id:
|
||||
paired_modem_ids.add(device.deployed_with_modem_id)
|
||||
|
||||
# Count categories
|
||||
total_count = len(all_modems)
|
||||
retired_count = sum(1 for m in all_modems if m.retired)
|
||||
|
||||
# In use = deployed AND paired with a device
|
||||
in_use_count = sum(1 for m in all_modems
|
||||
if m.deployed and not m.retired and m.id in paired_modem_ids)
|
||||
|
||||
# Spare = deployed but NOT paired (available for assignment)
|
||||
spare_count = sum(1 for m in all_modems
|
||||
if m.deployed and not m.retired and m.id not in paired_modem_ids)
|
||||
|
||||
# Benched = not deployed and not retired
|
||||
benched_count = sum(1 for m in all_modems if not m.deployed and not m.retired)
|
||||
|
||||
return templates.TemplateResponse("partials/modem_stats.html", {
|
||||
"request": request,
|
||||
"total_count": total_count,
|
||||
"in_use_count": in_use_count,
|
||||
"spare_count": spare_count,
|
||||
"benched_count": benched_count,
|
||||
"retired_count": retired_count
|
||||
})
|
||||
|
||||
|
||||
@router.get("/units", response_class=HTMLResponse)
|
||||
async def get_modem_units(
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
search: str = Query(None),
|
||||
filter_status: str = Query(None), # "in_use", "spare", "benched", "retired"
|
||||
):
|
||||
"""
|
||||
Get list of modem units for the dashboard.
|
||||
Returns HTML partial with modem cards.
|
||||
"""
|
||||
query = db.query(RosterUnit).filter_by(device_type="modem")
|
||||
|
||||
# Filter by search term if provided
|
||||
if search:
|
||||
search_term = f"%{search}%"
|
||||
query = query.filter(
|
||||
(RosterUnit.id.ilike(search_term)) |
|
||||
(RosterUnit.ip_address.ilike(search_term)) |
|
||||
(RosterUnit.hardware_model.ilike(search_term)) |
|
||||
(RosterUnit.phone_number.ilike(search_term)) |
|
||||
(RosterUnit.location.ilike(search_term))
|
||||
)
|
||||
|
||||
modems = query.order_by(
|
||||
RosterUnit.retired.asc(),
|
||||
RosterUnit.deployed.desc(),
|
||||
RosterUnit.id.asc()
|
||||
).all()
|
||||
|
||||
# Get paired device info for each modem
|
||||
paired_devices = {}
|
||||
devices_with_modems = db.query(RosterUnit).filter(
|
||||
RosterUnit.deployed_with_modem_id.isnot(None),
|
||||
RosterUnit.retired == False
|
||||
).all()
|
||||
for device in devices_with_modems:
|
||||
if device.deployed_with_modem_id:
|
||||
paired_devices[device.deployed_with_modem_id] = {
|
||||
"id": device.id,
|
||||
"device_type": device.device_type,
|
||||
"deployed": device.deployed
|
||||
}
|
||||
|
||||
# Annotate modems with paired device info
|
||||
modem_list = []
|
||||
for modem in modems:
|
||||
paired = paired_devices.get(modem.id)
|
||||
|
||||
# Determine status category
|
||||
if modem.retired:
|
||||
status = "retired"
|
||||
elif not modem.deployed:
|
||||
status = "benched"
|
||||
elif paired:
|
||||
status = "in_use"
|
||||
else:
|
||||
status = "spare"
|
||||
|
||||
# Apply filter if specified
|
||||
if filter_status and status != filter_status:
|
||||
continue
|
||||
|
||||
modem_list.append({
|
||||
"id": modem.id,
|
||||
"ip_address": modem.ip_address,
|
||||
"phone_number": modem.phone_number,
|
||||
"hardware_model": modem.hardware_model,
|
||||
"deployed": modem.deployed,
|
||||
"retired": modem.retired,
|
||||
"location": modem.location,
|
||||
"project_id": modem.project_id,
|
||||
"paired_device": paired,
|
||||
"status": status
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/modem_list.html", {
|
||||
"request": request,
|
||||
"modems": modem_list
|
||||
})
|
||||
|
||||
|
||||
@router.get("/{modem_id}/paired-device")
|
||||
async def get_paired_device(modem_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get the device (SLM/seismograph) that is paired with this modem.
|
||||
Returns JSON with device info or null if not paired.
|
||||
"""
|
||||
# Check modem exists
|
||||
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
|
||||
if not modem:
|
||||
return {"status": "error", "detail": f"Modem {modem_id} not found"}
|
||||
|
||||
# Find device paired with this modem
|
||||
device = db.query(RosterUnit).filter(
|
||||
RosterUnit.deployed_with_modem_id == modem_id,
|
||||
RosterUnit.retired == False
|
||||
).first()
|
||||
|
||||
if device:
|
||||
return {
|
||||
"paired": True,
|
||||
"device": {
|
||||
"id": device.id,
|
||||
"device_type": device.device_type,
|
||||
"deployed": device.deployed,
|
||||
"project_id": device.project_id,
|
||||
"location": device.location or device.address
|
||||
}
|
||||
}
|
||||
|
||||
return {"paired": False, "device": None}
|
||||
|
||||
|
||||
@router.get("/{modem_id}/paired-device-html", response_class=HTMLResponse)
|
||||
async def get_paired_device_html(modem_id: str, request: Request, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get HTML partial showing the device paired with this modem.
|
||||
Used by unit_detail.html for modems.
|
||||
"""
|
||||
# Check modem exists
|
||||
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
|
||||
if not modem:
|
||||
return HTMLResponse('<p class="text-red-500">Modem not found</p>')
|
||||
|
||||
# Find device paired with this modem
|
||||
device = db.query(RosterUnit).filter(
|
||||
RosterUnit.deployed_with_modem_id == modem_id,
|
||||
RosterUnit.retired == False
|
||||
).first()
|
||||
|
||||
return templates.TemplateResponse("partials/modem_paired_device.html", {
|
||||
"request": request,
|
||||
"modem_id": modem_id,
|
||||
"device": device
|
||||
})
|
||||
|
||||
|
||||
@router.get("/{modem_id}/ping")
async def ping_modem(modem_id: str, db: Session = Depends(get_db)):
    """
    Test modem connectivity with a simple ping.
    Returns response time and connection status.
    """
    # Get modem from database
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()

    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    if not modem.ip_address:
        return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}

    try:
        # Ping the modem (1 packet, 2 second timeout)
        start_time = time.time()
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", modem.ip_address],
            capture_output=True,
            text=True,
            timeout=3
        )
        response_time = int((time.time() - start_time) * 1000)  # Convert to milliseconds

        if result.returncode == 0:
            return {
                "status": "success",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "response_time_ms": response_time,
                "message": "Modem is responding"
            }
        else:
            return {
                "status": "error",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "detail": "Modem not responding to ping"
            }

    except subprocess.TimeoutExpired:
        return {
            "status": "error",
            "modem_id": modem_id,
            "ip_address": modem.ip_address,
            "detail": "Ping timeout"
        }
    except Exception as e:
        logger.error(f"Failed to ping modem {modem_id}: {e}")
        return {
            "status": "error",
            "modem_id": modem_id,
            "detail": str(e)
        }

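The `ping` invocation above uses Linux-style flags (`-c` packet count, `-W` timeout in seconds) and so assumes a Linux host. If the backend ever needed to run on Windows as well, the argument list would have to change; a minimal sketch, with `ping_args` as a hypothetical helper:

```python
import platform

def ping_args(ip_address: str) -> list:
    # Windows ping uses -n (count) and -w (timeout in milliseconds);
    # Linux ping uses -c (count) and -W (timeout in seconds).
    if platform.system() == "Windows":
        return ["ping", "-n", "1", "-w", "2000", ip_address]
    return ["ping", "-c", "1", "-W", "2", ip_address]
```
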
@router.get("/{modem_id}/diagnostics")
async def get_modem_diagnostics(modem_id: str, db: Session = Depends(get_db)):
    """
    Get modem diagnostics (signal strength, data usage, uptime).

    Currently returns placeholders. When ModemManager is available,
    this endpoint will query it for real diagnostics.
    """
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    # TODO: Query ModemManager backend when available
    return {
        "status": "unavailable",
        "message": "ModemManager integration not yet available",
        "modem_id": modem_id,
        "signal_strength_dbm": None,
        "data_usage_mb": None,
        "uptime_seconds": None,
        "carrier": None,
        "connection_type": None  # LTE, 5G, etc.
    }

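A minimal client-side sketch of how these two JSON contracts can be consumed. The base URL and the router mount point are assumptions (the prefix is declared earlier in the file, outside this excerpt); the field names match the dictionaries returned above:

```python
import httpx

BASE = "http://localhost:8000/api/modems"  # assumed host and router prefix

def check_modem(modem_id: str) -> None:
    ping = httpx.get(f"{BASE}/{modem_id}/ping", timeout=10).json()
    if ping["status"] == "success":
        print(f"{modem_id}: reachable in {ping['response_time_ms']} ms")
    else:
        print(f"{modem_id}: {ping.get('detail', 'unreachable')}")

    diag = httpx.get(f"{BASE}/{modem_id}/diagnostics", timeout=10).json()
    if diag["status"] == "unavailable":
        print(f"{modem_id}: diagnostics pending ModemManager integration")
```
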
backend/routers/photos.py (new file, 242 lines)
@@ -0,0 +1,242 @@
from fastapi import APIRouter, HTTPException, UploadFile, File, Depends
from fastapi.responses import FileResponse, JSONResponse
from pathlib import Path
from typing import List, Optional
from datetime import datetime
import os
import shutil
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS
from sqlalchemy.orm import Session
from backend.database import get_db
from backend.models import RosterUnit

router = APIRouter(prefix="/api", tags=["photos"])

PHOTOS_BASE_DIR = Path("data/photos")

def extract_exif_data(image_path: Path) -> dict:
|
||||
"""
|
||||
Extract EXIF metadata from an image file.
|
||||
Returns dict with timestamp, GPS coordinates, and other metadata.
|
||||
"""
|
||||
try:
|
||||
image = Image.open(image_path)
|
||||
exif_data = image._getexif()
|
||||
|
||||
if not exif_data:
|
||||
return {}
|
||||
|
||||
metadata = {}
|
||||
|
||||
# Extract standard EXIF tags
|
||||
for tag_id, value in exif_data.items():
|
||||
tag = TAGS.get(tag_id, tag_id)
|
||||
|
||||
# Extract datetime
if tag in ("DateTime", "DateTimeOriginal"):
    try:
        metadata["timestamp"] = datetime.strptime(str(value), "%Y:%m:%d %H:%M:%S")
    except ValueError:
        pass

# Extract GPS data
|
||||
if tag == "GPSInfo":
|
||||
gps_data = {}
|
||||
for gps_tag_id in value:
|
||||
gps_tag = GPSTAGS.get(gps_tag_id, gps_tag_id)
|
||||
gps_data[gps_tag] = value[gps_tag_id]
|
||||
|
||||
# Convert GPS data to decimal degrees
|
||||
lat = gps_data.get("GPSLatitude")
|
||||
lat_ref = gps_data.get("GPSLatitudeRef")
|
||||
lon = gps_data.get("GPSLongitude")
|
||||
lon_ref = gps_data.get("GPSLongitudeRef")
|
||||
|
||||
if lat and lon and lat_ref and lon_ref:
|
||||
# Convert to decimal degrees
|
||||
lat_decimal = convert_to_degrees(lat)
|
||||
if lat_ref == "S":
|
||||
lat_decimal = -lat_decimal
|
||||
|
||||
lon_decimal = convert_to_degrees(lon)
|
||||
if lon_ref == "W":
|
||||
lon_decimal = -lon_decimal
|
||||
|
||||
metadata["latitude"] = lat_decimal
|
||||
metadata["longitude"] = lon_decimal
|
||||
metadata["coordinates"] = f"{lat_decimal},{lon_decimal}"
|
||||
|
||||
return metadata
|
||||
except Exception as e:
|
||||
print(f"Error extracting EXIF data: {e}")
|
||||
return {}
|
||||
|
||||
|
||||
def convert_to_degrees(value):
    """
    Convert GPS coordinates from degrees/minutes/seconds to decimal degrees.
    """
    d, m, s = value
    return float(d) + (float(m) / 60.0) + (float(s) / 3600.0)

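For a quick sense of the conversion, one worked value (the coordinate is purely illustrative):

```python
# 40° 26' 46.302" N  ->  40 + 26/60 + 46.302/3600 ≈ 40.446195 decimal degrees
convert_to_degrees((40, 26, 46.302))
```
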
@router.post("/unit/{unit_id}/upload-photo")
|
||||
async def upload_photo(
|
||||
unit_id: str,
|
||||
photo: UploadFile = File(...),
|
||||
auto_populate_coords: bool = True,
|
||||
db: Session = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Upload a photo for a unit and extract EXIF metadata.
|
||||
If GPS data exists and auto_populate_coords is True, update the unit's coordinates.
|
||||
"""
|
||||
# Validate file type
|
||||
allowed_extensions = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
|
||||
file_ext = Path(photo.filename).suffix.lower()
|
||||
|
||||
if file_ext not in allowed_extensions:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f"Invalid file type. Allowed: {', '.join(allowed_extensions)}"
|
||||
)
|
||||
|
||||
# Create photos directory for this unit
|
||||
unit_photo_dir = PHOTOS_BASE_DIR / unit_id
|
||||
unit_photo_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate filename with timestamp to avoid collisions
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
filename = f"{timestamp}_{photo.filename}"
|
||||
file_path = unit_photo_dir / filename
|
||||
|
||||
# Save the file
|
||||
try:
|
||||
with open(file_path, "wb") as buffer:
|
||||
shutil.copyfileobj(photo.file, buffer)
|
||||
except Exception as e:
|
||||
raise HTTPException(status_code=500, detail=f"Failed to save photo: {str(e)}")
|
||||
|
||||
# Extract EXIF metadata
|
||||
metadata = extract_exif_data(file_path)
|
||||
|
||||
# Update unit coordinates if GPS data exists and auto_populate_coords is True
|
||||
coordinates_updated = False
|
||||
if auto_populate_coords and "coordinates" in metadata:
|
||||
roster_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
|
||||
|
||||
if roster_unit:
|
||||
roster_unit.coordinates = metadata["coordinates"]
|
||||
roster_unit.last_updated = datetime.utcnow()
|
||||
db.commit()
|
||||
coordinates_updated = True
|
||||
|
||||
return JSONResponse(content={
|
||||
"success": True,
|
||||
"filename": filename,
|
||||
"file_path": f"/api/unit/{unit_id}/photo/{filename}",
|
||||
"metadata": {
|
||||
"timestamp": metadata.get("timestamp").isoformat() if metadata.get("timestamp") else None,
|
||||
"latitude": metadata.get("latitude"),
|
||||
"longitude": metadata.get("longitude"),
|
||||
"coordinates": metadata.get("coordinates")
|
||||
},
|
||||
"coordinates_updated": coordinates_updated
|
||||
})
|
||||
|
||||
|
||||
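Assuming the app is served on localhost:8000 and `BE1234` is a placeholder unit ID, the upload endpoint can be exercised with a plain multipart POST; GPS-tagged JPEGs also update the unit's coordinates because `auto_populate_coords` defaults to true:

```python
import httpx

# Hypothetical unit ID and local dev server; the multipart field name matches the
# UploadFile parameter ("photo") in the endpoint above.
with open("site_arrival.jpg", "rb") as f:
    resp = httpx.post(
        "http://localhost:8000/api/unit/BE1234/upload-photo",
        files={"photo": ("site_arrival.jpg", f, "image/jpeg")},
    )
print(resp.json()["coordinates_updated"])
```
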
@router.get("/unit/{unit_id}/photos")
|
||||
def get_unit_photos(unit_id: str):
|
||||
"""
|
||||
Reads /data/photos/<unit_id>/ and returns list of image filenames.
|
||||
Primary photo = most recent file.
|
||||
"""
|
||||
unit_photo_dir = PHOTOS_BASE_DIR / unit_id
|
||||
|
||||
if not unit_photo_dir.exists():
|
||||
# Return empty list if no photos directory exists
|
||||
return {
|
||||
"unit_id": unit_id,
|
||||
"photos": [],
|
||||
"primary_photo": None
|
||||
}
|
||||
|
||||
# Get all image files
|
||||
image_extensions = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
|
||||
photos = []
|
||||
|
||||
for file_path in unit_photo_dir.iterdir():
|
||||
if file_path.is_file() and file_path.suffix.lower() in image_extensions:
|
||||
photos.append({
|
||||
"filename": file_path.name,
|
||||
"path": f"/api/unit/{unit_id}/photo/{file_path.name}",
|
||||
"modified": file_path.stat().st_mtime
|
||||
})
|
||||
|
||||
# Sort by modification time (most recent first)
|
||||
photos.sort(key=lambda x: x["modified"], reverse=True)
|
||||
|
||||
# Primary photo is the most recent
|
||||
primary_photo = photos[0]["filename"] if photos else None
|
||||
|
||||
return {
|
||||
"unit_id": unit_id,
|
||||
"photos": [p["filename"] for p in photos],
|
||||
"primary_photo": primary_photo,
|
||||
"photo_urls": [p["path"] for p in photos]
|
||||
}
|
||||
|
||||
|
||||
@router.get("/recent-photos")
|
||||
def get_recent_photos(limit: int = 12):
|
||||
"""
|
||||
Get the most recently uploaded photos across all units.
|
||||
Returns photos sorted by modification time (newest first).
|
||||
"""
|
||||
if not PHOTOS_BASE_DIR.exists():
|
||||
return {"photos": []}
|
||||
|
||||
all_photos = []
|
||||
image_extensions = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
|
||||
|
||||
# Scan all unit directories
|
||||
for unit_dir in PHOTOS_BASE_DIR.iterdir():
|
||||
if not unit_dir.is_dir():
|
||||
continue
|
||||
|
||||
unit_id = unit_dir.name
|
||||
|
||||
# Get all photos in this unit's directory
|
||||
for file_path in unit_dir.iterdir():
|
||||
if file_path.is_file() and file_path.suffix.lower() in image_extensions:
|
||||
all_photos.append({
|
||||
"unit_id": unit_id,
|
||||
"filename": file_path.name,
|
||||
"path": f"/api/unit/{unit_id}/photo/{file_path.name}",
|
||||
"modified": file_path.stat().st_mtime,
|
||||
"modified_iso": datetime.fromtimestamp(file_path.stat().st_mtime).isoformat()
|
||||
})
|
||||
|
||||
# Sort by modification time (most recent first) and limit
|
||||
all_photos.sort(key=lambda x: x["modified"], reverse=True)
|
||||
recent_photos = all_photos[:limit]
|
||||
|
||||
return {
|
||||
"photos": recent_photos,
|
||||
"total": len(all_photos)
|
||||
}
|
||||
|
||||
|
||||
@router.get("/unit/{unit_id}/photo/{filename}")
def get_photo(unit_id: str, filename: str):
    """
    Serves a specific photo file.
    """
    file_path = PHOTOS_BASE_DIR / unit_id / filename

    # Guard against path traversal: the resolved path must stay inside the photos dir
    if not file_path.resolve().is_relative_to(PHOTOS_BASE_DIR.resolve()):
        raise HTTPException(status_code=404, detail="Photo not found")

    if not file_path.exists() or not file_path.is_file():
        raise HTTPException(status_code=404, detail="Photo not found")

    return FileResponse(file_path)

backend/routers/project_locations.py (new file, 521 lines)
@@ -0,0 +1,521 @@
"""
|
||||
Project Locations Router
|
||||
|
||||
Handles monitoring locations (NRLs for sound, monitoring points for vibration)
|
||||
and unit assignments within projects.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Request, Depends, HTTPException, Query
|
||||
from fastapi.responses import HTMLResponse, JSONResponse
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import and_, or_
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
import uuid
|
||||
import json
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import (
|
||||
Project,
|
||||
ProjectType,
|
||||
MonitoringLocation,
|
||||
UnitAssignment,
|
||||
RosterUnit,
|
||||
RecordingSession,
|
||||
)
|
||||
from backend.templates_config import templates
|
||||
|
||||
router = APIRouter(prefix="/api/projects/{project_id}", tags=["project-locations"])
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Monitoring Locations CRUD
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/locations", response_class=HTMLResponse)
|
||||
async def get_project_locations(
|
||||
project_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
location_type: Optional[str] = Query(None),
|
||||
):
|
||||
"""
|
||||
Get all monitoring locations for a project.
|
||||
Returns HTML partial with location list.
|
||||
"""
|
||||
project = db.query(Project).filter_by(id=project_id).first()
|
||||
if not project:
|
||||
raise HTTPException(status_code=404, detail="Project not found")
|
||||
|
||||
query = db.query(MonitoringLocation).filter_by(project_id=project_id)
|
||||
|
||||
# Filter by type if provided
|
||||
if location_type:
|
||||
query = query.filter_by(location_type=location_type)
|
||||
|
||||
locations = query.order_by(MonitoringLocation.name).all()
|
||||
|
||||
# Enrich with assignment info
|
||||
locations_data = []
|
||||
for location in locations:
|
||||
# Get active assignment
|
||||
assignment = db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == location.id,
|
||||
UnitAssignment.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
assigned_unit = None
|
||||
if assignment:
|
||||
assigned_unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
|
||||
|
||||
# Count recording sessions
|
||||
session_count = db.query(RecordingSession).filter_by(
|
||||
location_id=location.id
|
||||
).count()
|
||||
|
||||
locations_data.append({
|
||||
"location": location,
|
||||
"assignment": assignment,
|
||||
"assigned_unit": assigned_unit,
|
||||
"session_count": session_count,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/projects/location_list.html", {
|
||||
"request": request,
|
||||
"project": project,
|
||||
"locations": locations_data,
|
||||
})
|
||||
|
||||
|
||||
@router.get("/locations-json")
|
||||
async def get_project_locations_json(
|
||||
project_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
location_type: Optional[str] = Query(None),
|
||||
):
|
||||
"""
|
||||
Get all monitoring locations for a project as JSON.
|
||||
Used by the schedule modal to populate location dropdown.
|
||||
"""
|
||||
project = db.query(Project).filter_by(id=project_id).first()
|
||||
if not project:
|
||||
raise HTTPException(status_code=404, detail="Project not found")
|
||||
|
||||
query = db.query(MonitoringLocation).filter_by(project_id=project_id)
|
||||
|
||||
if location_type:
|
||||
query = query.filter_by(location_type=location_type)
|
||||
|
||||
locations = query.order_by(MonitoringLocation.name).all()
|
||||
|
||||
return [
|
||||
{
|
||||
"id": loc.id,
|
||||
"name": loc.name,
|
||||
"location_type": loc.location_type,
|
||||
"description": loc.description,
|
||||
"address": loc.address,
|
||||
"coordinates": loc.coordinates,
|
||||
}
|
||||
for loc in locations
|
||||
]
|
||||
|
||||
|
||||
@router.post("/locations/create")
async def create_location(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Create a new monitoring location within a project.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location = MonitoringLocation(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_type=form_data.get("location_type"),
        name=form_data.get("name"),
        description=form_data.get("description"),
        coordinates=form_data.get("coordinates"),
        address=form_data.get("address"),
        location_metadata=form_data.get("location_metadata"),  # JSON string
    )

    db.add(location)
    db.commit()
    db.refresh(location)

    return JSONResponse({
        "success": True,
        "location_id": location.id,
        "message": f"Location '{location.name}' created successfully",
    })

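The create endpoint reads a plain form post rather than JSON. A hedged example of driving it from Python (host, project ID, and field values are placeholders; the field names are the ones read via `form_data.get(...)` above):

```python
import httpx

resp = httpx.post(
    "http://localhost:8000/api/projects/PROJ-001/locations/create",
    data={
        "location_type": "sound",          # "sound" -> NRL; anything else is a vibration point
        "name": "NRL-1",
        "description": "Northeast property line",
        "coordinates": "40.7128,-74.0060",
        "address": "123 Example St",
    },
)
print(resp.json()["location_id"])
```
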
@router.put("/locations/{location_id}")
|
||||
async def update_location(
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Update a monitoring location.
|
||||
"""
|
||||
location = db.query(MonitoringLocation).filter_by(
|
||||
id=location_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not location:
|
||||
raise HTTPException(status_code=404, detail="Location not found")
|
||||
|
||||
data = await request.json()
|
||||
|
||||
# Update fields if provided
|
||||
if "name" in data:
|
||||
location.name = data["name"]
|
||||
if "description" in data:
|
||||
location.description = data["description"]
|
||||
if "location_type" in data:
|
||||
location.location_type = data["location_type"]
|
||||
if "coordinates" in data:
|
||||
location.coordinates = data["coordinates"]
|
||||
if "address" in data:
|
||||
location.address = data["address"]
|
||||
if "location_metadata" in data:
|
||||
location.location_metadata = data["location_metadata"]
|
||||
|
||||
location.updated_at = datetime.utcnow()
|
||||
|
||||
db.commit()
|
||||
|
||||
return {"success": True, "message": "Location updated successfully"}
|
||||
|
||||
|
||||
@router.delete("/locations/{location_id}")
|
||||
async def delete_location(
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Delete a monitoring location.
|
||||
"""
|
||||
location = db.query(MonitoringLocation).filter_by(
|
||||
id=location_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not location:
|
||||
raise HTTPException(status_code=404, detail="Location not found")
|
||||
|
||||
# Check if location has active assignments
|
||||
active_assignments = db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == location_id,
|
||||
UnitAssignment.status == "active",
|
||||
)
|
||||
).count()
|
||||
|
||||
if active_assignments > 0:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="Cannot delete location with active unit assignments. Unassign units first.",
|
||||
)
|
||||
|
||||
db.delete(location)
|
||||
db.commit()
|
||||
|
||||
return {"success": True, "message": "Location deleted successfully"}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Unit Assignments
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/assignments", response_class=HTMLResponse)
|
||||
async def get_project_assignments(
|
||||
project_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
status: Optional[str] = Query("active"),
|
||||
):
|
||||
"""
|
||||
Get all unit assignments for a project.
|
||||
Returns HTML partial with assignment list.
|
||||
"""
|
||||
query = db.query(UnitAssignment).filter_by(project_id=project_id)
|
||||
|
||||
if status:
|
||||
query = query.filter_by(status=status)
|
||||
|
||||
assignments = query.order_by(UnitAssignment.assigned_at.desc()).all()
|
||||
|
||||
# Enrich with unit and location details
|
||||
assignments_data = []
|
||||
for assignment in assignments:
|
||||
unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
|
||||
location = db.query(MonitoringLocation).filter_by(id=assignment.location_id).first()
|
||||
|
||||
assignments_data.append({
|
||||
"assignment": assignment,
|
||||
"unit": unit,
|
||||
"location": location,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/projects/assignment_list.html", {
|
||||
"request": request,
|
||||
"project_id": project_id,
|
||||
"assignments": assignments_data,
|
||||
})
|
||||
|
||||
|
||||
@router.post("/locations/{location_id}/assign")
|
||||
async def assign_unit_to_location(
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Assign a unit to a monitoring location.
|
||||
"""
|
||||
location = db.query(MonitoringLocation).filter_by(
|
||||
id=location_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not location:
|
||||
raise HTTPException(status_code=404, detail="Location not found")
|
||||
|
||||
form_data = await request.form()
|
||||
unit_id = form_data.get("unit_id")
|
||||
|
||||
# Verify unit exists and matches location type
|
||||
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
|
||||
if not unit:
|
||||
raise HTTPException(status_code=404, detail="Unit not found")
|
||||
|
||||
# Check device type matches location type
|
||||
expected_device_type = "slm" if location.location_type == "sound" else "seismograph"
|
||||
if unit.device_type != expected_device_type:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f"Unit type '{unit.device_type}' does not match location type '{location.location_type}'",
|
||||
)
|
||||
|
||||
# Check if location already has an active assignment
|
||||
existing_assignment = db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == location_id,
|
||||
UnitAssignment.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
if existing_assignment:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail=f"Location already has an active unit assignment ({existing_assignment.unit_id}). Unassign first.",
|
||||
)
|
||||
|
||||
# Create new assignment
|
||||
assigned_until_str = form_data.get("assigned_until")
|
||||
assigned_until = datetime.fromisoformat(assigned_until_str) if assigned_until_str else None
|
||||
|
||||
assignment = UnitAssignment(
|
||||
id=str(uuid.uuid4()),
|
||||
unit_id=unit_id,
|
||||
location_id=location_id,
|
||||
project_id=project_id,
|
||||
device_type=unit.device_type,
|
||||
assigned_until=assigned_until,
|
||||
status="active",
|
||||
notes=form_data.get("notes"),
|
||||
)
|
||||
|
||||
db.add(assignment)
|
||||
db.commit()
|
||||
db.refresh(assignment)
|
||||
|
||||
return JSONResponse({
|
||||
"success": True,
|
||||
"assignment_id": assignment.id,
|
||||
"message": f"Unit '{unit_id}' assigned to '{location.name}'",
|
||||
})
|
||||
|
||||
|
||||
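Assignment is also form-driven and enforces both the sound-to-SLM / vibration-to-seismograph pairing and the one-active-assignment-per-location rule checked above. A sketch with placeholder host and IDs:

```python
import httpx

resp = httpx.post(
    "http://localhost:8000/api/projects/PROJ-001/locations/LOC-UUID/assign",
    data={
        "unit_id": "SLM-042",                     # must be an "slm" for a sound location
        "assigned_until": "2026-02-15T00:00:00",  # optional ISO timestamp
        "notes": "Weeknight monitoring",
    },
)
resp.raise_for_status()  # 400 if the device type mismatches or the location is already assigned
print(resp.json()["assignment_id"])
```
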
@router.post("/assignments/{assignment_id}/unassign")
|
||||
async def unassign_unit(
|
||||
project_id: str,
|
||||
assignment_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Unassign a unit from a location.
|
||||
"""
|
||||
assignment = db.query(UnitAssignment).filter_by(
|
||||
id=assignment_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not assignment:
|
||||
raise HTTPException(status_code=404, detail="Assignment not found")
|
||||
|
||||
# Check if there are active recording sessions
|
||||
active_sessions = db.query(RecordingSession).filter(
|
||||
and_(
|
||||
RecordingSession.location_id == assignment.location_id,
|
||||
RecordingSession.unit_id == assignment.unit_id,
|
||||
RecordingSession.status == "recording",
|
||||
)
|
||||
).count()
|
||||
|
||||
if active_sessions > 0:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="Cannot unassign unit with active recording sessions. Stop recording first.",
|
||||
)
|
||||
|
||||
assignment.status = "completed"
|
||||
assignment.assigned_until = datetime.utcnow()
|
||||
|
||||
db.commit()
|
||||
|
||||
return {"success": True, "message": "Unit unassigned successfully"}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Available Units for Assignment
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/available-units", response_class=JSONResponse)
|
||||
async def get_available_units(
|
||||
project_id: str,
|
||||
location_type: str = Query(...),
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get list of available units for assignment to a location.
|
||||
Filters by device type matching the location type.
|
||||
"""
|
||||
# Determine required device type
|
||||
required_device_type = "slm" if location_type == "sound" else "seismograph"
|
||||
|
||||
# Get all units of the required type that are deployed and not retired
|
||||
all_units = db.query(RosterUnit).filter(
|
||||
and_(
|
||||
RosterUnit.device_type == required_device_type,
|
||||
RosterUnit.deployed == True,
|
||||
RosterUnit.retired == False,
|
||||
)
|
||||
).all()
|
||||
|
||||
# Filter out units that already have active assignments
|
||||
assigned_unit_ids = db.query(UnitAssignment.unit_id).filter(
|
||||
UnitAssignment.status == "active"
|
||||
).distinct().all()
|
||||
assigned_unit_ids = [uid[0] for uid in assigned_unit_ids]
|
||||
|
||||
available_units = [
|
||||
{
|
||||
"id": unit.id,
|
||||
"device_type": unit.device_type,
|
||||
"location": unit.address or unit.location,
|
||||
"model": unit.slm_model if unit.device_type == "slm" else unit.unit_type,
|
||||
}
|
||||
for unit in all_units
|
||||
if unit.id not in assigned_unit_ids
|
||||
]
|
||||
|
||||
return available_units
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# NRL-specific endpoints for detail page
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/nrl/{location_id}/sessions", response_class=HTMLResponse)
|
||||
async def get_nrl_sessions(
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get recording sessions for a specific NRL.
|
||||
Returns HTML partial with session list.
|
||||
"""
|
||||
from backend.models import RecordingSession, RosterUnit
|
||||
|
||||
sessions = db.query(RecordingSession).filter_by(
|
||||
location_id=location_id
|
||||
).order_by(RecordingSession.started_at.desc()).all()
|
||||
|
||||
# Enrich with unit details
|
||||
sessions_data = []
|
||||
for session in sessions:
|
||||
unit = None
|
||||
if session.unit_id:
|
||||
unit = db.query(RosterUnit).filter_by(id=session.unit_id).first()
|
||||
|
||||
sessions_data.append({
|
||||
"session": session,
|
||||
"unit": unit,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/projects/session_list.html", {
|
||||
"request": request,
|
||||
"project_id": project_id,
|
||||
"location_id": location_id,
|
||||
"sessions": sessions_data,
|
||||
})
|
||||
|
||||
|
||||
@router.get("/nrl/{location_id}/files", response_class=HTMLResponse)
|
||||
async def get_nrl_files(
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get data files for a specific NRL.
|
||||
Returns HTML partial with file list.
|
||||
"""
|
||||
from backend.models import DataFile, RecordingSession
|
||||
|
||||
# Join DataFile with RecordingSession to filter by location_id
|
||||
files = db.query(DataFile).join(
|
||||
RecordingSession,
|
||||
DataFile.session_id == RecordingSession.id
|
||||
).filter(
|
||||
RecordingSession.location_id == location_id
|
||||
).order_by(DataFile.created_at.desc()).all()
|
||||
|
||||
# Enrich with session details
|
||||
files_data = []
|
||||
for file in files:
|
||||
session = None
|
||||
if file.session_id:
|
||||
session = db.query(RecordingSession).filter_by(id=file.session_id).first()
|
||||
|
||||
files_data.append({
|
||||
"file": file,
|
||||
"session": session,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/projects/file_list.html", {
|
||||
"request": request,
|
||||
"project_id": project_id,
|
||||
"location_id": location_id,
|
||||
"files": files_data,
|
||||
})
|
||||
backend/routers/projects.py (new file, 2620 lines)

backend/routers/recurring_schedules.py (new file, 465 lines)
@@ -0,0 +1,465 @@
"""
|
||||
Recurring Schedules Router
|
||||
|
||||
API endpoints for managing recurring monitoring schedules.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Request, Depends, HTTPException, Query
|
||||
from fastapi.responses import HTMLResponse, JSONResponse
|
||||
from sqlalchemy.orm import Session
|
||||
from typing import Optional
|
||||
from datetime import datetime
|
||||
import json
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RecurringSchedule, MonitoringLocation, Project, RosterUnit
|
||||
from backend.services.recurring_schedule_service import get_recurring_schedule_service
|
||||
from backend.templates_config import templates
|
||||
|
||||
router = APIRouter(prefix="/api/projects/{project_id}/recurring-schedules", tags=["recurring-schedules"])
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# List and Get
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/")
|
||||
async def list_recurring_schedules(
|
||||
project_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
enabled_only: bool = Query(False),
|
||||
):
|
||||
"""
|
||||
List all recurring schedules for a project.
|
||||
"""
|
||||
project = db.query(Project).filter_by(id=project_id).first()
|
||||
if not project:
|
||||
raise HTTPException(status_code=404, detail="Project not found")
|
||||
|
||||
query = db.query(RecurringSchedule).filter_by(project_id=project_id)
|
||||
if enabled_only:
|
||||
query = query.filter_by(enabled=True)
|
||||
|
||||
schedules = query.order_by(RecurringSchedule.created_at.desc()).all()
|
||||
|
||||
return {
|
||||
"schedules": [
|
||||
{
|
||||
"id": s.id,
|
||||
"name": s.name,
|
||||
"schedule_type": s.schedule_type,
|
||||
"device_type": s.device_type,
|
||||
"location_id": s.location_id,
|
||||
"unit_id": s.unit_id,
|
||||
"enabled": s.enabled,
|
||||
"weekly_pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
|
||||
"interval_type": s.interval_type,
|
||||
"cycle_time": s.cycle_time,
|
||||
"include_download": s.include_download,
|
||||
"timezone": s.timezone,
|
||||
"next_occurrence": s.next_occurrence.isoformat() if s.next_occurrence else None,
|
||||
"last_generated_at": s.last_generated_at.isoformat() if s.last_generated_at else None,
|
||||
"created_at": s.created_at.isoformat() if s.created_at else None,
|
||||
}
|
||||
for s in schedules
|
||||
],
|
||||
"count": len(schedules),
|
||||
}
|
||||
|
||||
|
||||
@router.get("/{schedule_id}")
|
||||
async def get_recurring_schedule(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Get a specific recurring schedule.
|
||||
"""
|
||||
schedule = db.query(RecurringSchedule).filter_by(
|
||||
id=schedule_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
# Get related location and unit info
|
||||
location = db.query(MonitoringLocation).filter_by(id=schedule.location_id).first()
|
||||
unit = None
|
||||
if schedule.unit_id:
|
||||
unit = db.query(RosterUnit).filter_by(id=schedule.unit_id).first()
|
||||
|
||||
return {
|
||||
"id": schedule.id,
|
||||
"name": schedule.name,
|
||||
"schedule_type": schedule.schedule_type,
|
||||
"device_type": schedule.device_type,
|
||||
"location_id": schedule.location_id,
|
||||
"location_name": location.name if location else None,
|
||||
"unit_id": schedule.unit_id,
|
||||
"unit_name": unit.id if unit else None,
|
||||
"enabled": schedule.enabled,
|
||||
"weekly_pattern": json.loads(schedule.weekly_pattern) if schedule.weekly_pattern else None,
|
||||
"interval_type": schedule.interval_type,
|
||||
"cycle_time": schedule.cycle_time,
|
||||
"include_download": schedule.include_download,
|
||||
"timezone": schedule.timezone,
|
||||
"next_occurrence": schedule.next_occurrence.isoformat() if schedule.next_occurrence else None,
|
||||
"last_generated_at": schedule.last_generated_at.isoformat() if schedule.last_generated_at else None,
|
||||
"created_at": schedule.created_at.isoformat() if schedule.created_at else None,
|
||||
"updated_at": schedule.updated_at.isoformat() if schedule.updated_at else None,
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Create
|
||||
# ============================================================================
|
||||
|
||||
@router.post("/")
|
||||
async def create_recurring_schedule(
|
||||
project_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Create recurring schedules for one or more locations.
|
||||
|
||||
Body for weekly_calendar (supports multiple locations):
|
||||
{
|
||||
"name": "Weeknight Monitoring",
|
||||
"schedule_type": "weekly_calendar",
|
||||
"location_ids": ["uuid1", "uuid2"], // Array of location IDs
|
||||
"weekly_pattern": {
|
||||
"monday": {"enabled": true, "start": "19:00", "end": "07:00"},
|
||||
"tuesday": {"enabled": false},
|
||||
...
|
||||
},
|
||||
"include_download": true,
|
||||
"auto_increment_index": true,
|
||||
"timezone": "America/New_York"
|
||||
}
|
||||
|
||||
Body for simple_interval (supports multiple locations):
|
||||
{
|
||||
"name": "24/7 Continuous",
|
||||
"schedule_type": "simple_interval",
|
||||
"location_ids": ["uuid1", "uuid2"], // Array of location IDs
|
||||
"interval_type": "daily",
|
||||
"cycle_time": "00:00",
|
||||
"include_download": true,
|
||||
"auto_increment_index": true,
|
||||
"timezone": "America/New_York"
|
||||
}
|
||||
|
||||
Legacy single location support (backwards compatible):
|
||||
{
|
||||
"name": "...",
|
||||
"location_id": "uuid", // Single location ID
|
||||
...
|
||||
}
|
||||
"""
|
||||
project = db.query(Project).filter_by(id=project_id).first()
|
||||
if not project:
|
||||
raise HTTPException(status_code=404, detail="Project not found")
|
||||
|
||||
data = await request.json()
|
||||
|
||||
# Support both location_ids (array) and location_id (single) for backwards compatibility
|
||||
location_ids = data.get("location_ids", [])
|
||||
if not location_ids and data.get("location_id"):
|
||||
location_ids = [data.get("location_id")]
|
||||
|
||||
if not location_ids:
|
||||
raise HTTPException(status_code=400, detail="At least one location is required")
|
||||
|
||||
# Validate all locations exist
|
||||
locations = db.query(MonitoringLocation).filter(
|
||||
MonitoringLocation.id.in_(location_ids),
|
||||
MonitoringLocation.project_id == project_id,
|
||||
).all()
|
||||
|
||||
if len(locations) != len(location_ids):
|
||||
raise HTTPException(status_code=404, detail="One or more locations not found")
|
||||
|
||||
service = get_recurring_schedule_service(db)
|
||||
created_schedules = []
|
||||
base_name = data.get("name", "Unnamed Schedule")
|
||||
|
||||
# Create a schedule for each location
|
||||
for location in locations:
|
||||
# Determine device type from location
|
||||
device_type = "slm" if location.location_type == "sound" else "seismograph"
|
||||
|
||||
# Append location name if multiple locations
|
||||
schedule_name = f"{base_name} - {location.name}" if len(locations) > 1 else base_name
|
||||
|
||||
schedule = service.create_schedule(
|
||||
project_id=project_id,
|
||||
location_id=location.id,
|
||||
name=schedule_name,
|
||||
schedule_type=data.get("schedule_type", "weekly_calendar"),
|
||||
device_type=device_type,
|
||||
unit_id=data.get("unit_id"),
|
||||
weekly_pattern=data.get("weekly_pattern"),
|
||||
interval_type=data.get("interval_type"),
|
||||
cycle_time=data.get("cycle_time"),
|
||||
include_download=data.get("include_download", True),
|
||||
auto_increment_index=data.get("auto_increment_index", True),
|
||||
timezone=data.get("timezone", "America/New_York"),
|
||||
)
|
||||
|
||||
# Generate actions immediately so they appear right away
|
||||
generated_actions = service.generate_actions_for_schedule(schedule, horizon_days=7)
|
||||
|
||||
created_schedules.append({
|
||||
"schedule_id": schedule.id,
|
||||
"location_id": location.id,
|
||||
"location_name": location.name,
|
||||
"actions_generated": len(generated_actions),
|
||||
})
|
||||
|
||||
total_actions = sum(s.get("actions_generated", 0) for s in created_schedules)
|
||||
|
||||
return JSONResponse({
|
||||
"success": True,
|
||||
"schedules": created_schedules,
|
||||
"count": len(created_schedules),
|
||||
"actions_generated": total_actions,
|
||||
"message": f"Created {len(created_schedules)} recurring schedule(s) with {total_actions} upcoming actions",
|
||||
})
|
||||
|
||||
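A request sketch matching the first documented body shape (a weekly_calendar schedule across two locations). Host, IDs, and times are placeholders; the keys are exactly those consumed by the handler above:

```python
import httpx

payload = {
    "name": "Weeknight Monitoring",
    "schedule_type": "weekly_calendar",
    "location_ids": ["uuid1", "uuid2"],
    "weekly_pattern": {
        "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
        "tuesday": {"enabled": False},
    },
    "include_download": True,
    "auto_increment_index": True,
    "timezone": "America/New_York",
}

resp = httpx.post(
    "http://localhost:8000/api/projects/PROJ-001/recurring-schedules/",
    json=payload,
)
print(resp.json()["message"])  # e.g. "Created 2 recurring schedule(s) with N upcoming actions"
```
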
|
||||
# ============================================================================
|
||||
# Update
|
||||
# ============================================================================
|
||||
|
||||
@router.put("/{schedule_id}")
|
||||
async def update_recurring_schedule(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Update a recurring schedule.
|
||||
"""
|
||||
schedule = db.query(RecurringSchedule).filter_by(
|
||||
id=schedule_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
data = await request.json()
|
||||
service = get_recurring_schedule_service(db)
|
||||
|
||||
# Build update kwargs
|
||||
update_kwargs = {}
|
||||
for field in ["name", "weekly_pattern", "interval_type", "cycle_time",
|
||||
"include_download", "auto_increment_index", "timezone", "unit_id"]:
|
||||
if field in data:
|
||||
update_kwargs[field] = data[field]
|
||||
|
||||
updated = service.update_schedule(schedule_id, **update_kwargs)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"schedule_id": updated.id,
|
||||
"message": "Schedule updated successfully",
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Delete
|
||||
# ============================================================================
|
||||
|
||||
@router.delete("/{schedule_id}")
|
||||
async def delete_recurring_schedule(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Delete a recurring schedule.
|
||||
"""
|
||||
service = get_recurring_schedule_service(db)
|
||||
deleted = service.delete_schedule(schedule_id)
|
||||
|
||||
if not deleted:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"message": "Schedule deleted successfully",
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Enable/Disable
|
||||
# ============================================================================
|
||||
|
||||
@router.post("/{schedule_id}/enable")
|
||||
async def enable_schedule(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Enable a disabled schedule.
|
||||
"""
|
||||
service = get_recurring_schedule_service(db)
|
||||
schedule = service.enable_schedule(schedule_id)
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"schedule_id": schedule.id,
|
||||
"enabled": schedule.enabled,
|
||||
"message": "Schedule enabled",
|
||||
}
|
||||
|
||||
|
||||
@router.post("/{schedule_id}/disable")
|
||||
async def disable_schedule(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Disable a schedule.
|
||||
"""
|
||||
service = get_recurring_schedule_service(db)
|
||||
schedule = service.disable_schedule(schedule_id)
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"schedule_id": schedule.id,
|
||||
"enabled": schedule.enabled,
|
||||
"message": "Schedule disabled",
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Preview Generated Actions
|
||||
# ============================================================================
|
||||
|
||||
@router.post("/{schedule_id}/generate-preview")
|
||||
async def preview_generated_actions(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
days: int = Query(7, ge=1, le=30),
|
||||
):
|
||||
"""
|
||||
Preview what actions would be generated without saving them.
|
||||
"""
|
||||
schedule = db.query(RecurringSchedule).filter_by(
|
||||
id=schedule_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
service = get_recurring_schedule_service(db)
|
||||
actions = service.generate_actions_for_schedule(
|
||||
schedule,
|
||||
horizon_days=days,
|
||||
preview_only=True,
|
||||
)
|
||||
|
||||
return {
|
||||
"schedule_id": schedule_id,
|
||||
"schedule_name": schedule.name,
|
||||
"preview_days": days,
|
||||
"actions": [
|
||||
{
|
||||
"action_type": a.action_type,
|
||||
"scheduled_time": a.scheduled_time.isoformat(),
|
||||
"notes": a.notes,
|
||||
}
|
||||
for a in actions
|
||||
],
|
||||
"action_count": len(actions),
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# Manual Generation Trigger
|
||||
# ============================================================================
|
||||
|
||||
@router.post("/{schedule_id}/generate")
|
||||
async def generate_actions_now(
|
||||
project_id: str,
|
||||
schedule_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
days: int = Query(7, ge=1, le=30),
|
||||
):
|
||||
"""
|
||||
Manually trigger action generation for a schedule.
|
||||
"""
|
||||
schedule = db.query(RecurringSchedule).filter_by(
|
||||
id=schedule_id,
|
||||
project_id=project_id,
|
||||
).first()
|
||||
|
||||
if not schedule:
|
||||
raise HTTPException(status_code=404, detail="Schedule not found")
|
||||
|
||||
if not schedule.enabled:
|
||||
raise HTTPException(status_code=400, detail="Schedule is disabled")
|
||||
|
||||
service = get_recurring_schedule_service(db)
|
||||
actions = service.generate_actions_for_schedule(
|
||||
schedule,
|
||||
horizon_days=days,
|
||||
preview_only=False,
|
||||
)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"schedule_id": schedule_id,
|
||||
"generated_count": len(actions),
|
||||
"message": f"Generated {len(actions)} scheduled actions",
|
||||
}
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# HTML Partials
|
||||
# ============================================================================
|
||||
|
||||
@router.get("/partials/list", response_class=HTMLResponse)
|
||||
async def get_schedule_list_partial(
|
||||
project_id: str,
|
||||
request: Request,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Return HTML partial for schedule list.
|
||||
"""
|
||||
schedules = db.query(RecurringSchedule).filter_by(
|
||||
project_id=project_id
|
||||
).order_by(RecurringSchedule.created_at.desc()).all()
|
||||
|
||||
# Enrich with location info
|
||||
schedule_data = []
|
||||
for s in schedules:
|
||||
location = db.query(MonitoringLocation).filter_by(id=s.location_id).first()
|
||||
schedule_data.append({
|
||||
"schedule": s,
|
||||
"location": location,
|
||||
"pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
|
||||
})
|
||||
|
||||
return templates.TemplateResponse("partials/projects/recurring_schedule_list.html", {
|
||||
"request": request,
|
||||
"project_id": project_id,
|
||||
"schedules": schedule_data,
|
||||
})
|
||||
backend/routers/report_templates.py (new file, 187 lines)
@@ -0,0 +1,187 @@
"""
|
||||
Report Templates Router
|
||||
|
||||
CRUD operations for report template management.
|
||||
Templates store time filter presets and report configuration for reuse.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Depends, HTTPException
|
||||
from fastapi.responses import JSONResponse
|
||||
from sqlalchemy.orm import Session
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
import uuid
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import ReportTemplate
|
||||
|
||||
router = APIRouter(prefix="/api/report-templates", tags=["report-templates"])
|
||||
|
||||
|
||||
@router.get("")
|
||||
async def list_templates(
|
||||
project_id: Optional[str] = None,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
List all report templates.
|
||||
Optionally filter by project_id (includes global templates with project_id=None).
|
||||
"""
|
||||
query = db.query(ReportTemplate)
|
||||
|
||||
if project_id:
|
||||
# Include global templates (project_id=None) AND project-specific templates
|
||||
query = query.filter(
|
||||
(ReportTemplate.project_id == None) | (ReportTemplate.project_id == project_id)
|
||||
)
|
||||
|
||||
templates = query.order_by(ReportTemplate.name).all()
|
||||
|
||||
return [
|
||||
{
|
||||
"id": t.id,
|
||||
"name": t.name,
|
||||
"project_id": t.project_id,
|
||||
"report_title": t.report_title,
|
||||
"start_time": t.start_time,
|
||||
"end_time": t.end_time,
|
||||
"start_date": t.start_date,
|
||||
"end_date": t.end_date,
|
||||
"created_at": t.created_at.isoformat() if t.created_at else None,
|
||||
"updated_at": t.updated_at.isoformat() if t.updated_at else None,
|
||||
}
|
||||
for t in templates
|
||||
]
|
||||
|
||||
|
||||
@router.post("")
|
||||
async def create_template(
|
||||
data: dict,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""
|
||||
Create a new report template.
|
||||
|
||||
Request body:
|
||||
- name: Template name (required)
|
||||
- project_id: Optional project ID for project-specific template
|
||||
- report_title: Default report title
|
||||
- start_time: Start time filter (HH:MM format)
|
||||
- end_time: End time filter (HH:MM format)
|
||||
- start_date: Start date filter (YYYY-MM-DD format)
|
||||
- end_date: End date filter (YYYY-MM-DD format)
|
||||
"""
|
||||
name = data.get("name")
|
||||
if not name:
|
||||
raise HTTPException(status_code=400, detail="Template name is required")
|
||||
|
||||
template = ReportTemplate(
|
||||
id=str(uuid.uuid4()),
|
||||
name=name,
|
||||
project_id=data.get("project_id"),
|
||||
report_title=data.get("report_title", "Background Noise Study"),
|
||||
start_time=data.get("start_time"),
|
||||
end_time=data.get("end_time"),
|
||||
start_date=data.get("start_date"),
|
||||
end_date=data.get("end_date"),
|
||||
)
|
||||
|
||||
db.add(template)
|
||||
db.commit()
|
||||
db.refresh(template)
|
||||
|
||||
return {
|
||||
"id": template.id,
|
||||
"name": template.name,
|
||||
"project_id": template.project_id,
|
||||
"report_title": template.report_title,
|
||||
"start_time": template.start_time,
|
||||
"end_time": template.end_time,
|
||||
"start_date": template.start_date,
|
||||
"end_date": template.end_date,
|
||||
"created_at": template.created_at.isoformat() if template.created_at else None,
|
||||
}
|
||||
|
||||
|
||||
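Creating a template is a plain JSON POST; the body keys mirror the fields documented above (host and values below are placeholders):

```python
import httpx

resp = httpx.post(
    "http://localhost:8000/api/report-templates",
    json={
        "name": "Overnight preset",
        "report_title": "Background Noise Study",
        "start_time": "22:00",
        "end_time": "07:00",
        "start_date": "2026-01-01",
        "end_date": "2026-01-31",
    },
)
print(resp.json()["id"])
```
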
@router.get("/{template_id}")
|
||||
async def get_template(
|
||||
template_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""Get a specific report template by ID."""
|
||||
template = db.query(ReportTemplate).filter_by(id=template_id).first()
|
||||
if not template:
|
||||
raise HTTPException(status_code=404, detail="Template not found")
|
||||
|
||||
return {
|
||||
"id": template.id,
|
||||
"name": template.name,
|
||||
"project_id": template.project_id,
|
||||
"report_title": template.report_title,
|
||||
"start_time": template.start_time,
|
||||
"end_time": template.end_time,
|
||||
"start_date": template.start_date,
|
||||
"end_date": template.end_date,
|
||||
"created_at": template.created_at.isoformat() if template.created_at else None,
|
||||
"updated_at": template.updated_at.isoformat() if template.updated_at else None,
|
||||
}
|
||||
|
||||
|
||||
@router.put("/{template_id}")
|
||||
async def update_template(
|
||||
template_id: str,
|
||||
data: dict,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""Update an existing report template."""
|
||||
template = db.query(ReportTemplate).filter_by(id=template_id).first()
|
||||
if not template:
|
||||
raise HTTPException(status_code=404, detail="Template not found")
|
||||
|
||||
# Update fields if provided
|
||||
if "name" in data:
|
||||
template.name = data["name"]
|
||||
if "project_id" in data:
|
||||
template.project_id = data["project_id"]
|
||||
if "report_title" in data:
|
||||
template.report_title = data["report_title"]
|
||||
if "start_time" in data:
|
||||
template.start_time = data["start_time"]
|
||||
if "end_time" in data:
|
||||
template.end_time = data["end_time"]
|
||||
if "start_date" in data:
|
||||
template.start_date = data["start_date"]
|
||||
if "end_date" in data:
|
||||
template.end_date = data["end_date"]
|
||||
|
||||
template.updated_at = datetime.utcnow()
|
||||
db.commit()
|
||||
db.refresh(template)
|
||||
|
||||
return {
|
||||
"id": template.id,
|
||||
"name": template.name,
|
||||
"project_id": template.project_id,
|
||||
"report_title": template.report_title,
|
||||
"start_time": template.start_time,
|
||||
"end_time": template.end_time,
|
||||
"start_date": template.start_date,
|
||||
"end_date": template.end_date,
|
||||
"updated_at": template.updated_at.isoformat() if template.updated_at else None,
|
||||
}
|
||||
|
||||
|
||||
@router.delete("/{template_id}")
|
||||
async def delete_template(
|
||||
template_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
):
|
||||
"""Delete a report template."""
|
||||
template = db.query(ReportTemplate).filter_by(id=template_id).first()
|
||||
if not template:
|
||||
raise HTTPException(status_code=404, detail="Template not found")
|
||||
|
||||
db.delete(template)
|
||||
db.commit()
|
||||
|
||||
return JSONResponse({"status": "success", "message": "Template deleted"})
|
||||
backend/routers/roster.py (new file, 46 lines)
@@ -0,0 +1,46 @@
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from datetime import datetime, timedelta
from typing import Dict, Any
import random

from backend.database import get_db
from backend.services.snapshot import emit_status_snapshot

router = APIRouter(prefix="/api", tags=["roster"])


@router.get("/status-snapshot")
def get_status_snapshot(db: Session = Depends(get_db)):
    """
    Calls emit_status_snapshot() to get current fleet status.
    This will be replaced with real Series3 emitter logic later.
    """
    return emit_status_snapshot()


@router.get("/roster")
def get_roster(db: Session = Depends(get_db)):
    """
    Returns list of units with their metadata and status.
    Uses mock data for now.
    """
    snapshot = emit_status_snapshot()
    units_list = []

    for unit_id, unit_data in snapshot["units"].items():
        units_list.append({
            "id": unit_id,
            "status": unit_data["status"],
            "age": unit_data["age"],
            "last_seen": unit_data["last"],
            "deployed": unit_data["deployed"],
            "note": unit_data.get("note", ""),
            "last_file": unit_data.get("fname", "")
        })

    # Sort by status priority (Missing > Pending > OK) then by ID
    status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
    units_list.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))

    return {"units": units_list}

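The sort key orders the roster by urgency before ID, so missing units float to the top. A tiny illustration with made-up entries:

```python
status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
units = [
    {"id": "BE7012", "status": "OK"},
    {"id": "BE6900", "status": "Missing"},
    {"id": "BE7003", "status": "Pending"},
]
units.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))
# -> BE6900 (Missing), BE7003 (Pending), BE7012 (OK)
```
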
backend/routers/roster_edit.py (new file, 1158 lines)

backend/routers/roster_rename.py (new file, 139 lines)
@@ -0,0 +1,139 @@
"""
|
||||
Roster Unit Rename Router
|
||||
|
||||
Provides endpoint for safely renaming unit IDs across all database tables.
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, Depends, HTTPException, Form
|
||||
from sqlalchemy.orm import Session
|
||||
from datetime import datetime
|
||||
import logging
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import RosterUnit, Emitter, UnitHistory
|
||||
from backend.routers.roster_edit import record_history, sync_slm_to_slmm_cache
|
||||
|
||||
router = APIRouter(prefix="/api/roster", tags=["roster-rename"])
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@router.post("/rename")
|
||||
async def rename_unit(
|
||||
old_id: str = Form(...),
|
||||
new_id: str = Form(...),
|
||||
db: Session = Depends(get_db)
|
||||
):
|
||||
"""
|
||||
Rename a unit ID across all tables.
|
||||
Updates the unit ID in roster, emitters, unit_history, and all foreign key references.
|
||||
|
||||
IMPORTANT: This operation updates the primary key, which affects all relationships.
|
||||
"""
|
||||
# Validate input
|
||||
if not old_id or not new_id:
|
||||
raise HTTPException(status_code=400, detail="Both old_id and new_id are required")
|
||||
|
||||
if old_id == new_id:
|
||||
raise HTTPException(status_code=400, detail="New ID must be different from old ID")
|
||||
|
||||
# Check if old unit exists
|
||||
old_unit = db.query(RosterUnit).filter(RosterUnit.id == old_id).first()
|
||||
if not old_unit:
|
||||
raise HTTPException(status_code=404, detail=f"Unit '{old_id}' not found")
|
||||
|
||||
# Check if new ID already exists
|
||||
existing_unit = db.query(RosterUnit).filter(RosterUnit.id == new_id).first()
|
||||
if existing_unit:
|
||||
raise HTTPException(status_code=409, detail=f"Unit ID '{new_id}' already exists")
|
||||
|
||||
device_type = old_unit.device_type
|
||||
|
||||
try:
|
||||
# Record history for the rename operation (using old_id since that's still valid)
|
||||
record_history(
|
||||
db=db,
|
||||
unit_id=old_id,
|
||||
change_type="id_change",
|
||||
field_name="id",
|
||||
old_value=old_id,
|
||||
new_value=new_id,
|
||||
source="manual",
|
||||
notes=f"Unit renamed from '{old_id}' to '{new_id}'"
|
||||
)
|
||||
|
||||
# Update roster table (primary)
|
||||
old_unit.id = new_id
|
||||
old_unit.last_updated = datetime.utcnow()
|
||||
|
||||
# Update emitters table
|
||||
emitter = db.query(Emitter).filter(Emitter.id == old_id).first()
|
||||
if emitter:
|
||||
emitter.id = new_id
|
||||
|
||||
# Update unit_history table (all entries for this unit)
|
||||
db.query(UnitHistory).filter(UnitHistory.unit_id == old_id).update(
|
||||
{"unit_id": new_id},
|
||||
synchronize_session=False
|
||||
)
|
||||
|
||||
# Update deployed_with_modem_id references (units that reference this as modem)
|
||||
db.query(RosterUnit).filter(RosterUnit.deployed_with_modem_id == old_id).update(
|
||||
{"deployed_with_modem_id": new_id},
|
||||
synchronize_session=False
|
||||
)
|
||||
|
||||
# Update unit_assignments table (if exists)
|
||||
try:
|
||||
from backend.models import UnitAssignment
|
||||
db.query(UnitAssignment).filter(UnitAssignment.unit_id == old_id).update(
|
||||
{"unit_id": new_id},
|
||||
synchronize_session=False
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not update unit_assignments: {e}")
|
||||
|
||||
# Update recording_sessions table (if exists)
|
||||
try:
|
||||
from backend.models import RecordingSession
|
||||
db.query(RecordingSession).filter(RecordingSession.unit_id == old_id).update(
|
||||
{"unit_id": new_id},
|
||||
synchronize_session=False
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning(f"Could not update recording_sessions: {e}")
|
||||
|
||||
# Commit all changes
|
||||
db.commit()
|
||||
|
||||
# If sound level meter, sync updated config to SLMM cache
|
||||
if device_type == "slm":
|
||||
logger.info(f"Syncing renamed SLM {new_id} (was {old_id}) config to SLMM cache...")
|
||||
result = await sync_slm_to_slmm_cache(
|
||||
unit_id=new_id,
|
||||
host=old_unit.slm_host,
|
||||
tcp_port=old_unit.slm_tcp_port,
|
||||
ftp_port=old_unit.slm_ftp_port,
|
||||
deployed_with_modem_id=old_unit.deployed_with_modem_id,
|
||||
db=db
|
||||
)
|
||||
|
||||
if not result["success"]:
|
||||
logger.warning(f"SLMM cache sync warning for renamed unit {new_id}: {result['message']}")
|
||||
|
||||
logger.info(f"Successfully renamed unit '{old_id}' to '{new_id}'")
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"message": f"Successfully renamed unit from '{old_id}' to '{new_id}'",
|
||||
"old_id": old_id,
|
||||
"new_id": new_id,
|
||||
"device_type": device_type
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
logger.error(f"Error renaming unit '{old_id}' to '{new_id}': {e}")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Failed to rename unit: {str(e)}"
|
||||
)
|
||||
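A minimal client-side sketch of how the rename endpoint could be exercised; the base URL and unit IDs are placeholders, not values from the diff. The endpoint expects form fields, not JSON:

```python
# Hedged example: renaming a roster unit from a script with httpx.
import httpx

resp = httpx.post(
    "http://localhost:8000/api/roster/rename",   # assumed Terra-View base URL
    data={"old_id": "BE1234", "new_id": "BE1234-R"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"success": True, "old_id": "BE1234", "new_id": "BE1234-R", ...}
```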
408  backend/routers/scheduler.py  Normal file
@@ -0,0 +1,408 @@
"""
Scheduler Router

Handles scheduled actions for automated recording control.
"""

from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from sqlalchemy import and_, or_
from datetime import datetime, timedelta
from typing import Optional
import uuid
import json

from backend.database import get_db
from backend.models import (
    Project,
    ScheduledAction,
    MonitoringLocation,
    UnitAssignment,
    RosterUnit,
)
from backend.services.scheduler import get_scheduler
from backend.templates_config import templates

router = APIRouter(prefix="/api/projects/{project_id}/scheduler", tags=["scheduler"])


# ============================================================================
# Scheduled Actions List
# ============================================================================

@router.get("/actions", response_class=HTMLResponse)
async def get_scheduled_actions(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
    status: Optional[str] = Query(None),
    start_date: Optional[str] = Query(None),
    end_date: Optional[str] = Query(None),
):
    """
    Get scheduled actions for a project.
    Returns HTML partial with agenda/calendar view.
    """
    query = db.query(ScheduledAction).filter_by(project_id=project_id)

    # Filter by status
    if status:
        query = query.filter_by(execution_status=status)
    else:
        # By default, show pending and upcoming completed/failed
        query = query.filter(
            or_(
                ScheduledAction.execution_status == "pending",
                and_(
                    ScheduledAction.execution_status.in_(["completed", "failed"]),
                    ScheduledAction.scheduled_time >= datetime.utcnow() - timedelta(days=7),
                ),
            )
        )

    # Filter by date range
    if start_date:
        query = query.filter(ScheduledAction.scheduled_time >= datetime.fromisoformat(start_date))
    if end_date:
        query = query.filter(ScheduledAction.scheduled_time <= datetime.fromisoformat(end_date))

    actions = query.order_by(ScheduledAction.scheduled_time).all()

    # Enrich with location and unit details
    actions_data = []
    for action in actions:
        location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()

        unit = None
        if action.unit_id:
            unit = db.query(RosterUnit).filter_by(id=action.unit_id).first()
        else:
            # Get from assignment
            assignment = db.query(UnitAssignment).filter(
                and_(
                    UnitAssignment.location_id == action.location_id,
                    UnitAssignment.status == "active",
                )
            ).first()
            if assignment:
                unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()

        actions_data.append({
            "action": action,
            "location": location,
            "unit": unit,
        })

    return templates.TemplateResponse("partials/projects/scheduler_agenda.html", {
        "request": request,
        "project_id": project_id,
        "actions": actions_data,
    })


# ============================================================================
# Create Scheduled Action
# ============================================================================

@router.post("/actions/create")
async def create_scheduled_action(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Create a new scheduled action.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location_id = form_data.get("location_id")
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    # Determine device type from location
    device_type = "slm" if location.location_type == "sound" else "seismograph"

    # Get unit_id (optional - can be determined from assignment at execution time)
    unit_id = form_data.get("unit_id")

    action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type=form_data.get("action_type"),
        device_type=device_type,
        scheduled_time=datetime.fromisoformat(form_data.get("scheduled_time")),
        execution_status="pending",
        notes=form_data.get("notes"),
    )

    db.add(action)
    db.commit()
    db.refresh(action)

    return JSONResponse({
        "success": True,
        "action_id": action.id,
        "message": f"Scheduled action '{action.action_type}' created for {action.scheduled_time}",
    })
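A hedged sketch of creating a scheduled action from a script; the project ID, location ID, and base URL are placeholders. The form fields match the ones read by the endpoint above:

```python
# Example client call against the create-action endpoint (IDs are hypothetical).
import httpx
from datetime import datetime, timedelta

run_at = (datetime.utcnow() + timedelta(hours=2)).isoformat()
resp = httpx.post(
    "http://localhost:8000/api/projects/demo-project/scheduler/actions/create",
    data={
        "location_id": "loc-001",
        "action_type": "start",
        "scheduled_time": run_at,          # ISO-8601, parsed with datetime.fromisoformat
        "notes": "Kick off overnight monitoring",
    },
)
print(resp.json())  # {"success": True, "action_id": "...", "message": "..."}
```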
# ============================================================================
# Schedule Recording Session
# ============================================================================

@router.post("/schedule-session")
async def schedule_recording_session(
    project_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Schedule a complete recording session (start + stop).
    Creates two scheduled actions: start and stop.
    """
    project = db.query(Project).filter_by(id=project_id).first()
    if not project:
        raise HTTPException(status_code=404, detail="Project not found")

    form_data = await request.form()

    location_id = form_data.get("location_id")
    location = db.query(MonitoringLocation).filter_by(
        id=location_id,
        project_id=project_id,
    ).first()

    if not location:
        raise HTTPException(status_code=404, detail="Location not found")

    device_type = "slm" if location.location_type == "sound" else "seismograph"
    unit_id = form_data.get("unit_id")

    start_time = datetime.fromisoformat(form_data.get("start_time"))
    duration_minutes = int(form_data.get("duration_minutes", 60))
    stop_time = start_time + timedelta(minutes=duration_minutes)

    # Create START action
    start_action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type="start",
        device_type=device_type,
        scheduled_time=start_time,
        execution_status="pending",
        notes=form_data.get("notes"),
    )

    # Create STOP action
    stop_action = ScheduledAction(
        id=str(uuid.uuid4()),
        project_id=project_id,
        location_id=location_id,
        unit_id=unit_id,
        action_type="stop",
        device_type=device_type,
        scheduled_time=stop_time,
        execution_status="pending",
        notes=f"Auto-stop after {duration_minutes} minutes",
    )

    db.add(start_action)
    db.add(stop_action)
    db.commit()

    return JSONResponse({
        "success": True,
        "start_action_id": start_action.id,
        "stop_action_id": stop_action.id,
        "message": f"Recording session scheduled from {start_time} to {stop_time}",
    })


# ============================================================================
# Update/Cancel Scheduled Action
# ============================================================================

@router.put("/actions/{action_id}")
async def update_scheduled_action(
    project_id: str,
    action_id: str,
    request: Request,
    db: Session = Depends(get_db),
):
    """
    Update a scheduled action (only if not yet executed).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Cannot update action that has already been executed",
        )

    data = await request.json()

    if "scheduled_time" in data:
        action.scheduled_time = datetime.fromisoformat(data["scheduled_time"])
    if "notes" in data:
        action.notes = data["notes"]

    db.commit()

    return {"success": True, "message": "Action updated successfully"}


@router.post("/actions/{action_id}/cancel")
async def cancel_scheduled_action(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Cancel a pending scheduled action.
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Can only cancel pending actions",
        )

    action.execution_status = "cancelled"
    db.commit()

    return {"success": True, "message": "Action cancelled successfully"}


@router.delete("/actions/{action_id}")
async def delete_scheduled_action(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Delete a scheduled action (only if pending or cancelled).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status not in ["pending", "cancelled"]:
        raise HTTPException(
            status_code=400,
            detail="Cannot delete action that has been executed",
        )

    db.delete(action)
    db.commit()

    return {"success": True, "message": "Action deleted successfully"}


# ============================================================================
# Manual Execution
# ============================================================================

@router.post("/actions/{action_id}/execute")
async def execute_action_now(
    project_id: str,
    action_id: str,
    db: Session = Depends(get_db),
):
    """
    Manually trigger execution of a scheduled action (for testing/debugging).
    """
    action = db.query(ScheduledAction).filter_by(
        id=action_id,
        project_id=project_id,
    ).first()

    if not action:
        raise HTTPException(status_code=404, detail="Action not found")

    if action.execution_status != "pending":
        raise HTTPException(
            status_code=400,
            detail="Action is not pending",
        )

    # Execute via scheduler service
    scheduler = get_scheduler()
    result = await scheduler.execute_action_by_id(action_id)

    # Refresh from DB to get updated status
    db.refresh(action)

    return JSONResponse({
        "success": result.get("success", False),
        "result": result,
        "action": {
            "id": action.id,
            "execution_status": action.execution_status,
            "executed_at": action.executed_at.isoformat() if action.executed_at else None,
            "error_message": action.error_message,
        },
    })


# ============================================================================
# Scheduler Status
# ============================================================================

@router.get("/status")
async def get_scheduler_status():
    """
    Get scheduler service status.
    """
    scheduler = get_scheduler()

    return {
        "running": scheduler.running,
        "check_interval": scheduler.check_interval,
    }


@router.post("/execute-pending")
async def trigger_pending_execution():
    """
    Manually trigger execution of all pending actions (for testing).
    """
    scheduler = get_scheduler()
    results = await scheduler.execute_pending_actions()

    return {
        "success": True,
        "executed_count": len(results),
        "results": results,
    }
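The backing service (`backend/services/scheduler.py`) is not shown in this diff, so the shape below is only an assumption inferred from the `running`, `check_interval`, `execute_pending_actions`, and `execute_action_by_id` attributes used above; it is a sketch, not the project's implementation:

```python
# Hypothetical polling loop consistent with the attributes the router relies on.
import asyncio

class MiniScheduler:
    def __init__(self, check_interval: int = 30):
        self.check_interval = check_interval
        self.running = False

    async def execute_pending_actions(self) -> list[dict]:
        # Would query ScheduledAction rows with execution_status == "pending"
        # and scheduled_time <= now, then dispatch start/stop commands.
        return []

    async def run(self) -> None:
        self.running = True
        while self.running:
            await self.execute_pending_actions()
            await asyncio.sleep(self.check_interval)
```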
80  backend/routers/seismo_dashboard.py  Normal file
@@ -0,0 +1,80 @@
"""
Seismograph Dashboard API Router
Provides endpoints for the seismograph-specific dashboard
"""

from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates

router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"])


@router.get("/stats", response_class=HTMLResponse)
async def get_seismo_stats(request: Request, db: Session = Depends(get_db)):
    """
    Returns HTML partial with seismograph statistics summary
    """
    # Get all seismograph units
    seismos = db.query(RosterUnit).filter_by(
        device_type="seismograph",
        retired=False
    ).all()

    total = len(seismos)
    deployed = sum(1 for s in seismos if s.deployed)
    benched = sum(1 for s in seismos if not s.deployed)

    # Count modems assigned to deployed seismographs
    with_modem = sum(1 for s in seismos if s.deployed and s.deployed_with_modem_id)
    without_modem = deployed - with_modem

    return templates.TemplateResponse(
        "partials/seismo_stats.html",
        {
            "request": request,
            "total": total,
            "deployed": deployed,
            "benched": benched,
            "with_modem": with_modem,
            "without_modem": without_modem
        }
    )


@router.get("/units", response_class=HTMLResponse)
async def get_seismo_units(
    request: Request,
    db: Session = Depends(get_db),
    search: str = Query(None)
):
    """
    Returns HTML partial with filterable seismograph unit list
    """
    query = db.query(RosterUnit).filter_by(
        device_type="seismograph",
        retired=False
    )

    # Apply search filter
    if search:
        search_lower = search.lower()
        query = query.filter(
            (RosterUnit.id.ilike(f"%{search}%")) |
            (RosterUnit.note.ilike(f"%{search}%")) |
            (RosterUnit.address.ilike(f"%{search}%"))
        )

    seismos = query.order_by(RosterUnit.id).all()

    return templates.TemplateResponse(
        "partials/seismo_unit_list.html",
        {
            "request": request,
            "units": seismos,
            "search": search or ""
        }
    )
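Both endpoints return rendered HTML partials rather than JSON, so a client just fetches and swaps the markup. A hedged sketch (base URL assumed):

```python
# Example: fetching the filterable unit-list partial; `search` is matched
# case-insensitively against ID, note, and address via ilike().
import httpx

html = httpx.get(
    "http://localhost:8000/api/seismo-dashboard/units",
    params={"search": "BE"},
).text
print(html[:200])  # beginning of partials/seismo_unit_list.html output
```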
551  backend/routers/settings.py  Normal file
@@ -0,0 +1,551 @@
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
from fastapi.responses import StreamingResponse, FileResponse
from sqlalchemy.orm import Session
from datetime import datetime, date
from pydantic import BaseModel
from typing import Optional
import csv
import io
import shutil
from pathlib import Path

from backend.database import get_db
from backend.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
from backend.services.database_backup import DatabaseBackupService

router = APIRouter(prefix="/api/settings", tags=["settings"])


@router.get("/export-csv")
def export_roster_csv(db: Session = Depends(get_db)):
    """Export all roster units to CSV"""
    units = db.query(RosterUnit).all()

    # Create CSV in memory
    output = io.StringIO()
    fieldnames = [
        'unit_id', 'unit_type', 'device_type', 'deployed', 'retired',
        'note', 'project_id', 'location', 'address', 'coordinates',
        'last_calibrated', 'next_calibration_due', 'deployed_with_modem_id',
        'ip_address', 'phone_number', 'hardware_model'
    ]

    writer = csv.DictWriter(output, fieldnames=fieldnames)
    writer.writeheader()

    for unit in units:
        writer.writerow({
            'unit_id': unit.id,
            'unit_type': unit.unit_type or '',
            'device_type': unit.device_type or 'seismograph',
            'deployed': 'true' if unit.deployed else 'false',
            'retired': 'true' if unit.retired else 'false',
            'note': unit.note or '',
            'project_id': unit.project_id or '',
            'location': unit.location or '',
            'address': unit.address or '',
            'coordinates': unit.coordinates or '',
            'last_calibrated': unit.last_calibrated.strftime('%Y-%m-%d') if unit.last_calibrated else '',
            'next_calibration_due': unit.next_calibration_due.strftime('%Y-%m-%d') if unit.next_calibration_due else '',
            'deployed_with_modem_id': unit.deployed_with_modem_id or '',
            'ip_address': unit.ip_address or '',
            'phone_number': unit.phone_number or '',
            'hardware_model': unit.hardware_model or ''
        })

    output.seek(0)
    filename = f"roster_export_{date.today().isoformat()}.csv"

    return StreamingResponse(
        io.BytesIO(output.getvalue().encode('utf-8')),
        media_type="text/csv",
        headers={"Content-Disposition": f"attachment; filename={filename}"}
    )


@router.get("/stats")
def get_table_stats(db: Session = Depends(get_db)):
    """Get counts for all tables"""
    roster_count = db.query(RosterUnit).count()
    emitters_count = db.query(Emitter).count()
    ignored_count = db.query(IgnoredUnit).count()

    return {
        "roster": roster_count,
        "emitters": emitters_count,
        "ignored": ignored_count,
        "total": roster_count + emitters_count + ignored_count
    }


@router.get("/roster-units")
def get_all_roster_units(db: Session = Depends(get_db)):
    """Get all roster units for management table"""
    units = db.query(RosterUnit).order_by(RosterUnit.id).all()

    return [{
        "id": unit.id,
        "device_type": unit.device_type or "seismograph",
        "unit_type": unit.unit_type or "series3",
        "deployed": unit.deployed,
        "retired": unit.retired,
        "note": unit.note or "",
        "project_id": unit.project_id or "",
        "location": unit.location or "",
        "address": unit.address or "",
        "coordinates": unit.coordinates or "",
        "last_calibrated": unit.last_calibrated.isoformat() if unit.last_calibrated else None,
        "next_calibration_due": unit.next_calibration_due.isoformat() if unit.next_calibration_due else None,
        "deployed_with_modem_id": unit.deployed_with_modem_id or "",
        "ip_address": unit.ip_address or "",
        "phone_number": unit.phone_number or "",
        "hardware_model": unit.hardware_model or "",
        "slm_host": unit.slm_host or "",
        "slm_tcp_port": unit.slm_tcp_port,
        "slm_model": unit.slm_model or "",
        "slm_serial_number": unit.slm_serial_number or "",
        "slm_frequency_weighting": unit.slm_frequency_weighting or "",
        "slm_time_weighting": unit.slm_time_weighting or "",
        "slm_measurement_range": unit.slm_measurement_range or "",
        "slm_last_check": unit.slm_last_check.isoformat() if unit.slm_last_check else None,
        "last_updated": unit.last_updated.isoformat() if unit.last_updated else None
    } for unit in units]


def parse_date(date_str):
    """Helper function to parse date strings"""
    if not date_str or not date_str.strip():
        return None
    try:
        return datetime.strptime(date_str.strip(), "%Y-%m-%d").date()
    except ValueError:
        return None


@router.post("/import-csv-replace")
async def import_csv_replace(
    file: UploadFile = File(...),
    db: Session = Depends(get_db)
):
    """
    Replace all roster data with CSV import (atomic transaction).
    Clears roster table first, then imports all rows from CSV.
    """

    if not file.filename.endswith('.csv'):
        raise HTTPException(status_code=400, detail="File must be a CSV")

    # Read and parse CSV
    contents = await file.read()
    csv_text = contents.decode('utf-8')
    csv_reader = csv.DictReader(io.StringIO(csv_text))

    # Parse all rows FIRST (fail fast before deletion)
    parsed_units = []
    for row_num, row in enumerate(csv_reader, start=2):
        unit_id = row.get('unit_id', '').strip()
        if not unit_id:
            raise HTTPException(
                status_code=400,
                detail=f"Row {row_num}: Missing required field unit_id"
            )

        # Parse and validate dates
        last_cal_date = parse_date(row.get('last_calibrated'))
        next_cal_date = parse_date(row.get('next_calibration_due'))

        parsed_units.append({
            'id': unit_id,
            'unit_type': row.get('unit_type', 'series3'),
            'device_type': row.get('device_type', 'seismograph'),
            'deployed': row.get('deployed', '').lower() in ('true', '1', 'yes'),
            'retired': row.get('retired', '').lower() in ('true', '1', 'yes'),
            'note': row.get('note', ''),
            'project_id': row.get('project_id') or None,
            'location': row.get('location') or None,
            'address': row.get('address') or None,
            'coordinates': row.get('coordinates') or None,
            'last_calibrated': last_cal_date,
            'next_calibration_due': next_cal_date,
            'deployed_with_modem_id': row.get('deployed_with_modem_id') or None,
            'ip_address': row.get('ip_address') or None,
            'phone_number': row.get('phone_number') or None,
            'hardware_model': row.get('hardware_model') or None,
        })

    # Atomic transaction: delete all, then insert all
    try:
        deleted_count = db.query(RosterUnit).delete()

        for unit_data in parsed_units:
            new_unit = RosterUnit(**unit_data, last_updated=datetime.utcnow())
            db.add(new_unit)

        db.commit()

        return {
            "message": "Roster replaced successfully",
            "deleted": deleted_count,
            "added": len(parsed_units)
        }

    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Import failed: {str(e)}")


@router.post("/clear-all")
def clear_all_data(db: Session = Depends(get_db)):
    """Clear all tables (roster, emitters, ignored)"""
    try:
        roster_count = db.query(RosterUnit).delete()
        emitters_count = db.query(Emitter).delete()
        ignored_count = db.query(IgnoredUnit).delete()

        db.commit()

        return {
            "message": "All data cleared",
            "deleted": {
                "roster": roster_count,
                "emitters": emitters_count,
                "ignored": ignored_count,
                "total": roster_count + emitters_count + ignored_count
            }
        }
    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Clear failed: {str(e)}")


@router.post("/clear-roster")
def clear_roster(db: Session = Depends(get_db)):
    """Clear roster table only"""
    try:
        count = db.query(RosterUnit).delete()
        db.commit()
        return {"message": "Roster cleared", "deleted": count}
    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Clear failed: {str(e)}")


@router.post("/clear-emitters")
def clear_emitters(db: Session = Depends(get_db)):
    """Clear emitters table only"""
    try:
        count = db.query(Emitter).delete()
        db.commit()
        return {"message": "Emitters cleared", "deleted": count}
    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Clear failed: {str(e)}")


@router.post("/clear-ignored")
def clear_ignored(db: Session = Depends(get_db)):
    """Clear ignored units table only"""
    try:
        count = db.query(IgnoredUnit).delete()
        db.commit()
        return {"message": "Ignored units cleared", "deleted": count}
    except Exception as e:
        db.rollback()
        raise HTTPException(status_code=500, detail=f"Clear failed: {str(e)}")
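The replace-import is destructive (the roster table is cleared before the new rows are inserted), so the typical workflow is export, edit, re-import. A hedged sketch of that round trip; the column names match the export fieldnames, while the file path, base URL, and unit ID are placeholders:

```python
# Example: uploading a replacement roster CSV. Missing optional columns are
# tolerated; unit_id is required per row.
import httpx

csv_body = (
    "unit_id,unit_type,device_type,deployed,retired,note\n"
    "BE1234,series3,seismograph,true,false,Demo unit\n"
)
with open("roster.csv", "w") as fh:
    fh.write(csv_body)

with open("roster.csv", "rb") as fh:
    resp = httpx.post(
        "http://localhost:8000/api/settings/import-csv-replace",
        files={"file": ("roster.csv", fh, "text/csv")},
    )
print(resp.json())  # {"message": "Roster replaced successfully", "deleted": ..., "added": 1}
```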
# User Preferences Endpoints

class PreferencesUpdate(BaseModel):
    """Schema for updating user preferences (all fields optional)"""
    timezone: Optional[str] = None
    theme: Optional[str] = None
    auto_refresh_interval: Optional[int] = None
    date_format: Optional[str] = None
    table_rows_per_page: Optional[int] = None
    calibration_interval_days: Optional[int] = None
    calibration_warning_days: Optional[int] = None
    status_ok_threshold_hours: Optional[int] = None
    status_pending_threshold_hours: Optional[int] = None


@router.get("/preferences")
def get_preferences(db: Session = Depends(get_db)):
    """
    Get user preferences. Creates default preferences if none exist.
    """
    prefs = db.query(UserPreferences).filter(UserPreferences.id == 1).first()

    if not prefs:
        # Create default preferences
        prefs = UserPreferences(id=1)
        db.add(prefs)
        db.commit()
        db.refresh(prefs)

    return {
        "timezone": prefs.timezone,
        "theme": prefs.theme,
        "auto_refresh_interval": prefs.auto_refresh_interval,
        "date_format": prefs.date_format,
        "table_rows_per_page": prefs.table_rows_per_page,
        "calibration_interval_days": prefs.calibration_interval_days,
        "calibration_warning_days": prefs.calibration_warning_days,
        "status_ok_threshold_hours": prefs.status_ok_threshold_hours,
        "status_pending_threshold_hours": prefs.status_pending_threshold_hours,
        "updated_at": prefs.updated_at.isoformat() if prefs.updated_at else None
    }


@router.put("/preferences")
def update_preferences(
    updates: PreferencesUpdate,
    db: Session = Depends(get_db)
):
    """
    Update user preferences. Accepts partial updates.
    Creates default preferences if none exist.
    """
    prefs = db.query(UserPreferences).filter(UserPreferences.id == 1).first()

    if not prefs:
        # Create default preferences
        prefs = UserPreferences(id=1)
        db.add(prefs)

    # Update only provided fields
    update_data = updates.dict(exclude_unset=True)
    for field, value in update_data.items():
        setattr(prefs, field, value)

    prefs.updated_at = datetime.utcnow()

    db.commit()
    db.refresh(prefs)

    return {
        "message": "Preferences updated successfully",
        "timezone": prefs.timezone,
        "theme": prefs.theme,
        "auto_refresh_interval": prefs.auto_refresh_interval,
        "date_format": prefs.date_format,
        "table_rows_per_page": prefs.table_rows_per_page,
        "calibration_interval_days": prefs.calibration_interval_days,
        "calibration_warning_days": prefs.calibration_warning_days,
        "status_ok_threshold_hours": prefs.status_ok_threshold_hours,
        "status_pending_threshold_hours": prefs.status_pending_threshold_hours,
        "updated_at": prefs.updated_at.isoformat() if prefs.updated_at else None
    }
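Because the update uses `exclude_unset=True`, a PUT only touches the fields present in the request body. A hedged sketch (base URL assumed):

```python
# Example: partial preference update; fields not sent keep their stored values.
import httpx

resp = httpx.put(
    "http://localhost:8000/api/settings/preferences",
    json={"theme": "dark", "auto_refresh_interval": 60},
)
print(resp.json()["theme"])  # "dark"
```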
# Database Management Endpoints

backup_service = DatabaseBackupService()


@router.get("/database/stats")
def get_database_stats():
    """Get current database statistics"""
    try:
        stats = backup_service.get_database_stats()
        return stats
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to get database stats: {str(e)}")


@router.post("/database/snapshot")
def create_database_snapshot(description: Optional[str] = None):
    """Create a full database snapshot"""
    try:
        snapshot = backup_service.create_snapshot(description=description)
        return {
            "message": "Snapshot created successfully",
            "snapshot": snapshot
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Snapshot creation failed: {str(e)}")


@router.get("/database/snapshots")
def list_database_snapshots():
    """List all available database snapshots"""
    try:
        snapshots = backup_service.list_snapshots()
        return {
            "snapshots": snapshots,
            "count": len(snapshots)
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to list snapshots: {str(e)}")


@router.get("/database/snapshot/{filename}")
def download_snapshot(filename: str):
    """Download a specific snapshot file"""
    try:
        snapshot_path = backup_service.download_snapshot(filename)
        return FileResponse(
            path=str(snapshot_path),
            filename=filename,
            media_type="application/x-sqlite3"
        )
    except FileNotFoundError:
        raise HTTPException(status_code=404, detail=f"Snapshot {filename} not found")
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Download failed: {str(e)}")


@router.delete("/database/snapshot/{filename}")
def delete_database_snapshot(filename: str):
    """Delete a specific snapshot"""
    try:
        backup_service.delete_snapshot(filename)
        return {
            "message": f"Snapshot {filename} deleted successfully",
            "filename": filename
        }
    except FileNotFoundError:
        raise HTTPException(status_code=404, detail=f"Snapshot {filename} not found")
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Delete failed: {str(e)}")


class RestoreRequest(BaseModel):
    """Schema for restore request"""
    filename: str
    create_backup: bool = True


@router.post("/database/restore")
def restore_database(request: RestoreRequest, db: Session = Depends(get_db)):
    """Restore database from a snapshot"""
    try:
        # Close the database connection before restoring
        db.close()

        result = backup_service.restore_snapshot(
            filename=request.filename,
            create_backup_before_restore=request.create_backup
        )

        return result
    except FileNotFoundError:
        raise HTTPException(status_code=404, detail=f"Snapshot {request.filename} not found")
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Restore failed: {str(e)}")


@router.post("/database/upload-snapshot")
async def upload_snapshot(file: UploadFile = File(...)):
    """Upload a snapshot file to the backups directory"""
    if not file.filename.endswith('.db'):
        raise HTTPException(status_code=400, detail="File must be a .db file")

    try:
        # Save uploaded file to backups directory
        backups_dir = Path("./data/backups")
        backups_dir.mkdir(parents=True, exist_ok=True)

        timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
        uploaded_filename = f"snapshot_uploaded_{timestamp}.db"
        file_path = backups_dir / uploaded_filename

        # Save file
        with open(file_path, "wb") as buffer:
            shutil.copyfileobj(file.file, buffer)

        # Create metadata
        metadata = {
            "filename": uploaded_filename,
            "created_at": timestamp,
            "created_at_iso": datetime.utcnow().isoformat(),
            "description": f"Uploaded: {file.filename}",
            "size_bytes": file_path.stat().st_size,
            "size_mb": round(file_path.stat().st_size / (1024 * 1024), 2),
            "type": "uploaded"
        }

        metadata_path = backups_dir / f"{uploaded_filename}.meta.json"
        import json
        with open(metadata_path, 'w') as f:
            json.dump(metadata, f, indent=2)

        return {
            "message": "Snapshot uploaded successfully",
            "snapshot": metadata
        }

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Upload failed: {str(e)}")
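A hedged sketch of a snapshot-then-restore cycle against these endpoints. The base URL is assumed, and the assumption that the snapshot metadata returned by `create_snapshot` carries a `filename` key is inferred from the upload metadata above, not confirmed by this diff:

```python
# Example: take a snapshot before a risky change, then roll back to it.
import httpx

base = "http://localhost:8000/api/settings/database"
snap = httpx.post(base + "/snapshot", params={"description": "pre-upgrade"}).json()
filename = snap["snapshot"]["filename"]  # assumed key in the snapshot metadata

# ...later: restore (a safety backup is taken first when create_backup is True).
httpx.post(base + "/restore", json={"filename": filename, "create_backup": True})
```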
# ============================================================================
# SLMM SYNC ENDPOINTS
# ============================================================================

@router.post("/slmm/sync-all")
async def sync_all_slms(db: Session = Depends(get_db)):
    """
    Manually trigger full sync of all SLM devices from Terra-View roster to SLMM.

    This ensures SLMM database matches Terra-View roster (source of truth).
    Also cleans up orphaned devices in SLMM that are not in Terra-View.
    """
    from backend.services.slmm_sync import sync_all_slms_to_slmm, cleanup_orphaned_slmm_devices

    try:
        # Sync all SLMs
        sync_results = await sync_all_slms_to_slmm(db)

        # Clean up orphaned devices
        cleanup_results = await cleanup_orphaned_slmm_devices(db)

        return {
            "status": "ok",
            "sync": sync_results,
            "cleanup": cleanup_results
        }

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Sync failed: {str(e)}")


@router.get("/slmm/status")
async def get_slmm_sync_status(db: Session = Depends(get_db)):
    """
    Get status of SLMM synchronization.

    Shows which devices are in Terra-View roster vs SLMM database.
    """
    from backend.services.slmm_sync import get_slmm_devices

    try:
        # Get devices from both systems
        roster_slms = db.query(RosterUnit).filter_by(device_type="slm").all()
        slmm_devices = await get_slmm_devices()

        if slmm_devices is None:
            raise HTTPException(status_code=503, detail="SLMM service unavailable")

        roster_unit_ids = {unit.unit_type for unit in roster_slms}
        slmm_unit_ids = set(slmm_devices)

        # Find differences
        in_roster_only = roster_unit_ids - slmm_unit_ids
        in_slmm_only = slmm_unit_ids - roster_unit_ids
        in_both = roster_unit_ids & slmm_unit_ids

        return {
            "status": "ok",
            "terra_view_total": len(roster_unit_ids),
            "slmm_total": len(slmm_unit_ids),
            "synced": len(in_both),
            "missing_from_slmm": list(in_roster_only),
            "orphaned_in_slmm": list(in_slmm_only),
            "in_sync": len(in_roster_only) == 0 and len(in_slmm_only) == 0
        }

    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Status check failed: {str(e)}")
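A hedged sketch of how an operator script could use these two endpoints together, checking drift before forcing a full sync (base URL assumed):

```python
# Example: only trigger the heavier sync-all when the status report shows drift.
import httpx

status = httpx.get("http://localhost:8000/api/settings/slmm/status").json()
if not status["in_sync"]:
    print("Missing from SLMM:", status["missing_from_slmm"])
    print("Orphaned in SLMM:", status["orphaned_in_slmm"])
    httpx.post("http://localhost:8000/api/settings/slmm/sync-all")
```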
379  backend/routers/slm_dashboard.py  Normal file
@@ -0,0 +1,379 @@
"""
SLM Dashboard Router

Provides API endpoints for the Sound Level Meters dashboard page.
"""

from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from sqlalchemy import func
from datetime import datetime, timedelta
import asyncio
import httpx
import logging
import os

from backend.database import get_db
from backend.models import RosterUnit
from backend.routers.roster_edit import sync_slm_to_slmm_cache
from backend.templates_config import templates

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])

# SLMM backend URL - configurable via environment variable
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")


@router.get("/stats", response_class=HTMLResponse)
async def get_slm_stats(request: Request, db: Session = Depends(get_db)):
    """
    Get summary statistics for SLM dashboard.
    Returns HTML partial with stat cards.
    """
    # Query all SLMs
    all_slms = db.query(RosterUnit).filter_by(device_type="slm").all()

    # Count deployed vs benched
    deployed_count = sum(1 for slm in all_slms if slm.deployed and not slm.retired)
    benched_count = sum(1 for slm in all_slms if not slm.deployed and not slm.retired)
    retired_count = sum(1 for slm in all_slms if slm.retired)

    # Count recently active (checked in last hour)
    one_hour_ago = datetime.utcnow() - timedelta(hours=1)
    active_count = sum(1 for slm in all_slms
                       if slm.slm_last_check and slm.slm_last_check > one_hour_ago)

    return templates.TemplateResponse("partials/slm_stats.html", {
        "request": request,
        "total_count": len(all_slms),
        "deployed_count": deployed_count,
        "benched_count": benched_count,
        "active_count": active_count,
        "retired_count": retired_count
    })


@router.get("/units", response_class=HTMLResponse)
async def get_slm_units(
    request: Request,
    db: Session = Depends(get_db),
    search: str = Query(None),
    project: str = Query(None),
    include_measurement: bool = Query(False),
):
    """
    Get list of SLM units for the sidebar.
    Returns HTML partial with unit cards.
    """
    query = db.query(RosterUnit).filter_by(device_type="slm")

    # Filter by project if provided
    if project:
        query = query.filter(RosterUnit.project_id == project)

    # Filter by search term if provided
    if search:
        search_term = f"%{search}%"
        query = query.filter(
            (RosterUnit.id.like(search_term)) |
            (RosterUnit.slm_model.like(search_term)) |
            (RosterUnit.address.like(search_term))
        )

    units = query.order_by(
        RosterUnit.retired.asc(),
        RosterUnit.deployed.desc(),
        RosterUnit.id.asc()
    ).all()

    one_hour_ago = datetime.utcnow() - timedelta(hours=1)
    for unit in units:
        unit.is_recent = bool(unit.slm_last_check and unit.slm_last_check > one_hour_ago)

    if include_measurement:
        async def fetch_measurement_state(client: httpx.AsyncClient, unit_id: str) -> str | None:
            try:
                response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state")
                if response.status_code == 200:
                    return response.json().get("measurement_state")
            except Exception:
                return None
            return None

        deployed_units = [unit for unit in units if unit.deployed and not unit.retired]
        if deployed_units:
            async with httpx.AsyncClient(timeout=3.0) as client:
                tasks = [fetch_measurement_state(client, unit.id) for unit in deployed_units]
                results = await asyncio.gather(*tasks, return_exceptions=True)

            for unit, state in zip(deployed_units, results):
                if isinstance(state, Exception):
                    unit.measurement_state = None
                else:
                    unit.measurement_state = state

    return templates.TemplateResponse("partials/slm_device_list.html", {
        "request": request,
        "units": units
    })


@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
async def get_live_view(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Get live view panel for a specific SLM unit.
    Returns HTML partial with live metrics and chart.
    """
    # Get unit from database
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return templates.TemplateResponse("partials/slm_live_view_error.html", {
            "request": request,
            "error": f"Unit {unit_id} not found"
        })

    # Get modem information if assigned
    modem = None
    modem_ip = None
    if unit.deployed_with_modem_id:
        modem = db.query(RosterUnit).filter_by(id=unit.deployed_with_modem_id, device_type="modem").first()
        if modem:
            modem_ip = modem.ip_address
        else:
            logger.warning(f"SLM {unit_id} is assigned to modem {unit.deployed_with_modem_id} but modem not found")

    # Fallback to direct slm_host if no modem assigned (backward compatibility)
    if not modem_ip and unit.slm_host:
        modem_ip = unit.slm_host
        logger.info(f"Using legacy slm_host for {unit_id}: {modem_ip}")

    # Try to get current status from SLMM
    current_status = None
    measurement_state = None
    is_measuring = False

    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            # Get measurement state
            state_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
            )
            if state_response.status_code == 200:
                state_data = state_response.json()
                measurement_state = state_data.get("measurement_state", "Unknown")
                is_measuring = state_data.get("is_measuring", False)

            # If measuring, sync start time from FTP to database (fixes wrong timestamps)
            if is_measuring:
                try:
                    sync_response = await client.post(
                        f"{SLMM_BASE_URL}/api/nl43/{unit_id}/sync-start-time",
                        timeout=10.0
                    )
                    if sync_response.status_code == 200:
                        sync_data = sync_response.json()
                        logger.info(f"Synced start time for {unit_id}: {sync_data.get('message')}")
                    else:
                        logger.warning(f"Failed to sync start time for {unit_id}: {sync_response.status_code}")
                except Exception as e:
                    # Don't fail the whole request if sync fails
                    logger.warning(f"Could not sync start time for {unit_id}: {e}")

            # Get live status (now with corrected start time)
            status_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/live"
            )
            if status_response.status_code == 200:
                status_data = status_response.json()
                current_status = status_data.get("data", {})
    except Exception as e:
        logger.error(f"Failed to get status for {unit_id}: {e}")

    return templates.TemplateResponse("partials/slm_live_view.html", {
        "request": request,
        "unit": unit,
        "modem": modem,
        "modem_ip": modem_ip,
        "current_status": current_status,
        "measurement_state": measurement_state,
        "is_measuring": is_measuring
    })
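The unit-list endpoint above fans out one SLMM request per deployed meter through a shared async client so the sidebar stays fast even with many devices. A standalone sketch of the same pattern, isolated from the router; the base URL and unit IDs are placeholders:

```python
# Concurrent measurement-state lookups with one shared AsyncClient and a short timeout.
import asyncio
import httpx

SLMM_BASE_URL = "http://localhost:8100"  # assumed default

async def measurement_states(unit_ids: list[str]) -> dict[str, str | None]:
    async def one(client: httpx.AsyncClient, uid: str) -> str | None:
        try:
            r = await client.get(f"{SLMM_BASE_URL}/api/nl43/{uid}/measurement-state")
            return r.json().get("measurement_state") if r.status_code == 200 else None
        except httpx.HTTPError:
            return None

    async with httpx.AsyncClient(timeout=3.0) as client:
        results = await asyncio.gather(*(one(client, uid) for uid in unit_ids))
    return dict(zip(unit_ids, results))

# asyncio.run(measurement_states(["SLM-01", "SLM-02"]))
```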
@router.post("/control/{unit_id}/{action}")
async def control_slm(unit_id: str, action: str):
    """
    Send control commands to SLM (start, stop, pause, resume, reset).
    Proxies to SLMM backend.
    """
    valid_actions = ["start", "stop", "pause", "resume", "reset"]

    if action not in valid_actions:
        return {"status": "error", "detail": f"Invalid action. Must be one of: {valid_actions}"}

    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            response = await client.post(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/{action}"
            )

            if response.status_code == 200:
                return response.json()
            else:
                return {
                    "status": "error",
                    "detail": f"SLMM returned status {response.status_code}"
                }
    except Exception as e:
        logger.error(f"Failed to control {unit_id}: {e}")
        return {
            "status": "error",
            "detail": str(e)
        }


@router.get("/config/{unit_id}", response_class=HTMLResponse)
async def get_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Get configuration form for a specific SLM unit.
    Returns HTML partial with configuration form.
    """
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return HTMLResponse(
            content=f'<div class="text-red-500">Unit {unit_id} not found</div>',
            status_code=404
        )

    return templates.TemplateResponse("partials/slm_config_form.html", {
        "request": request,
        "unit": unit
    })


@router.post("/config/{unit_id}")
async def save_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """
    Save SLM configuration.
    Updates unit parameters in the database.
    """
    unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()

    if not unit:
        return {"status": "error", "detail": f"Unit {unit_id} not found"}

    try:
        # Get form data
        form_data = await request.form()

        # Update SLM-specific fields
        unit.slm_model = form_data.get("slm_model") or None
        unit.slm_serial_number = form_data.get("slm_serial_number") or None
        unit.slm_frequency_weighting = form_data.get("slm_frequency_weighting") or None
        unit.slm_time_weighting = form_data.get("slm_time_weighting") or None
        unit.slm_measurement_range = form_data.get("slm_measurement_range") or None

        # Update network configuration
        modem_id = form_data.get("deployed_with_modem_id")
        unit.deployed_with_modem_id = modem_id if modem_id else None

        # Always update TCP and FTP ports (used regardless of modem assignment)
        unit.slm_tcp_port = int(form_data.get("slm_tcp_port")) if form_data.get("slm_tcp_port") else None
        unit.slm_ftp_port = int(form_data.get("slm_ftp_port")) if form_data.get("slm_ftp_port") else None

        # Only update direct IP if no modem is assigned
        if not modem_id:
            unit.slm_host = form_data.get("slm_host") or None
        else:
            # Clear legacy direct IP field when modem is assigned
            unit.slm_host = None

        db.commit()
        logger.info(f"Updated configuration for SLM {unit_id}")

        # Sync updated configuration to SLMM cache
        logger.info(f"Syncing SLM {unit_id} config changes to SLMM cache...")
        result = await sync_slm_to_slmm_cache(
            unit_id=unit_id,
            host=unit.slm_host,  # Use the updated host from Terra-View
            tcp_port=unit.slm_tcp_port,
            ftp_port=unit.slm_ftp_port,
            deployed_with_modem_id=unit.deployed_with_modem_id,  # Resolve modem IP if assigned
            db=db
        )

        if not result["success"]:
            logger.warning(f"SLMM cache sync warning for {unit_id}: {result['message']}")
            # Config still saved in Terra-View (source of truth)

        return {"status": "success", "unit_id": unit_id}

    except Exception as e:
        db.rollback()
        logger.error(f"Failed to save config for {unit_id}: {e}")
        return {"status": "error", "detail": str(e)}


@router.get("/test-modem/{modem_id}")
async def test_modem_connection(modem_id: str, db: Session = Depends(get_db)):
    """
    Test modem connectivity with a simple ping/health check.
    Returns response time and connection status.
    """
    import subprocess
    import time

    # Get modem from database
    modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()

    if not modem:
        return {"status": "error", "detail": f"Modem {modem_id} not found"}

    if not modem.ip_address:
        return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}

    try:
        # Ping the modem (1 packet, 2 second timeout)
        start_time = time.time()
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", modem.ip_address],
            capture_output=True,
            text=True,
            timeout=3
        )
        response_time = int((time.time() - start_time) * 1000)  # Convert to milliseconds

        if result.returncode == 0:
            return {
                "status": "success",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "response_time": response_time,
                "message": "Modem is responding to ping"
            }
        else:
            return {
                "status": "error",
                "modem_id": modem_id,
                "ip_address": modem.ip_address,
                "detail": "Modem not responding to ping"
            }

    except subprocess.TimeoutExpired:
        return {
            "status": "error",
            "modem_id": modem_id,
            "ip_address": modem.ip_address,
            "detail": "Ping timeout (> 2 seconds)"
        }
    except Exception as e:
        logger.error(f"Failed to ping modem {modem_id}: {e}")
        return {
            "status": "error",
            "modem_id": modem_id,
            "detail": str(e)
        }
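A hedged sketch of exercising the control proxy and the modem connectivity check from a script; unit and modem IDs and the base URL are placeholders:

```python
# Example: start a measurement (proxied to SLMM) and ping-test a modem.
import httpx

base = "http://localhost:8000/api/slm-dashboard"
print(httpx.post(f"{base}/control/SLM-01/start").json())   # one of: start/stop/pause/resume/reset
print(httpx.get(f"{base}/test-modem/MODEM-07").json())     # {"status": ..., "response_time": ...}
```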
122  backend/routers/slm_ui.py  Normal file
@@ -0,0 +1,122 @@
"""
Sound Level Meter UI Router

Provides endpoints for SLM dashboard cards, detail pages, and real-time data.
"""

from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from datetime import datetime
import httpx
import logging
import os

from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/slm", tags=["slm-ui"])

SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://172.19.0.1:8100")


@router.get("/{unit_id}", response_class=HTMLResponse)
async def slm_detail_page(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Sound level meter detail page with controls."""

    # Get roster unit
    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    return templates.TemplateResponse("slm_detail.html", {
        "request": request,
        "unit": unit,
        "unit_id": unit_id
    })


@router.get("/api/{unit_id}/summary")
async def get_slm_summary(unit_id: str, db: Session = Depends(get_db)):
    """Get SLM summary data for dashboard card."""

    # Get roster unit
    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    # Try to get live status from SLMM
    status_data = None
    try:
        async with httpx.AsyncClient(timeout=3.0) as client:
            response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/status")
            if response.status_code == 200:
                status_data = response.json().get("data")
    except Exception as e:
        logger.warning(f"Failed to get SLM status for {unit_id}: {e}")

    return {
        "unit_id": unit_id,
        "device_type": "slm",
        "deployed": unit.deployed,
        "model": unit.slm_model or "NL-43",
        "location": unit.address or unit.location,
        "coordinates": unit.coordinates,
        "note": unit.note,
        "status": status_data,
        "last_check": unit.slm_last_check.isoformat() if unit.slm_last_check else None,
    }


@router.get("/partials/{unit_id}/card", response_class=HTMLResponse)
async def slm_dashboard_card(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Render SLM dashboard card partial."""

    summary = await get_slm_summary(unit_id, db)

    return templates.TemplateResponse("partials/slm_card.html", {
        "request": request,
        "slm": summary
    })


@router.get("/partials/{unit_id}/controls", response_class=HTMLResponse)
async def slm_controls_partial(request: Request, unit_id: str, db: Session = Depends(get_db)):
    """Render SLM control panel partial."""

    unit = db.query(RosterUnit).filter_by(id=unit_id).first()
    if not unit or unit.device_type != "slm":
        raise HTTPException(status_code=404, detail="Sound level meter not found")

    # Get current status from SLMM
    measurement_state = None
    battery_level = None
    try:
        async with httpx.AsyncClient(timeout=3.0) as client:
            # Get measurement state
            state_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
            )
            if state_response.status_code == 200:
                measurement_state = state_response.json().get("measurement_state")

            # Get battery level
            battery_response = await client.get(
                f"{SLMM_BASE_URL}/api/nl43/{unit_id}/battery"
            )
            if battery_response.status_code == 200:
                battery_level = battery_response.json().get("battery_level")
    except Exception as e:
        logger.warning(f"Failed to get SLM control data for {unit_id}: {e}")

    return templates.TemplateResponse("partials/slm_controls.html", {
        "request": request,
        "unit_id": unit_id,
        "unit": unit,
        "measurement_state": measurement_state,
        "battery_level": battery_level,
        "is_measuring": measurement_state == "Start"
    })
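The summary endpoint backs both the JSON card data and the rendered partials (the card handler calls `get_slm_summary` directly). A hedged sketch of fetching it over HTTP; unit ID and base URL are placeholders:

```python
# Example: reading the dashboard-card summary as JSON.
import httpx

summary = httpx.get("http://localhost:8000/slm/api/SLM-01/summary").json()
print(summary["model"], summary["deployed"], summary["last_check"])
```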
301  backend/routers/slmm.py  Normal file
@@ -0,0 +1,301 @@
"""
SLMM (Sound Level Meter Manager) Proxy Router

Proxies requests from SFM to the standalone SLMM backend service.
SLMM runs on port 8100 and handles NL43/NL53 sound level meter communication.
"""

from fastapi import APIRouter, HTTPException, Request, Response, WebSocket, WebSocketDisconnect
from fastapi.responses import StreamingResponse
import httpx
import websockets
import asyncio
import logging
import os

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/slmm", tags=["slmm"])

# SLMM backend URL - configurable via environment variable
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
# WebSocket URL derived from HTTP URL
SLMM_WS_BASE_URL = SLMM_BASE_URL.replace("http://", "ws://").replace("https://", "wss://")


@router.get("/health")
async def check_slmm_health():
    """
    Check if the SLMM backend service is reachable and healthy.
    """
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            response = await client.get(f"{SLMM_BASE_URL}/health")

            if response.status_code == 200:
                data = response.json()
                return {
                    "status": "ok",
                    "slmm_status": "connected",
                    "slmm_url": SLMM_BASE_URL,
                    "slmm_version": data.get("version", "unknown"),
                    "slmm_response": data
                }
            else:
                return {
                    "status": "degraded",
                    "slmm_status": "error",
                    "slmm_url": SLMM_BASE_URL,
                    "detail": f"SLMM returned status {response.status_code}"
                }

    except httpx.ConnectError:
        return {
            "status": "error",
            "slmm_status": "unreachable",
            "slmm_url": SLMM_BASE_URL,
            "detail": "Cannot connect to SLMM backend. Is it running?"
        }
    except Exception as e:
        return {
            "status": "error",
            "slmm_status": "error",
            "slmm_url": SLMM_BASE_URL,
            "detail": str(e)
        }


# WebSocket routes MUST come before the catch-all route
@router.websocket("/{unit_id}/stream")
async def proxy_websocket_stream(websocket: WebSocket, unit_id: str):
    """
    Proxy WebSocket connections to SLMM's /stream endpoint.

    This allows real-time streaming of measurement data from NL43 devices
    through the SFM unified interface.
    """
    await websocket.accept()
    logger.info(f"WebSocket connection accepted for SLMM unit {unit_id}")

    # Build target WebSocket URL
    target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
    logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")

    backend_ws = None

    try:
        # Connect to SLMM backend WebSocket
        backend_ws = await websockets.connect(target_ws_url)
        logger.info(f"Connected to SLMM backend WebSocket for {unit_id}")

        # Create tasks for bidirectional communication
        async def forward_to_backend():
            """Forward messages from client to SLMM backend"""
            try:
                while True:
                    data = await websocket.receive_text()
                    await backend_ws.send(data)
            except WebSocketDisconnect:
                logger.info(f"Client WebSocket disconnected for {unit_id}")
            except Exception as e:
                logger.error(f"Error forwarding to backend: {e}")

        async def forward_to_client():
            """Forward messages from SLMM backend to client"""
            try:
                async for message in backend_ws:
                    await websocket.send_text(message)
            except websockets.exceptions.ConnectionClosed:
                logger.info(f"Backend WebSocket closed for {unit_id}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to client: {e}")
|
||||
|
||||
# Run both forwarding tasks concurrently
|
||||
await asyncio.gather(
|
||||
forward_to_backend(),
|
||||
forward_to_client(),
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
except websockets.exceptions.WebSocketException as e:
|
||||
logger.error(f"WebSocket error connecting to SLMM backend: {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Failed to connect to SLMM backend",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
except Exception as e:
|
||||
logger.error(f"Unexpected error in WebSocket proxy for {unit_id}: {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Internal server error",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
finally:
|
||||
# Clean up connections
|
||||
if backend_ws:
|
||||
try:
|
||||
await backend_ws.close()
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
await websocket.close()
|
||||
except Exception:
|
||||
pass
|
||||
logger.info(f"WebSocket proxy closed for {unit_id}")
|
||||
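A minimal client-side sketch of consuming this proxied stream, assuming the router is mounted at the app root and SFM listens on port 8000 (both placeholders); messages are relayed verbatim from SLMM's /stream endpoint.

# Minimal sketch: tail the proxied measurement stream (host and unit ID are placeholders)
import asyncio
import websockets

async def tail_stream():
    url = "ws://localhost:8000/api/slmm/NL43-001/stream"
    async with websockets.connect(url) as ws:
        async for message in ws:    # forwarded unchanged from the SLMM backend
            print(message)

asyncio.run(tail_stream())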
|
||||
|
||||
@router.websocket("/{unit_id}/live")
|
||||
async def proxy_websocket_live(websocket: WebSocket, unit_id: str):
|
||||
"""
|
||||
Proxy WebSocket connections to SLMM's /live endpoint.
|
||||
|
||||
Alternative WebSocket endpoint that may be used by some frontend components.
|
||||
"""
|
||||
await websocket.accept()
|
||||
logger.info(f"WebSocket connection accepted for SLMM unit {unit_id} (live endpoint)")
|
||||
|
||||
# Build target WebSocket URL - SLMM exposes its WebSocket at /stream, so route /live traffic there as well
|
||||
target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
|
||||
logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")
|
||||
|
||||
backend_ws = None
|
||||
|
||||
try:
|
||||
# Connect to SLMM backend WebSocket
|
||||
backend_ws = await websockets.connect(target_ws_url)
|
||||
logger.info(f"Connected to SLMM backend WebSocket for {unit_id} (live endpoint)")
|
||||
|
||||
# Create tasks for bidirectional communication
|
||||
async def forward_to_backend():
|
||||
"""Forward messages from client to SLMM backend"""
|
||||
try:
|
||||
while True:
|
||||
data = await websocket.receive_text()
|
||||
await backend_ws.send(data)
|
||||
except WebSocketDisconnect:
|
||||
logger.info(f"Client WebSocket disconnected for {unit_id} (live)")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to backend (live): {e}")
|
||||
|
||||
async def forward_to_client():
|
||||
"""Forward messages from SLMM backend to client"""
|
||||
try:
|
||||
async for message in backend_ws:
|
||||
await websocket.send_text(message)
|
||||
except websockets.exceptions.ConnectionClosed:
|
||||
logger.info(f"Backend WebSocket closed for {unit_id} (live)")
|
||||
except Exception as e:
|
||||
logger.error(f"Error forwarding to client (live): {e}")
|
||||
|
||||
# Run both forwarding tasks concurrently
|
||||
await asyncio.gather(
|
||||
forward_to_backend(),
|
||||
forward_to_client(),
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
except websockets.exceptions.WebSocketException as e:
|
||||
logger.error(f"WebSocket error connecting to SLMM backend (live): {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Failed to connect to SLMM backend",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
except Exception as e:
|
||||
logger.error(f"Unexpected error in WebSocket proxy for {unit_id} (live): {e}")
|
||||
try:
|
||||
await websocket.send_json({
|
||||
"error": "Internal server error",
|
||||
"detail": str(e)
|
||||
})
|
||||
except Exception:
|
||||
pass
|
||||
finally:
|
||||
# Clean up connections
|
||||
if backend_ws:
|
||||
try:
|
||||
await backend_ws.close()
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
await websocket.close()
|
||||
except Exception:
|
||||
pass
|
||||
logger.info(f"WebSocket proxy closed for {unit_id} (live)")
|
||||
|
||||
|
||||
# HTTP catch-all route MUST come after specific routes (including WebSocket routes)
|
||||
@router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
|
||||
async def proxy_to_slmm(path: str, request: Request):
|
||||
"""
|
||||
Proxy all requests to the SLMM backend service.
|
||||
|
||||
This allows SFM to act as a unified frontend for all device types,
|
||||
while SLMM remains a standalone backend service.
|
||||
"""
|
||||
# Build target URL
|
||||
target_url = f"{SLMM_BASE_URL}/api/nl43/{path}"
|
||||
|
||||
# Get query parameters
|
||||
query_params = dict(request.query_params)
|
||||
|
||||
# Get request body if present
|
||||
body = None
|
||||
if request.method in ["POST", "PUT", "PATCH"]:
|
||||
try:
|
||||
body = await request.body()
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to read request body: {e}")
|
||||
body = None
|
||||
|
||||
# Get headers (exclude host and other proxy-specific headers)
|
||||
headers = dict(request.headers)
|
||||
headers_to_exclude = ["host", "content-length", "transfer-encoding", "connection"]
|
||||
proxy_headers = {k: v for k, v in headers.items() if k.lower() not in headers_to_exclude}
|
||||
|
||||
logger.info(f"Proxying {request.method} request to SLMM: {target_url}")
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=30.0) as client:
|
||||
# Forward the request to SLMM
|
||||
response = await client.request(
|
||||
method=request.method,
|
||||
url=target_url,
|
||||
params=query_params,
|
||||
headers=proxy_headers,
|
||||
content=body
|
||||
)
|
||||
|
||||
# Return the response from SLMM
|
||||
return Response(
|
||||
content=response.content,
|
||||
status_code=response.status_code,
|
||||
headers=dict(response.headers),
|
||||
media_type=response.headers.get("content-type")
|
||||
)
|
||||
|
||||
except httpx.ConnectError:
|
||||
logger.error(f"Failed to connect to SLMM backend at {SLMM_BASE_URL}")
|
||||
raise HTTPException(
|
||||
status_code=503,
|
||||
detail=f"SLMM backend service unavailable. Is SLMM running on {SLMM_BASE_URL}?"
|
||||
)
|
||||
except httpx.TimeoutException:
|
||||
logger.error(f"Timeout connecting to SLMM backend at {SLMM_BASE_URL}")
|
||||
raise HTTPException(
|
||||
status_code=504,
|
||||
detail="SLMM backend timeout"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error proxying to SLMM: {e}")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Failed to proxy request to SLMM: {str(e)}"
|
||||
)
|
||||
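Because of the catch-all route above, any HTTP call under /api/slmm/... is rewritten onto SLMM's /api/nl43/... namespace. A quick sketch of that mapping from the caller's side, with the SFM host and unit ID as placeholders:

# Sketch of the mapping performed by proxy_to_slmm (host and unit ID are placeholders)
import asyncio
import httpx

async def get_status_via_proxy():
    # The request hits SFM ...
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        resp = await client.get("/api/slmm/NL43-001/status")
    # ... and SFM forwards it to f"{SLMM_BASE_URL}/api/nl43/NL43-001/status"
    return resp.json()

asyncio.run(get_status_via_proxy())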
74
backend/routers/units.py
Normal file
@@ -0,0 +1,74 @@
|
||||
from fastapi import APIRouter, Depends, HTTPException
|
||||
from sqlalchemy.orm import Session
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.services.snapshot import emit_status_snapshot
|
||||
from backend.models import RosterUnit
|
||||
|
||||
router = APIRouter(prefix="/api", tags=["units"])
|
||||
|
||||
|
||||
@router.get("/unit/{unit_id}")
|
||||
def get_unit_detail(unit_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Returns detailed data for a single unit.
|
||||
"""
|
||||
snapshot = emit_status_snapshot()
|
||||
|
||||
if unit_id not in snapshot["units"]:
|
||||
raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
|
||||
|
||||
unit_data = snapshot["units"][unit_id]
|
||||
|
||||
# Mock coordinates for now (will be replaced with real data)
|
||||
mock_coords = {
|
||||
"BE1234": {"lat": 37.7749, "lon": -122.4194, "location": "San Francisco, CA"},
|
||||
"BE5678": {"lat": 34.0522, "lon": -118.2437, "location": "Los Angeles, CA"},
|
||||
"BE9012": {"lat": 40.7128, "lon": -74.0060, "location": "New York, NY"},
|
||||
"BE3456": {"lat": 41.8781, "lon": -87.6298, "location": "Chicago, IL"},
|
||||
"BE7890": {"lat": 29.7604, "lon": -95.3698, "location": "Houston, TX"},
|
||||
}
|
||||
|
||||
coords = mock_coords.get(unit_id, {"lat": 39.8283, "lon": -98.5795, "location": "Unknown"})
|
||||
|
||||
return {
|
||||
"id": unit_id,
|
||||
"status": unit_data["status"],
|
||||
"age": unit_data["age"],
|
||||
"last_seen": unit_data["last"],
|
||||
"last_file": unit_data.get("fname", ""),
|
||||
"deployed": unit_data["deployed"],
|
||||
"note": unit_data.get("note", ""),
|
||||
"coordinates": coords
|
||||
}
|
||||
|
||||
|
||||
@router.get("/units/{unit_id}")
|
||||
def get_unit_by_id(unit_id: str, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Get unit data directly from the roster (for settings/configuration).
|
||||
"""
|
||||
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
|
||||
|
||||
if not unit:
|
||||
raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
|
||||
|
||||
return {
|
||||
"id": unit.id,
|
||||
"unit_type": unit.unit_type,
|
||||
"device_type": unit.device_type,
|
||||
"deployed": unit.deployed,
|
||||
"retired": unit.retired,
|
||||
"note": unit.note,
|
||||
"location": unit.location,
|
||||
"address": unit.address,
|
||||
"coordinates": unit.coordinates,
|
||||
"slm_host": unit.slm_host,
|
||||
"slm_tcp_port": unit.slm_tcp_port,
|
||||
"slm_ftp_port": unit.slm_ftp_port,
|
||||
"slm_model": unit.slm_model,
|
||||
"slm_serial_number": unit.slm_serial_number,
|
||||
"deployed_with_modem_id": unit.deployed_with_modem_id
|
||||
}
|
||||
286
backend/routes.py
Normal file
@@ -0,0 +1,286 @@
|
||||
from fastapi import APIRouter, Depends, HTTPException
|
||||
from sqlalchemy.orm import Session
|
||||
from pydantic import BaseModel
|
||||
from datetime import datetime
|
||||
from typing import Optional, List
|
||||
|
||||
from backend.database import get_db
|
||||
from backend.models import Emitter
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
# Helper function to detect unit type from unit ID
|
||||
def detect_unit_type(unit_id: str) -> str:
|
||||
"""
|
||||
Automatically detect if a unit is Series 3 or Series 4 based on ID pattern.
|
||||
|
||||
Series 4 (Micromate) units have IDs starting with "UM" followed by digits (e.g., UM11719)
|
||||
Series 3 units typically have other patterns
|
||||
|
||||
Returns:
|
||||
"series4" if the unit ID matches Micromate pattern (UM#####)
|
||||
"series3" otherwise
|
||||
"""
|
||||
if not unit_id:
|
||||
return "unknown"
|
||||
|
||||
# Series 4 (Micromate) pattern: UM followed by digits
|
||||
if unit_id.upper().startswith("UM") and len(unit_id) > 2:
|
||||
# Check if remaining characters after "UM" are digits
|
||||
rest = unit_id[2:]
|
||||
if rest.isdigit():
|
||||
return "series4"
|
||||
|
||||
# Default to series3 for other patterns
|
||||
return "series3"
|
||||
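A few quick checks illustrating the detection rule above (unit IDs are examples only):

# Examples of the ID-pattern rule (IDs are illustrative)
from backend.routes import detect_unit_type

assert detect_unit_type("UM11719") == "series4"   # "UM" + digits -> Micromate
assert detect_unit_type("um0042") == "series4"    # comparison is case-insensitive
assert detect_unit_type("BE1234") == "series3"    # anything else falls back to series3
assert detect_unit_type("") == "unknown"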
|
||||
|
||||
# Pydantic schemas for request/response validation
|
||||
class EmitterReport(BaseModel):
|
||||
unit: str
|
||||
unit_type: str
|
||||
timestamp: str
|
||||
file: str
|
||||
status: str
|
||||
|
||||
|
||||
class EmitterResponse(BaseModel):
|
||||
id: str
|
||||
unit_type: str
|
||||
last_seen: datetime
|
||||
last_file: str
|
||||
status: str
|
||||
notes: Optional[str] = None
|
||||
|
||||
class Config:
|
||||
from_attributes = True
|
||||
|
||||
|
||||
@router.post("/emitters/report", status_code=200)
|
||||
def report_emitter(report: EmitterReport, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Endpoint for emitters to report their status.
|
||||
Creates a new emitter if it doesn't exist, or updates an existing one.
|
||||
"""
|
||||
try:
|
||||
# Parse the timestamp
|
||||
timestamp = datetime.fromisoformat(report.timestamp.replace('Z', '+00:00'))
|
||||
except ValueError:
|
||||
raise HTTPException(status_code=400, detail="Invalid timestamp format")
|
||||
|
||||
# Check if emitter already exists
|
||||
emitter = db.query(Emitter).filter(Emitter.id == report.unit).first()
|
||||
|
||||
if emitter:
|
||||
# Update existing emitter
|
||||
emitter.unit_type = report.unit_type
|
||||
emitter.last_seen = timestamp
|
||||
emitter.last_file = report.file
|
||||
emitter.status = report.status
|
||||
else:
|
||||
# Create new emitter
|
||||
emitter = Emitter(
|
||||
id=report.unit,
|
||||
unit_type=report.unit_type,
|
||||
last_seen=timestamp,
|
||||
last_file=report.file,
|
||||
status=report.status
|
||||
)
|
||||
db.add(emitter)
|
||||
|
||||
db.commit()
|
||||
db.refresh(emitter)
|
||||
|
||||
return {
|
||||
"message": "Emitter report received",
|
||||
"unit": emitter.id,
|
||||
"status": emitter.status
|
||||
}
|
||||
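For reference, a minimal emitter-side call against this endpoint could look like the sketch below; the host and field values are placeholders, and the router is assumed to be mounted at the application root, but the keys match the EmitterReport schema.

# Minimal sketch of an emitter report (host and values are placeholders)
import httpx

report = {
    "unit": "BE1234",
    "unit_type": "series3",
    "timestamp": "2026-01-27T14:00:00Z",   # ISO 8601; a trailing Z is accepted
    "file": "EVT_0001.evt",
    "status": "OK",
}
resp = httpx.post("http://localhost:8000/emitters/report", json=report, timeout=10)
resp.raise_for_status()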
|
||||
|
||||
@router.get("/fleet/status", response_model=List[EmitterResponse])
|
||||
def get_fleet_status(db: Session = Depends(get_db)):
|
||||
"""
|
||||
Returns a list of all emitters and their current status.
|
||||
"""
|
||||
emitters = db.query(Emitter).all()
|
||||
return emitters
|
||||
|
||||
# Series 3 v1.1 standardized heartbeat schema (multi-unit)
|
||||
from fastapi import Request
|
||||
|
||||
@router.post("/api/series3/heartbeat", status_code=200)
|
||||
async def series3_heartbeat(request: Request, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Accepts a full telemetry payload from the Series3 emitter.
|
||||
Updates or inserts each unit into the database.
|
||||
"""
|
||||
payload = await request.json()
|
||||
|
||||
source = payload.get("source_id")
|
||||
units = payload.get("units", [])
|
||||
|
||||
print("\n=== Series 3 Heartbeat ===")
|
||||
print("Source:", source)
|
||||
print("Units received:", len(units))
|
||||
print("==========================\n")
|
||||
|
||||
results = []
|
||||
|
||||
for u in units:
|
||||
uid = u.get("unit_id")
|
||||
last_event_time = u.get("last_event_time")
|
||||
event_meta = u.get("event_metadata", {})
|
||||
age_minutes = u.get("age_minutes")
|
||||
|
||||
try:
|
||||
if last_event_time:
|
||||
ts = datetime.fromisoformat(last_event_time.replace("Z", "+00:00"))
|
||||
else:
|
||||
ts = None
|
||||
except Exception:  # malformed or missing timestamp
|
||||
ts = None
|
||||
|
||||
# Pull from DB
|
||||
emitter = db.query(Emitter).filter(Emitter.id == uid).first()
|
||||
|
||||
# File name (from event_metadata)
|
||||
last_file = event_meta.get("file_name")
|
||||
status = "Unknown"
|
||||
|
||||
# Determine status based on age
|
||||
if age_minutes is None:
|
||||
status = "Missing"
|
||||
elif age_minutes > 24 * 60:
|
||||
status = "Missing"
|
||||
elif age_minutes > 12 * 60:
|
||||
status = "Pending"
|
||||
else:
|
||||
status = "OK"
|
||||
|
||||
if emitter:
|
||||
# Update existing
|
||||
emitter.last_seen = ts
|
||||
emitter.last_file = last_file
|
||||
emitter.status = status
|
||||
# Update unit_type if it was incorrectly classified
|
||||
detected_type = detect_unit_type(uid)
|
||||
if emitter.unit_type != detected_type:
|
||||
emitter.unit_type = detected_type
|
||||
else:
|
||||
# Insert new - auto-detect unit type from ID
|
||||
detected_type = detect_unit_type(uid)
|
||||
emitter = Emitter(
|
||||
id=uid,
|
||||
unit_type=detected_type,
|
||||
last_seen=ts,
|
||||
last_file=last_file,
|
||||
status=status
|
||||
)
|
||||
db.add(emitter)
|
||||
|
||||
results.append({"unit": uid, "status": status})
|
||||
|
||||
db.commit()
|
||||
|
||||
return {
|
||||
"message": "Heartbeat processed",
|
||||
"source": source,
|
||||
"units_processed": len(results),
|
||||
"results": results
|
||||
}
|
||||
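A sketch of the payload this handler expects, reconstructed from the fields it reads (source_id, units[].unit_id, last_event_time, event_metadata.file_name, age_minutes); the host and values are placeholders. Status is derived from age_minutes: OK up to 12 hours, Pending between 12 and 24 hours, Missing beyond 24 hours or when the age is absent.

# Illustrative Series 3 heartbeat payload (host and values are placeholders)
import httpx

payload = {
    "source_id": "series3_emitter_01",
    "units": [
        {
            "unit_id": "BE1234",
            "last_event_time": "2026-01-27T13:55:00Z",
            "age_minutes": 5,                           # <= 720 -> "OK"
            "event_metadata": {"file_name": "EVT_0917.evt"},
        }
    ],
}
httpx.post("http://localhost:8000/api/series3/heartbeat", json=payload, timeout=10)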
|
||||
|
||||
# series4 (Micromate) Standardized Heartbeat Schema
|
||||
@router.post("/api/series4/heartbeat", status_code=200)
|
||||
async def series4_heartbeat(request: Request, db: Session = Depends(get_db)):
|
||||
"""
|
||||
Accepts a full telemetry payload from the Series4 (Micromate) emitter.
|
||||
Updates or inserts each unit into the database.
|
||||
|
||||
Expected payload:
|
||||
{
|
||||
"source": "series4_emitter",
|
||||
"generated_at": "2025-12-04T20:01:00",
|
||||
"units": [
|
||||
{
|
||||
"unit_id": "UM11719",
|
||||
"type": "micromate",
|
||||
"project_hint": "Clearwater - ECMS 57940",
|
||||
"last_call": "2025-12-04T19:30:42",
|
||||
"status": "OK",
|
||||
"age_days": 0.04,
|
||||
"age_hours": 0.9,
|
||||
"mlg_path": "C:\\THORDATA\\..."
|
||||
}
|
||||
]
|
||||
}
|
||||
"""
|
||||
payload = await request.json()
|
||||
|
||||
source = payload.get("source", "series4_emitter")
|
||||
units = payload.get("units", [])
|
||||
|
||||
print("\n=== Series 4 Heartbeat ===")
|
||||
print("Source:", source)
|
||||
print("Units received:", len(units))
|
||||
print("==========================\n")
|
||||
|
||||
results = []
|
||||
|
||||
for u in units:
|
||||
uid = u.get("unit_id")
|
||||
last_call_str = u.get("last_call")
|
||||
status = u.get("status", "Unknown")
|
||||
mlg_path = u.get("mlg_path")
|
||||
project_hint = u.get("project_hint")
|
||||
|
||||
# Parse last_call timestamp
|
||||
try:
|
||||
if last_call_str:
|
||||
ts = datetime.fromisoformat(last_call_str.replace("Z", "+00:00"))
|
||||
else:
|
||||
ts = None
|
||||
except Exception:  # malformed or missing timestamp
|
||||
ts = None
|
||||
|
||||
# Pull from DB
|
||||
emitter = db.query(Emitter).filter(Emitter.id == uid).first()
|
||||
|
||||
if emitter:
|
||||
# Update existing
|
||||
emitter.last_seen = ts
|
||||
emitter.last_file = mlg_path
|
||||
emitter.status = status
|
||||
# Update unit_type if it was incorrectly classified
|
||||
detected_type = detect_unit_type(uid)
|
||||
if emitter.unit_type != detected_type:
|
||||
emitter.unit_type = detected_type
|
||||
# Optionally update notes with project hint if it exists
|
||||
if project_hint and not emitter.notes:
|
||||
emitter.notes = f"Project: {project_hint}"
|
||||
else:
|
||||
# Insert new - auto-detect unit type from ID
|
||||
detected_type = detect_unit_type(uid)
|
||||
notes = f"Project: {project_hint}" if project_hint else None
|
||||
emitter = Emitter(
|
||||
id=uid,
|
||||
unit_type=detected_type,
|
||||
last_seen=ts,
|
||||
last_file=mlg_path,
|
||||
status=status,
|
||||
notes=notes
|
||||
)
|
||||
db.add(emitter)
|
||||
|
||||
results.append({"unit": uid, "status": status})
|
||||
|
||||
db.commit()
|
||||
|
||||
return {
|
||||
"message": "Heartbeat processed",
|
||||
"source": source,
|
||||
"units_processed": len(results),
|
||||
"results": results
|
||||
}
|
||||
462
backend/services/alert_service.py
Normal file
@@ -0,0 +1,462 @@
|
||||
"""
|
||||
Alert Service
|
||||
|
||||
Manages in-app alerts for device status changes and system events.
|
||||
Provides foundation for future notification channels (email, webhook).
|
||||
"""
|
||||
|
||||
import json
|
||||
import uuid
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Optional, List, Dict, Any
|
||||
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import and_, or_
|
||||
|
||||
from backend.models import Alert, RosterUnit
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class AlertService:
|
||||
"""
|
||||
Service for managing alerts.
|
||||
|
||||
Handles alert lifecycle:
|
||||
- Create alerts from various triggers
|
||||
- Query active alerts
|
||||
- Acknowledge/resolve/dismiss alerts
|
||||
- (Future) Dispatch to notification channels
|
||||
"""
|
||||
|
||||
def __init__(self, db: Session):
|
||||
self.db = db
|
||||
|
||||
def create_alert(
|
||||
self,
|
||||
alert_type: str,
|
||||
title: str,
|
||||
message: str = None,
|
||||
severity: str = "warning",
|
||||
unit_id: str = None,
|
||||
project_id: str = None,
|
||||
location_id: str = None,
|
||||
schedule_id: str = None,
|
||||
metadata: dict = None,
|
||||
expires_hours: int = 24,
|
||||
) -> Alert:
|
||||
"""
|
||||
Create a new alert.
|
||||
|
||||
Args:
|
||||
alert_type: Type of alert (device_offline, device_online, schedule_failed)
|
||||
title: Short alert title
|
||||
message: Detailed description
|
||||
severity: info, warning, or critical
|
||||
unit_id: Related unit ID (optional)
|
||||
project_id: Related project ID (optional)
|
||||
location_id: Related location ID (optional)
|
||||
schedule_id: Related schedule ID (optional)
|
||||
metadata: Additional JSON data
|
||||
expires_hours: Hours until auto-expiry (default 24)
|
||||
|
||||
Returns:
|
||||
Created Alert instance
|
||||
"""
|
||||
alert = Alert(
|
||||
id=str(uuid.uuid4()),
|
||||
alert_type=alert_type,
|
||||
title=title,
|
||||
message=message,
|
||||
severity=severity,
|
||||
unit_id=unit_id,
|
||||
project_id=project_id,
|
||||
location_id=location_id,
|
||||
schedule_id=schedule_id,
|
||||
alert_metadata=json.dumps(metadata) if metadata else None,
|
||||
status="active",
|
||||
expires_at=datetime.utcnow() + timedelta(hours=expires_hours),
|
||||
)
|
||||
|
||||
self.db.add(alert)
|
||||
self.db.commit()
|
||||
self.db.refresh(alert)
|
||||
|
||||
logger.info(f"Created alert: {alert.title} ({alert.alert_type})")
|
||||
return alert
|
||||
|
||||
def create_device_offline_alert(
|
||||
self,
|
||||
unit_id: str,
|
||||
consecutive_failures: int = 0,
|
||||
last_error: str = None,
|
||||
) -> Optional[Alert]:
|
||||
"""
|
||||
Create alert when device becomes unreachable.
|
||||
|
||||
Only creates if no active offline alert exists for this device.
|
||||
|
||||
Args:
|
||||
unit_id: The unit that went offline
|
||||
consecutive_failures: Number of consecutive poll failures
|
||||
last_error: Last error message from polling
|
||||
|
||||
Returns:
|
||||
Created Alert or None if alert already exists
|
||||
"""
|
||||
# Check if active offline alert already exists
|
||||
existing = self.db.query(Alert).filter(
|
||||
and_(
|
||||
Alert.unit_id == unit_id,
|
||||
Alert.alert_type == "device_offline",
|
||||
Alert.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
if existing:
|
||||
logger.debug(f"Offline alert already exists for {unit_id}")
|
||||
return None
|
||||
|
||||
# Get unit info for title
|
||||
unit = self.db.query(RosterUnit).filter_by(id=unit_id).first()
|
||||
unit_name = unit.id if unit else unit_id
|
||||
|
||||
# Determine severity based on failure count
|
||||
severity = "critical" if consecutive_failures >= 5 else "warning"
|
||||
|
||||
return self.create_alert(
|
||||
alert_type="device_offline",
|
||||
title=f"{unit_name} is offline",
|
||||
message=f"Device has been unreachable after {consecutive_failures} failed connection attempts."
|
||||
+ (f" Last error: {last_error}" if last_error else ""),
|
||||
severity=severity,
|
||||
unit_id=unit_id,
|
||||
metadata={
|
||||
"consecutive_failures": consecutive_failures,
|
||||
"last_error": last_error,
|
||||
},
|
||||
expires_hours=48, # Offline alerts stay longer
|
||||
)
|
||||
|
||||
def resolve_device_offline_alert(self, unit_id: str) -> Optional[Alert]:
|
||||
"""
|
||||
Auto-resolve offline alert when device comes back online.
|
||||
|
||||
Also creates a "device_online" info alert to notify the user.
|
||||
|
||||
Args:
|
||||
unit_id: The unit that came back online
|
||||
|
||||
Returns:
|
||||
The resolved Alert or None if no alert existed
|
||||
"""
|
||||
# Find active offline alert
|
||||
alert = self.db.query(Alert).filter(
|
||||
and_(
|
||||
Alert.unit_id == unit_id,
|
||||
Alert.alert_type == "device_offline",
|
||||
Alert.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
if not alert:
|
||||
return None
|
||||
|
||||
# Resolve the offline alert
|
||||
alert.status = "resolved"
|
||||
alert.resolved_at = datetime.utcnow()
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Resolved offline alert for {unit_id}")
|
||||
|
||||
# Create online notification
|
||||
unit = self.db.query(RosterUnit).filter_by(id=unit_id).first()
|
||||
unit_name = unit.id if unit else unit_id
|
||||
|
||||
self.create_alert(
|
||||
alert_type="device_online",
|
||||
title=f"{unit_name} is back online",
|
||||
message="Device connection has been restored.",
|
||||
severity="info",
|
||||
unit_id=unit_id,
|
||||
expires_hours=6, # Info alerts expire quickly
|
||||
)
|
||||
|
||||
return alert
|
||||
|
||||
def create_schedule_failed_alert(
|
||||
self,
|
||||
schedule_id: str,
|
||||
action_type: str,
|
||||
unit_id: str = None,
|
||||
error_message: str = None,
|
||||
project_id: str = None,
|
||||
location_id: str = None,
|
||||
) -> Alert:
|
||||
"""
|
||||
Create alert when a scheduled action fails.
|
||||
|
||||
Args:
|
||||
schedule_id: The ScheduledAction or RecurringSchedule ID
|
||||
action_type: start, stop, download
|
||||
unit_id: Related unit
|
||||
error_message: Error from execution
|
||||
project_id: Related project
|
||||
location_id: Related location
|
||||
|
||||
Returns:
|
||||
Created Alert
|
||||
"""
|
||||
return self.create_alert(
|
||||
alert_type="schedule_failed",
|
||||
title=f"Scheduled {action_type} failed",
|
||||
message=error_message or f"The scheduled {action_type} action did not complete successfully.",
|
||||
severity="warning",
|
||||
unit_id=unit_id,
|
||||
project_id=project_id,
|
||||
location_id=location_id,
|
||||
schedule_id=schedule_id,
|
||||
metadata={"action_type": action_type},
|
||||
expires_hours=24,
|
||||
)
|
||||
|
||||
def create_schedule_completed_alert(
|
||||
self,
|
||||
schedule_id: str,
|
||||
action_type: str,
|
||||
unit_id: str = None,
|
||||
project_id: str = None,
|
||||
location_id: str = None,
|
||||
metadata: dict = None,
|
||||
) -> Alert:
|
||||
"""
|
||||
Create alert when a scheduled action completes successfully.
|
||||
|
||||
Args:
|
||||
schedule_id: The ScheduledAction ID
|
||||
action_type: start, stop, download
|
||||
unit_id: Related unit
|
||||
project_id: Related project
|
||||
location_id: Related location
|
||||
metadata: Additional info (e.g., downloaded folder, index numbers)
|
||||
|
||||
Returns:
|
||||
Created Alert
|
||||
"""
|
||||
# Build descriptive message based on action type and metadata
|
||||
if action_type == "stop" and metadata:
|
||||
download_folder = metadata.get("downloaded_folder")
|
||||
download_success = metadata.get("download_success", False)
|
||||
if download_success and download_folder:
|
||||
message = f"Measurement stopped and data downloaded ({download_folder})"
|
||||
elif download_success is False and metadata.get("download_attempted"):
|
||||
message = "Measurement stopped but download failed"
|
||||
else:
|
||||
message = "Measurement stopped successfully"
|
||||
elif action_type == "start" and metadata:
|
||||
new_index = metadata.get("new_index")
|
||||
if new_index is not None:
|
||||
message = f"Measurement started (index {new_index:04d})"
|
||||
else:
|
||||
message = "Measurement started successfully"
|
||||
else:
|
||||
message = f"Scheduled {action_type} completed successfully"
|
||||
|
||||
return self.create_alert(
|
||||
alert_type="schedule_completed",
|
||||
title=f"Scheduled {action_type} completed",
|
||||
message=message,
|
||||
severity="info",
|
||||
unit_id=unit_id,
|
||||
project_id=project_id,
|
||||
location_id=location_id,
|
||||
schedule_id=schedule_id,
|
||||
metadata={"action_type": action_type, **(metadata or {})},
|
||||
expires_hours=12, # Info alerts expire quickly
|
||||
)
|
||||
|
||||
def get_active_alerts(
|
||||
self,
|
||||
project_id: str = None,
|
||||
unit_id: str = None,
|
||||
alert_type: str = None,
|
||||
min_severity: str = None,
|
||||
limit: int = 50,
|
||||
) -> List[Alert]:
|
||||
"""
|
||||
Query active alerts with optional filters.
|
||||
|
||||
Args:
|
||||
project_id: Filter by project
|
||||
unit_id: Filter by unit
|
||||
alert_type: Filter by alert type
|
||||
min_severity: Minimum severity (info, warning, critical)
|
||||
limit: Maximum results
|
||||
|
||||
Returns:
|
||||
List of matching alerts
|
||||
"""
|
||||
query = self.db.query(Alert).filter(Alert.status == "active")
|
||||
|
||||
if project_id:
|
||||
query = query.filter(Alert.project_id == project_id)
|
||||
|
||||
if unit_id:
|
||||
query = query.filter(Alert.unit_id == unit_id)
|
||||
|
||||
if alert_type:
|
||||
query = query.filter(Alert.alert_type == alert_type)
|
||||
|
||||
if min_severity:
|
||||
# Map severity to numeric for comparison
|
||||
severity_levels = {"info": 1, "warning": 2, "critical": 3}
|
||||
min_level = severity_levels.get(min_severity, 1)
|
||||
|
||||
if min_level == 2:
|
||||
query = query.filter(Alert.severity.in_(["warning", "critical"]))
|
||||
elif min_level == 3:
|
||||
query = query.filter(Alert.severity == "critical")
|
||||
|
||||
return query.order_by(Alert.created_at.desc()).limit(limit).all()
|
||||
|
||||
def get_all_alerts(
|
||||
self,
|
||||
status: str = None,
|
||||
project_id: str = None,
|
||||
unit_id: str = None,
|
||||
alert_type: str = None,
|
||||
limit: int = 50,
|
||||
offset: int = 0,
|
||||
) -> List[Alert]:
|
||||
"""
|
||||
Query all alerts with optional filters (includes non-active).
|
||||
|
||||
Args:
|
||||
status: Filter by status (active, acknowledged, resolved, dismissed)
|
||||
project_id: Filter by project
|
||||
unit_id: Filter by unit
|
||||
alert_type: Filter by alert type
|
||||
limit: Maximum results
|
||||
offset: Pagination offset
|
||||
|
||||
Returns:
|
||||
List of matching alerts
|
||||
"""
|
||||
query = self.db.query(Alert)
|
||||
|
||||
if status:
|
||||
query = query.filter(Alert.status == status)
|
||||
|
||||
if project_id:
|
||||
query = query.filter(Alert.project_id == project_id)
|
||||
|
||||
if unit_id:
|
||||
query = query.filter(Alert.unit_id == unit_id)
|
||||
|
||||
if alert_type:
|
||||
query = query.filter(Alert.alert_type == alert_type)
|
||||
|
||||
return (
|
||||
query.order_by(Alert.created_at.desc())
|
||||
.offset(offset)
|
||||
.limit(limit)
|
||||
.all()
|
||||
)
|
||||
|
||||
def get_active_alert_count(self) -> int:
|
||||
"""Get count of active alerts for badge display."""
|
||||
return self.db.query(Alert).filter(Alert.status == "active").count()
|
||||
|
||||
def acknowledge_alert(self, alert_id: str) -> Optional[Alert]:
|
||||
"""
|
||||
Mark alert as acknowledged.
|
||||
|
||||
Args:
|
||||
alert_id: Alert to acknowledge
|
||||
|
||||
Returns:
|
||||
Updated Alert or None if not found
|
||||
"""
|
||||
alert = self.db.query(Alert).filter_by(id=alert_id).first()
|
||||
if not alert:
|
||||
return None
|
||||
|
||||
alert.status = "acknowledged"
|
||||
alert.acknowledged_at = datetime.utcnow()
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Acknowledged alert: {alert.title}")
|
||||
return alert
|
||||
|
||||
def dismiss_alert(self, alert_id: str) -> Optional[Alert]:
|
||||
"""
|
||||
Dismiss alert (user chose to ignore).
|
||||
|
||||
Args:
|
||||
alert_id: Alert to dismiss
|
||||
|
||||
Returns:
|
||||
Updated Alert or None if not found
|
||||
"""
|
||||
alert = self.db.query(Alert).filter_by(id=alert_id).first()
|
||||
if not alert:
|
||||
return None
|
||||
|
||||
alert.status = "dismissed"
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Dismissed alert: {alert.title}")
|
||||
return alert
|
||||
|
||||
def resolve_alert(self, alert_id: str) -> Optional[Alert]:
|
||||
"""
|
||||
Manually resolve an alert.
|
||||
|
||||
Args:
|
||||
alert_id: Alert to resolve
|
||||
|
||||
Returns:
|
||||
Updated Alert or None if not found
|
||||
"""
|
||||
alert = self.db.query(Alert).filter_by(id=alert_id).first()
|
||||
if not alert:
|
||||
return None
|
||||
|
||||
alert.status = "resolved"
|
||||
alert.resolved_at = datetime.utcnow()
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Resolved alert: {alert.title}")
|
||||
return alert
|
||||
|
||||
def cleanup_expired_alerts(self) -> int:
|
||||
"""
|
||||
Mark active alerts past their expiration time as dismissed.
|
||||
|
||||
Returns:
|
||||
Number of alerts cleaned up
|
||||
"""
|
||||
now = datetime.utcnow()
|
||||
expired = self.db.query(Alert).filter(
|
||||
and_(
|
||||
Alert.expires_at.isnot(None),
|
||||
Alert.expires_at < now,
|
||||
Alert.status == "active",
|
||||
)
|
||||
).all()
|
||||
|
||||
count = len(expired)
|
||||
for alert in expired:
|
||||
alert.status = "dismissed"
|
||||
|
||||
if count > 0:
|
||||
self.db.commit()
|
||||
logger.info(f"Cleaned up {count} expired alerts")
|
||||
|
||||
return count
|
||||
|
||||
|
||||
def get_alert_service(db: Session) -> AlertService:
|
||||
"""Get an AlertService instance with the given database session."""
|
||||
return AlertService(db)
|
||||
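A short usage sketch of the service, for example from the status poller; the session handling is simplified, SessionLocal is an assumed session factory in backend.database, and the unit ID is a placeholder.

# Minimal sketch: raising and clearing an offline alert (unit ID is a placeholder)
from backend.database import SessionLocal            # assumed session factory
from backend.services.alert_service import get_alert_service

db = SessionLocal()
try:
    alerts = get_alert_service(db)
    # Device failed 5 polls in a row -> critical offline alert (deduplicated per unit)
    alerts.create_device_offline_alert("NL43-001", consecutive_failures=5, last_error="timeout")
    # Later, when the device responds again -> resolve it and emit an info alert
    alerts.resolve_device_offline_alert("NL43-001")
finally:
    db.close()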
145
backend/services/backup_scheduler.py
Normal file
@@ -0,0 +1,145 @@
|
||||
"""
|
||||
Automatic Database Backup Scheduler
|
||||
Handles scheduled automatic backups of the database
|
||||
"""
|
||||
|
||||
import schedule
|
||||
import time
|
||||
import threading
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
import logging
|
||||
|
||||
from backend.services.database_backup import DatabaseBackupService
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class BackupScheduler:
|
||||
"""Manages automatic database backups on a schedule"""
|
||||
|
||||
def __init__(self, db_path: str = "./data/seismo_fleet.db", backups_dir: str = "./data/backups"):
|
||||
self.backup_service = DatabaseBackupService(db_path=db_path, backups_dir=backups_dir)
|
||||
self.scheduler_thread: Optional[threading.Thread] = None
|
||||
self.is_running = False
|
||||
|
||||
# Default settings
|
||||
self.backup_interval_hours = 24 # Daily backups
|
||||
self.keep_count = 10 # Keep last 10 backups
|
||||
self.enabled = False
|
||||
|
||||
def configure(self, interval_hours: int = 24, keep_count: int = 10, enabled: bool = True):
|
||||
"""
|
||||
Configure backup scheduler settings
|
||||
|
||||
Args:
|
||||
interval_hours: Hours between automatic backups
|
||||
keep_count: Number of backups to retain
|
||||
enabled: Whether automatic backups are enabled
|
||||
"""
|
||||
self.backup_interval_hours = interval_hours
|
||||
self.keep_count = keep_count
|
||||
self.enabled = enabled
|
||||
|
||||
logger.info(f"Backup scheduler configured: interval={interval_hours}h, keep={keep_count}, enabled={enabled}")
|
||||
|
||||
def create_automatic_backup(self):
|
||||
"""Create an automatic backup and cleanup old ones"""
|
||||
if not self.enabled:
|
||||
logger.info("Automatic backups are disabled, skipping")
|
||||
return
|
||||
|
||||
try:
|
||||
timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC")
|
||||
description = f"Automatic backup - {timestamp}"
|
||||
|
||||
logger.info("Creating automatic backup...")
|
||||
snapshot = self.backup_service.create_snapshot(description=description)
|
||||
|
||||
logger.info(f"Automatic backup created: {snapshot['filename']} ({snapshot['size_mb']} MB)")
|
||||
|
||||
# Cleanup old backups
|
||||
cleanup_result = self.backup_service.cleanup_old_snapshots(keep_count=self.keep_count)
|
||||
if cleanup_result['deleted'] > 0:
|
||||
logger.info(f"Cleaned up {cleanup_result['deleted']} old snapshots")
|
||||
|
||||
return snapshot
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Automatic backup failed: {str(e)}")
|
||||
return None
|
||||
|
||||
def start(self):
|
||||
"""Start the backup scheduler in a background thread"""
|
||||
if self.is_running:
|
||||
logger.warning("Backup scheduler is already running")
|
||||
return
|
||||
|
||||
if not self.enabled:
|
||||
logger.info("Backup scheduler is disabled, not starting")
|
||||
return
|
||||
|
||||
logger.info(f"Starting backup scheduler (every {self.backup_interval_hours} hours)")
|
||||
|
||||
# Clear any existing scheduled jobs
|
||||
schedule.clear()
|
||||
|
||||
# Schedule the backup job
|
||||
schedule.every(self.backup_interval_hours).hours.do(self.create_automatic_backup)
|
||||
|
||||
# Also run immediately on startup
|
||||
self.create_automatic_backup()
|
||||
|
||||
# Start the scheduler thread
|
||||
self.is_running = True
|
||||
self.scheduler_thread = threading.Thread(target=self._run_scheduler, daemon=True)
|
||||
self.scheduler_thread.start()
|
||||
|
||||
logger.info("Backup scheduler started successfully")
|
||||
|
||||
def _run_scheduler(self):
|
||||
"""Internal method to run the scheduler loop"""
|
||||
while self.is_running:
|
||||
schedule.run_pending()
|
||||
time.sleep(60) # Check every minute
|
||||
|
||||
def stop(self):
|
||||
"""Stop the backup scheduler"""
|
||||
if not self.is_running:
|
||||
logger.warning("Backup scheduler is not running")
|
||||
return
|
||||
|
||||
logger.info("Stopping backup scheduler...")
|
||||
self.is_running = False
|
||||
schedule.clear()
|
||||
|
||||
if self.scheduler_thread:
|
||||
self.scheduler_thread.join(timeout=5)
|
||||
|
||||
logger.info("Backup scheduler stopped")
|
||||
|
||||
def get_status(self) -> dict:
|
||||
"""Get current scheduler status"""
|
||||
next_run = None
|
||||
if self.is_running and schedule.jobs:
|
||||
next_run = schedule.jobs[0].next_run.isoformat() if schedule.jobs[0].next_run else None
|
||||
|
||||
return {
|
||||
"enabled": self.enabled,
|
||||
"running": self.is_running,
|
||||
"interval_hours": self.backup_interval_hours,
|
||||
"keep_count": self.keep_count,
|
||||
"next_run": next_run
|
||||
}
|
||||
|
||||
|
||||
# Global scheduler instance
|
||||
_scheduler_instance: Optional[BackupScheduler] = None
|
||||
|
||||
|
||||
def get_backup_scheduler() -> BackupScheduler:
|
||||
"""Get or create the global backup scheduler instance"""
|
||||
global _scheduler_instance
|
||||
if _scheduler_instance is None:
|
||||
_scheduler_instance = BackupScheduler()
|
||||
return _scheduler_instance
|
||||
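A sketch of how the scheduler might be wired up at application startup; the interval and retention values below are illustrative, not defaults mandated by the diff.

# Illustrative startup wiring (interval/retention values are examples)
from backend.services.backup_scheduler import get_backup_scheduler

scheduler = get_backup_scheduler()
scheduler.configure(interval_hours=24, keep_count=10, enabled=True)
scheduler.start()    # runs one backup immediately, then every 24 h in a daemon thread
# ... application runs ...
scheduler.stop()     # e.g. on application shutdown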
192
backend/services/database_backup.py
Normal file
@@ -0,0 +1,192 @@
|
||||
"""
|
||||
Database Backup and Restore Service
|
||||
Handles full database snapshots, restoration, and remote synchronization
|
||||
"""
|
||||
|
||||
import os
|
||||
import shutil
|
||||
import sqlite3
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from typing import List, Dict, Optional
|
||||
import json
|
||||
|
||||
|
||||
class DatabaseBackupService:
|
||||
"""Manages database backup operations"""
|
||||
|
||||
def __init__(self, db_path: str = "./data/seismo_fleet.db", backups_dir: str = "./data/backups"):
|
||||
self.db_path = Path(db_path)
|
||||
self.backups_dir = Path(backups_dir)
|
||||
self.backups_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
def create_snapshot(self, description: Optional[str] = None) -> Dict:
|
||||
"""
|
||||
Create a full database snapshot using SQLite backup API
|
||||
Returns snapshot metadata
|
||||
"""
|
||||
if not self.db_path.exists():
|
||||
raise FileNotFoundError(f"Database not found at {self.db_path}")
|
||||
|
||||
# Generate snapshot filename with timestamp
|
||||
timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
|
||||
snapshot_name = f"snapshot_{timestamp}.db"
|
||||
snapshot_path = self.backups_dir / snapshot_name
|
||||
|
||||
# Get database size before backup
|
||||
db_size = self.db_path.stat().st_size
|
||||
|
||||
try:
|
||||
# Use SQLite backup API for safe backup (handles concurrent access)
|
||||
source_conn = sqlite3.connect(str(self.db_path))
|
||||
dest_conn = sqlite3.connect(str(snapshot_path))
|
||||
|
||||
# Perform the backup
|
||||
with dest_conn:
|
||||
source_conn.backup(dest_conn)
|
||||
|
||||
source_conn.close()
|
||||
dest_conn.close()
|
||||
|
||||
# Create metadata
|
||||
metadata = {
|
||||
"filename": snapshot_name,
|
||||
"created_at": timestamp,
|
||||
"created_at_iso": datetime.utcnow().isoformat(),
|
||||
"description": description or "Manual snapshot",
|
||||
"size_bytes": snapshot_path.stat().st_size,
|
||||
"size_mb": round(snapshot_path.stat().st_size / (1024 * 1024), 2),
|
||||
"original_db_size_bytes": db_size,
|
||||
"type": "manual"
|
||||
}
|
||||
|
||||
# Save metadata as JSON sidecar file
|
||||
metadata_path = self.backups_dir / f"{snapshot_name}.meta.json"
|
||||
with open(metadata_path, 'w') as f:
|
||||
json.dump(metadata, f, indent=2)
|
||||
|
||||
return metadata
|
||||
|
||||
except Exception as e:
|
||||
# Clean up partial snapshot if it exists
|
||||
if snapshot_path.exists():
|
||||
snapshot_path.unlink()
|
||||
raise Exception(f"Snapshot creation failed: {str(e)}")
|
||||
|
||||
def list_snapshots(self) -> List[Dict]:
|
||||
"""
|
||||
List all available snapshots with metadata
|
||||
Returns list sorted by creation date (newest first)
|
||||
"""
|
||||
snapshots = []
|
||||
|
||||
for db_file in sorted(self.backups_dir.glob("snapshot_*.db"), reverse=True):
|
||||
metadata_file = self.backups_dir / f"{db_file.name}.meta.json"
|
||||
|
||||
if metadata_file.exists():
|
||||
with open(metadata_file, 'r') as f:
|
||||
metadata = json.load(f)
|
||||
else:
|
||||
# Fallback for legacy snapshots without metadata
|
||||
stat_info = db_file.stat()
|
||||
metadata = {
|
||||
"filename": db_file.name,
|
||||
"created_at": datetime.fromtimestamp(stat_info.st_mtime).strftime("%Y%m%d_%H%M%S"),
|
||||
"created_at_iso": datetime.fromtimestamp(stat_info.st_mtime).isoformat(),
|
||||
"description": "Legacy snapshot",
|
||||
"size_bytes": stat_info.st_size,
|
||||
"size_mb": round(stat_info.st_size / (1024 * 1024), 2),
|
||||
"type": "manual"
|
||||
}
|
||||
|
||||
snapshots.append(metadata)
|
||||
|
||||
return snapshots
|
||||
|
||||
def delete_snapshot(self, filename: str) -> bool:
|
||||
"""Delete a snapshot and its metadata"""
|
||||
snapshot_path = self.backups_dir / filename
|
||||
metadata_path = self.backups_dir / f"{filename}.meta.json"
|
||||
|
||||
if not snapshot_path.exists():
|
||||
raise FileNotFoundError(f"Snapshot {filename} not found")
|
||||
|
||||
snapshot_path.unlink()
|
||||
if metadata_path.exists():
|
||||
metadata_path.unlink()
|
||||
|
||||
return True
|
||||
|
||||
def restore_snapshot(self, filename: str, create_backup_before_restore: bool = True) -> Dict:
|
||||
"""
|
||||
Restore database from a snapshot
|
||||
Creates a safety backup before restoring if requested
|
||||
"""
|
||||
snapshot_path = self.backups_dir / filename
|
||||
|
||||
if not snapshot_path.exists():
|
||||
raise FileNotFoundError(f"Snapshot {filename} not found")
|
||||
|
||||
if not self.db_path.exists():
|
||||
raise FileNotFoundError(f"Database not found at {self.db_path}")
|
||||
|
||||
backup_info = None
|
||||
|
||||
# Create safety backup before restore
|
||||
if create_backup_before_restore:
|
||||
backup_info = self.create_snapshot(description="Auto-backup before restore")
|
||||
|
||||
try:
|
||||
# Replace database file
|
||||
shutil.copy2(str(snapshot_path), str(self.db_path))
|
||||
|
||||
return {
|
||||
"message": "Database restored successfully",
|
||||
"restored_from": filename,
|
||||
"restored_at": datetime.utcnow().isoformat(),
|
||||
"backup_created": backup_info["filename"] if backup_info else None
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
raise Exception(f"Restore failed: {str(e)}")
|
||||
|
||||
def get_database_stats(self) -> Dict:
|
||||
"""Get statistics about the current database"""
|
||||
if not self.db_path.exists():
|
||||
return {"error": "Database not found"}
|
||||
|
||||
conn = sqlite3.connect(str(self.db_path))
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Get table counts
|
||||
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")
|
||||
tables = cursor.fetchall()
|
||||
|
||||
table_stats = {}
|
||||
total_rows = 0
|
||||
|
||||
for (table_name,) in tables:
|
||||
cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
|
||||
count = cursor.fetchone()[0]
|
||||
table_stats[table_name] = count
|
||||
total_rows += count
|
||||
|
||||
conn.close()
|
||||
|
||||
db_size = self.db_path.stat().st_size
|
||||
|
||||
return {
|
||||
"database_path": str(self.db_path),
|
||||
"size_bytes": db_size,
|
||||
"size_mb": round(db_size / (1024 * 1024), 2),
|
||||
"total_rows": total_rows,
|
||||
"tables": table_stats,
|
||||
"last_modified": datetime.fromtimestamp(self.db_path.stat().st_mtime).isoformat()
|
||||
}
|
||||
|
||||
def download_snapshot(self, filename: str) -> Path:
|
||||
"""Get the file path for downloading a snapshot"""
|
||||
snapshot_path = self.backups_dir / filename
|
||||
if not snapshot_path.exists():
|
||||
raise FileNotFoundError(f"Snapshot {filename} not found")
|
||||
return snapshot_path
|
||||
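And a quick sketch of using the backup service directly; the paths are the defaults from __init__ and the description string is arbitrary.

# Minimal sketch: create, list and restore snapshots (description is arbitrary)
from backend.services.database_backup import DatabaseBackupService

svc = DatabaseBackupService()               # defaults: ./data/seismo_fleet.db, ./data/backups
meta = svc.create_snapshot(description="Before schema migration")
print(meta["filename"], meta["size_mb"], "MB")

latest = svc.list_snapshots()[0]            # newest first
svc.restore_snapshot(latest["filename"])    # makes a safety backup first by default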
535
backend/services/device_controller.py
Normal file
@@ -0,0 +1,535 @@
|
||||
"""
|
||||
Device Controller Service
|
||||
|
||||
Routes device operations to the appropriate backend module:
|
||||
- SLMM for sound level meters
|
||||
- SFM for seismographs (future implementation)
|
||||
|
||||
This abstraction allows Projects system to work with any device type
|
||||
without knowing the underlying communication protocol.
|
||||
"""
|
||||
|
||||
from typing import Dict, Any, Optional, List
|
||||
from backend.services.slmm_client import get_slmm_client, SLMMClientError
|
||||
|
||||
|
||||
class DeviceControllerError(Exception):
|
||||
"""Base exception for device controller errors."""
|
||||
pass
|
||||
|
||||
|
||||
class UnsupportedDeviceTypeError(DeviceControllerError):
|
||||
"""Raised when device type is not supported."""
|
||||
pass
|
||||
|
||||
|
||||
class DeviceController:
|
||||
"""
|
||||
Unified interface for controlling all device types.
|
||||
|
||||
Routes commands to appropriate backend module based on device_type.
|
||||
|
||||
Usage:
|
||||
controller = DeviceController()
|
||||
await controller.start_recording("nl43-001", "slm", config={})
|
||||
await controller.stop_recording("seismo-042", "seismograph")
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
self.slmm_client = get_slmm_client()
|
||||
|
||||
# ========================================================================
|
||||
# Recording Control
|
||||
# ========================================================================
|
||||
|
||||
async def start_recording(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
config: Optional[Dict[str, Any]] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Start recording on a device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
config: Device-specific recording configuration
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
|
||||
Raises:
|
||||
UnsupportedDeviceTypeError: Device type not supported
|
||||
DeviceControllerError: Operation failed
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.start_recording(unit_id, config)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# TODO: Implement SFM client for seismograph control
|
||||
# For now, return a placeholder response
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph recording control not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(
|
||||
f"Device type '{device_type}' is not supported. "
|
||||
f"Supported types: slm, seismograph"
|
||||
)
|
||||
|
||||
async def stop_recording(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Stop recording on a device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.stop_recording(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# TODO: Implement SFM client
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph recording control not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
async def pause_recording(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Pause recording on a device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.pause_recording(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph pause not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
async def resume_recording(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Resume paused recording on a device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.resume_recording(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph resume not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Status & Monitoring
|
||||
# ========================================================================
|
||||
|
||||
async def get_device_status(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Get current device status.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Status dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.get_unit_status(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# TODO: Implement SFM status check
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph status not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
async def get_live_data(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Get live data from device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Live data dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.get_live_data(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph live data not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Data Download
|
||||
# ========================================================================
|
||||
|
||||
async def download_files(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
destination_path: str,
|
||||
files: Optional[List[str]] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Download data files from device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
destination_path: Local path to save files
|
||||
files: List of filenames, or None for all
|
||||
|
||||
Returns:
|
||||
Download result with file list
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.download_files(
|
||||
unit_id,
|
||||
destination_path,
|
||||
files,
|
||||
)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# TODO: Implement SFM file download
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph file download not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Device Configuration
|
||||
# ========================================================================
|
||||
|
||||
async def update_device_config(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
config: Dict[str, Any],
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Update device configuration.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
config: Configuration parameters
|
||||
|
||||
Returns:
|
||||
Updated config from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.update_unit_config(
|
||||
unit_id,
|
||||
host=config.get("host"),
|
||||
tcp_port=config.get("tcp_port"),
|
||||
ftp_port=config.get("ftp_port"),
|
||||
ftp_username=config.get("ftp_username"),
|
||||
ftp_password=config.get("ftp_password"),
|
||||
)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph config update not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Store/Index Management
|
||||
# ========================================================================
|
||||
|
||||
async def increment_index(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Increment the store/index number on a device.
|
||||
|
||||
For SLMs, this increments the store name to prevent "overwrite data?" prompts.
|
||||
Should be called before starting a new measurement if auto_increment_index is enabled.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Response dict with old_index and new_index
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.increment_index(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# Seismographs may not have the same concept of store index
|
||||
return {
|
||||
"status": "not_applicable",
|
||||
"message": "Index increment not applicable for seismographs",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
async def get_index_number(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Get current store/index number from device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
Response dict with current index_number
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.get_index_number(unit_id)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_applicable",
|
||||
"message": "Index number not applicable for seismographs",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Cycle Commands (for scheduled automation)
|
||||
# ========================================================================
|
||||
|
||||
async def start_cycle(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
sync_clock: bool = True,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute complete start cycle for scheduled automation.
|
||||
|
||||
This handles the full pre-recording workflow:
|
||||
1. Sync device clock to server time
|
||||
2. Find next safe index (with overwrite protection)
|
||||
3. Start measurement
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
sync_clock: Whether to sync device clock to server time
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.start_cycle(unit_id, sync_clock)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph start cycle not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
async def stop_cycle(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
download: bool = True,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute complete stop cycle for scheduled automation.
|
||||
|
||||
This handles the full post-recording workflow:
|
||||
1. Stop measurement
|
||||
2. Enable FTP
|
||||
3. Download measurement folder
|
||||
4. Verify download
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
download: Whether to download measurement data
|
||||
|
||||
Returns:
|
||||
Response dict from device module
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
return await self.slmm_client.stop_cycle(unit_id, download)
|
||||
except SLMMClientError as e:
|
||||
raise DeviceControllerError(f"SLMM error: {str(e)}")
|
||||
|
||||
elif device_type == "seismograph":
|
||||
return {
|
||||
"status": "not_implemented",
|
||||
"message": "Seismograph stop cycle not yet implemented",
|
||||
"unit_id": unit_id,
|
||||
}
|
||||
|
||||
else:
|
||||
raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}")
|
||||
|
||||
# ========================================================================
|
||||
# Health Check
|
||||
# ========================================================================
|
||||
|
||||
async def check_device_connectivity(
|
||||
self,
|
||||
unit_id: str,
|
||||
device_type: str,
|
||||
) -> bool:
|
||||
"""
|
||||
Check if device is reachable.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
device_type: "slm" | "seismograph"
|
||||
|
||||
Returns:
|
||||
True if device is reachable, False otherwise
|
||||
"""
|
||||
if device_type == "slm":
|
||||
try:
|
||||
status = await self.slmm_client.get_unit_status(unit_id)
|
||||
return status.get("last_seen") is not None
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
elif device_type == "seismograph":
|
||||
# TODO: Implement SFM connectivity check
|
||||
return False
|
||||
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
# Singleton instance
|
||||
_default_controller: Optional[DeviceController] = None
|
||||
|
||||
|
||||
def get_device_controller() -> DeviceController:
|
||||
"""
|
||||
Get the default device controller instance.
|
||||
|
||||
Returns:
|
||||
DeviceController instance
|
||||
"""
|
||||
global _default_controller
|
||||
if _default_controller is None:
|
||||
_default_controller = DeviceController()
|
||||
return _default_controller
|
||||
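A minimal usage sketch for the controller above, assuming an SLM unit registered in SLMM as "nl43-001":

```python
import asyncio

from backend.services.device_controller import get_device_controller, DeviceControllerError


async def demo():
    controller = get_device_controller()
    try:
        # Full pre-recording workflow: clock sync, safe index selection, start measurement
        result = await controller.start_cycle("nl43-001", "slm", sync_clock=True)
        print(result)
    except DeviceControllerError as e:
        print(f"Device operation failed: {e}")


asyncio.run(demo())
```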
184
backend/services/device_status_monitor.py
Normal file
@@ -0,0 +1,184 @@
|
||||
"""
|
||||
Device Status Monitor
|
||||
|
||||
Background task that monitors device reachability via SLMM polling status
|
||||
and triggers alerts when devices go offline or come back online.
|
||||
|
||||
This service bridges SLMM's device polling with Terra-View's alert system.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from typing import Optional, Dict
|
||||
|
||||
from backend.database import SessionLocal
|
||||
from backend.services.slmm_client import get_slmm_client, SLMMClientError
|
||||
from backend.services.alert_service import get_alert_service
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class DeviceStatusMonitor:
|
||||
"""
|
||||
Monitors device reachability via SLMM's polling status endpoint.
|
||||
|
||||
Detects state transitions (online→offline, offline→online) and
|
||||
triggers AlertService to create/resolve alerts.
|
||||
|
||||
Usage:
|
||||
monitor = DeviceStatusMonitor()
|
||||
await monitor.start() # Start background monitoring
|
||||
monitor.stop() # Stop monitoring
|
||||
"""
|
||||
|
||||
def __init__(self, check_interval: int = 60):
|
||||
"""
|
||||
Initialize the monitor.
|
||||
|
||||
Args:
|
||||
check_interval: Seconds between status checks (default: 60)
|
||||
"""
|
||||
self.check_interval = check_interval
|
||||
self.running = False
|
||||
self.task: Optional[asyncio.Task] = None
|
||||
self.slmm_client = get_slmm_client()
|
||||
|
||||
# Track previous device states to detect transitions
|
||||
self._device_states: Dict[str, bool] = {}
|
||||
|
||||
async def start(self):
|
||||
"""Start the monitoring background task."""
|
||||
if self.running:
|
||||
logger.warning("DeviceStatusMonitor is already running")
|
||||
return
|
||||
|
||||
self.running = True
|
||||
self.task = asyncio.create_task(self._monitor_loop())
|
||||
logger.info(f"DeviceStatusMonitor started (checking every {self.check_interval}s)")
|
||||
|
||||
def stop(self):
|
||||
"""Stop the monitoring background task."""
|
||||
self.running = False
|
||||
if self.task:
|
||||
self.task.cancel()
|
||||
logger.info("DeviceStatusMonitor stopped")
|
||||
|
||||
async def _monitor_loop(self):
|
||||
"""Main monitoring loop."""
|
||||
while self.running:
|
||||
try:
|
||||
await self._check_all_devices()
|
||||
except Exception as e:
|
||||
logger.error(f"Error in device status monitor: {e}", exc_info=True)
|
||||
|
||||
# Sleep in small intervals for graceful shutdown
|
||||
for _ in range(self.check_interval):
|
||||
if not self.running:
|
||||
break
|
||||
await asyncio.sleep(1)
|
||||
|
||||
logger.info("DeviceStatusMonitor loop exited")
|
||||
|
||||
async def _check_all_devices(self):
|
||||
"""
|
||||
Fetch polling status from SLMM and detect state transitions.
|
||||
|
||||
Uses GET /api/slmm/_polling/status (proxied to SLMM)
|
||||
"""
|
||||
try:
|
||||
# Get status from SLMM
|
||||
status_response = await self.slmm_client.get_polling_status()
|
||||
devices = status_response.get("devices", [])
|
||||
|
||||
if not devices:
|
||||
logger.debug("No devices in polling status response")
|
||||
return
|
||||
|
||||
db = SessionLocal()
|
||||
try:
|
||||
alert_service = get_alert_service(db)
|
||||
|
||||
for device in devices:
|
||||
unit_id = device.get("unit_id")
|
||||
if not unit_id:
|
||||
continue
|
||||
|
||||
is_reachable = device.get("is_reachable", True)
|
||||
previous_reachable = self._device_states.get(unit_id)
|
||||
|
||||
# Skip if this is the first check (no previous state)
|
||||
if previous_reachable is None:
|
||||
self._device_states[unit_id] = is_reachable
|
||||
logger.debug(f"Initial state for {unit_id}: reachable={is_reachable}")
|
||||
continue
|
||||
|
||||
# Detect offline transition (was online, now offline)
|
||||
if previous_reachable and not is_reachable:
|
||||
logger.warning(f"Device {unit_id} went OFFLINE")
|
||||
alert_service.create_device_offline_alert(
|
||||
unit_id=unit_id,
|
||||
consecutive_failures=device.get("consecutive_failures", 0),
|
||||
last_error=device.get("last_error"),
|
||||
)
|
||||
|
||||
# Detect online transition (was offline, now online)
|
||||
elif not previous_reachable and is_reachable:
|
||||
logger.info(f"Device {unit_id} came back ONLINE")
|
||||
alert_service.resolve_device_offline_alert(unit_id)
|
||||
|
||||
# Update tracked state
|
||||
self._device_states[unit_id] = is_reachable
|
||||
|
||||
# Cleanup expired alerts while we're here
|
||||
alert_service.cleanup_expired_alerts()
|
||||
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
except SLMMClientError as e:
|
||||
logger.warning(f"Could not reach SLMM for status check: {e}")
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking device status: {e}", exc_info=True)
|
||||
|
||||
def get_tracked_devices(self) -> Dict[str, bool]:
|
||||
"""
|
||||
Get the current tracked device states.
|
||||
|
||||
Returns:
|
||||
Dict mapping unit_id to is_reachable status
|
||||
"""
|
||||
return dict(self._device_states)
|
||||
|
||||
def clear_tracked_devices(self):
|
||||
"""Clear all tracked device states (useful for testing)."""
|
||||
self._device_states.clear()
|
||||
|
||||
|
||||
# Singleton instance
|
||||
_monitor_instance: Optional[DeviceStatusMonitor] = None
|
||||
|
||||
|
||||
def get_device_status_monitor() -> DeviceStatusMonitor:
|
||||
"""
|
||||
Get the device status monitor singleton instance.
|
||||
|
||||
Returns:
|
||||
DeviceStatusMonitor instance
|
||||
"""
|
||||
global _monitor_instance
|
||||
if _monitor_instance is None:
|
||||
_monitor_instance = DeviceStatusMonitor()
|
||||
return _monitor_instance
|
||||
|
||||
|
||||
async def start_device_status_monitor():
|
||||
"""Start the global device status monitor."""
|
||||
monitor = get_device_status_monitor()
|
||||
await monitor.start()
|
||||
|
||||
|
||||
def stop_device_status_monitor():
|
||||
"""Stop the global device status monitor."""
|
||||
monitor = get_device_status_monitor()
|
||||
monitor.stop()
|
||||
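A sketch of one way to wire the monitor into the FastAPI app lifecycle; the hook names are illustrative and the actual startup code may differ:

```python
from fastapi import FastAPI

from backend.services.device_status_monitor import (
    start_device_status_monitor,
    stop_device_status_monitor,
)

app = FastAPI()


@app.on_event("startup")
async def _start_monitor():
    # Begins the background loop that polls SLMM and raises/resolves offline alerts
    await start_device_status_monitor()


@app.on_event("shutdown")
async def _stop_monitor():
    stop_device_status_monitor()
```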
559
backend/services/recurring_schedule_service.py
Normal file
@@ -0,0 +1,559 @@
|
||||
"""
|
||||
Recurring Schedule Service
|
||||
|
||||
Manages recurring schedule definitions and generates ScheduledAction
|
||||
instances based on patterns (weekly calendar, simple interval).
|
||||
"""
|
||||
|
||||
import json
|
||||
import uuid
|
||||
import logging
|
||||
from datetime import datetime, timedelta, date, time
|
||||
from typing import Optional, List, Dict, Any, Tuple
|
||||
from zoneinfo import ZoneInfo
|
||||
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import and_
|
||||
|
||||
from backend.models import RecurringSchedule, ScheduledAction, MonitoringLocation, UnitAssignment
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Day name mapping
|
||||
DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]
|
||||
|
||||
|
||||
class RecurringScheduleService:
|
||||
"""
|
||||
Service for managing recurring schedules and generating ScheduledActions.
|
||||
|
||||
Supports two schedule types:
|
||||
- weekly_calendar: Specific days with start/end times
|
||||
- simple_interval: Daily stop/download/restart cycles for 24/7 monitoring
|
||||
"""
|
||||
|
||||
def __init__(self, db: Session):
|
||||
self.db = db
|
||||
|
||||
def create_schedule(
|
||||
self,
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
name: str,
|
||||
schedule_type: str,
|
||||
device_type: str = "slm",
|
||||
unit_id: str = None,
|
||||
weekly_pattern: dict = None,
|
||||
interval_type: str = None,
|
||||
cycle_time: str = None,
|
||||
include_download: bool = True,
|
||||
auto_increment_index: bool = True,
|
||||
timezone: str = "America/New_York",
|
||||
) -> RecurringSchedule:
|
||||
"""
|
||||
Create a new recurring schedule.
|
||||
|
||||
Args:
|
||||
project_id: Project ID
|
||||
location_id: Monitoring location ID
|
||||
name: Schedule name
|
||||
schedule_type: "weekly_calendar" or "simple_interval"
|
||||
device_type: "slm" or "seismograph"
|
||||
unit_id: Specific unit (optional, can use assignment)
|
||||
weekly_pattern: Dict of day patterns for weekly_calendar
|
||||
interval_type: "daily" or "hourly" for simple_interval
|
||||
cycle_time: Time string "HH:MM" for cycle
|
||||
include_download: Whether to download data on cycle
|
||||
auto_increment_index: Whether to auto-increment store index before start
|
||||
timezone: Timezone for schedule times
|
||||
|
||||
Returns:
|
||||
Created RecurringSchedule
|
||||
"""
|
||||
schedule = RecurringSchedule(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=project_id,
|
||||
location_id=location_id,
|
||||
unit_id=unit_id,
|
||||
name=name,
|
||||
schedule_type=schedule_type,
|
||||
device_type=device_type,
|
||||
weekly_pattern=json.dumps(weekly_pattern) if weekly_pattern else None,
|
||||
interval_type=interval_type,
|
||||
cycle_time=cycle_time,
|
||||
include_download=include_download,
|
||||
auto_increment_index=auto_increment_index,
|
||||
enabled=True,
|
||||
timezone=timezone,
|
||||
)
|
||||
|
||||
# Calculate next occurrence
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.add(schedule)
|
||||
self.db.commit()
|
||||
self.db.refresh(schedule)
|
||||
|
||||
logger.info(f"Created recurring schedule: {name} ({schedule_type})")
|
||||
return schedule
|
||||
|
||||
def update_schedule(
|
||||
self,
|
||||
schedule_id: str,
|
||||
**kwargs,
|
||||
) -> Optional[RecurringSchedule]:
|
||||
"""
|
||||
Update a recurring schedule.
|
||||
|
||||
Args:
|
||||
schedule_id: Schedule to update
|
||||
**kwargs: Fields to update
|
||||
|
||||
Returns:
|
||||
Updated schedule or None
|
||||
"""
|
||||
schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first()
|
||||
if not schedule:
|
||||
return None
|
||||
|
||||
for key, value in kwargs.items():
|
||||
if hasattr(schedule, key):
|
||||
if key == "weekly_pattern" and isinstance(value, dict):
|
||||
value = json.dumps(value)
|
||||
setattr(schedule, key, value)
|
||||
|
||||
# Recalculate next occurrence
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.commit()
|
||||
self.db.refresh(schedule)
|
||||
|
||||
logger.info(f"Updated recurring schedule: {schedule.name}")
|
||||
return schedule
|
||||
|
||||
def delete_schedule(self, schedule_id: str) -> bool:
|
||||
"""
|
||||
Delete a recurring schedule and its pending generated actions.
|
||||
|
||||
Args:
|
||||
schedule_id: Schedule to delete
|
||||
|
||||
Returns:
|
||||
True if deleted, False if not found
|
||||
"""
|
||||
schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first()
|
||||
if not schedule:
|
||||
return False
|
||||
|
||||
# Delete pending generated actions for this schedule
|
||||
# The schedule_id is stored in the notes field as JSON
|
||||
pending_actions = self.db.query(ScheduledAction).filter(
|
||||
and_(
|
||||
ScheduledAction.execution_status == "pending",
|
||||
ScheduledAction.notes.like(f'%"schedule_id": "{schedule_id}"%'),
|
||||
)
|
||||
).all()
|
||||
|
||||
deleted_count = len(pending_actions)
|
||||
for action in pending_actions:
|
||||
self.db.delete(action)
|
||||
|
||||
self.db.delete(schedule)
|
||||
self.db.commit()
|
||||
|
||||
logger.info(f"Deleted recurring schedule: {schedule.name} (and {deleted_count} pending actions)")
|
||||
return True
|
||||
|
||||
def enable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]:
|
||||
"""Enable a disabled schedule."""
|
||||
return self.update_schedule(schedule_id, enabled=True)
|
||||
|
||||
def disable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]:
|
||||
"""Disable a schedule."""
|
||||
return self.update_schedule(schedule_id, enabled=False)
|
||||
|
||||
def generate_actions_for_schedule(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int = 7,
|
||||
preview_only: bool = False,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate ScheduledAction entries for the next N days based on pattern.
|
||||
|
||||
Args:
|
||||
schedule: The recurring schedule
|
||||
horizon_days: Days ahead to generate
|
||||
preview_only: If True, don't save to DB (for preview)
|
||||
|
||||
Returns:
|
||||
List of generated ScheduledAction instances
|
||||
"""
|
||||
if not schedule.enabled:
|
||||
return []
|
||||
|
||||
if schedule.schedule_type == "weekly_calendar":
|
||||
actions = self._generate_weekly_calendar_actions(schedule, horizon_days)
|
||||
elif schedule.schedule_type == "simple_interval":
|
||||
actions = self._generate_interval_actions(schedule, horizon_days)
|
||||
else:
|
||||
logger.warning(f"Unknown schedule type: {schedule.schedule_type}")
|
||||
return []
|
||||
|
||||
if not preview_only and actions:
|
||||
for action in actions:
|
||||
self.db.add(action)
|
||||
|
||||
schedule.last_generated_at = datetime.utcnow()
|
||||
schedule.next_occurrence = self._calculate_next_occurrence(schedule)
|
||||
|
||||
self.db.commit()
|
||||
logger.info(f"Generated {len(actions)} actions for schedule: {schedule.name}")
|
||||
|
||||
return actions
|
||||
|
||||
def _generate_weekly_calendar_actions(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate actions from weekly calendar pattern.
|
||||
|
||||
Pattern format:
|
||||
{
|
||||
"monday": {"enabled": true, "start": "19:00", "end": "07:00"},
|
||||
"tuesday": {"enabled": false},
|
||||
...
|
||||
}
|
||||
"""
|
||||
if not schedule.weekly_pattern:
|
||||
return []
|
||||
|
||||
try:
|
||||
pattern = json.loads(schedule.weekly_pattern)
|
||||
except json.JSONDecodeError:
|
||||
logger.error(f"Invalid weekly_pattern JSON for schedule {schedule.id}")
|
||||
return []
|
||||
|
||||
actions = []
|
||||
tz = ZoneInfo(schedule.timezone)
|
||||
now_utc = datetime.utcnow()
|
||||
now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)
|
||||
|
||||
# Get unit_id (from schedule or assignment)
|
||||
unit_id = self._resolve_unit_id(schedule)
|
||||
|
||||
for day_offset in range(horizon_days):
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
day_name = DAY_NAMES[check_date.weekday()]
|
||||
day_config = pattern.get(day_name, {})
|
||||
|
||||
if not day_config.get("enabled", False):
|
||||
continue
|
||||
|
||||
start_time_str = day_config.get("start")
|
||||
end_time_str = day_config.get("end")
|
||||
|
||||
if not start_time_str or not end_time_str:
|
||||
continue
|
||||
|
||||
# Parse times
|
||||
start_time = self._parse_time(start_time_str)
|
||||
end_time = self._parse_time(end_time_str)
|
||||
|
||||
if not start_time or not end_time:
|
||||
continue
|
||||
|
||||
# Create start datetime in local timezone
|
||||
start_local = datetime.combine(check_date, start_time, tzinfo=tz)
|
||||
start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Handle overnight schedules (end time is next day)
|
||||
if end_time <= start_time:
|
||||
end_date = check_date + timedelta(days=1)
|
||||
else:
|
||||
end_date = check_date
|
||||
|
||||
end_local = datetime.combine(end_date, end_time, tzinfo=tz)
|
||||
end_utc = end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Skip if start time has already passed
|
||||
if start_utc <= now_utc:
|
||||
continue
|
||||
|
||||
# Check if action already exists
|
||||
if self._action_exists(schedule.project_id, schedule.location_id, "start", start_utc):
|
||||
continue
|
||||
|
||||
# Build notes with automation metadata
|
||||
start_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"auto_increment_index": schedule.auto_increment_index,
|
||||
})
|
||||
|
||||
# Create START action
|
||||
start_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="start",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=start_utc,
|
||||
execution_status="pending",
|
||||
notes=start_notes,
|
||||
)
|
||||
actions.append(start_action)
|
||||
|
||||
# Create STOP action
|
||||
stop_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
})
|
||||
stop_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="stop",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=end_utc,
|
||||
execution_status="pending",
|
||||
notes=stop_notes,
|
||||
)
|
||||
actions.append(stop_action)
|
||||
|
||||
# Create DOWNLOAD action if enabled (1 minute after stop)
|
||||
if schedule.include_download:
|
||||
download_time = end_utc + timedelta(minutes=1)
|
||||
download_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"schedule_type": "weekly_calendar",
|
||||
})
|
||||
download_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="download",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=download_time,
|
||||
execution_status="pending",
|
||||
notes=download_notes,
|
||||
)
|
||||
actions.append(download_action)
|
||||
|
||||
return actions
|
||||
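# Example (illustrative): a "monday" entry of {"enabled": true, "start": "19:00", "end": "07:00"}
# yields a start action Monday 19:00 local time, a stop action Tuesday 07:00 (overnight rollover),
# and, when include_download is enabled, a download action at Tuesday 07:01.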
|
||||
def _generate_interval_actions(
|
||||
self,
|
||||
schedule: RecurringSchedule,
|
||||
horizon_days: int,
|
||||
) -> List[ScheduledAction]:
|
||||
"""
|
||||
Generate actions from simple interval pattern.
|
||||
|
||||
For daily cycles: stop, download (optional), start at cycle_time each day.
|
||||
"""
|
||||
if not schedule.cycle_time:
|
||||
return []
|
||||
|
||||
cycle_time = self._parse_time(schedule.cycle_time)
|
||||
if not cycle_time:
|
||||
return []
|
||||
|
||||
actions = []
|
||||
tz = ZoneInfo(schedule.timezone)
|
||||
now_utc = datetime.utcnow()
|
||||
now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)
|
||||
|
||||
# Get unit_id
|
||||
unit_id = self._resolve_unit_id(schedule)
|
||||
|
||||
for day_offset in range(horizon_days):
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
|
||||
# Create cycle datetime in local timezone
|
||||
cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz)
|
||||
cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
# Skip if time has passed
|
||||
if cycle_utc <= now_utc:
|
||||
continue
|
||||
|
||||
# Check if action already exists
|
||||
if self._action_exists(schedule.project_id, schedule.location_id, "stop", cycle_utc):
|
||||
continue
|
||||
|
||||
# Build notes with metadata
|
||||
stop_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"cycle_type": "daily",
|
||||
})
|
||||
|
||||
# Create STOP action
|
||||
stop_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="stop",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=cycle_utc,
|
||||
execution_status="pending",
|
||||
notes=stop_notes,
|
||||
)
|
||||
actions.append(stop_action)
|
||||
|
||||
# Create DOWNLOAD action if enabled (1 minute after stop)
|
||||
if schedule.include_download:
|
||||
download_time = cycle_utc + timedelta(minutes=1)
|
||||
download_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"cycle_type": "daily",
|
||||
})
|
||||
download_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="download",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=download_time,
|
||||
execution_status="pending",
|
||||
notes=download_notes,
|
||||
)
|
||||
actions.append(download_action)
|
||||
|
||||
# Create START action (2 minutes after stop, or 1 minute after download)
|
||||
start_offset = 2 if schedule.include_download else 1
|
||||
start_time = cycle_utc + timedelta(minutes=start_offset)
|
||||
start_notes = json.dumps({
|
||||
"schedule_name": schedule.name,
|
||||
"schedule_id": schedule.id,
|
||||
"cycle_type": "daily",
|
||||
"auto_increment_index": schedule.auto_increment_index,
|
||||
})
|
||||
start_action = ScheduledAction(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=schedule.project_id,
|
||||
location_id=schedule.location_id,
|
||||
unit_id=unit_id,
|
||||
action_type="start",
|
||||
device_type=schedule.device_type,
|
||||
scheduled_time=start_time,
|
||||
execution_status="pending",
|
||||
notes=start_notes,
|
||||
)
|
||||
actions.append(start_action)
|
||||
|
||||
return actions
|
||||
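# Example (illustrative): a daily cycle with include_download enabled and cycle_time "00:00"
# produces three actions per day:
#   stop      at 00:00
#   download  at 00:01
#   start     at 00:02  (auto_increment_index carried in the notes metadata)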
|
||||
def _calculate_next_occurrence(self, schedule: RecurringSchedule) -> Optional[datetime]:
|
||||
"""Calculate when the next action should occur."""
|
||||
if not schedule.enabled:
|
||||
return None
|
||||
|
||||
tz = ZoneInfo(schedule.timezone)
|
||||
now_utc = datetime.utcnow()
|
||||
now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz)
|
||||
|
||||
if schedule.schedule_type == "weekly_calendar" and schedule.weekly_pattern:
|
||||
try:
|
||||
pattern = json.loads(schedule.weekly_pattern)
|
||||
except json.JSONDecodeError:
|
||||
return None
|
||||
|
||||
# Find next enabled day
|
||||
for day_offset in range(8): # Check up to a week ahead
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
day_name = DAY_NAMES[check_date.weekday()]
|
||||
day_config = pattern.get(day_name, {})
|
||||
|
||||
if day_config.get("enabled") and day_config.get("start"):
|
||||
start_time = self._parse_time(day_config["start"])
|
||||
if start_time:
|
||||
start_local = datetime.combine(check_date, start_time, tzinfo=tz)
|
||||
start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
if start_utc > now_utc:
|
||||
return start_utc
|
||||
|
||||
elif schedule.schedule_type == "simple_interval" and schedule.cycle_time:
|
||||
cycle_time = self._parse_time(schedule.cycle_time)
|
||||
if cycle_time:
|
||||
# Find next cycle time
|
||||
for day_offset in range(2):
|
||||
check_date = now_local.date() + timedelta(days=day_offset)
|
||||
cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz)
|
||||
cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
if cycle_utc > now_utc:
|
||||
return cycle_utc
|
||||
|
||||
return None
|
||||
|
||||
def _resolve_unit_id(self, schedule: RecurringSchedule) -> Optional[str]:
|
||||
"""Get unit_id from schedule or active assignment."""
|
||||
if schedule.unit_id:
|
||||
return schedule.unit_id
|
||||
|
||||
# Try to get from active assignment
|
||||
assignment = self.db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == schedule.location_id,
|
||||
UnitAssignment.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
return assignment.unit_id if assignment else None
|
||||
|
||||
def _action_exists(
|
||||
self,
|
||||
project_id: str,
|
||||
location_id: str,
|
||||
action_type: str,
|
||||
scheduled_time: datetime,
|
||||
) -> bool:
|
||||
"""Check if an action already exists for this time slot."""
|
||||
# Allow 5-minute window for duplicate detection
|
||||
time_window_start = scheduled_time - timedelta(minutes=5)
|
||||
time_window_end = scheduled_time + timedelta(minutes=5)
|
||||
|
||||
exists = self.db.query(ScheduledAction).filter(
|
||||
and_(
|
||||
ScheduledAction.project_id == project_id,
|
||||
ScheduledAction.location_id == location_id,
|
||||
ScheduledAction.action_type == action_type,
|
||||
ScheduledAction.scheduled_time >= time_window_start,
|
||||
ScheduledAction.scheduled_time <= time_window_end,
|
||||
ScheduledAction.execution_status == "pending",
|
||||
)
|
||||
).first()
|
||||
|
||||
return exists is not None
|
||||
|
||||
@staticmethod
|
||||
def _parse_time(time_str: str) -> Optional[time]:
|
||||
"""Parse time string "HH:MM" to time object."""
|
||||
try:
|
||||
parts = time_str.split(":")
|
||||
return time(int(parts[0]), int(parts[1]))
|
||||
except (ValueError, IndexError):
|
||||
return None
|
||||
|
||||
def get_schedules_for_project(self, project_id: str) -> List[RecurringSchedule]:
|
||||
"""Get all recurring schedules for a project."""
|
||||
return self.db.query(RecurringSchedule).filter_by(project_id=project_id).all()
|
||||
|
||||
def get_enabled_schedules(self) -> List[RecurringSchedule]:
|
||||
"""Get all enabled recurring schedules."""
|
||||
return self.db.query(RecurringSchedule).filter_by(enabled=True).all()
|
||||
|
||||
|
||||
def get_recurring_schedule_service(db: Session) -> RecurringScheduleService:
|
||||
"""Get a RecurringScheduleService instance."""
|
||||
return RecurringScheduleService(db)
|
||||
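A sketch of creating a weekly schedule and previewing the actions it would generate; the project and location IDs and the pattern values are illustrative:

```python
from backend.database import SessionLocal
from backend.services.recurring_schedule_service import get_recurring_schedule_service

db = SessionLocal()
service = get_recurring_schedule_service(db)

# Overnight weekday monitoring: 19:00 to 07:00 the next morning
weekly_pattern = {
    "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
    "tuesday": {"enabled": True, "start": "19:00", "end": "07:00"},
    "saturday": {"enabled": False},
}

schedule = service.create_schedule(
    project_id="project-123",      # illustrative ID
    location_id="location-456",    # illustrative ID
    name="Weeknight noise monitoring",
    schedule_type="weekly_calendar",
    weekly_pattern=weekly_pattern,
)

# Preview the next week's actions without persisting them
preview = service.generate_actions_for_schedule(schedule, horizon_days=7, preview_only=True)
for action in preview:
    print(action.action_type, action.scheduled_time)

db.close()
```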
528
backend/services/scheduler.py
Normal file
@@ -0,0 +1,528 @@
|
||||
"""
|
||||
Scheduler Service
|
||||
|
||||
Executes scheduled actions for Projects system.
|
||||
Monitors pending scheduled actions and executes them by calling device modules (SLMM/SFM).
|
||||
|
||||
Extended to support recurring schedules:
|
||||
- Generates ScheduledActions from RecurringSchedule patterns
|
||||
- Cleans up old completed/failed actions
|
||||
|
||||
This service runs as a background task in FastAPI, checking for pending actions
|
||||
every minute and executing them when their scheduled time arrives.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Optional, List, Dict, Any
|
||||
from sqlalchemy.orm import Session
|
||||
from sqlalchemy import and_
|
||||
|
||||
from backend.database import SessionLocal
|
||||
from backend.models import ScheduledAction, RecordingSession, MonitoringLocation, Project, RecurringSchedule
|
||||
from backend.services.device_controller import get_device_controller, DeviceControllerError
|
||||
from backend.services.alert_service import get_alert_service
|
||||
import uuid
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SchedulerService:
|
||||
"""
|
||||
Service for executing scheduled actions.
|
||||
|
||||
Usage:
|
||||
scheduler = SchedulerService()
|
||||
await scheduler.start() # Start background loop
|
||||
scheduler.stop() # Stop background loop
|
||||
"""
|
||||
|
||||
def __init__(self, check_interval: int = 60):
|
||||
"""
|
||||
Initialize scheduler.
|
||||
|
||||
Args:
|
||||
check_interval: Seconds between checks for pending actions (default: 60)
|
||||
"""
|
||||
self.check_interval = check_interval
|
||||
self.running = False
|
||||
self.task: Optional[asyncio.Task] = None
|
||||
self.device_controller = get_device_controller()
|
||||
|
||||
async def start(self):
|
||||
"""Start the scheduler background task."""
|
||||
if self.running:
|
||||
print("Scheduler is already running")
|
||||
return
|
||||
|
||||
self.running = True
|
||||
self.task = asyncio.create_task(self._run_loop())
|
||||
print(f"Scheduler started (checking every {self.check_interval}s)")
|
||||
|
||||
def stop(self):
|
||||
"""Stop the scheduler background task."""
|
||||
self.running = False
|
||||
if self.task:
|
||||
self.task.cancel()
|
||||
print("Scheduler stopped")
|
||||
|
||||
async def _run_loop(self):
|
||||
"""Main scheduler loop."""
|
||||
# Track when we last generated recurring actions (do this once per hour)
|
||||
last_generation_check = datetime.utcnow() - timedelta(hours=1)
|
||||
|
||||
while self.running:
|
||||
try:
|
||||
# Execute pending actions
|
||||
await self.execute_pending_actions()
|
||||
|
||||
# Generate actions from recurring schedules (every hour)
|
||||
now = datetime.utcnow()
|
||||
if (now - last_generation_check).total_seconds() >= 3600:
|
||||
await self.generate_recurring_actions()
|
||||
last_generation_check = now
|
||||
|
||||
# Cleanup old actions (also every hour, during generation cycle)
|
||||
if (now - last_generation_check).total_seconds() < 60:
|
||||
await self.cleanup_old_actions()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Scheduler error: {e}", exc_info=True)
|
||||
# Continue running even if there's an error
|
||||
|
||||
await asyncio.sleep(self.check_interval)
|
||||
|
||||
async def execute_pending_actions(self) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Find and execute all pending scheduled actions that are due.
|
||||
|
||||
Returns:
|
||||
List of execution results
|
||||
"""
|
||||
db = SessionLocal()
|
||||
results = []
|
||||
|
||||
try:
|
||||
# Find pending actions that are due
|
||||
now = datetime.utcnow()
|
||||
pending_actions = db.query(ScheduledAction).filter(
|
||||
and_(
|
||||
ScheduledAction.execution_status == "pending",
|
||||
ScheduledAction.scheduled_time <= now,
|
||||
)
|
||||
).order_by(ScheduledAction.scheduled_time).all()
|
||||
|
||||
if not pending_actions:
|
||||
return []
|
||||
|
||||
print(f"Found {len(pending_actions)} pending action(s) to execute")
|
||||
|
||||
for action in pending_actions:
|
||||
result = await self._execute_action(action, db)
|
||||
results.append(result)
|
||||
|
||||
db.commit()
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error executing pending actions: {e}")
|
||||
db.rollback()
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
return results
|
||||
|
||||
async def _execute_action(
|
||||
self,
|
||||
action: ScheduledAction,
|
||||
db: Session,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute a single scheduled action.
|
||||
|
||||
Args:
|
||||
action: ScheduledAction to execute
|
||||
db: Database session
|
||||
|
||||
Returns:
|
||||
Execution result dict
|
||||
"""
|
||||
print(f"Executing action {action.id}: {action.action_type} for unit {action.unit_id}")
|
||||
|
||||
result = {
|
||||
"action_id": action.id,
|
||||
"action_type": action.action_type,
|
||||
"unit_id": action.unit_id,
|
||||
"scheduled_time": action.scheduled_time.isoformat(),
|
||||
"success": False,
|
||||
"error": None,
|
||||
}
|
||||
|
||||
try:
|
||||
# Determine which unit to use
|
||||
# If unit_id is specified, use it; otherwise get from location assignment
|
||||
unit_id = action.unit_id
|
||||
if not unit_id:
|
||||
# Get assigned unit from location
|
||||
from backend.models import UnitAssignment
|
||||
assignment = db.query(UnitAssignment).filter(
|
||||
and_(
|
||||
UnitAssignment.location_id == action.location_id,
|
||||
UnitAssignment.status == "active",
|
||||
)
|
||||
).first()
|
||||
|
||||
if not assignment:
|
||||
raise Exception(f"No active unit assigned to location {action.location_id}")
|
||||
|
||||
unit_id = assignment.unit_id
|
||||
|
||||
# Execute the action based on type
|
||||
if action.action_type == "start":
|
||||
response = await self._execute_start(action, unit_id, db)
|
||||
elif action.action_type == "stop":
|
||||
response = await self._execute_stop(action, unit_id, db)
|
||||
elif action.action_type == "download":
|
||||
response = await self._execute_download(action, unit_id, db)
|
||||
else:
|
||||
raise Exception(f"Unknown action type: {action.action_type}")
|
||||
|
||||
# Mark action as completed
|
||||
action.execution_status = "completed"
|
||||
action.executed_at = datetime.utcnow()
|
||||
action.module_response = json.dumps(response)
|
||||
|
||||
result["success"] = True
|
||||
result["response"] = response
|
||||
|
||||
print(f"✓ Action {action.id} completed successfully")
|
||||
|
||||
# Create success alert
|
||||
try:
|
||||
alert_service = get_alert_service(db)
|
||||
alert_metadata = response.get("cycle_response", {}) if isinstance(response, dict) else {}
|
||||
alert_service.create_schedule_completed_alert(
|
||||
schedule_id=action.id,
|
||||
action_type=action.action_type,
|
||||
unit_id=unit_id,
|
||||
project_id=action.project_id,
|
||||
location_id=action.location_id,
|
||||
metadata=alert_metadata,
|
||||
)
|
||||
except Exception as alert_err:
|
||||
logger.warning(f"Failed to create success alert: {alert_err}")
|
||||
|
||||
except Exception as e:
|
||||
# Mark action as failed
|
||||
action.execution_status = "failed"
|
||||
action.executed_at = datetime.utcnow()
|
||||
action.error_message = str(e)
|
||||
|
||||
result["error"] = str(e)
|
||||
|
||||
print(f"✗ Action {action.id} failed: {e}")
|
||||
|
||||
# Create failure alert
|
||||
try:
|
||||
alert_service = get_alert_service(db)
|
||||
alert_service.create_schedule_failed_alert(
|
||||
schedule_id=action.id,
|
||||
action_type=action.action_type,
|
||||
unit_id=unit_id if 'unit_id' in locals() else action.unit_id,
|
||||
error_message=str(e),
|
||||
project_id=action.project_id,
|
||||
location_id=action.location_id,
|
||||
)
|
||||
except Exception as alert_err:
|
||||
logger.warning(f"Failed to create failure alert: {alert_err}")
|
||||
|
||||
return result
|
||||
|
||||
async def _execute_start(
|
||||
self,
|
||||
action: ScheduledAction,
|
||||
unit_id: str,
|
||||
db: Session,
|
||||
) -> Dict[str, Any]:
|
||||
"""Execute a 'start' action using the start_cycle command.
|
||||
|
||||
start_cycle handles:
|
||||
1. Sync device clock to server time
|
||||
2. Find next safe index (with overwrite protection)
|
||||
3. Start measurement
|
||||
"""
|
||||
# Execute the full start cycle via device controller
|
||||
# SLMM handles clock sync, index increment, and start
|
||||
cycle_response = await self.device_controller.start_cycle(
|
||||
unit_id,
|
||||
action.device_type,
|
||||
sync_clock=True,
|
||||
)
|
||||
|
||||
# Create recording session
|
||||
session = RecordingSession(
|
||||
id=str(uuid.uuid4()),
|
||||
project_id=action.project_id,
|
||||
location_id=action.location_id,
|
||||
unit_id=unit_id,
|
||||
session_type="sound" if action.device_type == "slm" else "vibration",
|
||||
started_at=datetime.utcnow(),
|
||||
status="recording",
|
||||
session_metadata=json.dumps({
|
||||
"scheduled_action_id": action.id,
|
||||
"cycle_response": cycle_response,
|
||||
}),
|
||||
)
|
||||
db.add(session)
|
||||
|
||||
return {
|
||||
"status": "started",
|
||||
"session_id": session.id,
|
||||
"cycle_response": cycle_response,
|
||||
}
|
||||
|
||||
async def _execute_stop(
|
||||
self,
|
||||
action: ScheduledAction,
|
||||
unit_id: str,
|
||||
db: Session,
|
||||
) -> Dict[str, Any]:
|
||||
"""Execute a 'stop' action using the stop_cycle command.
|
||||
|
||||
stop_cycle handles:
|
||||
1. Stop measurement
|
||||
2. Enable FTP
|
||||
3. Download measurement folder
|
||||
4. Verify download
|
||||
"""
|
||||
# Parse notes for download preference
|
||||
include_download = True
|
||||
try:
|
||||
if action.notes:
|
||||
notes_data = json.loads(action.notes)
|
||||
include_download = notes_data.get("include_download", True)
|
||||
except json.JSONDecodeError:
|
||||
pass # Notes is plain text, not JSON
|
||||
|
||||
# Execute the full stop cycle via device controller
|
||||
# SLMM handles stop, FTP enable, and download
|
||||
cycle_response = await self.device_controller.stop_cycle(
|
||||
unit_id,
|
||||
action.device_type,
|
||||
download=include_download,
|
||||
)
|
||||
|
||||
# Find and update the active recording session
|
||||
active_session = db.query(RecordingSession).filter(
|
||||
and_(
|
||||
RecordingSession.location_id == action.location_id,
|
||||
RecordingSession.unit_id == unit_id,
|
||||
RecordingSession.status == "recording",
|
||||
)
|
||||
).first()
|
||||
|
||||
if active_session:
|
||||
active_session.stopped_at = datetime.utcnow()
|
||||
active_session.status = "completed"
|
||||
active_session.duration_seconds = int(
|
||||
(active_session.stopped_at - active_session.started_at).total_seconds()
|
||||
)
|
||||
# Store download info in session metadata
|
||||
if cycle_response.get("download_success"):
|
||||
try:
|
||||
metadata = json.loads(active_session.session_metadata or "{}")
|
||||
metadata["downloaded_folder"] = cycle_response.get("downloaded_folder")
|
||||
metadata["local_path"] = cycle_response.get("local_path")
|
||||
active_session.session_metadata = json.dumps(metadata)
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
return {
|
||||
"status": "stopped",
|
||||
"session_id": active_session.id if active_session else None,
|
||||
"cycle_response": cycle_response,
|
||||
}
|
||||
|
||||
async def _execute_download(
|
||||
self,
|
||||
action: ScheduledAction,
|
||||
unit_id: str,
|
||||
db: Session,
|
||||
) -> Dict[str, Any]:
|
||||
"""Execute a 'download' action."""
|
||||
# Get project and location info for file path
|
||||
location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
|
||||
project = db.query(Project).filter_by(id=action.project_id).first()
|
||||
|
||||
if not location or not project:
|
||||
raise Exception("Project or location not found")
|
||||
|
||||
# Build destination path
|
||||
# Example: data/Projects/{project-id}/sound/{location-name}/session-{timestamp}/
|
||||
session_timestamp = datetime.utcnow().strftime("%Y-%m-%d-%H%M")
|
||||
location_type_dir = "sound" if action.device_type == "slm" else "vibration"
|
||||
|
||||
destination_path = (
|
||||
f"data/Projects/{project.id}/{location_type_dir}/"
|
||||
f"{location.name}/session-{session_timestamp}/"
|
||||
)
|
||||
|
||||
# Download files via device controller
|
||||
response = await self.device_controller.download_files(
|
||||
unit_id,
|
||||
action.device_type,
|
||||
destination_path,
|
||||
files=None, # Download all files
|
||||
)
|
||||
|
||||
# TODO: Create DataFile records for downloaded files
|
||||
|
||||
return {
|
||||
"status": "downloaded",
|
||||
"destination_path": destination_path,
|
||||
"device_response": response,
|
||||
}
|
||||
|
||||
# ========================================================================
|
||||
# Recurring Schedule Generation
|
||||
# ========================================================================
|
||||
|
||||
async def generate_recurring_actions(self) -> int:
|
||||
"""
|
||||
Generate ScheduledActions from all enabled recurring schedules.
|
||||
|
||||
Runs once per hour to generate actions for the next 7 days.
|
||||
|
||||
Returns:
|
||||
Total number of actions generated
|
||||
"""
|
||||
db = SessionLocal()
|
||||
total_generated = 0
|
||||
|
||||
try:
|
||||
from backend.services.recurring_schedule_service import get_recurring_schedule_service
|
||||
|
||||
service = get_recurring_schedule_service(db)
|
||||
schedules = service.get_enabled_schedules()
|
||||
|
||||
if not schedules:
|
||||
logger.debug("No enabled recurring schedules found")
|
||||
return 0
|
||||
|
||||
logger.info(f"Generating actions for {len(schedules)} recurring schedule(s)")
|
||||
|
||||
for schedule in schedules:
|
||||
try:
|
||||
actions = service.generate_actions_for_schedule(schedule, horizon_days=7)
|
||||
total_generated += len(actions)
|
||||
except Exception as e:
|
||||
logger.error(f"Error generating actions for schedule {schedule.id}: {e}")
|
||||
|
||||
if total_generated > 0:
|
||||
logger.info(f"Generated {total_generated} scheduled actions from recurring schedules")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in generate_recurring_actions: {e}", exc_info=True)
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
return total_generated
|
||||
|
||||
async def cleanup_old_actions(self, retention_days: int = 30) -> int:
|
||||
"""
|
||||
Remove old completed/failed actions to prevent database bloat.
|
||||
|
||||
Args:
|
||||
retention_days: Keep actions newer than this many days
|
||||
|
||||
Returns:
|
||||
Number of actions cleaned up
|
||||
"""
|
||||
db = SessionLocal()
|
||||
cleaned = 0
|
||||
|
||||
try:
|
||||
cutoff = datetime.utcnow() - timedelta(days=retention_days)
|
||||
|
||||
old_actions = db.query(ScheduledAction).filter(
|
||||
and_(
|
||||
ScheduledAction.execution_status.in_(["completed", "failed", "cancelled"]),
|
||||
ScheduledAction.executed_at < cutoff,
|
||||
)
|
||||
).all()
|
||||
|
||||
cleaned = len(old_actions)
|
||||
for action in old_actions:
|
||||
db.delete(action)
|
||||
|
||||
if cleaned > 0:
|
||||
db.commit()
|
||||
logger.info(f"Cleaned up {cleaned} old scheduled actions (>{retention_days} days)")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error cleaning up old actions: {e}")
|
||||
db.rollback()
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
return cleaned
|
||||
|
||||
# ========================================================================
|
||||
# Manual Execution (for testing/debugging)
|
||||
# ========================================================================
|
||||
|
||||
async def execute_action_by_id(self, action_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Manually execute a specific action by ID.
|
||||
|
||||
Args:
|
||||
action_id: ScheduledAction ID
|
||||
|
||||
Returns:
|
||||
Execution result
|
||||
"""
|
||||
db = SessionLocal()
|
||||
try:
|
||||
action = db.query(ScheduledAction).filter_by(id=action_id).first()
|
||||
if not action:
|
||||
return {"success": False, "error": "Action not found"}
|
||||
|
||||
result = await self._execute_action(action, db)
|
||||
db.commit()
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
db.rollback()
|
||||
return {"success": False, "error": str(e)}
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
|
||||
# Singleton instance
|
||||
_scheduler_instance: Optional[SchedulerService] = None
|
||||
|
||||
|
||||
def get_scheduler() -> SchedulerService:
|
||||
"""
|
||||
Get the scheduler singleton instance.
|
||||
|
||||
Returns:
|
||||
SchedulerService instance
|
||||
"""
|
||||
global _scheduler_instance
|
||||
if _scheduler_instance is None:
|
||||
_scheduler_instance = SchedulerService()
|
||||
return _scheduler_instance
|
||||
|
||||
|
||||
async def start_scheduler():
|
||||
"""Start the global scheduler instance."""
|
||||
scheduler = get_scheduler()
|
||||
await scheduler.start()
|
||||
|
||||
|
||||
def stop_scheduler():
|
||||
"""Stop the global scheduler instance."""
|
||||
scheduler = get_scheduler()
|
||||
scheduler.stop()
|
||||
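For manual testing, the scheduler singleton can execute a single action on demand; the action ID below is illustrative:

```python
import asyncio

from backend.services.scheduler import get_scheduler


async def run_one(action_id: str):
    scheduler = get_scheduler()
    # Executes the action immediately, regardless of its scheduled_time
    result = await scheduler.execute_action_by_id(action_id)
    print(result)


asyncio.run(run_one("some-action-uuid"))  # illustrative ID
```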
671
backend/services/slmm_client.py
Normal file
@@ -0,0 +1,671 @@
|
||||
"""
|
||||
SLMM API Client Wrapper
|
||||
|
||||
Provides a clean interface for Terra-View to interact with the SLMM backend.
|
||||
All SLM operations should go through this client instead of direct HTTP calls.
|
||||
|
||||
SLMM (Sound Level Meter Manager) is a separate service running on port 8100
|
||||
that handles TCP/FTP communication with Rion NL-43/NL-53 devices.
|
||||
"""
|
||||
|
||||
import httpx
|
||||
import os
|
||||
from typing import Optional, Dict, Any, List
|
||||
from datetime import datetime
|
||||
import json
|
||||
|
||||
|
||||
# SLMM backend base URLs - use environment variable if set (for Docker)
|
||||
SLMM_BASE_URL = os.environ.get("SLMM_BASE_URL", "http://localhost:8100")
|
||||
SLMM_API_BASE = f"{SLMM_BASE_URL}/api/nl43"
|
||||
|
||||
|
||||
class SLMMClientError(Exception):
|
||||
"""Base exception for SLMM client errors."""
|
||||
pass
|
||||
|
||||
|
||||
class SLMMConnectionError(SLMMClientError):
|
||||
"""Raised when cannot connect to SLMM backend."""
|
||||
pass
|
||||
|
||||
|
||||
class SLMMDeviceError(SLMMClientError):
|
||||
"""Raised when device operation fails."""
|
||||
pass
|
||||
|
||||
|
||||
class SLMMClient:
|
||||
"""
|
||||
Client for interacting with SLMM backend.
|
||||
|
||||
Usage:
|
||||
client = SLMMClient()
|
||||
units = await client.get_all_units()
|
||||
status = await client.get_unit_status("nl43-001")
|
||||
await client.start_recording("nl43-001", config={...})
|
||||
"""
|
||||
|
||||
def __init__(self, base_url: str = SLMM_BASE_URL, timeout: float = 30.0):
|
||||
self.base_url = base_url
|
||||
self.api_base = f"{base_url}/api/nl43"
|
||||
self.timeout = timeout
|
||||
|
||||
async def _request(
|
||||
self,
|
||||
method: str,
|
||||
endpoint: str,
|
||||
data: Optional[Dict] = None,
|
||||
params: Optional[Dict] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Make an HTTP request to SLMM backend.
|
||||
|
||||
Args:
|
||||
method: HTTP method (GET, POST, PUT, DELETE)
|
||||
endpoint: API endpoint (e.g., "/units", "/{unit_id}/status")
|
||||
data: JSON body for POST/PUT requests
|
||||
params: Query parameters
|
||||
|
||||
Returns:
|
||||
Response JSON as dict
|
||||
|
||||
Raises:
|
||||
SLMMConnectionError: Cannot reach SLMM
|
||||
SLMMDeviceError: Device operation failed
|
||||
"""
|
||||
url = f"{self.api_base}{endpoint}"
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=self.timeout) as client:
|
||||
response = await client.request(
|
||||
method=method,
|
||||
url=url,
|
||||
json=data,
|
||||
params=params,
|
||||
)
|
||||
response.raise_for_status()
|
||||
|
||||
# Handle empty responses
|
||||
if not response.content:
|
||||
return {}
|
||||
|
||||
return response.json()
|
||||
|
||||
except httpx.ConnectError as e:
|
||||
raise SLMMConnectionError(
|
||||
f"Cannot connect to SLMM backend at {self.base_url}. "
|
||||
f"Is SLMM running? Error: {str(e)}"
|
||||
)
|
||||
except httpx.HTTPStatusError as e:
|
||||
error_detail = "Unknown error"
|
||||
try:
|
||||
error_data = e.response.json()
|
||||
error_detail = error_data.get("detail", str(error_data))
|
||||
except Exception:
|
||||
error_detail = e.response.text or str(e)
|
||||
|
||||
raise SLMMDeviceError(
|
||||
f"SLMM operation failed: {error_detail}"
|
||||
)
|
||||
except Exception as e:
|
||||
raise SLMMClientError(f"Unexpected error: {str(e)}")
|
||||
|
||||
# ========================================================================
|
||||
# Unit Management
|
||||
# ========================================================================
|
||||
|
||||
async def get_all_units(self) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Get all configured SLM units from SLMM.
|
||||
|
||||
Returns:
|
||||
List of unit dicts with id, config, and status
|
||||
"""
|
||||
# SLMM doesn't have a /units endpoint yet, so we'll need to add this
|
||||
# For now, return empty list or implement when SLMM endpoint is ready
|
||||
try:
|
||||
response = await self._request("GET", "/units")
|
||||
return response.get("units", [])
|
||||
except SLMMClientError:
|
||||
# Endpoint may not exist yet
|
||||
return []
|
||||
|
||||
async def get_unit_config(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get unit configuration from SLMM cache.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier (e.g., "nl43-001")
|
||||
|
||||
Returns:
|
||||
Config dict with host, tcp_port, ftp_port, etc.
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/config")
|
||||
|
||||
async def update_unit_config(
|
||||
self,
|
||||
unit_id: str,
|
||||
host: Optional[str] = None,
|
||||
tcp_port: Optional[int] = None,
|
||||
ftp_port: Optional[int] = None,
|
||||
ftp_username: Optional[str] = None,
|
||||
ftp_password: Optional[str] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Update unit configuration in SLMM cache.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
host: Device IP address
|
||||
tcp_port: TCP control port (default: 2255)
|
||||
ftp_port: FTP data port (default: 21)
|
||||
ftp_username: FTP username
|
||||
ftp_password: FTP password
|
||||
|
||||
Returns:
|
||||
Updated config
|
||||
"""
|
||||
config = {}
|
||||
if host is not None:
|
||||
config["host"] = host
|
||||
if tcp_port is not None:
|
||||
config["tcp_port"] = tcp_port
|
||||
if ftp_port is not None:
|
||||
config["ftp_port"] = ftp_port
|
||||
if ftp_username is not None:
|
||||
config["ftp_username"] = ftp_username
|
||||
if ftp_password is not None:
|
||||
config["ftp_password"] = ftp_password
|
||||
|
||||
return await self._request("PUT", f"/{unit_id}/config", data=config)
|
||||
|
||||
# ========================================================================
|
||||
# Status & Monitoring
|
||||
# ========================================================================
|
||||
|
||||
async def get_unit_status(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get cached status snapshot from SLMM.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Status dict with measurement_state, lp, leq, battery, etc.
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/status")
|
||||
|
||||
async def get_live_data(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Request fresh data from device (DOD command).
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Live data snapshot
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/live")
|
||||
|
||||
# ========================================================================
|
||||
# Recording Control
|
||||
# ========================================================================
|
||||
|
||||
async def start_recording(
|
||||
self,
|
||||
unit_id: str,
|
||||
config: Optional[Dict[str, Any]] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Start recording on a unit.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
config: Optional recording config (interval, settings, etc.)
|
||||
|
||||
Returns:
|
||||
Response from SLMM with success status
|
||||
"""
|
||||
return await self._request("POST", f"/{unit_id}/start", data=config or {})
|
||||
|
||||
async def stop_recording(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Stop recording on a unit.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Response from SLMM
|
||||
"""
|
||||
return await self._request("POST", f"/{unit_id}/stop")
|
||||
|
||||
async def pause_recording(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Pause recording on a unit.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Response from SLMM
|
||||
"""
|
||||
return await self._request("POST", f"/{unit_id}/pause")
|
||||
|
||||
async def resume_recording(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Resume paused recording on a unit.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Response from SLMM
|
||||
"""
|
||||
return await self._request("POST", f"/{unit_id}/resume")
|
||||
|
||||
async def reset_data(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Reset measurement data on a unit.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Response from SLMM
|
||||
"""
|
||||
return await self._request("POST", f"/{unit_id}/reset")
|
||||
|
||||
# ========================================================================
|
||||
# Store/Index Management
|
||||
# ========================================================================
|
||||
|
||||
async def get_index_number(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get current store/index number from device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with current index_number (store name)
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/index-number")
|
||||
|
||||
async def set_index_number(
|
||||
self,
|
||||
unit_id: str,
|
||||
index_number: int,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Set store/index number on device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
index_number: New index number to set
|
||||
|
||||
Returns:
|
||||
Confirmation response
|
||||
"""
|
||||
return await self._request(
|
||||
"PUT",
|
||||
f"/{unit_id}/index-number",
|
||||
data={"index_number": index_number},
|
||||
)
|
||||
|
||||
async def check_overwrite_status(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Check if data exists at the current store index.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with:
|
||||
- overwrite_status: "None" (safe) or "Exist" (would overwrite)
|
||||
- will_overwrite: bool
|
||||
- safe_to_store: bool
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/overwrite-check")
|
||||
|
||||
async def increment_index(self, unit_id: str, max_attempts: int = 100) -> Dict[str, Any]:
|
||||
"""
|
||||
Find and set the next available (unused) store/index number.
|
||||
|
||||
Checks the current index - if it would overwrite existing data,
|
||||
increments until finding an unused index number.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
max_attempts: Maximum number of indices to try before giving up
|
||||
|
||||
Returns:
|
||||
Dict with old_index, new_index, and attempts_made
|
||||
"""
|
||||
# Get current index
|
||||
current = await self.get_index_number(unit_id)
|
||||
old_index = current.get("index_number", 0)
|
||||
|
||||
# Check if current index is safe
|
||||
overwrite_check = await self.check_overwrite_status(unit_id)
|
||||
if overwrite_check.get("safe_to_store", False):
|
||||
# Current index is safe, no need to increment
|
||||
return {
|
||||
"success": True,
|
||||
"old_index": old_index,
|
||||
"new_index": old_index,
|
||||
"unit_id": unit_id,
|
||||
"already_safe": True,
|
||||
"attempts_made": 0,
|
||||
}
|
||||
|
||||
# Need to find an unused index
|
||||
attempts = 0
|
||||
test_index = old_index + 1
|
||||
|
||||
while attempts < max_attempts:
|
||||
# Set the new index
|
||||
await self.set_index_number(unit_id, test_index)
|
||||
|
||||
# Check if this index is safe
|
||||
overwrite_check = await self.check_overwrite_status(unit_id)
|
||||
attempts += 1
|
||||
|
||||
if overwrite_check.get("safe_to_store", False):
|
||||
return {
|
||||
"success": True,
|
||||
"old_index": old_index,
|
||||
"new_index": test_index,
|
||||
"unit_id": unit_id,
|
||||
"already_safe": False,
|
||||
"attempts_made": attempts,
|
||||
}
|
||||
|
||||
# Try next index (wrap around at 9999)
|
||||
test_index = (test_index + 1) % 10000
|
||||
|
||||
# Avoid infinite loops if we've wrapped around
|
||||
if test_index == old_index:
|
||||
break
|
||||
|
||||
# Could not find a safe index
|
||||
raise SLMMDeviceError(
|
||||
f"Could not find unused store index for {unit_id} after {attempts} attempts. "
|
||||
f"Consider downloading and clearing data from the device."
|
||||
)
|
||||
|
||||
# ========================================================================
|
||||
# Device Settings
|
||||
# ========================================================================
|
||||
|
||||
async def get_frequency_weighting(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get frequency weighting setting (A, C, or Z).
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with current weighting
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/frequency-weighting")
|
||||
|
||||
async def set_frequency_weighting(
|
||||
self,
|
||||
unit_id: str,
|
||||
weighting: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Set frequency weighting (A, C, or Z).
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
weighting: "A", "C", or "Z"
|
||||
|
||||
Returns:
|
||||
Confirmation response
|
||||
"""
|
||||
return await self._request(
|
||||
"PUT",
|
||||
f"/{unit_id}/frequency-weighting",
|
||||
data={"weighting": weighting},
|
||||
)
|
||||
|
||||
async def get_time_weighting(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get time weighting setting (F, S, or I).
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with current time weighting
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/time-weighting")
|
||||
|
||||
async def set_time_weighting(
|
||||
self,
|
||||
unit_id: str,
|
||||
weighting: str,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Set time weighting (F=Fast, S=Slow, I=Impulse).
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
weighting: "F", "S", or "I"
|
||||
|
||||
Returns:
|
||||
Confirmation response
|
||||
"""
|
||||
return await self._request(
|
||||
"PUT",
|
||||
f"/{unit_id}/time-weighting",
|
||||
data={"weighting": weighting},
|
||||
)
|
||||
|
||||
async def get_all_settings(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get all device settings.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with all settings
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/settings")
|
||||
|
||||
# ========================================================================
|
||||
# Data Download (Future)
|
||||
# ========================================================================
|
||||
|
||||
async def download_files(
|
||||
self,
|
||||
unit_id: str,
|
||||
destination_path: str,
|
||||
files: Optional[List[str]] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Download files from unit via FTP.
|
||||
|
||||
NOTE: This endpoint does not exist in SLMM yet; it will need to be implemented there.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
destination_path: Local path to save files
|
||||
files: List of filenames to download, or None for all
|
||||
|
||||
Returns:
|
||||
Dict with downloaded files list and metadata
|
||||
"""
|
||||
data = {
|
||||
"destination_path": destination_path,
|
||||
"files": files or "all",
|
||||
}
|
||||
return await self._request("POST", f"/{unit_id}/ftp/download", data=data)
|
||||
|
||||
# ========================================================================
|
||||
# Cycle Commands (for scheduled automation)
|
||||
# ========================================================================
|
||||
|
||||
async def start_cycle(
|
||||
self,
|
||||
unit_id: str,
|
||||
sync_clock: bool = True,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute complete start cycle on device via SLMM.
|
||||
|
||||
This handles the full pre-recording workflow:
|
||||
1. Sync device clock to server time
|
||||
2. Find next safe index (with overwrite protection)
|
||||
3. Start measurement
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
sync_clock: Whether to sync device clock to server time
|
||||
|
||||
Returns:
|
||||
Dict with clock_synced, old_index, new_index, started, etc.
|
||||
"""
|
||||
return await self._request(
|
||||
"POST",
|
||||
f"/{unit_id}/start-cycle",
|
||||
data={"sync_clock": sync_clock},
|
||||
)
|
||||
|
||||
async def stop_cycle(
|
||||
self,
|
||||
unit_id: str,
|
||||
download: bool = True,
|
||||
download_path: Optional[str] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Execute complete stop cycle on device via SLMM.
|
||||
|
||||
This handles the full post-recording workflow:
|
||||
1. Stop measurement
|
||||
2. Enable FTP
|
||||
3. Download measurement folder (if download=True)
|
||||
4. Verify download
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
download: Whether to download measurement data
|
||||
download_path: Custom path for downloaded ZIP (optional)
|
||||
|
||||
Returns:
|
||||
Dict with stopped, ftp_enabled, download_success, local_path, etc.
|
||||
"""
|
||||
data = {"download": download}
|
||||
if download_path:
|
||||
data["download_path"] = download_path
|
||||
return await self._request(
|
||||
"POST",
|
||||
f"/{unit_id}/stop-cycle",
|
||||
data=data,
|
||||
)
|
||||
|
||||
# ========================================================================
|
||||
# Polling Status (for device monitoring/alerts)
|
||||
# ========================================================================
|
||||
|
||||
async def get_polling_status(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get global polling status from SLMM.
|
||||
|
||||
Returns device reachability information for all polled devices.
|
||||
Used by DeviceStatusMonitor to detect offline/online transitions.
|
||||
|
||||
Returns:
|
||||
Dict with devices list containing:
|
||||
- unit_id
|
||||
- is_reachable
|
||||
- consecutive_failures
|
||||
- last_poll_attempt
|
||||
- last_success
|
||||
- last_error
|
||||
"""
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=self.timeout) as client:
|
||||
response = await client.get(f"{self.base_url}/api/nl43/_polling/status")
|
||||
response.raise_for_status()
|
||||
return response.json()
|
||||
except httpx.ConnectError:
|
||||
raise SLMMConnectionError("Cannot connect to SLMM for polling status")
|
||||
except Exception as e:
|
||||
raise SLMMClientError(f"Failed to get polling status: {str(e)}")
|
||||
|
||||
async def get_device_polling_config(self, unit_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Get polling configuration for a specific device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
|
||||
Returns:
|
||||
Dict with poll_enabled and poll_interval_seconds
|
||||
"""
|
||||
return await self._request("GET", f"/{unit_id}/polling/config")
|
||||
|
||||
async def update_device_polling_config(
|
||||
self,
|
||||
unit_id: str,
|
||||
poll_enabled: Optional[bool] = None,
|
||||
poll_interval_seconds: Optional[int] = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Update polling configuration for a device.
|
||||
|
||||
Args:
|
||||
unit_id: Unit identifier
|
||||
poll_enabled: Enable/disable polling
|
||||
poll_interval_seconds: Polling interval (10-3600)
|
||||
|
||||
Returns:
|
||||
Updated config
|
||||
"""
|
||||
config = {}
|
||||
if poll_enabled is not None:
|
||||
config["poll_enabled"] = poll_enabled
|
||||
if poll_interval_seconds is not None:
|
||||
config["poll_interval_seconds"] = poll_interval_seconds
|
||||
|
||||
return await self._request("PUT", f"/{unit_id}/polling/config", data=config)
|
||||
|
||||
# ========================================================================
|
||||
# Health Check
|
||||
# ========================================================================
|
||||
|
||||
async def health_check(self) -> bool:
|
||||
"""
|
||||
Check if SLMM backend is reachable.
|
||||
|
||||
Returns:
|
||||
True if SLMM is responding, False otherwise
|
||||
"""
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
response = await client.get(f"{self.base_url}/health")
|
||||
return response.status_code == 200
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
# Singleton instance for convenience
|
||||
_default_client: Optional[SLMMClient] = None
|
||||
|
||||
|
||||
def get_slmm_client() -> SLMMClient:
|
||||
"""
|
||||
Get the default SLMM client instance.
|
||||
|
||||
Returns:
|
||||
SLMMClient instance
|
||||
"""
|
||||
global _default_client
|
||||
if _default_client is None:
|
||||
_default_client = SLMMClient()
|
||||
return _default_client
|
||||
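
For reference, a minimal usage sketch of the client above (the unit ID is hypothetical, and the `backend.services.slmm_client` import path is an assumption; the method names are those defined in this file):

```python
# Minimal usage sketch for SLMMClient.
# Assumptions: module path backend/services/slmm_client.py, unit ID "NL43-001".
import asyncio

from backend.services.slmm_client import get_slmm_client


async def run_measurement_cycle() -> None:
    client = get_slmm_client()

    # Bail out early if the SLMM backend is unreachable.
    if not await client.health_check():
        raise RuntimeError("SLMM backend is not reachable")

    # Full pre-recording workflow: clock sync, safe index, start measurement.
    started = await client.start_cycle("NL43-001", sync_clock=True)
    print("start-cycle result:", started)

    # ... measurement runs for some period ...
    await asyncio.sleep(60)

    # Full post-recording workflow: stop, enable FTP, download, verify.
    stopped = await client.stop_cycle("NL43-001", download=True)
    print("stop-cycle result:", stopped)


if __name__ == "__main__":
    asyncio.run(run_measurement_cycle())
```
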
227
backend/services/slmm_sync.py
Normal file
@@ -0,0 +1,227 @@
|
||||
"""
|
||||
SLMM Synchronization Service
|
||||
|
||||
This service ensures the Terra-View roster is the single source of truth for SLM device configuration.
|
||||
When SLM devices are added, edited, or deleted in Terra-View, changes are automatically synced to SLMM.
|
||||
"""
|
||||
|
||||
import logging
|
||||
import httpx
|
||||
import os
|
||||
from typing import Optional
|
||||
from sqlalchemy.orm import Session
|
||||
|
||||
from backend.models import RosterUnit
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
|
||||
|
||||
|
||||
async def sync_slm_to_slmm(unit: RosterUnit) -> bool:
|
||||
"""
|
||||
Sync a single SLM device from Terra-View roster to SLMM.
|
||||
|
||||
Args:
|
||||
unit: RosterUnit with device_type="slm"
|
||||
|
||||
Returns:
|
||||
True if sync successful, False otherwise
|
||||
"""
|
||||
if unit.device_type != "slm":
|
||||
logger.warning(f"Attempted to sync non-SLM unit {unit.id} to SLMM")
|
||||
return False
|
||||
|
||||
if not unit.slm_host:
|
||||
logger.warning(f"SLM {unit.id} has no host configured, skipping SLMM sync")
|
||||
return False
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
response = await client.put(
|
||||
f"{SLMM_BASE_URL}/api/nl43/{unit.id}/config",
|
||||
json={
|
||||
"host": unit.slm_host,
|
||||
"tcp_port": unit.slm_tcp_port or 2255,
|
||||
"tcp_enabled": True,
|
||||
"ftp_enabled": True,
|
||||
"ftp_username": "USER", # Default NL43 credentials
|
||||
"ftp_password": "0000",
|
||||
"poll_enabled": not unit.retired, # Disable polling for retired units
|
||||
"poll_interval_seconds": 60, # Default interval
|
||||
}
|
||||
)
|
||||
|
||||
if response.status_code in [200, 201]:
|
||||
logger.info(f"✓ Synced SLM {unit.id} to SLMM at {unit.slm_host}:{unit.slm_tcp_port or 2255}")
|
||||
return True
|
||||
else:
|
||||
logger.error(f"Failed to sync SLM {unit.id} to SLMM: {response.status_code} {response.text}")
|
||||
return False
|
||||
|
||||
except httpx.TimeoutException:
|
||||
logger.error(f"Timeout syncing SLM {unit.id} to SLMM")
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.error(f"Error syncing SLM {unit.id} to SLMM: {e}")
|
||||
return False
|
||||
|
||||
|
||||
async def delete_slm_from_slmm(unit_id: str) -> bool:
|
||||
"""
|
||||
Delete a device from SLMM database.
|
||||
|
||||
Args:
|
||||
unit_id: The unit ID to delete
|
||||
|
||||
Returns:
|
||||
True if deletion successful or device doesn't exist, False on error
|
||||
"""
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
response = await client.delete(
|
||||
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config"
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
logger.info(f"✓ Deleted SLM {unit_id} from SLMM")
|
||||
return True
|
||||
elif response.status_code == 404:
|
||||
logger.info(f"SLM {unit_id} not found in SLMM (already deleted)")
|
||||
return True
|
||||
else:
|
||||
logger.error(f"Failed to delete SLM {unit_id} from SLMM: {response.status_code} {response.text}")
|
||||
return False
|
||||
|
||||
except httpx.TimeoutException:
|
||||
logger.error(f"Timeout deleting SLM {unit_id} from SLMM")
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.error(f"Error deleting SLM {unit_id} from SLMM: {e}")
|
||||
return False
|
||||
|
||||
|
||||
async def sync_all_slms_to_slmm(db: Session) -> dict:
|
||||
"""
|
||||
Sync all SLM devices from Terra-View roster to SLMM.
|
||||
|
||||
This ensures SLMM database matches Terra-View roster as the source of truth.
|
||||
Should be called on Terra-View startup and optionally via an admin endpoint.
|
||||
|
||||
Args:
|
||||
db: Database session
|
||||
|
||||
Returns:
|
||||
Dictionary with sync results
|
||||
"""
|
||||
logger.info("Starting full SLM sync to SLMM...")
|
||||
|
||||
# Get all SLM units from roster
|
||||
slm_units = db.query(RosterUnit).filter_by(device_type="slm").all()
|
||||
|
||||
results = {
|
||||
"total": len(slm_units),
|
||||
"synced": 0,
|
||||
"skipped": 0,
|
||||
"failed": 0
|
||||
}
|
||||
|
||||
for unit in slm_units:
|
||||
# Skip units without host configured
|
||||
if not unit.slm_host:
|
||||
results["skipped"] += 1
|
||||
logger.debug(f"Skipped {unit.unit_type} - no host configured")
|
||||
continue
|
||||
|
||||
# Sync to SLMM
|
||||
success = await sync_slm_to_slmm(unit)
|
||||
if success:
|
||||
results["synced"] += 1
|
||||
else:
|
||||
results["failed"] += 1
|
||||
|
||||
logger.info(
|
||||
f"SLM sync complete: {results['synced']} synced, "
|
||||
f"{results['skipped']} skipped, {results['failed']} failed"
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
async def get_slmm_devices() -> Optional[list]:
|
||||
"""
|
||||
Get list of all devices currently in SLMM database.
|
||||
|
||||
Returns:
|
||||
List of device unit_ids, or None on error
|
||||
"""
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=5.0) as client:
|
||||
response = await client.get(f"{SLMM_BASE_URL}/api/nl43/_polling/status")
|
||||
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
return [device["unit_id"] for device in data["data"]["devices"]]
|
||||
else:
|
||||
logger.error(f"Failed to get SLMM devices: {response.status_code}")
|
||||
return None
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting SLMM devices: {e}")
|
||||
return None
|
||||
|
||||
|
||||
async def cleanup_orphaned_slmm_devices(db: Session) -> dict:
|
||||
"""
|
||||
Remove devices from SLMM that are not in Terra-View roster.
|
||||
|
||||
This cleans up orphaned test devices or devices that were manually added to SLMM.
|
||||
|
||||
Args:
|
||||
db: Database session
|
||||
|
||||
Returns:
|
||||
Dictionary with cleanup results
|
||||
"""
|
||||
logger.info("Checking for orphaned devices in SLMM...")
|
||||
|
||||
# Get all device IDs from SLMM
|
||||
slmm_devices = await get_slmm_devices()
|
||||
if slmm_devices is None:
|
||||
return {"error": "Failed to get SLMM device list"}
|
||||
|
||||
# Get all SLM unit IDs from Terra-View roster
|
||||
roster_units = db.query(RosterUnit.id).filter_by(device_type="slm").all()
|
||||
roster_unit_ids = {unit.id for unit in roster_units}
|
||||
|
||||
# Find orphaned devices (in SLMM but not in roster)
|
||||
orphaned = [uid for uid in slmm_devices if uid not in roster_unit_ids]
|
||||
|
||||
results = {
|
||||
"total_in_slmm": len(slmm_devices),
|
||||
"total_in_roster": len(roster_unit_ids),
|
||||
"orphaned": len(orphaned),
|
||||
"deleted": 0,
|
||||
"failed": 0,
|
||||
"orphaned_devices": orphaned
|
||||
}
|
||||
|
||||
if not orphaned:
|
||||
logger.info("No orphaned devices found in SLMM")
|
||||
return results
|
||||
|
||||
logger.info(f"Found {len(orphaned)} orphaned devices in SLMM: {orphaned}")
|
||||
|
||||
# Delete orphaned devices
|
||||
for unit_id in orphaned:
|
||||
success = await delete_slm_from_slmm(unit_id)
|
||||
if success:
|
||||
results["deleted"] += 1
|
||||
else:
|
||||
results["failed"] += 1
|
||||
|
||||
logger.info(
|
||||
f"Cleanup complete: {results['deleted']} deleted, {results['failed']} failed"
|
||||
)
|
||||
|
||||
return results
|
||||
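
A sketch of how these helpers might be wired into application startup (the FastAPI lifespan hook and app object are assumptions; `sync_all_slms_to_slmm`, `cleanup_orphaned_slmm_devices`, and `get_db_session` come from this changeset):

```python
# Startup wiring sketch (the lifespan hook is an assumption, not part of this changeset).
import logging
from contextlib import asynccontextmanager

from fastapi import FastAPI

from backend.database import get_db_session
from backend.services.slmm_sync import (
    cleanup_orphaned_slmm_devices,
    sync_all_slms_to_slmm,
)

logger = logging.getLogger(__name__)


@asynccontextmanager
async def lifespan(app: FastAPI):
    db = get_db_session()
    try:
        # Push the Terra-View roster to SLMM, then remove anything SLMM-only.
        sync_results = await sync_all_slms_to_slmm(db)
        cleanup_results = await cleanup_orphaned_slmm_devices(db)
        logger.info("SLMM sync: %s, cleanup: %s", sync_results, cleanup_results)
    finally:
        db.close()
    yield


app = FastAPI(lifespan=lifespan)
```
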
191
backend/services/snapshot.py
Normal file
@@ -0,0 +1,191 @@
|
||||
from datetime import datetime, timezone
|
||||
from sqlalchemy.orm import Session
|
||||
|
||||
from backend.database import get_db_session
|
||||
from backend.models import Emitter, RosterUnit, IgnoredUnit
|
||||
|
||||
|
||||
def ensure_utc(dt):
|
||||
if dt is None:
|
||||
return None
|
||||
if dt.tzinfo is None:
|
||||
return dt.replace(tzinfo=timezone.utc)
|
||||
return dt.astimezone(timezone.utc)
|
||||
|
||||
|
||||
def format_age(last_seen):
|
||||
if not last_seen:
|
||||
return "N/A"
|
||||
last_seen = ensure_utc(last_seen)
|
||||
now = datetime.now(timezone.utc)
|
||||
diff = now - last_seen
|
||||
hours = diff.total_seconds() // 3600
|
||||
mins = (diff.total_seconds() % 3600) // 60
|
||||
return f"{int(hours)}h {int(mins)}m"
|
||||
|
||||
|
||||
def calculate_status(last_seen, status_ok_threshold=12, status_pending_threshold=24):
|
||||
"""
|
||||
Calculate status based on how long ago the unit was last seen.
|
||||
|
||||
Args:
|
||||
last_seen: datetime of last seen (UTC)
|
||||
status_ok_threshold: hours before status becomes Pending (default 12)
|
||||
status_pending_threshold: hours before status becomes Missing (default 24)
|
||||
|
||||
Returns:
|
||||
"OK", "Pending", or "Missing"
|
||||
"""
|
||||
if not last_seen:
|
||||
return "Missing"
|
||||
|
||||
last_seen = ensure_utc(last_seen)
|
||||
now = datetime.now(timezone.utc)
|
||||
hours_ago = (now - last_seen).total_seconds() / 3600
|
||||
|
||||
if hours_ago > status_pending_threshold:
|
||||
return "Missing"
|
||||
elif hours_ago > status_ok_threshold:
|
||||
return "Pending"
|
||||
else:
|
||||
return "OK"
|
||||
|
||||
|
||||
def emit_status_snapshot():
|
||||
"""
|
||||
Merge roster (what we *intend*) with emitter data (what is *actually happening*).
|
||||
Status is recalculated based on current time to ensure accuracy.
|
||||
"""
|
||||
|
||||
db = get_db_session()
|
||||
try:
|
||||
# Get user preferences for status thresholds
|
||||
from backend.models import UserPreferences
|
||||
prefs = db.query(UserPreferences).filter_by(id=1).first()
|
||||
status_ok_threshold = prefs.status_ok_threshold_hours if prefs else 12
|
||||
status_pending_threshold = prefs.status_pending_threshold_hours if prefs else 24
|
||||
|
||||
roster = {r.id: r for r in db.query(RosterUnit).all()}
|
||||
emitters = {e.id: e for e in db.query(Emitter).all()}
|
||||
ignored = {i.id for i in db.query(IgnoredUnit).all()}
|
||||
|
||||
units = {}
|
||||
|
||||
# --- Merge roster entries first ---
|
||||
for unit_id, r in roster.items():
|
||||
e = emitters.get(unit_id)
|
||||
if r.retired:
|
||||
# Retired units get separated later
|
||||
status = "Retired"
|
||||
age = "N/A"
|
||||
last_seen = None
|
||||
fname = ""
|
||||
else:
|
||||
if e:
|
||||
last_seen = ensure_utc(e.last_seen)
|
||||
# RECALCULATE status based on current time, not stored value
|
||||
status = calculate_status(last_seen, status_ok_threshold, status_pending_threshold)
|
||||
age = format_age(last_seen)
|
||||
fname = e.last_file
|
||||
else:
|
||||
# Rostered but no emitter data
|
||||
status = "Missing"
|
||||
last_seen = None
|
||||
age = "N/A"
|
||||
fname = ""
|
||||
|
||||
units[unit_id] = {
|
||||
"id": unit_id,
|
||||
"status": status,
|
||||
"age": age,
|
||||
"last": last_seen.isoformat() if last_seen else None,
|
||||
"fname": fname,
|
||||
"deployed": r.deployed,
|
||||
"note": r.note or "",
|
||||
"retired": r.retired,
|
||||
# Device type and type-specific fields
|
||||
"device_type": r.device_type or "seismograph",
|
||||
"last_calibrated": r.last_calibrated.isoformat() if r.last_calibrated else None,
|
||||
"next_calibration_due": r.next_calibration_due.isoformat() if r.next_calibration_due else None,
|
||||
"deployed_with_modem_id": r.deployed_with_modem_id,
|
||||
"ip_address": r.ip_address,
|
||||
"phone_number": r.phone_number,
|
||||
"hardware_model": r.hardware_model,
|
||||
# Location for mapping
|
||||
"location": r.location or "",
|
||||
"address": r.address or "",
|
||||
"coordinates": r.coordinates or "",
|
||||
}
|
||||
|
||||
# --- Add unexpected emitter-only units ---
|
||||
for unit_id, e in emitters.items():
|
||||
if unit_id not in roster:
|
||||
last_seen = ensure_utc(e.last_seen)
|
||||
# RECALCULATE status for unknown units too
|
||||
status = calculate_status(last_seen, status_ok_threshold, status_pending_threshold)
|
||||
units[unit_id] = {
|
||||
"id": unit_id,
|
||||
"status": status,
|
||||
"age": format_age(last_seen),
|
||||
"last": last_seen.isoformat(),
|
||||
"fname": e.last_file,
|
||||
"deployed": False, # default
|
||||
"note": "",
|
||||
"retired": False,
|
||||
# Device type and type-specific fields (defaults for unknown units)
|
||||
"device_type": "seismograph", # default
|
||||
"last_calibrated": None,
|
||||
"next_calibration_due": None,
|
||||
"deployed_with_modem_id": None,
|
||||
"ip_address": None,
|
||||
"phone_number": None,
|
||||
"hardware_model": None,
|
||||
# Location fields
|
||||
"location": "",
|
||||
"address": "",
|
||||
"coordinates": "",
|
||||
}
|
||||
|
||||
# Separate buckets for UI
|
||||
active_units = {
|
||||
uid: u for uid, u in units.items()
|
||||
if not u["retired"] and u["deployed"] and uid not in ignored
|
||||
}
|
||||
|
||||
benched_units = {
|
||||
uid: u for uid, u in units.items()
|
||||
if not u["retired"] and not u["deployed"] and uid not in ignored
|
||||
}
|
||||
|
||||
retired_units = {
|
||||
uid: u for uid, u in units.items()
|
||||
if u["retired"]
|
||||
}
|
||||
|
||||
# Unknown units - emitters that aren't in the roster and aren't ignored
|
||||
unknown_units = {
|
||||
uid: u for uid, u in units.items()
|
||||
if uid not in roster and uid not in ignored
|
||||
}
|
||||
|
||||
return {
|
||||
"timestamp": datetime.utcnow().isoformat(),
|
||||
"units": units,
|
||||
"active": active_units,
|
||||
"benched": benched_units,
|
||||
"retired": retired_units,
|
||||
"unknown": unknown_units,
|
||||
"summary": {
|
||||
"total": len(active_units) + len(benched_units),
|
||||
"active": len(active_units),
|
||||
"benched": len(benched_units),
|
||||
"retired": len(retired_units),
|
||||
"unknown": len(unknown_units),
|
||||
# Status counts only for deployed units (active_units)
|
||||
"ok": sum(1 for u in active_units.values() if u["status"] == "OK"),
|
||||
"pending": sum(1 for u in active_units.values() if u["status"] == "Pending"),
|
||||
"missing": sum(1 for u in active_units.values() if u["status"] == "Missing"),
|
||||
}
|
||||
}
|
||||
finally:
|
||||
db.close()
|
||||
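
A short sketch of consuming the snapshot above (field names match the dictionary built in `emit_status_snapshot`; the alerting logic is illustrative only):

```python
# Consuming the snapshot (illustrative; field names come from emit_status_snapshot).
from backend.services.snapshot import emit_status_snapshot

snapshot = emit_status_snapshot()
summary = snapshot["summary"]

print(f"{summary['active']} deployed, {summary['pending']} pending, {summary['missing']} missing")

# List deployed units that have gone quiet.
for unit_id, unit in snapshot["active"].items():
    if unit["status"] == "Missing":
        print(f"ALERT: {unit_id} last seen {unit['last'] or 'never'}")
```
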
78
backend/static/icons/ICON_GENERATION_INSTRUCTIONS.md
Normal file
@@ -0,0 +1,78 @@
|
||||
# PWA Icon Generation Instructions
|
||||
|
||||
The PWA manifest requires 8 icon sizes for full compatibility across devices.
|
||||
|
||||
## Required Icon Sizes
|
||||
|
||||
- 72x72px
|
||||
- 96x96px
|
||||
- 128x128px
|
||||
- 144x144px
|
||||
- 152x152px
|
||||
- 192x192px
|
||||
- 384x384px
|
||||
- 512x512px (maskable)
|
||||
|
||||
## Design Guidelines
|
||||
|
||||
**Background:** Navy blue (#142a66)
|
||||
**Icon/Logo:** Orange (#f48b1c)
|
||||
**Style:** Simple, recognizable design that works at small sizes
|
||||
|
||||
## Quick Generation Methods
|
||||
|
||||
### Option 1: Online PWA Icon Generator
|
||||
|
||||
1. Visit: https://www.pwabuilder.com/imageGenerator
|
||||
2. Upload a 512x512px source image
|
||||
3. Download the generated icon pack
|
||||
4. Copy PNG files to this directory
|
||||
|
||||
### Option 2: ImageMagick (Command Line)
|
||||
|
||||
If you have a 512x512px source image called `source-icon.png`:
|
||||
|
||||
```bash
|
||||
# From the icons directory
|
||||
for size in 72 96 128 144 152 192 384 512; do
|
||||
convert source-icon.png -resize ${size}x${size} icon-${size}.png
|
||||
done
|
||||
```
|
||||
|
||||
### Option 3: Photoshop/GIMP
|
||||
|
||||
1. Create a 512x512px canvas
|
||||
2. Add your design (navy background + orange icon)
|
||||
3. Save/Export for each required size
|
||||
4. Name files as: icon-72.png, icon-96.png, etc.
|
||||
|
||||
## Temporary Placeholder
|
||||
|
||||
For testing, you can use a simple colored square:
|
||||
|
||||
```bash
|
||||
# Generate simple colored placeholder icons
|
||||
for size in 72 96 128 144 152 192 384 512; do
|
||||
convert -size ${size}x${size} xc:#142a66 \
|
||||
-gravity center \
|
||||
-fill '#f48b1c' \
|
||||
-pointsize $((size / 2)) \
|
||||
-annotate +0+0 'SFM' \
|
||||
icon-${size}.png
|
||||
done
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
After generating icons, verify (a quick check script follows this list):
|
||||
- All 8 sizes exist in this directory
|
||||
- Files are named exactly: icon-72.png, icon-96.png, etc.
|
||||
- Images have transparent or navy background
|
||||
- Logo/text is clearly visible at smallest size (72px)
|
||||
|
||||
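A quick check along these lines can be scripted (assumes ImageMagick's `identify` is installed):

```bash
# Verify all 8 icons exist and report their actual pixel dimensions
for size in 72 96 128 144 152 192 384 512; do
  f="icon-${size}.png"
  if [ -f "$f" ]; then
    identify -format "%f: %wx%h\n" "$f"
  else
    echo "MISSING: $f"
  fi
done
```
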
## Testing PWA Installation
|
||||
|
||||
1. Open SFM in Chrome on Android or Safari on iOS
|
||||
2. Look for "Install App" or "Add to Home Screen" prompt
|
||||
3. Check that the correct icon appears in the install dialog
|
||||
4. After installation, verify icon on home screen
|
||||
BIN
backend/static/icons/favicon-16.png
Normal file
|
After Width: | Height: | Size: 424 B |
BIN
backend/static/icons/favicon-32.png
Normal file
|
After Width: | Height: | Size: 1.1 KiB |
BIN
backend/static/icons/icon-128.png
Normal file
|
After Width: | Height: | Size: 7.7 KiB |
4
backend/static/icons/icon-128.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="128" height="128" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="128" height="128" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="128" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 288 B |
BIN
backend/static/icons/icon-144.png
Normal file
|
After Width: | Height: | Size: 9.2 KiB |
4
backend/static/icons/icon-144.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="144" height="144" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="144" height="144" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="14" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 287 B |
BIN
backend/static/icons/icon-152.png
Normal file
|
After Width: | Height: | Size: 10 KiB |
4
backend/static/icons/icon-152.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="152" height="152" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="152" height="152" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="152" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 288 B |
BIN
backend/static/icons/icon-192.png
Normal file
|
After Width: | Height: | Size: 15 KiB |
4
backend/static/icons/icon-192.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="192" height="192" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="192" height="192" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="192" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 288 B |
BIN
backend/static/icons/icon-384.png
Normal file
|
After Width: | Height: | Size: 44 KiB |
4
backend/static/icons/icon-384.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="384" height="384" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="384" height="384" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="38" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 287 B |
BIN
backend/static/icons/icon-512.png
Normal file
|
After Width: | Height: | Size: 68 KiB |
4
backend/static/icons/icon-512.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="512" height="512" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="512" height="512" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="512" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 288 B |
BIN
backend/static/icons/icon-72.png
Normal file
|
After Width: | Height: | Size: 3.2 KiB |
4
backend/static/icons/icon-72.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="72" height="72" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="72" height="72" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="72" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 283 B |
BIN
backend/static/icons/icon-96.png
Normal file
|
After Width: | Height: | Size: 5.0 KiB |
4
backend/static/icons/icon-96.png.svg
Normal file
@@ -0,0 +1,4 @@
|
||||
<svg width="96" height="96" xmlns="http://www.w3.org/2000/svg">
|
||||
<rect width="96" height="96" fill="#142a66"/>
|
||||
<text x="50%" y="50%" font-family="Arial, sans-serif" font-size="96" font-weight="bold" fill="#f48b1c" text-anchor="middle" dominant-baseline="middle">SFM</text>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 283 B |
78
backend/static/manifest.json
Normal file
@@ -0,0 +1,78 @@
|
||||
{
|
||||
"name": "Seismo Fleet Manager",
|
||||
"short_name": "SFM",
|
||||
"description": "Real-time seismograph and modem fleet monitoring and management",
|
||||
"start_url": "/",
|
||||
"display": "standalone",
|
||||
"orientation": "portrait",
|
||||
"background_color": "#142a66",
|
||||
"theme_color": "#f48b1c",
|
||||
"icons": [
|
||||
{
|
||||
"src": "/static/icons/icon-72.png",
|
||||
"sizes": "72x72",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-96.png",
|
||||
"sizes": "96x96",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-128.png",
|
||||
"sizes": "128x128",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-144.png",
|
||||
"sizes": "144x144",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-152.png",
|
||||
"sizes": "152x152",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-192.png",
|
||||
"sizes": "192x192",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-384.png",
|
||||
"sizes": "384x384",
|
||||
"type": "image/png"
|
||||
},
|
||||
{
|
||||
"src": "/static/icons/icon-512.png",
|
||||
"sizes": "512x512",
|
||||
"type": "image/png",
|
||||
"purpose": "any maskable"
|
||||
}
|
||||
],
|
||||
"screenshots": [
|
||||
{
|
||||
"src": "/static/screenshots/dashboard.png",
|
||||
"type": "image/png",
|
||||
"sizes": "540x720",
|
||||
"form_factor": "narrow"
|
||||
}
|
||||
],
|
||||
"categories": ["utilities", "productivity"],
|
||||
"shortcuts": [
|
||||
{
|
||||
"name": "Dashboard",
|
||||
"short_name": "Dashboard",
|
||||
"description": "View fleet status dashboard",
|
||||
"url": "/",
|
||||
"icons": [{ "src": "/static/icons/icon-192.png", "sizes": "192x192" }]
|
||||
},
|
||||
{
|
||||
"name": "Fleet Roster",
|
||||
"short_name": "Roster",
|
||||
"description": "View and manage fleet roster",
|
||||
"url": "/roster",
|
||||
"icons": [{ "src": "/static/icons/icon-192.png", "sizes": "192x192" }]
|
||||
}
|
||||
]
|
||||
}
|
||||
612
backend/static/mobile.css
Normal file
@@ -0,0 +1,612 @@
|
||||
/* Mobile-specific styles for Seismo Fleet Manager */
|
||||
/* Touch-optimized, portrait-first design */
|
||||
|
||||
/* ===== MOBILE TOUCH TARGETS ===== */
|
||||
@media (max-width: 767px) {
|
||||
/* Buttons - 44x44px minimum (iOS standard) */
|
||||
.btn, button:not(.tab-button), .button, a.button {
|
||||
min-width: 44px;
|
||||
min-height: 44px;
|
||||
padding: 12px 16px;
|
||||
}
|
||||
|
||||
/* Icon-only buttons */
|
||||
.icon-button, .btn-icon {
|
||||
width: 44px;
|
||||
height: 44px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
/* Form inputs - 48px height, 16px font prevents iOS zoom */
|
||||
input:not([type="checkbox"]):not([type="radio"]),
|
||||
select,
|
||||
textarea {
|
||||
min-height: 48px;
|
||||
font-size: 16px !important;
|
||||
padding: 12px 16px;
|
||||
}
|
||||
|
||||
/* Checkboxes and radio buttons - larger touch targets */
|
||||
input[type="checkbox"],
|
||||
input[type="radio"] {
|
||||
width: 24px;
|
||||
height: 24px;
|
||||
min-height: 24px;
|
||||
}
|
||||
|
||||
/* Bottom nav buttons - 56px industry standard */
|
||||
.bottom-nav button {
|
||||
min-height: 56px;
|
||||
padding: 8px;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: 4px;
|
||||
}
|
||||
|
||||
/* Increase spacing between clickable elements */
|
||||
.btn + .btn,
|
||||
button + button {
|
||||
margin-left: 8px;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== HAMBURGER MENU ===== */
|
||||
.hamburger-btn {
|
||||
position: fixed;
|
||||
top: 1rem;
|
||||
left: 1rem;
|
||||
z-index: 50;
|
||||
width: 44px;
|
||||
height: 44px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
background-color: white;
|
||||
border: 1px solid #e5e7eb;
|
||||
border-radius: 0.5rem;
|
||||
box-shadow: 0 1px 3px 0 rgb(0 0 0 / 0.1);
|
||||
transition: all 0.2s;
|
||||
}
|
||||
|
||||
.hamburger-btn:active {
|
||||
transform: scale(0.95);
|
||||
}
|
||||
|
||||
.dark .hamburger-btn {
|
||||
background-color: #1e293b;
|
||||
border-color: #374151;
|
||||
}
|
||||
|
||||
/* Hamburger icon */
|
||||
.hamburger-icon {
|
||||
width: 24px;
|
||||
height: 24px;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
justify-content: space-around;
|
||||
}
|
||||
|
||||
.hamburger-line {
|
||||
width: 100%;
|
||||
height: 2px;
|
||||
background-color: #374151;
|
||||
transition: all 0.3s;
|
||||
}
|
||||
|
||||
.dark .hamburger-line {
|
||||
background-color: #e5e7eb;
|
||||
}
|
||||
|
||||
/* Hamburger animation when menu open */
|
||||
.menu-open .hamburger-line:nth-child(1) {
|
||||
transform: translateY(8px) rotate(45deg);
|
||||
}
|
||||
|
||||
.menu-open .hamburger-line:nth-child(2) {
|
||||
opacity: 0;
|
||||
}
|
||||
|
||||
.menu-open .hamburger-line:nth-child(3) {
|
||||
transform: translateY(-8px) rotate(-45deg);
|
||||
}
|
||||
|
||||
/* ===== SIDEBAR (RESPONSIVE) ===== */
|
||||
.sidebar {
|
||||
position: fixed;
|
||||
left: 0;
|
||||
top: 0;
|
||||
width: 16rem; /* 256px */
|
||||
height: 100vh;
|
||||
z-index: 40;
|
||||
transition: transform 0.3s ease-in-out;
|
||||
}
|
||||
|
||||
@media (max-width: 767px) {
|
||||
.sidebar {
|
||||
transform: translateX(-100%);
|
||||
}
|
||||
|
||||
.sidebar.open {
|
||||
transform: translateX(0);
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.sidebar {
|
||||
transform: translateX(0) !important;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== BACKDROP ===== */
|
||||
.backdrop {
|
||||
position: fixed;
|
||||
inset: 0;
|
||||
background-color: rgba(0, 0, 0, 0.5);
|
||||
z-index: 30;
|
||||
opacity: 0;
|
||||
transition: opacity 0.3s ease-in-out;
|
||||
pointer-events: none;
|
||||
}
|
||||
|
||||
.backdrop.show {
|
||||
opacity: 1;
|
||||
pointer-events: auto;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.backdrop {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== BOTTOM NAVIGATION ===== */
|
||||
.bottom-nav {
|
||||
position: fixed;
|
||||
bottom: 0;
|
||||
left: 0;
|
||||
right: 0;
|
||||
height: 4rem;
|
||||
background-color: white;
|
||||
border-top: 1px solid #e5e7eb;
|
||||
z-index: 20;
|
||||
box-shadow: 0 -1px 3px 0 rgb(0 0 0 / 0.1);
|
||||
}
|
||||
|
||||
.dark .bottom-nav {
|
||||
background-color: #1e293b;
|
||||
border-top-color: #374151;
|
||||
}
|
||||
|
||||
.bottom-nav-btn {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: 4px;
|
||||
color: #6b7280;
|
||||
transition: all 0.2s;
|
||||
border: none;
|
||||
background: none;
|
||||
cursor: pointer;
|
||||
width: 100%;
|
||||
height: 100%;
|
||||
}
|
||||
|
||||
.bottom-nav-btn:active {
|
||||
transform: scale(0.95);
|
||||
background-color: #f3f4f6;
|
||||
}
|
||||
|
||||
.dark .bottom-nav-btn:active {
|
||||
background-color: #374151;
|
||||
}
|
||||
|
||||
.bottom-nav-btn.active {
|
||||
color: #f48b1c; /* seismo-orange */
|
||||
}
|
||||
|
||||
.bottom-nav-btn svg {
|
||||
width: 24px;
|
||||
height: 24px;
|
||||
}
|
||||
|
||||
.bottom-nav-btn span {
|
||||
font-size: 11px;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.bottom-nav {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== MAIN CONTENT ADJUSTMENTS ===== */
|
||||
.main-content {
|
||||
margin-left: 0;
|
||||
padding-bottom: 5rem; /* 80px for bottom nav */
|
||||
min-height: 100vh;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.main-content {
|
||||
margin-left: 16rem; /* 256px sidebar width */
|
||||
padding-bottom: 0;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== MOBILE ROSTER CARDS ===== */
|
||||
.unit-card {
|
||||
background-color: white;
|
||||
border-radius: 0.5rem;
|
||||
box-shadow: 0 1px 3px 0 rgb(0 0 0 / 0.1);
|
||||
padding: 1rem;
|
||||
cursor: pointer;
|
||||
transition: transform 0.2s, box-shadow 0.2s;
|
||||
-webkit-tap-highlight-color: transparent;
|
||||
}
|
||||
|
||||
.unit-card:active {
|
||||
transform: scale(0.98);
|
||||
}
|
||||
|
||||
.dark .unit-card {
|
||||
background-color: #1e293b;
|
||||
}
|
||||
|
||||
.unit-card:hover {
|
||||
box-shadow: 0 4px 6px -1px rgb(0 0 0 / 0.1);
|
||||
}
|
||||
|
||||
/* ===== UNIT DETAIL MODAL (BOTTOM SHEET) ===== */
|
||||
.unit-modal {
|
||||
position: fixed;
|
||||
inset: 0;
|
||||
z-index: 50;
|
||||
display: flex;
|
||||
align-items: flex-end;
|
||||
justify-content: center;
|
||||
pointer-events: none;
|
||||
opacity: 0;
|
||||
transition: opacity 0.3s ease-in-out;
|
||||
}
|
||||
|
||||
.unit-modal.show {
|
||||
pointer-events: auto;
|
||||
opacity: 1;
|
||||
}
|
||||
|
||||
.unit-modal-backdrop {
|
||||
position: absolute;
|
||||
inset: 0;
|
||||
background-color: rgba(0, 0, 0, 0.5);
|
||||
}
|
||||
|
||||
.unit-modal-content {
|
||||
position: relative;
|
||||
width: 100%;
|
||||
max-height: 85vh;
|
||||
background-color: white;
|
||||
border-top-left-radius: 1rem;
|
||||
border-top-right-radius: 1rem;
|
||||
box-shadow: 0 -4px 6px -1px rgb(0 0 0 / 0.1);
|
||||
overflow-y: auto;
|
||||
transform: translateY(100%);
|
||||
transition: transform 0.3s ease-out;
|
||||
}
|
||||
|
||||
.unit-modal.show .unit-modal-content {
|
||||
transform: translateY(0);
|
||||
}
|
||||
|
||||
.dark .unit-modal-content {
|
||||
background-color: #1e293b;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.unit-modal {
|
||||
align-items: center;
|
||||
}
|
||||
|
||||
.unit-modal-content {
|
||||
max-width: 42rem; /* 672px */
|
||||
border-radius: 0.75rem;
|
||||
transform: translateY(20px);
|
||||
opacity: 0;
|
||||
}
|
||||
|
||||
.unit-modal.show .unit-modal-content {
|
||||
transform: translateY(0);
|
||||
opacity: 1;
|
||||
}
|
||||
}
|
||||
|
||||
/* Modal handle bar (mobile only) */
|
||||
.modal-handle {
|
||||
height: 4px;
|
||||
width: 3rem;
|
||||
background-color: #d1d5db;
|
||||
border-radius: 9999px;
|
||||
margin: 0.75rem auto 1rem;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.modal-handle {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== OFFLINE INDICATOR ===== */
|
||||
.offline-indicator {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: 0;
|
||||
right: 0;
|
||||
background-color: #eab308; /* yellow-500 */
|
||||
color: white;
|
||||
text-align: center;
|
||||
padding: 0.5rem;
|
||||
font-size: 0.875rem;
|
||||
font-weight: 500;
|
||||
z-index: 50;
|
||||
transform: translateY(-100%);
|
||||
transition: transform 0.3s ease-in-out;
|
||||
}
|
||||
|
||||
.offline-indicator.show {
|
||||
transform: translateY(0);
|
||||
}
|
||||
|
||||
/* ===== SYNC TOAST ===== */
|
||||
.sync-toast {
|
||||
position: fixed;
|
||||
bottom: 6rem; /* Above bottom nav */
|
||||
left: 1rem;
|
||||
right: 1rem;
|
||||
background-color: #22c55e; /* green-500 */
|
||||
color: white;
|
||||
padding: 0.75rem 1rem;
|
||||
border-radius: 0.5rem;
|
||||
box-shadow: 0 4px 6px -1px rgb(0 0 0 / 0.1);
|
||||
z-index: 50;
|
||||
opacity: 0;
|
||||
transform: translateY(20px);
|
||||
transition: opacity 0.3s, transform 0.3s;
|
||||
pointer-events: none;
|
||||
}
|
||||
|
||||
.sync-toast.show {
|
||||
opacity: 1;
|
||||
transform: translateY(0);
|
||||
pointer-events: auto;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.sync-toast {
|
||||
bottom: 1rem;
|
||||
left: auto;
|
||||
right: 1rem;
|
||||
max-width: 20rem;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== MOBILE SEARCH BAR (STICKY) ===== */
|
||||
@media (max-width: 767px) {
|
||||
.mobile-search-sticky {
|
||||
position: sticky;
|
||||
top: 0;
|
||||
z-index: 10;
|
||||
background-color: #f3f4f6;
|
||||
margin: -1rem -1rem 1rem -1rem;
|
||||
padding: 0.5rem 1rem;
|
||||
}
|
||||
|
||||
.dark .mobile-search-sticky {
|
||||
background-color: #111827;
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.mobile-search-sticky {
|
||||
position: static;
|
||||
background-color: transparent;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== STATUS BADGES ===== */
|
||||
.status-dot {
|
||||
width: 1rem;
|
||||
height: 1rem;
|
||||
border-radius: 9999px;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.status-badge {
|
||||
padding: 0.25rem 0.75rem;
|
||||
border-radius: 9999px;
|
||||
font-size: 0.75rem;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
/* ===== DEVICE TYPE BADGES ===== */
|
||||
.device-badge {
|
||||
padding: 0.25rem 0.5rem;
|
||||
border-radius: 9999px;
|
||||
font-size: 0.75rem;
|
||||
font-weight: 500;
|
||||
display: inline-block;
|
||||
}
|
||||
|
||||
/* ===== MOBILE MAP HEIGHT ===== */
|
||||
@media (max-width: 767px) {
|
||||
#fleet-map {
|
||||
height: 16rem !important; /* 256px on mobile */
|
||||
}
|
||||
|
||||
#unit-map {
|
||||
height: 16rem !important; /* 256px on mobile */
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== MAP OVERLAP FIX ===== */
|
||||
/* Prevent map and controls from overlapping UI elements on mobile */
|
||||
@media (max-width: 767px) {
|
||||
/* Constrain leaflet container to prevent overflow */
|
||||
.leaflet-container {
|
||||
max-width: 100%;
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
/* Override Leaflet's default high z-index values */
|
||||
/* Bottom nav is z-20, sidebar is z-40, so map must be below */
|
||||
.leaflet-pane,
|
||||
.leaflet-tile-pane,
|
||||
.leaflet-overlay-pane,
|
||||
.leaflet-shadow-pane,
|
||||
.leaflet-marker-pane,
|
||||
.leaflet-tooltip-pane,
|
||||
.leaflet-popup-pane {
|
||||
z-index: 1 !important;
|
||||
}
|
||||
|
||||
/* Map controls should also be below navigation elements */
|
||||
.leaflet-control-container,
|
||||
.leaflet-top,
|
||||
.leaflet-bottom,
|
||||
.leaflet-left,
|
||||
.leaflet-right {
|
||||
z-index: 1 !important;
|
||||
}
|
||||
|
||||
.leaflet-control {
|
||||
z-index: 1 !important;
|
||||
}
|
||||
|
||||
/* When sidebar is open, hide all Leaflet controls (zoom, attribution, etc) */
|
||||
body.menu-open .leaflet-control-container {
|
||||
opacity: 0;
|
||||
pointer-events: none;
|
||||
transition: opacity 0.3s ease-in-out;
|
||||
}
|
||||
|
||||
/* Ensure map tiles are non-interactive when sidebar is open */
|
||||
body.menu-open #fleet-map,
|
||||
body.menu-open #unit-map {
|
||||
pointer-events: none;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== PENDING SYNC BADGE ===== */
|
||||
.pending-sync-badge {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
gap: 0.25rem;
|
||||
padding: 0.125rem 0.5rem;
|
||||
background-color: #fef3c7; /* amber-100 */
|
||||
color: #92400e; /* amber-800 */
|
||||
border-radius: 9999px;
|
||||
font-size: 0.75rem;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
.dark .pending-sync-badge {
|
||||
background-color: #78350f;
|
||||
color: #fef3c7;
|
||||
}
|
||||
|
||||
.pending-sync-badge::before {
|
||||
content: "⏳";
|
||||
font-size: 0.875rem;
|
||||
}
|
||||
|
||||
/* ===== MOBILE-SPECIFIC UTILITY CLASSES ===== */
|
||||
@media (max-width: 767px) {
|
||||
.mobile-text-lg {
|
||||
font-size: 1.125rem;
|
||||
line-height: 1.75rem;
|
||||
}
|
||||
|
||||
.mobile-text-xl {
|
||||
font-size: 1.25rem;
|
||||
line-height: 1.75rem;
|
||||
}
|
||||
|
||||
.mobile-p-4 {
|
||||
padding: 1rem;
|
||||
}
|
||||
|
||||
.mobile-mb-4 {
|
||||
margin-bottom: 1rem;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== ACCESSIBILITY ===== */
|
||||
/* Improve focus visibility on mobile */
|
||||
@media (max-width: 767px) {
|
||||
button:focus-visible,
|
||||
a:focus-visible,
|
||||
input:focus-visible,
|
||||
select:focus-visible,
|
||||
textarea:focus-visible {
|
||||
outline: 2px solid #f48b1c;
|
||||
outline-offset: 2px;
|
||||
}
|
||||
}
|
||||
|
||||
/* Prevent text selection on buttons (better mobile UX) */
|
||||
button,
|
||||
.btn,
|
||||
.button {
|
||||
-webkit-user-select: none;
|
||||
user-select: none;
|
||||
-webkit-tap-highlight-color: transparent;
|
||||
}
|
||||
|
||||
/* ===== SMOOTH SCROLLING ===== */
|
||||
html {
|
||||
scroll-behavior: smooth;
|
||||
}
|
||||
|
||||
/* Prevent overscroll bounce on iOS */
|
||||
body {
|
||||
overscroll-behavior-y: none;
|
||||
}
|
||||
|
||||
/* ===== LOADING STATES ===== */
|
||||
.loading-pulse {
|
||||
animation: pulse 2s cubic-bezier(0.4, 0, 0.6, 1) infinite;
|
||||
}
|
||||
|
||||
@keyframes pulse {
|
||||
0%, 100% {
|
||||
opacity: 1;
|
||||
}
|
||||
50% {
|
||||
opacity: 0.5;
|
||||
}
|
||||
}
|
||||
|
||||
/* ===== SAFE AREA SUPPORT (iOS notch) ===== */
|
||||
@supports (padding: env(safe-area-inset-bottom)) {
|
||||
.bottom-nav {
|
||||
padding-bottom: env(safe-area-inset-bottom);
|
||||
height: calc(4rem + env(safe-area-inset-bottom));
|
||||
}
|
||||
|
||||
.main-content {
|
||||
padding-bottom: calc(5rem + env(safe-area-inset-bottom));
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.main-content {
|
||||
padding-bottom: 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
597
backend/static/mobile.js
Normal file
@@ -0,0 +1,597 @@
|
||||
/* Mobile JavaScript for Seismo Fleet Manager */
|
||||
/* Handles hamburger menu, modals, offline sync, and mobile interactions */
|
||||
|
||||
// ===== GLOBAL STATE =====
|
||||
let currentUnitData = null;
|
||||
let isOnline = navigator.onLine;
|
||||
|
||||
// ===== HAMBURGER MENU TOGGLE =====
|
||||
function toggleMenu() {
|
||||
const sidebar = document.getElementById('sidebar');
|
||||
const backdrop = document.getElementById('backdrop');
|
||||
const hamburgerBtn = document.getElementById('hamburgerBtn');
|
||||
|
||||
if (sidebar && backdrop) {
|
||||
const isOpen = sidebar.classList.contains('open');
|
||||
|
||||
if (isOpen) {
|
||||
// Close menu
|
||||
sidebar.classList.remove('open');
|
||||
backdrop.classList.remove('show');
|
||||
hamburgerBtn?.classList.remove('menu-open');
|
||||
document.body.style.overflow = '';
|
||||
document.body.classList.remove('menu-open');
|
||||
} else {
|
||||
// Open menu
|
||||
sidebar.classList.add('open');
|
||||
backdrop.classList.add('show');
|
||||
hamburgerBtn?.classList.add('menu-open');
|
||||
document.body.style.overflow = 'hidden';
|
||||
document.body.classList.add('menu-open');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Close menu when clicking backdrop
|
||||
function closeMenuFromBackdrop() {
|
||||
const sidebar = document.getElementById('sidebar');
|
||||
const backdrop = document.getElementById('backdrop');
|
||||
const hamburgerBtn = document.getElementById('hamburgerBtn');
|
||||
|
||||
if (sidebar && backdrop) {
|
||||
sidebar.classList.remove('open');
|
||||
backdrop.classList.remove('show');
|
||||
hamburgerBtn?.classList.remove('menu-open');
|
||||
document.body.style.overflow = '';
|
||||
document.body.classList.remove('menu-open');
|
||||
}
|
||||
}
|
||||
|
||||
// Close menu when window is resized to desktop
|
||||
function handleResize() {
|
||||
if (window.innerWidth >= 768) {
|
||||
const sidebar = document.getElementById('sidebar');
|
||||
const backdrop = document.getElementById('backdrop');
|
||||
const hamburgerBtn = document.getElementById('hamburgerBtn');
|
||||
|
||||
if (sidebar && backdrop) {
|
||||
sidebar.classList.remove('open');
|
||||
backdrop.classList.remove('show');
|
||||
hamburgerBtn?.classList.remove('menu-open');
|
||||
document.body.style.overflow = '';
|
||||
document.body.classList.remove('menu-open');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ===== UNIT DETAIL MODAL =====
|
||||
function openUnitModal(unitId, status = null, age = null) {
|
||||
const modal = document.getElementById('unitModal');
|
||||
if (!modal) return;
|
||||
|
||||
// Store the status info passed from the card
|
||||
// Accept status if it's a non-empty string, use age if provided or default to '--'
|
||||
const cardStatusInfo = (status && status !== '') ? {
|
||||
status: status,
|
||||
age: age || '--'
|
||||
} : null;
|
||||
|
||||
console.log('openUnitModal:', { unitId, status, age, cardStatusInfo });
|
||||
|
||||
// Fetch unit data and populate modal
|
||||
fetchUnitDetails(unitId).then(unit => {
|
||||
if (unit) {
|
||||
currentUnitData = unit;
|
||||
// Pass the card status info to the populate function
|
||||
populateUnitModal(unit, cardStatusInfo);
|
||||
modal.classList.add('show');
|
||||
document.body.style.overflow = 'hidden';
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
function closeUnitModal(event) {
|
||||
// Only close if clicking backdrop or close button
|
||||
if (event && event.target.closest('.unit-modal-content') && !event.target.closest('[data-close-modal]')) {
|
||||
return;
|
||||
}
|
||||
|
||||
const modal = document.getElementById('unitModal');
|
||||
if (modal) {
|
||||
modal.classList.remove('show');
|
||||
document.body.style.overflow = '';
|
||||
currentUnitData = null;
|
||||
}
|
||||
}
|
||||
|
||||
async function fetchUnitDetails(unitId) {
|
||||
try {
|
||||
// Try to fetch from network first
|
||||
const response = await fetch(`/api/roster/${unitId}`);
|
||||
if (response.ok) {
|
||||
const unit = await response.json();
|
||||
|
||||
// Save to IndexedDB if offline support is available
|
||||
if (window.offlineDB) {
|
||||
await window.offlineDB.saveUnit(unit);
|
||||
}
|
||||
|
||||
return unit;
|
||||
}
|
||||
} catch (error) {
|
||||
console.log('Network fetch failed, trying offline storage:', error);
|
||||
|
||||
// Fall back to offline storage
|
||||
if (window.offlineDB) {
|
||||
return await window.offlineDB.getUnit(unitId);
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
function populateUnitModal(unit, cardStatusInfo = null) {
|
||||
// Set unit ID in header
|
||||
const modalUnitId = document.getElementById('modalUnitId');
|
||||
if (modalUnitId) {
|
||||
modalUnitId.textContent = unit.id;
|
||||
}
|
||||
|
||||
// Populate modal content
|
||||
const modalContent = document.getElementById('modalContent');
|
||||
if (!modalContent) return;
|
||||
|
||||
// Use status from card if provided, otherwise get from snapshot or derive from unit
|
||||
let statusInfo = cardStatusInfo || getUnitStatus(unit.id, unit);
|
||||
console.log('populateUnitModal:', { unit, cardStatusInfo, statusInfo });
|
||||
|
||||
const statusColor = statusInfo.status === 'OK' ? 'green' :
|
||||
statusInfo.status === 'Pending' ? 'yellow' :
|
||||
statusInfo.status === 'Missing' ? 'red' : 'gray';
|
||||
|
||||
const statusTextColor = statusInfo.status === 'OK' ? 'text-green-600 dark:text-green-400' :
|
||||
statusInfo.status === 'Pending' ? 'text-yellow-600 dark:text-yellow-400' :
|
||||
statusInfo.status === 'Missing' ? 'text-red-600 dark:text-red-400' :
|
||||
'text-gray-600 dark:text-gray-400';
|
||||
|
||||
// Determine status label (show "Benched" instead of "Unknown" for non-deployed units)
|
||||
let statusLabel = statusInfo.status;
|
||||
if ((statusInfo.status === 'Unknown' || statusInfo.status === 'N/A') && !unit.deployed) {
|
||||
statusLabel = 'Benched';
|
||||
}
|
||||
|
||||
// Create navigation URL for location
|
||||
const createNavUrl = (address, coordinates) => {
|
||||
if (address) {
|
||||
// Use address for navigation
|
||||
const encodedAddress = encodeURIComponent(address);
|
||||
// Universal link that works on iOS and Android
|
||||
return `https://www.google.com/maps/search/?api=1&query=${encodedAddress}`;
|
||||
} else if (coordinates) {
|
||||
// Use coordinates for navigation (format: lat,lon)
|
||||
const encodedCoords = encodeURIComponent(coordinates);
|
||||
return `https://www.google.com/maps/search/?api=1&query=${encodedCoords}`;
|
||||
}
|
||||
return null;
|
||||
};
|
||||
|
||||
const navUrl = createNavUrl(unit.address, unit.coordinates);
|
||||
|
||||
modalContent.innerHTML = `
|
||||
<!-- Status Section -->
|
||||
<div class="flex items-center justify-between pb-4 border-b border-gray-200 dark:border-gray-700">
|
||||
<div class="flex items-center gap-2">
|
||||
<span class="w-4 h-4 rounded-full bg-${statusColor}-500"></span>
|
||||
<span class="font-semibold ${statusTextColor}">${statusLabel}</span>
|
||||
</div>
|
||||
<span class="text-sm text-gray-500">${statusInfo.age || '--'}</span>
|
||||
</div>
|
||||
|
||||
<!-- Device Info -->
|
||||
<div class="space-y-3">
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Device Type</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.device_type || '--'}</p>
|
||||
</div>
|
||||
|
||||
${unit.unit_type ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Unit Type</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.unit_type}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.project_id ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Project ID</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.project_id}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.address ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Address</label>
|
||||
${navUrl ? `
|
||||
<a href="${navUrl}" target="_blank" class="mt-1 flex items-center gap-2 text-seismo-orange hover:text-orange-600 dark:text-seismo-orange dark:hover:text-orange-400">
|
||||
<svg class="w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M17.657 16.657L13.414 20.9a1.998 1.998 0 01-2.827 0l-4.244-4.243a8 8 0 1111.314 0z"></path>
|
||||
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15 11a3 3 0 11-6 0 3 3 0 016 0z"></path>
|
||||
</svg>
|
||||
<span class="underline">${unit.address}</span>
|
||||
</a>
|
||||
` : `
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.address}</p>
|
||||
`}
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.coordinates && !unit.address ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Coordinates</label>
|
||||
${navUrl ? `
|
||||
<a href="${navUrl}" target="_blank" class="mt-1 flex items-center gap-2 text-seismo-orange hover:text-orange-600 dark:text-seismo-orange dark:hover:text-orange-400">
|
||||
<svg class="w-4 h-4 flex-shrink-0" fill="none" stroke="currentColor" viewBox="0 0 24 24">
|
||||
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M17.657 16.657L13.414 20.9a1.998 1.998 0 01-2.827 0l-4.244-4.243a8 8 0 1111.314 0z"></path>
|
||||
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15 11a3 3 0 11-6 0 3 3 0 016 0z"></path>
|
||||
</svg>
|
||||
<span class="font-mono text-sm underline">${unit.coordinates}</span>
|
||||
</a>
|
||||
` : `
|
||||
<p class="mt-1 text-gray-900 dark:text-white font-mono text-sm">${unit.coordinates}</p>
|
||||
`}
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
<!-- Seismograph-specific fields -->
|
||||
${unit.device_type === 'seismograph' ? `
|
||||
${unit.last_calibrated ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Last Calibrated</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.last_calibrated}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.next_calibration_due ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Next Calibration Due</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.next_calibration_due}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.deployed_with_modem_id ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Deployed With Modem</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.deployed_with_modem_id}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
` : ''}
|
||||
|
||||
<!-- Modem-specific fields -->
|
||||
${unit.device_type === 'modem' ? `
|
||||
${unit.ip_address ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">IP Address</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.ip_address}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.phone_number ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Phone Number</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.phone_number}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
${unit.hardware_model ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Hardware Model</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.hardware_model}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
` : ''}
|
||||
|
||||
${unit.note ? `
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Notes</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white whitespace-pre-wrap">${unit.note}</p>
|
||||
</div>
|
||||
` : ''}
|
||||
|
||||
<div class="grid grid-cols-2 gap-3 pt-2">
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Deployed</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.deployed ? 'Yes' : 'No'}</p>
|
||||
</div>
|
||||
<div>
|
||||
<label class="text-xs font-medium text-gray-500 dark:text-gray-400">Retired</label>
|
||||
<p class="mt-1 text-gray-900 dark:text-white">${unit.retired ? 'Yes' : 'No'}</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
`;
|
||||
|
||||
// Update action buttons
|
||||
const editBtn = document.getElementById('modalEditBtn');
|
||||
const deployBtn = document.getElementById('modalDeployBtn');
|
||||
const deleteBtn = document.getElementById('modalDeleteBtn');
|
||||
|
||||
if (editBtn) {
|
||||
editBtn.onclick = () => {
|
||||
window.location.href = `/unit/${unit.id}`;
|
||||
};
|
||||
}
|
||||
|
||||
if (deployBtn) {
|
||||
deployBtn.textContent = unit.deployed ? 'Bench Unit' : 'Deploy Unit';
|
||||
deployBtn.onclick = () => toggleDeployStatus(unit.id, !unit.deployed);
|
||||
}
|
||||
|
||||
if (deleteBtn) {
|
||||
deleteBtn.onclick = () => deleteUnit(unit.id);
|
||||
}
|
||||
}
|
||||
|
||||
function getUnitStatus(unitId, unit = null) {
|
||||
// Prefer roster table data if it was rendered with the current view
|
||||
if (window.rosterStatusMap && window.rosterStatusMap[unitId]) {
|
||||
return window.rosterStatusMap[unitId];
|
||||
}
|
||||
|
||||
// Try to get status from dashboard snapshot if it exists
|
||||
if (window.lastStatusSnapshot && window.lastStatusSnapshot.units && window.lastStatusSnapshot.units[unitId]) {
|
||||
const unitStatus = window.lastStatusSnapshot.units[unitId];
|
||||
return {
|
||||
status: unitStatus.status,
|
||||
age: unitStatus.age,
|
||||
last: unitStatus.last
|
||||
};
|
||||
}
|
||||
|
||||
// Fallback: if unit data is provided, derive status from deployment state
|
||||
if (unit) {
|
||||
if (unit.deployed) {
|
||||
// For deployed units without status data, default to "Unknown"
|
||||
return { status: 'Unknown', age: '--', last: '--' };
|
||||
} else {
|
||||
// For benched units, use "N/A" which will be displayed as "Benched"
|
||||
return { status: 'N/A', age: '--', last: '--' };
|
||||
}
|
||||
}
|
||||
|
||||
return { status: 'Unknown', age: '--', last: '--' };
|
||||
}
|
||||
|
||||
async function toggleDeployStatus(unitId, deployed) {
|
||||
try {
|
||||
const formData = new FormData();
|
||||
formData.append('deployed', deployed ? 'true' : 'false');
|
||||
|
||||
const response = await fetch(`/api/roster/edit/${unitId}`, {
|
||||
method: 'POST',
|
||||
body: formData
|
||||
});
|
||||
|
||||
if (response.ok) {
|
||||
showToast('✓ Unit updated successfully');
|
||||
closeUnitModal();
|
||||
|
||||
// Trigger HTMX refresh if on roster page
|
||||
const rosterTable = document.querySelector('[hx-get*="roster"]');
|
||||
if (rosterTable) {
|
||||
htmx.trigger(rosterTable, 'refresh');
|
||||
}
|
||||
} else {
|
||||
showToast('❌ Failed to update unit', 'error');
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error toggling deploy status:', error);
|
||||
showToast('❌ Failed to update unit', 'error');
|
||||
}
|
||||
}
|
||||
|
||||
async function deleteUnit(unitId) {
|
||||
if (!confirm(`Are you sure you want to delete unit ${unitId}?\n\nThis action cannot be undone!`)) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const response = await fetch(`/api/roster/${unitId}`, {
|
||||
method: 'DELETE'
|
||||
});
|
||||
|
||||
if (response.ok) {
|
||||
showToast('✓ Unit deleted successfully');
|
||||
closeUnitModal();
|
||||
|
||||
// Refresh roster page if present
|
||||
const rosterTable = document.querySelector('[hx-get*="roster"]');
|
||||
if (rosterTable) {
|
||||
htmx.trigger(rosterTable, 'refresh');
|
||||
}
|
||||
} else {
|
||||
showToast('❌ Failed to delete unit', 'error');
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error deleting unit:', error);
|
||||
showToast('❌ Failed to delete unit', 'error');
|
||||
}
|
||||
}
|
||||
|
||||
// ===== ONLINE/OFFLINE STATUS =====
|
||||
function updateOnlineStatus() {
|
||||
isOnline = navigator.onLine;
|
||||
const offlineIndicator = document.getElementById('offlineIndicator');
|
||||
|
||||
if (offlineIndicator) {
|
||||
if (isOnline) {
|
||||
offlineIndicator.classList.remove('show');
|
||||
// Trigger sync when coming back online
|
||||
if (window.offlineDB) {
|
||||
syncPendingEdits();
|
||||
}
|
||||
} else {
|
||||
offlineIndicator.classList.add('show');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
window.addEventListener('online', updateOnlineStatus);
|
||||
window.addEventListener('offline', updateOnlineStatus);
|
||||
|
||||
// ===== SYNC FUNCTIONALITY =====
|
||||
async function syncPendingEdits() {
|
||||
if (!window.offlineDB) return;
|
||||
|
||||
try {
|
||||
const pendingEdits = await window.offlineDB.getPendingEdits();
|
||||
|
||||
if (pendingEdits.length === 0) return;
|
||||
|
||||
console.log(`Syncing ${pendingEdits.length} pending edits...`);
|
||||
|
||||
for (const edit of pendingEdits) {
|
||||
try {
|
||||
const formData = new FormData();
|
||||
for (const [key, value] of Object.entries(edit.changes)) {
|
||||
formData.append(key, value);
|
||||
}
|
||||
|
||||
const response = await fetch(`/api/roster/edit/${edit.unitId}`, {
|
||||
method: 'POST',
|
||||
body: formData
|
||||
});
|
||||
|
||||
if (response.ok) {
|
||||
await window.offlineDB.clearEdit(edit.id);
|
||||
console.log(`Synced edit ${edit.id} for unit ${edit.unitId}`);
|
||||
} else {
|
||||
console.error(`Failed to sync edit ${edit.id}`);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`Error syncing edit ${edit.id}:`, error);
|
||||
// Keep in queue for next sync attempt
|
||||
}
|
||||
}
|
||||
|
||||
// Show success toast
|
||||
showToast('✓ Synced successfully');
|
||||
|
||||
} catch (error) {
|
||||
console.error('Error in syncPendingEdits:', error);
|
||||
}
|
||||
}
|
||||
|
||||
// Manual sync button
|
||||
function manualSync() {
|
||||
if (!isOnline) {
|
||||
showToast('⚠️ Cannot sync while offline', 'warning');
|
||||
return;
|
||||
}
|
||||
|
||||
syncPendingEdits();
|
||||
}
|
||||
|
||||
// ===== TOAST NOTIFICATIONS =====
|
||||
function showToast(message, type = 'success') {
|
||||
const toast = document.getElementById('syncToast');
|
||||
if (!toast) return;
|
||||
|
||||
// Update toast appearance based on type
|
||||
toast.classList.remove('bg-green-500', 'bg-red-500', 'bg-yellow-500');
|
||||
|
||||
if (type === 'success') {
|
||||
toast.classList.add('bg-green-500');
|
||||
} else if (type === 'error') {
|
||||
toast.classList.add('bg-red-500');
|
||||
} else if (type === 'warning') {
|
||||
toast.classList.add('bg-yellow-500');
|
||||
}
|
||||
|
||||
toast.textContent = message;
|
||||
toast.classList.add('show');
|
||||
|
||||
// Auto-hide after 3 seconds
|
||||
setTimeout(() => {
|
||||
toast.classList.remove('show');
|
||||
}, 3000);
|
||||
}
|
||||
|
||||
// ===== BOTTOM NAV ACTIVE STATE =====
|
||||
function updateBottomNavActiveState() {
|
||||
const currentPath = window.location.pathname;
|
||||
const navButtons = document.querySelectorAll('.bottom-nav-btn');
|
||||
|
||||
navButtons.forEach(btn => {
|
||||
const href = btn.getAttribute('data-href');
|
||||
if (href && (currentPath === href || (href !== '/' && currentPath.startsWith(href)))) {
|
||||
btn.classList.add('active');
|
||||
} else {
|
||||
btn.classList.remove('active');
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
// ===== INITIALIZATION =====
|
||||
document.addEventListener('DOMContentLoaded', () => {
|
||||
// Initialize online/offline status
|
||||
updateOnlineStatus();
|
||||
|
||||
// Update bottom nav active state
|
||||
updateBottomNavActiveState();
|
||||
|
||||
// Add resize listener
|
||||
window.addEventListener('resize', handleResize);
|
||||
|
||||
// Close menu on navigation (for mobile)
|
||||
document.addEventListener('click', (e) => {
|
||||
const link = e.target.closest('a');
|
||||
if (link && link.closest('#sidebar')) {
|
||||
// Delay to allow navigation to start
|
||||
setTimeout(() => {
|
||||
if (window.innerWidth < 768) {
|
||||
closeMenuFromBackdrop();
|
||||
}
|
||||
}, 100);
|
||||
}
|
||||
});
|
||||
|
||||
// Prevent scroll when modals are open (iOS fix)
|
||||
document.addEventListener('touchmove', (e) => {
|
||||
const modal = document.querySelector('.unit-modal.show, #sidebar.open');
|
||||
if (modal && !e.target.closest('.unit-modal-content, #sidebar')) {
|
||||
e.preventDefault();
|
||||
}
|
||||
}, { passive: false });
|
||||
|
||||
console.log('Mobile.js initialized');
|
||||
});
|
||||
|
||||
// ===== SERVICE WORKER REGISTRATION =====
|
||||
if ('serviceWorker' in navigator) {
|
||||
navigator.serviceWorker.register('/sw.js')
|
||||
.then(registration => {
|
||||
console.log('Service Worker registered:', registration);
|
||||
|
||||
// Check for updates periodically
|
||||
setInterval(() => {
|
||||
registration.update();
|
||||
}, 60 * 60 * 1000); // Check every hour
|
||||
})
|
||||
.catch(error => {
|
||||
console.error('Service Worker registration failed:', error);
|
||||
});
|
||||
|
||||
// Listen for service worker updates
|
||||
navigator.serviceWorker.addEventListener('controllerchange', () => {
|
||||
console.log('Service Worker updated, reloading page...');
|
||||
window.location.reload();
|
||||
});
|
||||
}
|
||||
|
||||
// Export functions for global use
|
||||
window.toggleMenu = toggleMenu;
|
||||
window.closeMenuFromBackdrop = closeMenuFromBackdrop;
|
||||
window.openUnitModal = openUnitModal;
|
||||
window.closeUnitModal = closeUnitModal;
|
||||
window.manualSync = manualSync;
|
||||
window.showToast = showToast;
|
||||
349
backend/static/offline-db.js
Normal file
@@ -0,0 +1,349 @@
|
||||
/* IndexedDB wrapper for offline data storage in SFM */
|
||||
/* Handles unit data, status snapshots, and pending edit queue */
|
||||
|
||||
class OfflineDB {
|
||||
constructor() {
|
||||
this.dbName = 'sfm-offline-db';
|
||||
this.version = 1;
|
||||
this.db = null;
|
||||
}
|
||||
|
||||
// Initialize database
|
||||
async init() {
|
||||
return new Promise((resolve, reject) => {
|
||||
const request = indexedDB.open(this.dbName, this.version);
|
||||
|
||||
request.onerror = () => {
|
||||
console.error('IndexedDB error:', request.error);
|
||||
reject(request.error);
|
||||
};
|
||||
|
||||
request.onsuccess = () => {
|
||||
this.db = request.result;
|
||||
console.log('IndexedDB initialized successfully');
|
||||
resolve(this.db);
|
||||
};
|
||||
|
||||
request.onupgradeneeded = (event) => {
|
||||
const db = event.target.result;
|
||||
|
||||
// Units store - full unit details
|
||||
if (!db.objectStoreNames.contains('units')) {
|
||||
const unitsStore = db.createObjectStore('units', { keyPath: 'id' });
|
||||
unitsStore.createIndex('device_type', 'device_type', { unique: false });
|
||||
unitsStore.createIndex('deployed', 'deployed', { unique: false });
|
||||
console.log('Created units object store');
|
||||
}
|
||||
|
||||
// Status snapshot store - latest status data
|
||||
if (!db.objectStoreNames.contains('status-snapshot')) {
|
||||
db.createObjectStore('status-snapshot', { keyPath: 'timestamp' });
|
||||
console.log('Created status-snapshot object store');
|
||||
}
|
||||
|
||||
// Pending edits store - offline edit queue
|
||||
if (!db.objectStoreNames.contains('pending-edits')) {
|
||||
const editsStore = db.createObjectStore('pending-edits', {
|
||||
keyPath: 'id',
|
||||
autoIncrement: true
|
||||
});
|
||||
editsStore.createIndex('unitId', 'unitId', { unique: false });
|
||||
editsStore.createIndex('timestamp', 'timestamp', { unique: false });
|
||||
console.log('Created pending-edits object store');
|
||||
}
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
// ===== UNITS OPERATIONS =====
|
||||
|
||||
// Save or update a unit
|
||||
async saveUnit(unit) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['units'], 'readwrite');
|
||||
const store = transaction.objectStore('units');
|
||||
const request = store.put(unit);
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get a single unit by ID
|
||||
async getUnit(unitId) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['units'], 'readonly');
|
||||
const store = transaction.objectStore('units');
|
||||
const request = store.get(unitId);
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get all units
|
||||
async getAllUnits() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['units'], 'readonly');
|
||||
const store = transaction.objectStore('units');
|
||||
const request = store.getAll();
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Delete a unit
|
||||
async deleteUnit(unitId) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['units'], 'readwrite');
|
||||
const store = transaction.objectStore('units');
|
||||
const request = store.delete(unitId);
|
||||
|
||||
request.onsuccess = () => resolve();
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// ===== STATUS SNAPSHOT OPERATIONS =====
|
||||
|
||||
// Save status snapshot
|
||||
async saveSnapshot(snapshot) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['status-snapshot'], 'readwrite');
|
||||
const store = transaction.objectStore('status-snapshot');
|
||||
|
||||
// Add timestamp
|
||||
const snapshotWithTimestamp = {
|
||||
...snapshot,
|
||||
timestamp: Date.now()
|
||||
};
|
||||
|
||||
const request = store.put(snapshotWithTimestamp);
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get latest status snapshot
|
||||
async getLatestSnapshot() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['status-snapshot'], 'readonly');
|
||||
const store = transaction.objectStore('status-snapshot');
|
||||
const request = store.getAll();
|
||||
|
||||
request.onsuccess = () => {
|
||||
const snapshots = request.result;
|
||||
if (snapshots.length > 0) {
|
||||
// Return the most recent snapshot
|
||||
const latest = snapshots.reduce((prev, current) =>
|
||||
(prev.timestamp > current.timestamp) ? prev : current
|
||||
);
|
||||
resolve(latest);
|
||||
} else {
|
||||
resolve(null);
|
||||
}
|
||||
};
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Clear old snapshots (keep only latest)
|
||||
async clearOldSnapshots() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise(async (resolve, reject) => {
|
||||
const transaction = this.db.transaction(['status-snapshot'], 'readwrite');
|
||||
const store = transaction.objectStore('status-snapshot');
|
||||
const getAllRequest = store.getAll();
|
||||
|
||||
getAllRequest.onsuccess = () => {
|
||||
const snapshots = getAllRequest.result;
|
||||
|
||||
if (snapshots.length > 1) {
|
||||
// Sort by timestamp, keep only the latest
|
||||
snapshots.sort((a, b) => b.timestamp - a.timestamp);
|
||||
|
||||
// Delete all except the first (latest)
|
||||
for (let i = 1; i < snapshots.length; i++) {
|
||||
store.delete(snapshots[i].timestamp);
|
||||
}
|
||||
}
|
||||
|
||||
resolve();
|
||||
};
|
||||
|
||||
getAllRequest.onerror = () => reject(getAllRequest.error);
|
||||
});
|
||||
}
|
||||
|
||||
// ===== PENDING EDITS OPERATIONS =====
|
||||
|
||||
// Queue an edit for offline sync
|
||||
async queueEdit(unitId, changes) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['pending-edits'], 'readwrite');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
|
||||
const edit = {
|
||||
unitId,
|
||||
changes,
|
||||
timestamp: Date.now()
|
||||
};
|
||||
|
||||
const request = store.add(edit);
|
||||
|
||||
request.onsuccess = () => {
|
||||
console.log(`Queued edit for unit ${unitId}`);
|
||||
resolve(request.result);
|
||||
};
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get all pending edits
|
||||
async getPendingEdits() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['pending-edits'], 'readonly');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
const request = store.getAll();
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get pending edits count
|
||||
async getPendingEditsCount() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['pending-edits'], 'readonly');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
const request = store.count();
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Clear a synced edit
|
||||
async clearEdit(editId) {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['pending-edits'], 'readwrite');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
const request = store.delete(editId);
|
||||
|
||||
request.onsuccess = () => {
|
||||
console.log(`Cleared edit ${editId} from queue`);
|
||||
resolve();
|
||||
};
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Clear all pending edits
|
||||
async clearAllEdits() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['pending-edits'], 'readwrite');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
const request = store.clear();
|
||||
|
||||
request.onsuccess = () => {
|
||||
console.log('Cleared all pending edits');
|
||||
resolve();
|
||||
};
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
// ===== UTILITY OPERATIONS =====
|
||||
|
||||
// Clear all data (for debugging/reset)
|
||||
async clearAllData() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
const storeNames = ['units', 'status-snapshot', 'pending-edits'];
|
||||
const transaction = this.db.transaction(storeNames, 'readwrite');
|
||||
|
||||
storeNames.forEach(storeName => {
|
||||
transaction.objectStore(storeName).clear();
|
||||
});
|
||||
|
||||
transaction.oncomplete = () => {
|
||||
console.log('Cleared all offline data');
|
||||
resolve();
|
||||
};
|
||||
transaction.onerror = () => reject(transaction.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Get database statistics
|
||||
async getStats() {
|
||||
if (!this.db) await this.init();
|
||||
|
||||
const unitsCount = await new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['units'], 'readonly');
|
||||
const request = transaction.objectStore('units').count();
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
|
||||
const pendingEditsCount = await this.getPendingEditsCount();
|
||||
|
||||
const hasSnapshot = await new Promise((resolve, reject) => {
|
||||
const transaction = this.db.transaction(['status-snapshot'], 'readonly');
|
||||
const request = transaction.objectStore('status-snapshot').count();
|
||||
request.onsuccess = () => resolve(request.result > 0);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
|
||||
return {
|
||||
unitsCount,
|
||||
pendingEditsCount,
|
||||
hasSnapshot
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
// Create global instance
|
||||
window.offlineDB = new OfflineDB();
|
||||
|
||||
// Initialize on page load
|
||||
document.addEventListener('DOMContentLoaded', async () => {
|
||||
try {
|
||||
await window.offlineDB.init();
|
||||
console.log('Offline database ready');
|
||||
|
||||
// Display pending edits count if any
|
||||
const pendingCount = await window.offlineDB.getPendingEditsCount();
|
||||
if (pendingCount > 0) {
|
||||
console.log(`${pendingCount} pending edits in queue`);
|
||||
// Could show a badge in the UI here
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Failed to initialize offline database:', error);
|
||||
}
|
||||
});
|
||||
12
backend/static/style.css
Normal file
@@ -0,0 +1,12 @@
|
||||
/* Custom styles for Seismo Fleet Manager */
|
||||
|
||||
/* Additional custom styles can go here */
|
||||
|
||||
.card-hover {
|
||||
transition: transform 0.2s ease, box-shadow 0.2s ease;
|
||||
}
|
||||
|
||||
.card-hover:hover {
|
||||
transform: translateY(-2px);
|
||||
box-shadow: 0 10px 25px rgba(0, 0, 0, 0.1);
|
||||
}
|
||||
347
backend/static/sw.js
Normal file
@@ -0,0 +1,347 @@
|
||||
/* Service Worker for Seismo Fleet Manager PWA */
|
||||
/* Network-first strategy with cache fallback for real-time data */
|
||||
|
||||
const CACHE_VERSION = 'v1';
|
||||
const STATIC_CACHE = `sfm-static-${CACHE_VERSION}`;
|
||||
const DYNAMIC_CACHE = `sfm-dynamic-${CACHE_VERSION}`;
|
||||
const DATA_CACHE = `sfm-data-${CACHE_VERSION}`;
|
||||
|
||||
// Files to precache (critical app shell)
|
||||
const STATIC_FILES = [
|
||||
'/',
|
||||
'/static/style.css',
|
||||
'/static/mobile.css',
|
||||
'/static/mobile.js',
|
||||
'/static/offline-db.js',
|
||||
'/static/manifest.json',
|
||||
'https://cdn.tailwindcss.com',
|
||||
'https://unpkg.com/htmx.org@1.9.10',
|
||||
'https://unpkg.com/leaflet@1.9.4/dist/leaflet.css',
|
||||
'https://unpkg.com/leaflet@1.9.4/dist/leaflet.js'
|
||||
];
|
||||
|
||||
// Install event - cache static files
|
||||
self.addEventListener('install', (event) => {
|
||||
console.log('[SW] Installing service worker...');
|
||||
|
||||
event.waitUntil(
|
||||
caches.open(STATIC_CACHE)
|
||||
.then((cache) => {
|
||||
console.log('[SW] Precaching static files');
|
||||
return cache.addAll(STATIC_FILES);
|
||||
})
|
||||
.then(() => {
|
||||
console.log('[SW] Static files cached successfully');
|
||||
return self.skipWaiting(); // Activate immediately
|
||||
})
|
||||
.catch((error) => {
|
||||
console.error('[SW] Precaching failed:', error);
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
// Activate event - clean up old caches
|
||||
self.addEventListener('activate', (event) => {
|
||||
console.log('[SW] Activating service worker...');
|
||||
|
||||
event.waitUntil(
|
||||
caches.keys()
|
||||
.then((cacheNames) => {
|
||||
return Promise.all(
|
||||
cacheNames.map((cacheName) => {
|
||||
// Delete old caches that don't match current version
|
||||
if (cacheName !== STATIC_CACHE &&
|
||||
cacheName !== DYNAMIC_CACHE &&
|
||||
cacheName !== DATA_CACHE) {
|
||||
console.log('[SW] Deleting old cache:', cacheName);
|
||||
return caches.delete(cacheName);
|
||||
}
|
||||
})
|
||||
);
|
||||
})
|
||||
.then(() => {
|
||||
console.log('[SW] Service worker activated');
|
||||
return self.clients.claim(); // Take control of all pages
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
// Fetch event - network-first strategy
|
||||
self.addEventListener('fetch', (event) => {
|
||||
const { request } = event;
|
||||
const url = new URL(request.url);
|
||||
|
||||
// Skip non-GET requests
|
||||
if (request.method !== 'GET') {
|
||||
return;
|
||||
}
|
||||
|
||||
// Skip chrome-extension and other non-http(s) requests
|
||||
if (!url.protocol.startsWith('http')) {
|
||||
return;
|
||||
}
|
||||
|
||||
// API requests - network first, cache fallback
|
||||
if (url.pathname.startsWith('/api/')) {
|
||||
event.respondWith(networkFirstStrategy(request, DATA_CACHE));
|
||||
return;
|
||||
}
|
||||
|
||||
// Static assets - cache first
|
||||
if (isStaticAsset(url.pathname)) {
|
||||
event.respondWith(cacheFirstStrategy(request, STATIC_CACHE));
|
||||
return;
|
||||
}
|
||||
|
||||
// HTML pages - network first with cache fallback
|
||||
if (request.headers.get('accept')?.includes('text/html')) {
|
||||
event.respondWith(networkFirstStrategy(request, DYNAMIC_CACHE));
|
||||
return;
|
||||
}
|
||||
|
||||
// Everything else - network first
|
||||
event.respondWith(networkFirstStrategy(request, DYNAMIC_CACHE));
|
||||
});
|
||||
|
||||
// Network-first strategy
|
||||
async function networkFirstStrategy(request, cacheName) {
|
||||
try {
|
||||
// Try network first
|
||||
const networkResponse = await fetch(request);
|
||||
|
||||
// Cache successful responses
|
||||
if (networkResponse && networkResponse.status === 200) {
|
||||
const cache = await caches.open(cacheName);
|
||||
cache.put(request, networkResponse.clone());
|
||||
}
|
||||
|
||||
return networkResponse;
|
||||
} catch (error) {
|
||||
// Network failed, try cache
|
||||
console.log('[SW] Network failed, trying cache:', request.url);
|
||||
const cachedResponse = await caches.match(request);
|
||||
|
||||
if (cachedResponse) {
|
||||
console.log('[SW] Serving from cache:', request.url);
|
||||
return cachedResponse;
|
||||
}
|
||||
|
||||
// No cache available, return offline page or error
|
||||
console.error('[SW] No cache available for:', request.url);
|
||||
|
||||
// For HTML requests, return a basic offline page
|
||||
if (request.headers.get('accept')?.includes('text/html')) {
|
||||
return new Response(
|
||||
`<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Offline - SFM</title>
|
||||
<style>
|
||||
body {
|
||||
font-family: system-ui, -apple-system, sans-serif;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
min-height: 100vh;
|
||||
margin: 0;
|
||||
background: #f3f4f6;
|
||||
color: #1f2937;
|
||||
}
|
||||
.container {
|
||||
text-align: center;
|
||||
padding: 2rem;
|
||||
}
|
||||
h1 { color: #f48b1c; margin-bottom: 1rem; }
|
||||
p { margin-bottom: 1.5rem; color: #6b7280; }
|
||||
button {
|
||||
background: #f48b1c;
|
||||
color: white;
|
||||
border: none;
|
||||
padding: 0.75rem 1.5rem;
|
||||
border-radius: 0.5rem;
|
||||
font-size: 1rem;
|
||||
cursor: pointer;
|
||||
}
|
||||
button:hover { background: #d97706; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<h1>📡 You're Offline</h1>
|
||||
<p>SFM requires an internet connection for this page.</p>
|
||||
<p>Please check your connection and try again.</p>
|
||||
<button onclick="location.reload()">Retry</button>
|
||||
</div>
|
||||
</body>
|
||||
</html>`,
|
||||
{
|
||||
headers: { 'Content-Type': 'text/html' }
|
||||
}
|
||||
);
|
||||
}
|
||||
|
||||
// For other requests, return error
|
||||
return new Response('Network error', {
|
||||
status: 503,
|
||||
statusText: 'Service Unavailable'
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Cache-first strategy
|
||||
async function cacheFirstStrategy(request, cacheName) {
|
||||
const cachedResponse = await caches.match(request);
|
||||
|
||||
if (cachedResponse) {
|
||||
return cachedResponse;
|
||||
}
|
||||
|
||||
// Not in cache, fetch from network
|
||||
try {
|
||||
const networkResponse = await fetch(request);
|
||||
|
||||
// Cache successful responses
|
||||
if (networkResponse && networkResponse.status === 200) {
|
||||
const cache = await caches.open(cacheName);
|
||||
cache.put(request, networkResponse.clone());
|
||||
}
|
||||
|
||||
return networkResponse;
|
||||
} catch (error) {
|
||||
console.error('[SW] Fetch failed:', request.url, error);
|
||||
return new Response('Network error', {
|
||||
status: 503,
|
||||
statusText: 'Service Unavailable'
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
// Check if URL is a static asset
|
||||
function isStaticAsset(pathname) {
|
||||
const staticExtensions = ['.css', '.js', '.png', '.jpg', '.jpeg', '.svg', '.ico', '.woff', '.woff2'];
|
||||
return staticExtensions.some(ext => pathname.endsWith(ext));
|
||||
}
|
||||
|
||||
// Background Sync - for offline edits
|
||||
self.addEventListener('sync', (event) => {
|
||||
console.log('[SW] Background sync event:', event.tag);
|
||||
|
||||
if (event.tag === 'sync-edits') {
|
||||
event.waitUntil(syncPendingEdits());
|
||||
}
|
||||
});
|
||||
|
||||
// Sync pending edits to server
|
||||
async function syncPendingEdits() {
|
||||
console.log('[SW] Syncing pending edits...');
|
||||
|
||||
try {
|
||||
// Get pending edits from IndexedDB
|
||||
const db = await openDatabase();
|
||||
const edits = await getPendingEdits(db);
|
||||
|
||||
if (edits.length === 0) {
|
||||
console.log('[SW] No pending edits to sync');
|
||||
return;
|
||||
}
|
||||
|
||||
console.log(`[SW] Syncing ${edits.length} pending edits`);
|
||||
|
||||
// Send edits to server
|
||||
const response = await fetch('/api/sync-edits', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
},
|
||||
body: JSON.stringify({ edits })
|
||||
});
|
||||
|
||||
if (response.ok) {
|
||||
const result = await response.json();
|
||||
console.log('[SW] Sync successful:', result);
|
||||
|
||||
// Clear synced edits from IndexedDB
|
||||
await clearSyncedEdits(db, result.synced_ids || []);
|
||||
|
||||
// Notify all clients about successful sync
|
||||
const clients = await self.clients.matchAll();
|
||||
clients.forEach(client => {
|
||||
client.postMessage({
|
||||
type: 'SYNC_COMPLETE',
|
||||
synced: result.synced
|
||||
});
|
||||
});
|
||||
} else {
|
||||
console.error('[SW] Sync failed:', response.status);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('[SW] Sync error:', error);
|
||||
throw error; // Will retry sync later
|
||||
}
|
||||
}
|
||||
|
||||
// IndexedDB helpers (simplified versions - full implementations in offline-db.js)
|
||||
function openDatabase() {
|
||||
return new Promise((resolve, reject) => {
|
||||
const request = indexedDB.open('sfm-offline-db', 1);
|
||||
|
||||
request.onerror = () => reject(request.error);
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
|
||||
request.onupgradeneeded = (event) => {
|
||||
const db = event.target.result;
|
||||
|
||||
if (!db.objectStoreNames.contains('pending-edits')) {
|
||||
db.createObjectStore('pending-edits', { keyPath: 'id', autoIncrement: true });
|
||||
}
|
||||
};
|
||||
});
|
||||
}
|
||||
|
||||
function getPendingEdits(db) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = db.transaction(['pending-edits'], 'readonly');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
const request = store.getAll();
|
||||
|
||||
request.onsuccess = () => resolve(request.result);
|
||||
request.onerror = () => reject(request.error);
|
||||
});
|
||||
}
|
||||
|
||||
function clearSyncedEdits(db, editIds) {
|
||||
return new Promise((resolve, reject) => {
|
||||
const transaction = db.transaction(['pending-edits'], 'readwrite');
|
||||
const store = transaction.objectStore('pending-edits');
|
||||
|
||||
editIds.forEach(id => {
|
||||
store.delete(id);
|
||||
});
|
||||
|
||||
transaction.oncomplete = () => resolve();
|
||||
transaction.onerror = () => reject(transaction.error);
|
||||
});
|
||||
}
|
||||
|
||||
// Message event - handle messages from clients
|
||||
self.addEventListener('message', (event) => {
|
||||
console.log('[SW] Message received:', event.data);
|
||||
|
||||
if (event.data.type === 'SKIP_WAITING') {
|
||||
self.skipWaiting();
|
||||
}
|
||||
|
||||
if (event.data.type === 'CLEAR_CACHE') {
|
||||
event.waitUntil(
|
||||
caches.keys().then((cacheNames) => {
|
||||
return Promise.all(
|
||||
cacheNames.map((cacheName) => caches.delete(cacheName))
|
||||
);
|
||||
})
|
||||
);
|
||||
}
|
||||
});
|
||||
|
||||
console.log('[SW] Service Worker loaded');
|
||||
BIN  backend/static/terra-view-logo-dark.png      Normal file  (After: 13 KiB)
BIN  backend/static/terra-view-logo-dark@2x.png   Normal file  (After: 57 KiB)
BIN  backend/static/terra-view-logo-light.png     Normal file  (After: 14 KiB)
BIN  backend/static/terra-view-logo-light@2x.png  Normal file  (After: 49 KiB)
39
backend/templates_config.py
Normal file
@@ -0,0 +1,39 @@
|
||||
"""
|
||||
Shared Jinja2 templates configuration.
|
||||
|
||||
All routers should import `templates` from this module to get consistent
|
||||
filter and global function registration.
|
||||
"""
|
||||
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
# Import timezone utilities
|
||||
from backend.utils.timezone import (
|
||||
format_local_datetime, format_local_time,
|
||||
get_user_timezone, get_timezone_abbreviation
|
||||
)
|
||||
|
||||
|
||||
def jinja_local_datetime(dt, fmt="%Y-%m-%d %H:%M"):
|
||||
"""Jinja filter to convert UTC datetime to local timezone."""
|
||||
return format_local_datetime(dt, fmt)
|
||||
|
||||
|
||||
def jinja_local_time(dt):
|
||||
"""Jinja filter to format time in local timezone."""
|
||||
return format_local_time(dt)
|
||||
|
||||
|
||||
def jinja_timezone_abbr():
|
||||
"""Jinja global to get current timezone abbreviation."""
|
||||
return get_timezone_abbreviation()
|
||||
|
||||
|
||||
# Create templates instance
|
||||
templates = Jinja2Templates(directory="templates")
|
||||
|
||||
# Register Jinja filters and globals
|
||||
templates.env.filters["local_datetime"] = jinja_local_datetime
|
||||
templates.env.filters["local_time"] = jinja_local_time
|
||||
templates.env.globals["timezone_abbr"] = jinja_timezone_abbr
|
||||
templates.env.globals["get_user_timezone"] = get_user_timezone
|
||||
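The module above is intended to be the single place where template filters and globals are registered; a minimal sketch of a router consuming it (the route path, template name, and context keys are illustrative, not taken from the repo):

```python
# Hypothetical router; only the import of `templates` reflects the module above.
from fastapi import APIRouter, Request
from backend.templates_config import templates

router = APIRouter()

@router.get("/example")
async def example_page(request: Request):
    # Template name and context keys are made up for illustration.
    return templates.TemplateResponse("example.html", {"request": request})
```

Inside `example.html`, `{{ some_utc_value|local_datetime }}` and `{{ timezone_abbr() }}` would then resolve through the registered helpers.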
1
backend/utils/__init__.py
Normal file
@@ -0,0 +1 @@
|
||||
# Utils package
|
||||
173
backend/utils/timezone.py
Normal file
@@ -0,0 +1,173 @@
|
||||
"""
|
||||
Timezone utilities for Terra-View.
|
||||
|
||||
Provides consistent timezone handling throughout the application.
|
||||
All database times are stored in UTC; this module converts for display.
|
||||
"""
|
||||
|
||||
from datetime import datetime
|
||||
from zoneinfo import ZoneInfo
|
||||
from typing import Optional
|
||||
|
||||
from backend.database import SessionLocal
|
||||
from backend.models import UserPreferences
|
||||
|
||||
|
||||
# Default timezone if none set
|
||||
DEFAULT_TIMEZONE = "America/New_York"
|
||||
|
||||
|
||||
def get_user_timezone() -> str:
|
||||
"""
|
||||
Get the user's configured timezone from preferences.
|
||||
|
||||
Returns:
|
||||
Timezone string (e.g., "America/New_York")
|
||||
"""
|
||||
db = SessionLocal()
|
||||
try:
|
||||
prefs = db.query(UserPreferences).filter_by(id=1).first()
|
||||
if prefs and prefs.timezone:
|
||||
return prefs.timezone
|
||||
return DEFAULT_TIMEZONE
|
||||
finally:
|
||||
db.close()
|
||||
|
||||
|
||||
def get_timezone_info(tz_name: str = None) -> ZoneInfo:
|
||||
"""
|
||||
Get ZoneInfo object for the specified or user's timezone.
|
||||
|
||||
Args:
|
||||
tz_name: Timezone name, or None to use user preference
|
||||
|
||||
Returns:
|
||||
ZoneInfo object
|
||||
"""
|
||||
if tz_name is None:
|
||||
tz_name = get_user_timezone()
|
||||
try:
|
||||
return ZoneInfo(tz_name)
|
||||
except Exception:
|
||||
return ZoneInfo(DEFAULT_TIMEZONE)
|
||||
|
||||
|
||||
def utc_to_local(dt: datetime, tz_name: str = None) -> datetime:
|
||||
"""
|
||||
Convert a UTC datetime to local timezone.
|
||||
|
||||
Args:
|
||||
dt: Datetime in UTC (naive or aware)
|
||||
tz_name: Target timezone, or None to use user preference
|
||||
|
||||
Returns:
|
||||
Datetime in local timezone
|
||||
"""
|
||||
if dt is None:
|
||||
return None
|
||||
|
||||
tz = get_timezone_info(tz_name)
|
||||
|
||||
# Assume naive datetime is UTC
|
||||
if dt.tzinfo is None:
|
||||
dt = dt.replace(tzinfo=ZoneInfo("UTC"))
|
||||
|
||||
return dt.astimezone(tz)
|
||||
|
||||
|
||||
def local_to_utc(dt: datetime, tz_name: str = None) -> datetime:
|
||||
"""
|
||||
Convert a local datetime to UTC.
|
||||
|
||||
Args:
|
||||
dt: Datetime in local timezone (naive or aware)
|
||||
tz_name: Source timezone, or None to use user preference
|
||||
|
||||
Returns:
|
||||
Datetime in UTC (naive, for database storage)
|
||||
"""
|
||||
if dt is None:
|
||||
return None
|
||||
|
||||
tz = get_timezone_info(tz_name)
|
||||
|
||||
# Assume naive datetime is in local timezone
|
||||
if dt.tzinfo is None:
|
||||
dt = dt.replace(tzinfo=tz)
|
||||
|
||||
# Convert to UTC and strip tzinfo for database storage
|
||||
return dt.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
|
||||
|
||||
|
||||
def format_local_datetime(dt: datetime, fmt: str = "%Y-%m-%d %H:%M", tz_name: str = None) -> str:
|
||||
"""
|
||||
Format a UTC datetime as local time string.
|
||||
|
||||
Args:
|
||||
dt: Datetime in UTC
|
||||
fmt: strftime format string
|
||||
tz_name: Target timezone, or None to use user preference
|
||||
|
||||
Returns:
|
||||
Formatted datetime string in local time
|
||||
"""
|
||||
if dt is None:
|
||||
return "N/A"
|
||||
|
||||
local_dt = utc_to_local(dt, tz_name)
|
||||
return local_dt.strftime(fmt)
|
||||
|
||||
|
||||
def format_local_time(dt: datetime, tz_name: str = None) -> str:
|
||||
"""
|
||||
Format a UTC datetime as local time (HH:MM format).
|
||||
|
||||
Args:
|
||||
dt: Datetime in UTC
|
||||
tz_name: Target timezone
|
||||
|
||||
Returns:
|
||||
Time string in HH:MM format
|
||||
"""
|
||||
return format_local_datetime(dt, "%H:%M", tz_name)
|
||||
|
||||
|
||||
def format_local_date(dt: datetime, tz_name: str = None) -> str:
|
||||
"""
|
||||
Format a UTC datetime as local date (YYYY-MM-DD format).
|
||||
|
||||
Args:
|
||||
dt: Datetime in UTC
|
||||
tz_name: Target timezone
|
||||
|
||||
Returns:
|
||||
Date string
|
||||
"""
|
||||
return format_local_datetime(dt, "%Y-%m-%d", tz_name)
|
||||
|
||||
|
||||
def get_timezone_abbreviation(tz_name: str = None) -> str:
|
||||
"""
|
||||
Get the abbreviation for a timezone (e.g., EST, EDT, PST).
|
||||
|
||||
Args:
|
||||
tz_name: Timezone name, or None to use user preference
|
||||
|
||||
Returns:
|
||||
Timezone abbreviation
|
||||
"""
|
||||
tz = get_timezone_info(tz_name)
|
||||
now = datetime.now(tz)
|
||||
return now.strftime("%Z")
|
||||
|
||||
|
||||
# Common US timezone choices for settings dropdown
|
||||
TIMEZONE_CHOICES = [
|
||||
("America/New_York", "Eastern Time (ET)"),
|
||||
("America/Chicago", "Central Time (CT)"),
|
||||
("America/Denver", "Mountain Time (MT)"),
|
||||
("America/Los_Angeles", "Pacific Time (PT)"),
|
||||
("America/Anchorage", "Alaska Time (AKT)"),
|
||||
("Pacific/Honolulu", "Hawaii Time (HT)"),
|
||||
("UTC", "UTC"),
|
||||
]
|
||||
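A short usage sketch for the helpers above (sample values are invented; passing `tz_name` explicitly avoids the user-preference lookup against the database):

```python
from datetime import datetime
from backend.utils.timezone import utc_to_local, local_to_utc, format_local_datetime

stored = datetime(2025, 1, 1, 19, 30)  # naive UTC, as stored in the database

print(format_local_datetime(stored, tz_name="America/New_York"))  # "2025-01-01 14:30"
local = utc_to_local(stored, tz_name="America/Chicago")           # aware, Central Time
print(local_to_utc(local, tz_name="America/Chicago"))             # naive UTC again
```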
10
data-dev/backups/snapshot_20251216_201738.db.meta.json
Normal file
@@ -0,0 +1,10 @@
|
||||
{
|
||||
"filename": "snapshot_20251216_201738.db",
|
||||
"created_at": "20251216_201738",
|
||||
"created_at_iso": "2025-12-16T20:17:38.638982",
|
||||
"description": "Auto-backup before restore",
|
||||
"size_bytes": 57344,
|
||||
"size_mb": 0.05,
|
||||
"original_db_size_bytes": 57344,
|
||||
"type": "manual"
|
||||
}
|
||||
9
data-dev/backups/snapshot_uploaded_20251216_201732.db.meta.json
Normal file
@@ -0,0 +1,9 @@
|
||||
{
|
||||
"filename": "snapshot_uploaded_20251216_201732.db",
|
||||
"created_at": "20251216_201732",
|
||||
"created_at_iso": "2025-12-16T20:17:32.574205",
|
||||
"description": "Uploaded: snapshot_20251216_200259.db",
|
||||
"size_bytes": 77824,
|
||||
"size_mb": 0.07,
|
||||
"type": "uploaded"
|
||||
}
|
||||
72
docker-compose.yml
Normal file
@@ -0,0 +1,72 @@
|
||||
services:
|
||||
|
||||
# --- TERRA-VIEW PRODUCTION ---
|
||||
terra-view:
|
||||
build: .
|
||||
container_name: terra-view
|
||||
ports:
|
||||
- "8001:8001"
|
||||
volumes:
|
||||
- ./data:/app/data
|
||||
environment:
|
||||
- PYTHONUNBUFFERED=1
|
||||
- ENVIRONMENT=production
|
||||
- SLMM_BASE_URL=http://host.docker.internal:8100
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- slmm
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 40s
|
||||
|
||||
# --- TERRA-VIEW DEVELOPMENT ---
|
||||
# terra-view-dev:
|
||||
# build: .
|
||||
# container_name: terra-view-dev
|
||||
# ports:
|
||||
# - "1001:8001"
|
||||
# volumes:
|
||||
# - ./data-dev:/app/data
|
||||
# environment:
|
||||
# - PYTHONUNBUFFERED=1
|
||||
# - ENVIRONMENT=development
|
||||
# - SLMM_BASE_URL=http://slmm:8100
|
||||
# restart: unless-stopped
|
||||
# depends_on:
|
||||
# - slmm
|
||||
# healthcheck:
|
||||
# test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
|
||||
# interval: 30s
|
||||
# timeout: 10s
|
||||
# retries: 3
|
||||
# start_period: 40s
|
||||
|
||||
# --- SLMM (Sound Level Meter Manager) ---
|
||||
slmm:
|
||||
build:
|
||||
context: ../slmm
|
||||
dockerfile: Dockerfile
|
||||
container_name: slmm
|
||||
network_mode: host
|
||||
volumes:
|
||||
- ../slmm/data:/app/data
|
||||
environment:
|
||||
- PYTHONUNBUFFERED=1
|
||||
- PORT=8100
|
||||
- CORS_ORIGINS=*
|
||||
restart: unless-stopped
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-f", "http://localhost:8100/health"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 10s
|
||||
|
||||
volumes:
|
||||
data:
|
||||
data-dev:
|
||||
477
docs/DATABASE_MANAGEMENT.md
Normal file
@@ -0,0 +1,477 @@
|
||||
# Database Management Guide
|
||||
|
||||
This guide covers the comprehensive database management features available in the Seismo Fleet Manager, including manual snapshots, restoration, remote cloning, and automatic backups.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Manual Database Snapshots](#manual-database-snapshots)
|
||||
2. [Restore from Snapshot](#restore-from-snapshot)
|
||||
3. [Download and Upload Snapshots](#download-and-upload-snapshots)
|
||||
4. [Clone Database to Dev Server](#clone-database-to-dev-server)
|
||||
5. [Automatic Backup Service](#automatic-backup-service)
|
||||
6. [API Reference](#api-reference)
|
||||
|
||||
---
|
||||
|
||||
## Manual Database Snapshots
|
||||
|
||||
### Creating a Snapshot via UI
|
||||
|
||||
1. Navigate to **Settings** → **Danger Zone** tab
|
||||
2. Scroll to the **Database Management** section
|
||||
3. Click **"Create Snapshot"**
|
||||
4. Optionally enter a description
|
||||
5. The snapshot will be created and appear in the "Available Snapshots" list
|
||||
|
||||
### Creating a Snapshot via API
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8000/api/settings/database/snapshot \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"description": "Pre-deployment backup"}'
|
||||
```
|
||||
|
||||
### What Happens
|
||||
|
||||
- A full copy of the SQLite database is created using the SQLite backup API (see the sketch after this list)
|
||||
- The snapshot is stored in `./data/backups/` directory
|
||||
- A metadata JSON file is created alongside the snapshot
|
||||
- No downtime or interruption to the running application
|
||||
### Snapshot Files
|
||||
|
||||
Snapshots are stored as:
|
||||
- **Database file**: `snapshot_YYYYMMDD_HHMMSS.db`
|
||||
- **Metadata file**: `snapshot_YYYYMMDD_HHMMSS.db.meta.json`
|
||||
|
||||
Example:
|
||||
```
|
||||
data/backups/
|
||||
├── snapshot_20250101_143022.db
|
||||
├── snapshot_20250101_143022.db.meta.json
|
||||
├── snapshot_20250102_080000.db
|
||||
└── snapshot_20250102_080000.db.meta.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Restore from Snapshot
|
||||
|
||||
### Restoring via UI
|
||||
|
||||
1. Navigate to **Settings** → **Danger Zone** tab
|
||||
2. In the **Available Snapshots** section, find the snapshot you want to restore
|
||||
3. Click the **restore icon** (circular arrow) next to the snapshot
|
||||
4. Confirm the restoration warning
|
||||
5. A safety backup of the current database is automatically created
|
||||
6. The database is replaced with the snapshot
|
||||
7. The page reloads automatically
|
||||
|
||||
### Restoring via API
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8000/api/settings/database/restore \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"filename": "snapshot_20250101_143022.db",
|
||||
"create_backup": true
|
||||
}'
|
||||
```
|
||||
|
||||
### Important Notes
|
||||
|
||||
- **Always creates a safety backup** before restoring (unless explicitly disabled)
|
||||
- **Application reload required** - Users should refresh their browsers
|
||||
- **Atomic operation** - The entire database is replaced at once
|
||||
- **Cannot be undone** - But you'll have the safety backup
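
A rough sketch of the restore flow these notes describe (paths and the temp-file swap are assumptions; the actual endpoint adds validation and metadata handling):

```python
import os
import shutil
from datetime import datetime
from pathlib import Path

def restore_snapshot(filename, db_path="data/seismo_fleet.db", create_backup=True):
    snapshot = Path("data/backups") / filename
    if create_backup:
        safety = Path("data/backups") / f"snapshot_{datetime.now():%Y%m%d_%H%M%S}.db"
        shutil.copy2(db_path, safety)   # safety backup of the current database
    tmp = Path(f"{db_path}.restore-tmp")
    shutil.copy2(snapshot, tmp)
    os.replace(tmp, db_path)            # swap in the snapshot in one step
```
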
---
|
||||
|
||||
## Download and Upload Snapshots
|
||||
|
||||
### Download a Snapshot
|
||||
|
||||
**Via UI**: Click the download icon next to any snapshot in the list
|
||||
|
||||
**Via Browser**:
|
||||
```
|
||||
http://localhost:8000/api/settings/database/snapshot/snapshot_20250101_143022.db
|
||||
```
|
||||
|
||||
**Via Command Line**:
|
||||
```bash
|
||||
curl -o backup.db http://localhost:8000/api/settings/database/snapshot/snapshot_20250101_143022.db
|
||||
```
|
||||
|
||||
### Upload a Snapshot
|
||||
|
||||
**Via UI**:
|
||||
1. Navigate to **Settings** → **Danger Zone** tab
|
||||
2. Find the **Upload Snapshot** section
|
||||
3. Click **"Choose File"** and select a `.db` file
|
||||
4. Click **"Upload Snapshot"**
|
||||
|
||||
**Via Command Line**:
|
||||
```bash
|
||||
curl -X POST http://localhost:8000/api/settings/database/upload-snapshot \
|
||||
-F "file=@/path/to/your/backup.db"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Clone Database to Dev Server
|
||||
|
||||
The clone tool allows you to copy the production database to a remote development server over the network.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Remote dev server must have the same Seismo Fleet Manager installation
|
||||
- Network connectivity between production and dev servers
|
||||
- Python 3 and the `requests` library installed
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```bash
|
||||
# Clone current database to dev server
|
||||
python3 scripts/clone_db_to_dev.py --url https://dev.example.com
|
||||
|
||||
# Clone using existing snapshot
|
||||
python3 scripts/clone_db_to_dev.py \
|
||||
--url https://dev.example.com \
|
||||
--snapshot snapshot_20250101_143022.db
|
||||
|
||||
# Clone with authentication token
|
||||
python3 scripts/clone_db_to_dev.py \
|
||||
--url https://dev.example.com \
|
||||
--token YOUR_AUTH_TOKEN
|
||||
```
|
||||
|
||||
### What Happens
|
||||
|
||||
1. Creates a snapshot of the production database (or uses existing one)
|
||||
2. Uploads the snapshot to the remote dev server
|
||||
3. Automatically restores the snapshot on the dev server
|
||||
4. Creates a safety backup on the dev server before restoring
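
A condensed sketch of what the clone script does over the endpoints documented in the API Reference below (the dev URL is an example, and the shape of the upload response is an assumption):

```python
import requests

DEV_URL = "https://dev.example.com"

with open("data/backups/snapshot_20250101_143022.db", "rb") as f:
    upload = requests.post(f"{DEV_URL}/api/settings/database/upload-snapshot",
                           files={"file": f})
upload.raise_for_status()

# Assumes the upload response reports the filename the server stored it under.
stored = upload.json().get("snapshot", {}).get("filename")

restore = requests.post(f"{DEV_URL}/api/settings/database/restore",
                        json={"filename": stored, "create_backup": True})
restore.raise_for_status()
```
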
### Remote Server Setup
|
||||
|
||||
The remote dev server needs no special setup - it just needs to be running the same Seismo Fleet Manager application with the database management endpoints enabled.
|
||||
|
||||
### Use Cases
|
||||
|
||||
- **Testing**: Test changes against production data in a dev environment
|
||||
- **Debugging**: Investigate production issues with real data safely
|
||||
- **Training**: Provide realistic data for user training
|
||||
- **Development**: Build new features with realistic data
|
||||
|
||||
---
|
||||
|
||||
## Automatic Backup Service
|
||||
|
||||
The automatic backup service runs scheduled backups in the background and manages backup retention.
|
||||
|
||||
### Configuration
|
||||
|
||||
The backup scheduler can be configured programmatically or via environment variables.
|
||||
|
||||
**Programmatic Configuration**:
|
||||
|
||||
```python
|
||||
from backend.services.backup_scheduler import get_backup_scheduler
|
||||
|
||||
scheduler = get_backup_scheduler()
|
||||
scheduler.configure(
|
||||
interval_hours=24, # Backup every 24 hours
|
||||
keep_count=10, # Keep last 10 backups
|
||||
enabled=True # Enable automatic backups
|
||||
)
|
||||
scheduler.start()
|
||||
```
|
||||
|
||||
**Environment Variables** (add to your `.env` or deployment config):
|
||||
|
||||
```bash
|
||||
AUTO_BACKUP_ENABLED=true
|
||||
AUTO_BACKUP_INTERVAL_HOURS=24
|
||||
AUTO_BACKUP_KEEP_COUNT=10
|
||||
```
|
||||
|
||||
### Integration with Application Startup
|
||||
|
||||
Add to `backend/main.py`:
|
||||
|
||||
```python
|
||||
from backend.services.backup_scheduler import get_backup_scheduler
|
||||
|
||||
@app.on_event("startup")
|
||||
async def startup_event():
|
||||
# Start automatic backup scheduler
|
||||
scheduler = get_backup_scheduler()
|
||||
scheduler.configure(
|
||||
interval_hours=24, # Daily backups
|
||||
keep_count=10, # Keep 10 most recent
|
||||
enabled=True
|
||||
)
|
||||
scheduler.start()
|
||||
|
||||
@app.on_event("shutdown")
|
||||
async def shutdown_event():
|
||||
# Stop backup scheduler gracefully
|
||||
scheduler = get_backup_scheduler()
|
||||
scheduler.stop()
|
||||
```
|
||||
|
||||
### Manual Control
|
||||
|
||||
```python
|
||||
from backend.services.backup_scheduler import get_backup_scheduler
|
||||
|
||||
scheduler = get_backup_scheduler()
|
||||
|
||||
# Get current status
|
||||
status = scheduler.get_status()
|
||||
print(status)
|
||||
# {'enabled': True, 'running': True, 'interval_hours': 24, 'keep_count': 10, 'next_run': '2025-01-02T14:00:00'}
|
||||
|
||||
# Create backup immediately
|
||||
scheduler.create_automatic_backup()
|
||||
|
||||
# Stop scheduler
|
||||
scheduler.stop()
|
||||
|
||||
# Start scheduler
|
||||
scheduler.start()
|
||||
```
|
||||
|
||||
### Backup Retention
|
||||
|
||||
The scheduler automatically deletes old backups based on the `keep_count` setting. For example, if `keep_count=10`, only the 10 most recent backups are kept, and older ones are automatically deleted.
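
A simple equivalent of that pruning step, assuming the snapshot naming scheme described earlier:

```python
from pathlib import Path

def prune_backups(backup_dir="data/backups", keep_count=10):
    snapshots = sorted(Path(backup_dir).glob("snapshot_*.db"),
                       key=lambda p: p.stat().st_mtime, reverse=True)
    for old in snapshots[keep_count:]:
        old.unlink()                                       # drop the snapshot itself
        Path(f"{old}.meta.json").unlink(missing_ok=True)   # and its metadata file
```
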
---
|
||||
|
||||
## API Reference
|
||||
|
||||
### Database Statistics
|
||||
|
||||
```http
|
||||
GET /api/settings/database/stats
|
||||
```
|
||||
|
||||
Returns database size, row counts, and last modified time.
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"database_path": "./data/seismo_fleet.db",
|
||||
"size_bytes": 1048576,
|
||||
"size_mb": 1.0,
|
||||
"total_rows": 1250,
|
||||
"tables": {
|
||||
"roster": 450,
|
||||
"emitters": 600,
|
||||
"ignored_units": 50,
|
||||
"unit_history": 150
|
||||
},
|
||||
"last_modified": "2025-01-01T14:30:22"
|
||||
}
|
||||
```
|
||||
|
||||
### Create Snapshot
|
||||
|
||||
```http
|
||||
POST /api/settings/database/snapshot
|
||||
Content-Type: application/json
|
||||
|
||||
{
|
||||
"description": "Optional description"
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"message": "Snapshot created successfully",
|
||||
"snapshot": {
|
||||
"filename": "snapshot_20250101_143022.db",
|
||||
"created_at": "20250101_143022",
|
||||
"created_at_iso": "2025-01-01T14:30:22",
|
||||
"description": "Optional description",
|
||||
"size_bytes": 1048576,
|
||||
"size_mb": 1.0,
|
||||
"type": "manual"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### List Snapshots
|
||||
|
||||
```http
|
||||
GET /api/settings/database/snapshots
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"snapshots": [
|
||||
{
|
||||
"filename": "snapshot_20250101_143022.db",
|
||||
"created_at": "20250101_143022",
|
||||
"created_at_iso": "2025-01-01T14:30:22",
|
||||
"description": "Manual backup",
|
||||
"size_mb": 1.0,
|
||||
"type": "manual"
|
||||
}
|
||||
],
|
||||
"count": 1
|
||||
}
|
||||
```
|
||||
|
||||
### Download Snapshot
|
||||
|
||||
```http
|
||||
GET /api/settings/database/snapshot/{filename}
|
||||
```
|
||||
|
||||
Returns the snapshot file as a download.
|
||||
|
||||
### Delete Snapshot
|
||||
|
||||
```http
|
||||
DELETE /api/settings/database/snapshot/{filename}
|
||||
```
|
||||
|
||||
### Restore Database
|
||||
|
||||
```http
|
||||
POST /api/settings/database/restore
|
||||
Content-Type: application/json
|
||||
|
||||
{
|
||||
"filename": "snapshot_20250101_143022.db",
|
||||
"create_backup": true
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"message": "Database restored successfully",
|
||||
"restored_from": "snapshot_20250101_143022.db",
|
||||
"restored_at": "2025-01-01T15:00:00",
|
||||
"backup_created": "snapshot_20250101_150000.db"
|
||||
}
|
||||
```
|
||||
|
||||
### Upload Snapshot
|
||||
|
||||
```http
|
||||
POST /api/settings/database/upload-snapshot
|
||||
Content-Type: multipart/form-data
|
||||
|
||||
file: <binary data>
|
||||
```
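
If you are scripting uploads, the multipart request can be sent with the Python `requests` library. This is a hedged example (the base URL matches the curl examples used elsewhere in these docs; the helper name is illustrative):

```python
# Hypothetical upload client -- base URL and helper name are assumptions for illustration.
import requests

def upload_snapshot(path: str, base_url: str = "http://localhost:8001") -> requests.Response:
    """POST a snapshot file to the upload endpoint as multipart/form-data."""
    with open(path, "rb") as fh:
        response = requests.post(
            f"{base_url}/api/settings/database/upload-snapshot",
            files={"file": fh},  # field name matches the documented form field
        )
    response.raise_for_status()
    return response  # inspect response.json() or response.text depending on the server's reply
```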
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Regular Backups
|
||||
|
||||
- **Enable automatic backups** with a 24-hour interval
|
||||
- **Keep at least 7-10 backups** for historical coverage
|
||||
- **Create manual snapshots** before major changes
|
||||
|
||||
### 2. Before Major Operations
|
||||
|
||||
Always create a snapshot before:
|
||||
- Software upgrades
|
||||
- Bulk data imports
|
||||
- Database schema changes
|
||||
- Testing destructive operations
|
||||
|
||||
### 3. Testing Restores
|
||||
|
||||
Periodically test your restore process:
|
||||
1. Download a snapshot
|
||||
2. Test restoration on a dev environment
|
||||
3. Verify data integrity
|
||||
|
||||
### 4. Off-Site Backups
|
||||
|
||||
For production systems:
|
||||
- **Download snapshots** to external storage regularly (see the sketch after this list)
|
||||
- Use the clone tool to **sync to remote servers**
|
||||
- Store backups in **multiple geographic locations**
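
One way to automate the off-site copy is to drive the documented list and download endpoints from a small script. The sketch below assumes the `requests` library and the `localhost:8001` base URL used in the other examples; it is illustrative, not a supported tool:

```python
# Hedged sketch: pull the newest snapshot to external storage via the documented API.
import shutil
import requests

BASE_URL = "http://localhost:8001"  # assumption: same base URL as the curl examples

def download_latest_snapshot(dest_dir: str) -> str:
    """Fetch the snapshot list, pick the newest entry, and stream it to dest_dir."""
    listing = requests.get(f"{BASE_URL}/api/settings/database/snapshots").json()
    newest = max(listing["snapshots"], key=lambda s: s["created_at_iso"])
    filename = newest["filename"]
    with requests.get(f"{BASE_URL}/api/settings/database/snapshot/{filename}", stream=True) as resp:
        resp.raise_for_status()
        dest_path = f"{dest_dir}/{filename}"
        with open(dest_path, "wb") as out:
            shutil.copyfileobj(resp.raw, out)
    return dest_path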
|
||||
|
||||
### 5. Snapshot Management
|
||||
|
||||
- Delete old snapshots when no longer needed
|
||||
- Use descriptive names/descriptions for manual snapshots
|
||||
- Keep pre-deployment snapshots separate
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Snapshot Creation Fails
|
||||
|
||||
**Problem**: "Database is locked" error
|
||||
|
||||
**Solution**: The database is being written to. Wait a moment and try again. The SQLite backup API handles most locking automatically.
|
||||
|
||||
### Restore Doesn't Complete
|
||||
|
||||
**Problem**: Restore appears to hang
|
||||
|
||||
**Solution**:
|
||||
- Check server logs for errors
|
||||
- Ensure sufficient disk space
|
||||
- Verify the snapshot file isn't corrupted
|
||||
|
||||
### Upload Fails on Dev Server
|
||||
|
||||
**Problem**: "Permission denied" or "File too large"
|
||||
|
||||
**Solutions**:
|
||||
- Check file upload size limits in your web server config (nginx/apache)
|
||||
- Verify write permissions on `./data/backups/` directory
|
||||
- Ensure sufficient disk space
|
||||
|
||||
### Automatic Backups Not Running
|
||||
|
||||
**Problem**: No automatic backups being created
|
||||
|
||||
**Solutions**:
|
||||
1. Check if scheduler is enabled: `scheduler.get_status()`
|
||||
2. Check application logs for scheduler errors
|
||||
3. Ensure `schedule` library is installed: `pip install schedule`
|
||||
4. Verify scheduler was started in application startup
|
||||
|
||||
---
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Access Control**: Restrict access to the Settings → Danger Zone to administrators only
|
||||
2. **Backup Storage**: Store backups in a secure location with proper permissions
|
||||
3. **Remote Cloning**: Use authentication tokens when cloning to remote servers
|
||||
4. **Data Sensitivity**: Remember that snapshots contain all database data - treat them with the same security as the live database
|
||||
|
||||
---
|
||||
|
||||
## File Locations
|
||||
|
||||
- **Database**: `./data/seismo_fleet.db`
|
||||
- **Backups Directory**: `./data/backups/`
|
||||
- **Clone Script**: `./scripts/clone_db_to_dev.py`
|
||||
- **Backup Service**: `./backend/services/database_backup.py`
|
||||
- **Scheduler Service**: `./backend/services/backup_scheduler.py`
|
||||
|
||||
---
|
||||
|
||||
## Support
|
||||
|
||||
For issues or questions:
|
||||
1. Check application logs in `./logs/`
|
||||
2. Review this documentation
|
||||
3. Test with a small database first
|
||||
4. Contact your system administrator
|
||||
276
docs/DEVICE_TYPE_DASHBOARDS.md
Normal file
@@ -0,0 +1,276 @@
|
||||
# Device Type Dashboards
|
||||
|
||||
This document describes the separate dashboard system for different device types in SFM.
|
||||
|
||||
## Overview
|
||||
|
||||
SFM now has dedicated dashboards for each device type:
|
||||
- **Main Dashboard** (`/`) - Combined overview of all devices
|
||||
- **Seismographs Dashboard** (`/seismographs`) - Seismograph-specific view
|
||||
- **Sound Level Meters Dashboard** (`/sound-level-meters`) - SLM-specific view
|
||||
- **Fleet Roster** (`/roster`) - All devices with filtering and management
|
||||
|
||||
## Architecture
|
||||
|
||||
### 1. Main Dashboard
|
||||
|
||||
**Route**: `/`
|
||||
**Template**: [templates/dashboard.html](../templates/dashboard.html)
|
||||
**Purpose**: Combined overview showing all device types
|
||||
|
||||
**Features**:
|
||||
- Fleet summary card now includes device type breakdown
|
||||
- Shows count of seismographs and SLMs separately
|
||||
- Links to dedicated dashboards for each device type
|
||||
- Shared components: map, alerts, recent photos, fleet status
|
||||
|
||||
**Device Type Counts**:
|
||||
The dashboard calculates device type counts in JavaScript from the snapshot data:
|
||||
|
||||
```javascript
|
||||
// Device type counts
|
||||
let seismoCount = 0;
|
||||
let slmCount = 0;
|
||||
Object.values(data.units || {}).forEach(unit => {
|
||||
if (unit.retired) return; // Don't count retired units
|
||||
const deviceType = unit.device_type || 'seismograph';
|
||||
if (deviceType === 'seismograph') {
|
||||
seismoCount++;
|
||||
} else if (deviceType === 'sound_level_meter') {
|
||||
slmCount++;
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
### 2. Seismographs Dashboard
|
||||
|
||||
**Route**: `/seismographs`
|
||||
**Template**: [templates/seismographs.html](../templates/seismographs.html)
|
||||
**Router**: [backend/routers/seismo_dashboard.py](../backend/routers/seismo_dashboard.py)
|
||||
|
||||
**API Endpoints**:
|
||||
- `GET /api/seismo-dashboard/stats` - Statistics summary (HTML partial)
|
||||
- `GET /api/seismo-dashboard/units?search=<query>` - Unit list with search (HTML partial)
|
||||
|
||||
**Features**:
|
||||
- Statistics cards (total, deployed, benched, with/without modem)
|
||||
- Searchable unit list with real-time filtering
|
||||
- Shows modem assignments
|
||||
- Links to individual unit detail pages
|
||||
|
||||
**Stats Calculation** ([backend/routers/seismo_dashboard.py:18](../backend/routers/seismo_dashboard.py#L18)):
|
||||
|
||||
```python
|
||||
seismos = db.query(RosterUnit).filter_by(
|
||||
device_type="seismograph",
|
||||
retired=False
|
||||
).all()
|
||||
|
||||
total = len(seismos)
|
||||
deployed = sum(1 for s in seismos if s.deployed)
|
||||
benched = sum(1 for s in seismos if not s.deployed)
|
||||
with_modem = sum(1 for s in seismos if s.deployed and s.deployed_with_modem_id)
|
||||
```
|
||||
|
||||
### 3. Sound Level Meters Dashboard
|
||||
|
||||
**Route**: `/sound-level-meters`
|
||||
**Template**: [templates/sound_level_meters.html](../templates/sound_level_meters.html)
|
||||
**Router**: [backend/routers/slm_dashboard.py](../backend/routers/slm_dashboard.py)
|
||||
|
||||
**API Endpoints**:
|
||||
- `GET /api/slm-dashboard/stats` - Statistics summary (HTML partial)
|
||||
- `GET /api/slm-dashboard/units?search=<query>` - Unit list with search (HTML partial)
|
||||
- `GET /api/slm-dashboard/live-view/{unit_id}` - Live view panel (HTML partial)
|
||||
|
||||
**Features**:
|
||||
- Statistics cards (total, deployed, benched, measuring)
|
||||
- Searchable unit list
|
||||
- Live view panel with real-time measurement data
|
||||
- WebSocket integration for DRD streaming
|
||||
- Shows modem assignments and IP resolution
|
||||
|
||||
See [SOUND_LEVEL_METERS_DASHBOARD.md](SOUND_LEVEL_METERS_DASHBOARD.md) for detailed SLM dashboard documentation.
|
||||
|
||||
## Navigation
|
||||
|
||||
The sidebar navigation ([templates/base.html:116-128](../templates/base.html#L116-L128)) includes links to both dashboards:
|
||||
|
||||
```html
|
||||
<a href="/seismographs">
|
||||
<svg>...</svg>
|
||||
Seismographs
|
||||
</a>
|
||||
|
||||
<a href="/sound-level-meters">
|
||||
<svg>...</svg>
|
||||
Sound Level Meters
|
||||
</a>
|
||||
```
|
||||
|
||||
Active page highlighting is automatic based on `request.url.path`.
|
||||
|
||||
## Database Queries
|
||||
|
||||
All dashboards filter by device type using SQLAlchemy:
|
||||
|
||||
### Seismographs Query
|
||||
```python
|
||||
seismos = db.query(RosterUnit).filter_by(
|
||||
device_type="seismograph",
|
||||
retired=False
|
||||
).all()
|
||||
```
|
||||
|
||||
### Sound Level Meters Query
|
||||
```python
|
||||
slms = db.query(RosterUnit).filter_by(
|
||||
device_type="slm",
|
||||
retired=False
|
||||
).all()
|
||||
```
|
||||
|
||||
### With Search Filter
|
||||
```python
|
||||
query = db.query(RosterUnit).filter_by(
|
||||
device_type="seismograph",
|
||||
retired=False
|
||||
)
|
||||
|
||||
if search:
|
||||
query = query.filter(
|
||||
(RosterUnit.id.ilike(f"%{search}%")) |
|
||||
(RosterUnit.note.ilike(f"%{search}%")) |
|
||||
(RosterUnit.address.ilike(f"%{search}%"))
|
||||
)
|
||||
```
|
||||
|
||||
## UI Components
|
||||
|
||||
### Stats Cards (Partials)
|
||||
|
||||
**Seismograph Stats**: [templates/partials/seismo_stats.html](../templates/partials/seismo_stats.html)
|
||||
- Total Seismographs
|
||||
- Deployed
|
||||
- Benched
|
||||
- With Modem (showing X / Y deployed)
|
||||
|
||||
**SLM Stats**: [templates/partials/slm_stats.html](../templates/partials/slm_stats.html)
|
||||
- Total SLMs
|
||||
- Deployed
|
||||
- Benched
|
||||
- Currently Measuring (from live status)
|
||||
|
||||
### Unit Lists (Partials)
|
||||
|
||||
**Seismograph List**: [templates/partials/seismo_unit_list.html](../templates/partials/seismo_unit_list.html)
|
||||
|
||||
Table columns:
|
||||
- Unit ID (link to detail page)
|
||||
- Status (Deployed/Benched badge)
|
||||
- Modem (link to modem unit)
|
||||
- Location (address or coordinates)
|
||||
- Notes
|
||||
- Actions (View Details link)
|
||||
|
||||
**SLM List**: [templates/partials/slm_unit_list.html](../templates/partials/slm_unit_list.html)
|
||||
|
||||
Table columns:
|
||||
- Unit ID (link to detail page)
|
||||
- Model (NL-43, NL-53)
|
||||
- Status (Deployed/Benched badge)
|
||||
- Modem (link to modem unit)
|
||||
- Location
|
||||
- Actions (View Details, Live View)
|
||||
|
||||
## HTMX Integration
|
||||
|
||||
Both dashboards use HTMX for dynamic updates:
|
||||
|
||||
### Auto-refresh Stats
|
||||
```html
|
||||
<div hx-get="/api/seismo-dashboard/stats"
|
||||
hx-trigger="load, every 30s"
|
||||
hx-swap="innerHTML">
|
||||
```
|
||||
|
||||
### Search with Debouncing
|
||||
```html
|
||||
<input type="text"
|
||||
hx-get="/api/seismo-dashboard/units"
|
||||
hx-trigger="keyup changed delay:300ms"
|
||||
hx-target="#seismo-units-list"
|
||||
name="search" />
|
||||
```
|
||||
|
||||
## Adding New Device Types
|
||||
|
||||
To add support for a new device type (e.g., "modem"):
|
||||
|
||||
1. **Create Router** (`backend/routers/modem_dashboard.py`):
|
||||
```python
|
||||
@router.get("/stats", response_class=HTMLResponse)
|
||||
async def get_modem_stats(request: Request, db: Session = Depends(get_db)):
|
||||
modems = db.query(RosterUnit).filter_by(
|
||||
device_type="modem",
|
||||
retired=False
|
||||
).all()
|
||||
# Calculate stats and return partial
|
||||
```
|
||||
|
||||
2. **Create Templates**:
|
||||
- `templates/modems.html` - Main dashboard page
|
||||
- `templates/partials/modem_stats.html` - Stats cards
|
||||
- `templates/partials/modem_unit_list.html` - Unit list table
|
||||
|
||||
3. **Register in main.py**:
|
||||
```python
|
||||
from backend.routers import modem_dashboard
|
||||
app.include_router(modem_dashboard.router)
|
||||
|
||||
@app.get("/modems", response_class=HTMLResponse)
|
||||
async def modems_page(request: Request):
|
||||
return templates.TemplateResponse("modems.html", {"request": request})
|
||||
```
|
||||
|
||||
4. **Add to Navigation** (`templates/base.html`):
|
||||
```html
|
||||
<a href="/modems">
|
||||
<svg>...</svg>
|
||||
Modems
|
||||
</a>
|
||||
```
|
||||
|
||||
5. **Update Main Dashboard** (`templates/dashboard.html`):
|
||||
Add modem count to the device type breakdown:
|
||||
```html
|
||||
<div class="flex justify-between items-center">
|
||||
<a href="/modems">Modems</a>
|
||||
<span id="modem-count">--</span>
|
||||
</div>
|
||||
```
|
||||
|
||||
Update JavaScript to count modems:
|
||||
```javascript
|
||||
let modemCount = 0;
|
||||
Object.values(data.units || {}).forEach(unit => {
|
||||
if (unit.device_type === 'modem' && !unit.retired) {
|
||||
modemCount++;
|
||||
}
|
||||
});
|
||||
document.getElementById('modem-count').textContent = modemCount;
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
1. **Separation of Concerns**: Each device type has its own dedicated interface
|
||||
2. **Scalability**: Easy to add new device types following the established pattern
|
||||
3. **Performance**: Queries are filtered by device type, reducing data transfer
|
||||
4. **User Experience**: Users can focus on specific device types without clutter
|
||||
5. **Maintainability**: Each dashboard is self-contained and easy to modify
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [SOUND_LEVEL_METERS_DASHBOARD.md](SOUND_LEVEL_METERS_DASHBOARD.md) - SLM dashboard details
|
||||
- [DEVICE_TYPE_SLM_SUPPORT.md](DEVICE_TYPE_SLM_SUPPORT.md) - Adding SLM device type support
|
||||
- [MODEM_INTEGRATION.md](MODEM_INTEGRATION.md) - Modem assignment architecture
|
||||
288
docs/DEVICE_TYPE_SCHEMA.md
Normal file
@@ -0,0 +1,288 @@
|
||||
# Device Type Schema - Terra-View
|
||||
|
||||
## Overview
|
||||
|
||||
Terra-View uses a single roster table to manage three different device types. The `device_type` field is the primary discriminator that determines which fields are relevant for each unit.
|
||||
|
||||
## Official device_type Values
|
||||
|
||||
As of **Terra-View v0.4.3**, the following device_type values are standardized:
|
||||
|
||||
### 1. `"seismograph"` (Default)
|
||||
**Purpose**: Seismic monitoring devices
|
||||
|
||||
**Applicable Fields**:
|
||||
- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates
|
||||
- Specific: last_calibrated, next_calibration_due, deployed_with_modem_id
|
||||
|
||||
**Examples**:
|
||||
- `BE1234` - Series 3 seismograph
|
||||
- `UM12345` - Series 4 Micromate unit
|
||||
- `SEISMO-001` - Custom seismograph
|
||||
|
||||
**Unit Type Values**:
|
||||
- `series3` - Series 3 devices (default)
|
||||
- `series4` - Series 4 devices
|
||||
- `micromate` - Micromate devices
|
||||
|
||||
---
|
||||
|
||||
### 2. `"modem"`
|
||||
**Purpose**: Field modems and network equipment
|
||||
|
||||
**Applicable Fields**:
|
||||
- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates
|
||||
- Specific: ip_address, phone_number, hardware_model
|
||||
|
||||
**Examples**:
|
||||
- `MDM001` - Field modem
|
||||
- `MODEM-2025-01` - Network modem
|
||||
- `RAVEN-XTV-01` - Specific modem model
|
||||
|
||||
**Unit Type Values**:
|
||||
- `modem` - Generic modem
|
||||
- `raven-xtv` - Raven XTV model
|
||||
- Custom values for specific hardware
|
||||
|
||||
---
|
||||
|
||||
### 3. `"slm"` ⭐
|
||||
**Purpose**: Sound level meters (Rion NL-43/NL-53)
|
||||
|
||||
**Applicable Fields**:
|
||||
- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates
|
||||
- Specific: slm_host, slm_tcp_port, slm_ftp_port, slm_model, slm_serial_number, slm_frequency_weighting, slm_time_weighting, slm_measurement_range, slm_last_check, deployed_with_modem_id
|
||||
|
||||
**Examples**:
|
||||
- `SLM-43-01` - NL-43 sound level meter
|
||||
- `NL43-001` - NL-43 unit
|
||||
- `NL53-002` - NL-53 unit
|
||||
|
||||
**Unit Type Values**:
|
||||
- `nl43` - Rion NL-43 model
|
||||
- `nl53` - Rion NL-53 model
|
||||
|
||||
---
|
||||
|
||||
## Migration from Legacy Values
|
||||
|
||||
### Deprecated Values
|
||||
|
||||
The following device_type values have been **deprecated** and should be migrated:
|
||||
|
||||
- ❌ `"sound_level_meter"` → ✅ `"slm"`
|
||||
|
||||
### How to Migrate
|
||||
|
||||
Run the standardization migration script to update existing databases:
|
||||
|
||||
```bash
|
||||
cd /home/serversdown/tmi/terra-view
|
||||
python3 backend/migrate_standardize_device_types.py
|
||||
```
|
||||
|
||||
This script (a minimal sketch follows the list):
|
||||
- Converts all `"sound_level_meter"` values to `"slm"`
|
||||
- Is idempotent (safe to run multiple times)
|
||||
- Shows before/after distribution of device types
|
||||
- No data loss
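
The core update is a single idempotent SQL statement. A minimal sketch of that step (the real script also prints the before/after distribution) could look like this:

```python
# Minimal sketch of the standardization step -- the real script adds reporting around it.
import sqlite3

def standardize_device_types(db_path: str = "data/seismo_fleet.db") -> int:
    """Convert legacy 'sound_level_meter' values to 'slm'. Safe to run repeatedly."""
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            "UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'"
        )
        return cursor.rowcount  # number of rows updated (0 on repeat runs)
```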
|
||||
|
||||
---
|
||||
|
||||
## Database Schema
|
||||
|
||||
### RosterUnit Model (`backend/models.py`)
|
||||
|
||||
```python
|
||||
class RosterUnit(Base):
|
||||
"""
|
||||
Supports multiple device types:
|
||||
- "seismograph" - Seismic monitoring devices (default)
|
||||
- "modem" - Field modems and network equipment
|
||||
- "slm" - Sound level meters (NL-43/NL-53)
|
||||
"""
|
||||
__tablename__ = "roster"
|
||||
|
||||
# Core fields (all device types)
|
||||
id = Column(String, primary_key=True)
|
||||
unit_type = Column(String, default="series3")
|
||||
device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "slm"
|
||||
deployed = Column(Boolean, default=True)
|
||||
retired = Column(Boolean, default=False)
|
||||
# ... other common fields
|
||||
|
||||
# Seismograph-specific
|
||||
last_calibrated = Column(Date, nullable=True)
|
||||
next_calibration_due = Column(Date, nullable=True)
|
||||
|
||||
# Modem-specific
|
||||
ip_address = Column(String, nullable=True)
|
||||
phone_number = Column(String, nullable=True)
|
||||
hardware_model = Column(String, nullable=True)
|
||||
|
||||
# SLM-specific
|
||||
slm_host = Column(String, nullable=True)
|
||||
slm_tcp_port = Column(Integer, nullable=True)
|
||||
slm_ftp_port = Column(Integer, nullable=True)
|
||||
slm_model = Column(String, nullable=True)
|
||||
slm_serial_number = Column(String, nullable=True)
|
||||
slm_frequency_weighting = Column(String, nullable=True)
|
||||
slm_time_weighting = Column(String, nullable=True)
|
||||
slm_measurement_range = Column(String, nullable=True)
|
||||
slm_last_check = Column(DateTime, nullable=True)
|
||||
|
||||
# Shared fields (seismograph + SLM)
|
||||
deployed_with_modem_id = Column(String, nullable=True) # FK to modem
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## API Usage
|
||||
|
||||
### Adding a New Unit
|
||||
|
||||
**Seismograph**:
|
||||
```bash
|
||||
curl -X POST http://localhost:8001/api/roster/add \
|
||||
-F "id=BE1234" \
|
||||
-F "device_type=seismograph" \
|
||||
-F "unit_type=series3" \
|
||||
-F "deployed=true"
|
||||
```
|
||||
|
||||
**Modem**:
|
||||
```bash
|
||||
curl -X POST http://localhost:8001/api/roster/add \
|
||||
-F "id=MDM001" \
|
||||
-F "device_type=modem" \
|
||||
-F "ip_address=192.0.2.10" \
|
||||
-F "phone_number=+1-555-0100"
|
||||
```
|
||||
|
||||
**Sound Level Meter**:
|
||||
```bash
|
||||
curl -X POST http://localhost:8001/api/roster/add \
|
||||
-F "id=SLM-43-01" \
|
||||
-F "device_type=slm" \
|
||||
-F "slm_host=63.45.161.30" \
|
||||
-F "slm_tcp_port=2255" \
|
||||
-F "slm_model=NL-43"
|
||||
```
|
||||
|
||||
### CSV Import Format
|
||||
|
||||
```csv
|
||||
unit_id,unit_type,device_type,deployed,slm_host,slm_tcp_port,slm_model
|
||||
SLM-43-01,nl43,slm,true,63.45.161.30,2255,NL-43
|
||||
SLM-43-02,nl43,slm,true,63.45.161.31,2255,NL-43
|
||||
BE1234,series3,seismograph,true,,,
|
||||
MDM001,modem,modem,true,,,
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Frontend Behavior
|
||||
|
||||
### Device Type Selection
|
||||
|
||||
**Templates**: `unit_detail.html`, `roster.html`
|
||||
|
||||
```html
|
||||
<select name="device_type">
|
||||
<option value="seismograph">Seismograph</option>
|
||||
<option value="modem">Modem</option>
|
||||
<option value="slm">Sound Level Meter</option>
|
||||
</select>
|
||||
```
|
||||
|
||||
### Conditional Field Display
|
||||
|
||||
JavaScript functions check `device_type` to show/hide relevant fields:
|
||||
|
||||
```javascript
|
||||
function toggleDetailFields() {
|
||||
const deviceType = document.getElementById('device_type').value;
|
||||
|
||||
if (deviceType === 'seismograph') {
|
||||
// Show calibration fields
|
||||
} else if (deviceType === 'modem') {
|
||||
// Show network fields
|
||||
} else if (deviceType === 'slm') {
|
||||
// Show SLM configuration fields
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Code Conventions
|
||||
|
||||
### Always Use Lowercase
|
||||
|
||||
✅ **Correct**:
|
||||
```python
|
||||
if unit.device_type == "slm":
|
||||
# Handle sound level meter
|
||||
```
|
||||
|
||||
❌ **Incorrect**:
|
||||
```python
|
||||
if unit.device_type == "SLM": # Wrong - case sensitive
|
||||
if unit.device_type == "sound_level_meter": # Deprecated
|
||||
```
|
||||
|
||||
### Query Patterns
|
||||
|
||||
**Filter by device type**:
|
||||
```python
|
||||
# Get all SLMs
|
||||
slms = db.query(RosterUnit).filter_by(device_type="slm").all()
|
||||
|
||||
# Get deployed seismographs
|
||||
seismos = db.query(RosterUnit).filter_by(
|
||||
device_type="seismograph",
|
||||
deployed=True
|
||||
).all()
|
||||
|
||||
# Get all modems
|
||||
modems = db.query(RosterUnit).filter_by(device_type="modem").all()
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Testing
|
||||
|
||||
### Verify Device Type Distribution
|
||||
|
||||
```bash
|
||||
# Quick check
|
||||
sqlite3 data/seismo_fleet.db "SELECT device_type, COUNT(*) FROM roster GROUP BY device_type;"
|
||||
|
||||
# Detailed view
|
||||
sqlite3 data/seismo_fleet.db "SELECT id, device_type, unit_type, deployed FROM roster ORDER BY device_type, id;"
|
||||
```
|
||||
|
||||
### Check for Legacy Values
|
||||
|
||||
```bash
|
||||
# Should return 0 rows after migration
|
||||
sqlite3 data/seismo_fleet.db "SELECT id FROM roster WHERE device_type = 'sound_level_meter';"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Version History
|
||||
|
||||
- **v0.4.3** (2026-01-16) - Standardized device_type values, deprecated `"sound_level_meter"` → `"slm"`
|
||||
- **v0.4.0** (2026-01-05) - Added SLM support with `"sound_level_meter"` value
|
||||
- **v0.2.0** (2025-12-03) - Added modem device type
|
||||
- **v0.1.0** (2024-11-20) - Initial release with seismograph-only support
|
||||
|
||||
---
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [README.md](../README.md) - Main project documentation with data model
|
||||
- [DEVICE_TYPE_SLM_SUPPORT.md](DEVICE_TYPE_SLM_SUPPORT.md) - Legacy SLM implementation notes
|
||||
- [SOUND_LEVEL_METERS_DASHBOARD.md](SOUND_LEVEL_METERS_DASHBOARD.md) - SLM dashboard features
|
||||
- [SLM_CONFIGURATION.md](SLM_CONFIGURATION.md) - SLM device configuration guide
|
||||
161
docs/DEVICE_TYPE_SLM_SUPPORT.md
Normal file
@@ -0,0 +1,161 @@
|
||||
# Sound Level Meter Device Type Support
|
||||
|
||||
**⚠️ IMPORTANT**: This documentation uses the legacy `sound_level_meter` device type value. As of v0.4.3, the standardized value is `"slm"`. Run `backend/migrate_standardize_device_types.py` to update your database.
|
||||
|
||||
## Overview
|
||||
|
||||
Added full support for "Sound Level Meter" as a device type in the roster management system. Users can now create, edit, and manage SLM units through the Fleet Roster interface.
|
||||
|
||||
## Changes Made
|
||||
|
||||
### 1. Frontend - Unit Detail Page
|
||||
|
||||
**File**: `templates/unit_detail.html`
|
||||
|
||||
#### Added Device Type Option
|
||||
- Added "Sound Level Meter" option to device type dropdown (line 243)
|
||||
- Value: `sound_level_meter`
|
||||
|
||||
#### Added SLM-Specific Fields Section (lines 320-370)
|
||||
New form fields for Sound Level Meter configuration:
|
||||
|
||||
- **Host (IP Address)**: Device network address
|
||||
- Field name: `slm_host`
|
||||
- Example: `192.168.1.100`
|
||||
|
||||
- **TCP Port**: Control port (default 2255)
|
||||
- Field name: `slm_tcp_port`
|
||||
- Type: number
|
||||
|
||||
- **Model**: Device model designation
|
||||
- Field name: `slm_model`
|
||||
- Example: `NL-43`, `NL-53`
|
||||
|
||||
- **Serial Number**: Manufacturer serial number
|
||||
- Field name: `slm_serial_number`
|
||||
|
||||
- **Frequency Weighting**: Sound measurement weighting curve
|
||||
- Field name: `slm_frequency_weighting`
|
||||
- Options: A-weighting, C-weighting, Z-weighting (Flat)
|
||||
|
||||
- **Time Weighting**: Temporal averaging method
|
||||
- Field name: `slm_time_weighting`
|
||||
- Options: Fast (125ms), Slow (1s), Impulse (35ms)
|
||||
|
||||
- **Measurement Range**: Device measurement capability
|
||||
- Field name: `slm_measurement_range`
|
||||
- Example: `30-130 dB`
|
||||
|
||||
#### Updated JavaScript Functions
|
||||
|
||||
**toggleDetailFields()** (lines 552-571)
|
||||
- Now handles three device types: seismograph, modem, sound_level_meter
|
||||
- Hides all device-specific sections, then shows only the relevant one
|
||||
- Shows `slmFields` section when device type is `sound_level_meter`
|
||||
|
||||
**populateEditForm()** (lines 527-558)
|
||||
- Populates all 7 SLM-specific fields from unit data
|
||||
- Sets empty string as default if field is null
|
||||
|
||||
### 2. Backend - API Endpoints
|
||||
|
||||
**File**: `backend/routers/roster_edit.py`
|
||||
|
||||
#### Updated Add Unit Endpoint
|
||||
`POST /api/roster/add`
|
||||
|
||||
**New Parameters** (lines 61-67):
|
||||
- `slm_host`: str (optional)
|
||||
- `slm_tcp_port`: int (optional)
|
||||
- `slm_model`: str (optional)
|
||||
- `slm_serial_number`: str (optional)
|
||||
- `slm_frequency_weighting`: str (optional)
|
||||
- `slm_time_weighting`: str (optional)
|
||||
- `slm_measurement_range`: str (optional)
|
||||
|
||||
**Unit Creation** (lines 108-115):
|
||||
All SLM fields are set when creating a new unit.
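
A hedged sketch of how optional SLM form parameters are typically declared in a FastAPI endpoint follows. The parameter names are the documented ones; the signature shown here is illustrative, not copied from `roster_edit.py`:

```python
# Illustrative only -- the real endpoint has additional parameters and logic.
from typing import Optional
from fastapi import APIRouter, Form

router = APIRouter()

@router.post("/api/roster/add")
async def add_unit(
    id: str = Form(...),
    device_type: str = Form("seismograph"),
    slm_host: Optional[str] = Form(None),
    slm_tcp_port: Optional[int] = Form(None),
    slm_model: Optional[str] = Form(None),
    slm_serial_number: Optional[str] = Form(None),
    slm_frequency_weighting: Optional[str] = Form(None),
    slm_time_weighting: Optional[str] = Form(None),
    slm_measurement_range: Optional[str] = Form(None),
):
    # ... create the RosterUnit and copy each slm_* field when provided
    return {"status": "ok", "id": id}
```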
|
||||
|
||||
#### Updated Get Unit Endpoint
|
||||
`GET /api/roster/{unit_id}`
|
||||
|
||||
**New Response Fields** (lines 146-152):
|
||||
Returns all 7 SLM fields in the response, with empty string as default if null.
|
||||
|
||||
#### Updated Edit Unit Endpoint
|
||||
`POST /api/roster/edit/{unit_id}`
|
||||
|
||||
**New Parameters** (lines 177-183):
|
||||
Same 7 SLM-specific parameters as add endpoint.
|
||||
|
||||
**Unit Update** (lines 232-239):
|
||||
All SLM fields are updated when editing an existing unit.
|
||||
|
||||
### 3. Database Schema
|
||||
|
||||
**File**: `backend/models.py`
|
||||
|
||||
The database schema already included SLM fields (no changes needed):
|
||||
- All fields are nullable to support multiple device types
|
||||
- Fields are only relevant when `device_type = "slm"`
|
||||
|
||||
## Usage
|
||||
|
||||
### Creating a New SLM Unit
|
||||
|
||||
1. Go to Fleet Roster page
|
||||
2. Click "Add Unit" or edit an existing unit
|
||||
3. Select "Sound Level Meter" from Device Type dropdown
|
||||
4. Fill in SLM-specific fields (Host, Port, Model, etc.)
|
||||
5. Save
|
||||
|
||||
### Converting Existing Unit to SLM
|
||||
|
||||
1. Open unit detail page
|
||||
2. Click "Edit Unit"
|
||||
3. Change Device Type to "Sound Level Meter"
|
||||
4. SLM fields section will appear
|
||||
5. Fill in required SLM configuration
|
||||
6. Save changes
|
||||
|
||||
### Field Visibility
|
||||
|
||||
The form automatically shows/hides relevant fields based on device type:
|
||||
- **Seismograph**: Shows calibration dates, modem deployment info
|
||||
- **Modem**: Shows IP, phone number, hardware model
|
||||
- **Sound Level Meter**: Shows host, port, model, serial, weightings, range
|
||||
|
||||
## Integration with SLMM Dashboard
|
||||
|
||||
Units with `device_type = "slm"` will:
|
||||
- Appear in the Sound Level Meters dashboard (`/sound-level-meters`)
|
||||
- Be available for live monitoring and control
|
||||
- Use the configured `slm_host` and `slm_tcp_port` for device communication
|
||||
|
||||
## Testing
|
||||
|
||||
Test SLM units have been added via `add_test_slms.py`:
|
||||
- `nl43-001` - Deployed at construction site A
|
||||
- `nl43-002` - Deployed at construction site B
|
||||
- `nl53-001` - Deployed at residential area
|
||||
- `nl43-003` - Benched for calibration
|
||||
|
||||
You can edit any of these units to verify the form works correctly.
|
||||
|
||||
## Files Modified
|
||||
|
||||
1. `templates/unit_detail.html` - Added dropdown option, SLM fields section, updated JavaScript
|
||||
2. `backend/routers/roster_edit.py` - Added SLM parameters to add/edit/get endpoints
|
||||
3. `backend/models.py` - No changes (schema already supported SLM)
|
||||
|
||||
## Backward Compatibility
|
||||
|
||||
- Existing seismograph and modem units are unaffected
|
||||
- All SLM fields are optional/nullable
|
||||
- Forms gracefully handle units with missing device_type (defaults to seismograph)
|
||||
|
||||
---
|
||||
|
||||
**Version**: 1.0.0
|
||||
**Date**: January 5, 2026
|
||||
**Related**: SOUND_LEVEL_METERS_DASHBOARD.md
|
||||
132
docs/DEV_DATABASE_SETUP.md
Normal file
@@ -0,0 +1,132 @@
|
||||
# DEV Database Setup Instructions
|
||||
|
||||
## Current Situation
|
||||
|
||||
The test SLM and modem data was accidentally added to the **PRODUCTION** database (`data/seismo_fleet.db`).
|
||||
|
||||
**Good news**: I've already removed it! The production database is clean.
|
||||
|
||||
**Issue**: The DEV database (`data-dev/seismo_fleet.db`) is:
|
||||
1. Owned by root (read-only for your user)
|
||||
2. Missing the SLM-specific columns in its schema
|
||||
|
||||
## What You Need to Do
|
||||
|
||||
### Step 1: Fix DEV Database Permissions
|
||||
|
||||
Run this command to make the DEV database writable:
|
||||
|
||||
```bash
|
||||
cd /home/serversdown/sfm/seismo-fleet-manager
|
||||
sudo chown serversdown:serversdown data-dev/seismo_fleet.db
|
||||
sudo chmod 664 data-dev/seismo_fleet.db
|
||||
```
|
||||
|
||||
### Step 2: Migrate DEV Database Schema
|
||||
|
||||
Add the SLM columns to the DEV database:
|
||||
|
||||
```bash
|
||||
python3 scripts/migrate_dev_db.py
|
||||
```
|
||||
|
||||
This will add these columns to the `roster` table (a sketch of the per-column approach follows the list):
|
||||
- `slm_host`
|
||||
- `slm_tcp_port`
|
||||
- `slm_model`
|
||||
- `slm_serial_number`
|
||||
- `slm_frequency_weighting`
|
||||
- `slm_time_weighting`
|
||||
- `slm_measurement_range`
|
||||
- `slm_last_check`
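
SQLite only adds one column per `ALTER TABLE` statement, so a migration of this kind typically issues one statement per column and skips columns that already exist. This is a hedged sketch of that approach (column types are assumed from the model, and the script's exact code may differ):

```python
# Sketch of a per-column ALTER TABLE migration for the DEV database.
import sqlite3

SLM_COLUMNS = {
    "slm_host": "TEXT",
    "slm_tcp_port": "INTEGER",
    "slm_model": "TEXT",
    "slm_serial_number": "TEXT",
    "slm_frequency_weighting": "TEXT",
    "slm_time_weighting": "TEXT",
    "slm_measurement_range": "TEXT",
    "slm_last_check": "TIMESTAMP",
}

def migrate(db_path: str = "data-dev/seismo_fleet.db") -> None:
    with sqlite3.connect(db_path) as conn:
        existing = {row[1] for row in conn.execute("PRAGMA table_info(roster)")}
        for name, sql_type in SLM_COLUMNS.items():
            if name not in existing:  # idempotent: skip columns that are already there
                conn.execute(f"ALTER TABLE roster ADD COLUMN {name} {sql_type}")
                print(f"Added column: {name}")
```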
|
||||
|
||||
### Step 3: Add Test Data to DEV
|
||||
|
||||
Now you can safely add test data to the DEV database:
|
||||
|
||||
```bash
|
||||
# Add test SLMs
|
||||
python3 scripts/add_test_slms.py
|
||||
|
||||
# Add test modems and assign to SLMs
|
||||
python3 scripts/add_test_modems.py
|
||||
```
|
||||
|
||||
This will create:
|
||||
- 4 test SLM units (nl43-001, nl43-002, nl53-001, nl43-003)
|
||||
- 4 test modem units (modem-001, modem-002, modem-003, modem-004)
|
||||
- Modem-to-SLM assignments
|
||||
|
||||
## Production Database Status
|
||||
|
||||
✅ **Production database is CLEAN** - all test data has been removed.
|
||||
|
||||
The production database (`data/seismo_fleet.db`) is ready for real production use.
|
||||
|
||||
## Test Scripts
|
||||
|
||||
All test scripts have been updated to use the DEV database:
|
||||
|
||||
### Scripts Updated:
|
||||
1. `scripts/add_test_slms.py` - Now uses `data-dev/seismo_fleet.db`
|
||||
2. `scripts/add_test_modems.py` - Now uses `data-dev/seismo_fleet.db`
|
||||
|
||||
### Scripts Created:
|
||||
1. `scripts/remove_test_data_from_prod.py` - Removes test data from production (already run)
|
||||
2. `scripts/update_dev_db_schema.py` - Updates the schema (does not work due to SQLite's limited ALTER TABLE support)
|
||||
3. `scripts/migrate_dev_db.py` - Adds SLM columns to DEV database
|
||||
|
||||
All helper scripts are located in the `scripts/` directory. See [scripts/README.md](../scripts/README.md) for detailed usage instructions.
|
||||
|
||||
## Verification
|
||||
|
||||
After running the steps above, verify everything worked:
|
||||
|
||||
```bash
|
||||
# Check DEV database has test data
|
||||
sqlite3 data-dev/seismo_fleet.db "SELECT id, device_type FROM roster WHERE device_type IN ('sound_level_meter', 'modem');"
|
||||
```
|
||||
|
||||
You should see:
|
||||
```
|
||||
nl43-001|sound_level_meter
|
||||
nl43-002|sound_level_meter
|
||||
nl53-001|sound_level_meter
|
||||
nl43-003|sound_level_meter
|
||||
modem-001|modem
|
||||
modem-002|modem
|
||||
modem-003|modem
|
||||
modem-004|modem
|
||||
```
|
||||
|
||||
## Development vs Production
|
||||
|
||||
### When to Use DEV Database
|
||||
|
||||
To use the DEV database, set the environment variable:
|
||||
|
||||
```bash
|
||||
# Not implemented yet, but you could add this to database.py:
|
||||
export DATABASE_ENV=dev
|
||||
```
|
||||
|
||||
Or modify `backend/database.py` to check for an environment variable.
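
If you implement that, a minimal sketch of environment-based selection in `backend/database.py` could look like the following (the `DATABASE_ENV` variable name comes from the example above; everything else is an assumption, not the file's current contents):

```python
# Hedged sketch -- not the current contents of backend/database.py.
import os

DATABASE_ENV = os.getenv("DATABASE_ENV", "prod")

if DATABASE_ENV == "dev":
    DATABASE_URL = "sqlite:///./data-dev/seismo_fleet.db"
else:
    DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
```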
|
||||
|
||||
### Current Setup
|
||||
|
||||
Right now, the application always uses `data/seismo_fleet.db` (production).
|
||||
|
||||
For development/testing, you could:
|
||||
1. Point SFM to use the DEV database temporarily
|
||||
2. Or keep test data in production (not recommended)
|
||||
3. Or implement environment-based database selection
|
||||
|
||||
## Summary
|
||||
|
||||
- ✅ Production DB cleaned (no test data)
|
||||
- ⚠️ DEV DB needs permission fix (run sudo commands above)
|
||||
- ⚠️ DEV DB needs schema migration (run scripts/migrate_dev_db.py)
|
||||
- ✅ Test scripts updated to use DEV DB
|
||||
- ✅ All test data ready to be added to DEV DB
|
||||
|
||||
Run the commands above and you'll be all set!
|
||||
62
docs/FIX_DEV_PERMISSIONS.md
Normal file
@@ -0,0 +1,62 @@
|
||||
# Fix DEV Database Permissions
|
||||
|
||||
## The Problem
|
||||
|
||||
SQLite needs write access to both the database file AND the directory it's in (to create temporary files like journals and WAL files).
|
||||
|
||||
Currently:
|
||||
- ✅ Database file: `data-dev/seismo_fleet.db` - permissions fixed
|
||||
- ❌ Directory: `data-dev/` - still owned by root
|
||||
|
||||
## The Fix
|
||||
|
||||
Run this command to fix the directory ownership:
|
||||
|
||||
```bash
|
||||
sudo chown -R serversdown:serversdown /home/serversdown/sfm/seismo-fleet-manager/data-dev/
|
||||
```
|
||||
|
||||
Then run the migration again:
|
||||
|
||||
```bash
|
||||
python3 scripts/migrate_dev_db.py
|
||||
```
|
||||
|
||||
## Full Setup Commands
|
||||
|
||||
Here's the complete sequence:
|
||||
|
||||
```bash
|
||||
cd /home/serversdown/sfm/seismo-fleet-manager
|
||||
|
||||
# Fix directory ownership (includes all files inside)
|
||||
sudo chown -R serversdown:serversdown data-dev/
|
||||
|
||||
# Migrate schema
|
||||
python3 scripts/migrate_dev_db.py
|
||||
|
||||
# Add test data
|
||||
python3 scripts/add_test_slms.py
|
||||
python3 scripts/add_test_modems.py
|
||||
```
|
||||
|
||||
## Verify It Worked
|
||||
|
||||
After running the migration, you should see:
|
||||
|
||||
```
|
||||
Migrating DEV database to add SLM columns...
|
||||
============================================================
|
||||
✓ Added column: slm_host
|
||||
✓ Added column: slm_tcp_port
|
||||
✓ Added column: slm_model
|
||||
✓ Added column: slm_serial_number
|
||||
✓ Added column: slm_frequency_weighting
|
||||
✓ Added column: slm_time_weighting
|
||||
✓ Added column: slm_measurement_range
|
||||
✓ Added column: slm_last_check
|
||||
============================================================
|
||||
DEV database migration completed!
|
||||
```
|
||||
|
||||
Then the test data scripts should work without errors!
|
||||
375
docs/MODEM_INTEGRATION.md
Normal file
@@ -0,0 +1,375 @@
|
||||
# Modem Integration System
|
||||
|
||||
## Overview
|
||||
|
||||
The modem integration system allows Sound Level Meters (SLMs) and Seismographs to be deployed with network connectivity modems. Instead of storing IP addresses directly on each device, units are assigned to modems which provide the network connection. This enables:
|
||||
|
||||
- Centralized modem management and tracking
|
||||
- IP address updates in one place
|
||||
- Future modem API integration for diagnostics
|
||||
- Proper asset tracking for network equipment
|
||||
|
||||
## Architecture
|
||||
|
||||
### Database Design
|
||||
|
||||
**Modem Units**: Stored as `RosterUnit` with `device_type = "modem"`
|
||||
|
||||
**Modem-Specific Fields**:
|
||||
- `ip_address`: Network IP address or hostname
|
||||
- `phone_number`: Cellular phone number (if applicable)
|
||||
- `hardware_model`: Modem hardware (e.g., "Raven XTV", "Sierra Wireless AirLink")
|
||||
|
||||
**Device Assignment**:
|
||||
- Both SLMs and Seismographs use `deployed_with_modem_id` to reference their modem
|
||||
- This is a foreign key to another `RosterUnit.id` where `device_type = "modem"`
|
||||
|
||||
### How It Works
|
||||
|
||||
1. **Create Modem Units**
|
||||
- Add modems as regular roster units with `device_type = "modem"`
|
||||
- Set IP address, phone number, and hardware model
|
||||
- Deploy or bench modems like any other asset
|
||||
|
||||
2. **Assign Devices to Modems**
|
||||
- When editing an SLM or Seismograph, select modem from dropdown
|
||||
- The `deployed_with_modem_id` field stores the modem ID
|
||||
- IP address is fetched from the assigned modem at runtime
|
||||
|
||||
3. **Runtime Resolution**
|
||||
- When SLM dashboard needs to connect to a device:
|
||||
1. Load SLM unit data
|
||||
2. Check `deployed_with_modem_id`
|
||||
3. Fetch modem unit
|
||||
4. Use modem's `ip_address` for connection
|
||||
5. Fallback to legacy `slm_host` if no modem assigned
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Frontend Changes
|
||||
|
||||
#### Unit Detail Page (`templates/unit_detail.html`)
|
||||
|
||||
**SLM Fields** (lines 320-375):
|
||||
- Removed direct "Host (IP Address)" field
|
||||
- Added "Deployed With Modem" dropdown selector
|
||||
- Dropdown populated with all active modems via JavaScript
|
||||
- Shows modem ID, IP address, and hardware model in dropdown
|
||||
|
||||
**JavaScript** (lines 456-485):
|
||||
```javascript
|
||||
async function loadModemsList() {
|
||||
// Fetches all modems from /api/roster/modems
|
||||
// Populates both seismograph and SLM modem dropdowns
|
||||
// Shows format: "modem-001 (192.168.1.100) - Raven XTV"
|
||||
}
|
||||
```
|
||||
|
||||
#### SLM Dashboard (`backend/routers/slm_dashboard.py`)
|
||||
|
||||
**Live View Endpoint** (lines 84-148):
|
||||
```python
|
||||
# Get modem information if assigned
|
||||
modem = None
|
||||
modem_ip = None
|
||||
if unit.deployed_with_modem_id:
|
||||
modem = db.query(RosterUnit).filter_by(
|
||||
id=unit.deployed_with_modem_id,
|
||||
device_type="modem"
|
||||
).first()
|
||||
if modem:
|
||||
modem_ip = modem.ip_address
|
||||
|
||||
# Fallback to direct slm_host (backward compatibility)
|
||||
if not modem_ip and unit.slm_host:
|
||||
modem_ip = unit.slm_host
|
||||
```
|
||||
|
||||
**Live View Template** (`templates/partials/slm_live_view.html`):
|
||||
- Displays modem information in header
|
||||
- Shows "via Modem: modem-001 (192.168.1.100)"
|
||||
- Warning if no modem assigned
|
||||
|
||||
### Backend Changes
|
||||
|
||||
#### New API Endpoint (`backend/routers/roster_edit.py`)
|
||||
|
||||
```python
|
||||
GET /api/roster/modems
|
||||
```
|
||||
|
||||
Returns list of all non-retired modem units:
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "modem-001",
|
||||
"ip_address": "192.168.1.100",
|
||||
"phone_number": "+1-555-0100",
|
||||
"hardware_model": "Raven XTV",
|
||||
"deployed": true
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
Used by the frontend to populate modem selection dropdowns.
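
A hedged sketch of what such an endpoint looks like with SQLAlchemy follows. The query and response shape mirror the documentation above; the import paths are assumed from the project layout, and the actual code in `roster_edit.py` may differ:

```python
# Illustrative endpoint sketch based on the documented behavior.
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from backend.database import get_db   # assumed import path
from backend.models import RosterUnit

router = APIRouter()

@router.get("/api/roster/modems")
async def list_modems(db: Session = Depends(get_db)):
    modems = db.query(RosterUnit).filter_by(device_type="modem", retired=False).all()
    return [
        {
            "id": m.id,
            "ip_address": m.ip_address,
            "phone_number": m.phone_number,
            "hardware_model": m.hardware_model,
            "deployed": m.deployed,
        }
        for m in modems
    ]
```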
|
||||
|
||||
#### Database Schema (`backend/models.py`)
|
||||
|
||||
**Modem Assignment Field** (line 44):
|
||||
```python
|
||||
# Shared by seismographs and SLMs
|
||||
deployed_with_modem_id = Column(String, nullable=True)
|
||||
```
|
||||
|
||||
**Modem Fields** (lines 46-48):
|
||||
```python
|
||||
ip_address = Column(String, nullable=True)
|
||||
phone_number = Column(String, nullable=True)
|
||||
hardware_model = Column(String, nullable=True)
|
||||
```
|
||||
|
||||
**Legacy SLM Fields** (kept for backward compatibility):
|
||||
```python
|
||||
slm_host = Column(String, nullable=True) # Deprecated - use modem instead
|
||||
slm_tcp_port = Column(Integer, nullable=True) # Still used
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Creating Modem Units
|
||||
|
||||
1. Go to Fleet Roster
|
||||
2. Click "Add Unit"
|
||||
3. Set Device Type to "Modem"
|
||||
4. Fill in:
|
||||
- Unit ID (e.g., "modem-001")
|
||||
- IP Address (e.g., "192.168.1.100")
|
||||
- Phone Number (if cellular)
|
||||
- Hardware Model (e.g., "Raven XTV")
|
||||
- Address/Coordinates (physical location)
|
||||
5. Set Deployed status
|
||||
6. Save
|
||||
|
||||
### Assigning Modem to SLM
|
||||
|
||||
1. Open SLM unit detail page
|
||||
2. Click "Edit Unit"
|
||||
3. Ensure Device Type is "Sound Level Meter"
|
||||
4. In "Deployed With Modem" dropdown, select modem
|
||||
5. Verify TCP Port (default 2255)
|
||||
6. Save
|
||||
|
||||
### Assigning Modem to Seismograph
|
||||
|
||||
Same process - both device types use the same modem selection field.
|
||||
|
||||
## Test Data
|
||||
|
||||
Use the included script to create test modems:
|
||||
|
||||
```bash
|
||||
python3 add_test_modems.py
|
||||
```
|
||||
|
||||
This creates:
|
||||
- **modem-001**: 192.168.1.100, Raven XTV → assigned to nl43-001
|
||||
- **modem-002**: 192.168.1.101, Raven XTV → assigned to nl43-002
|
||||
- **modem-003**: 192.168.1.102, Sierra Wireless → assigned to nl53-001
|
||||
- **modem-004**: Spare modem (not deployed)
|
||||
|
||||
## Benefits
|
||||
|
||||
### For Operations
|
||||
|
||||
1. **Centralized IP Management**
|
||||
- Update modem IP once, affects all assigned devices
|
||||
- Easy to track which modem serves which devices
|
||||
- Inventory management for network equipment
|
||||
|
||||
2. **Asset Tracking**
|
||||
- Modems are first-class assets in the roster
|
||||
- Track deployment status, location, notes
|
||||
- Can bench/retire modems independently
|
||||
|
||||
3. **Future Capabilities**
|
||||
- Modem API integration (signal strength, data usage)
|
||||
- Automatic IP updates from DHCP/cellular network
|
||||
- Modem health monitoring
|
||||
- Remote modem diagnostics
|
||||
|
||||
### For Maintenance
|
||||
|
||||
1. **Easier Troubleshooting**
|
||||
- See which modem serves a device
|
||||
- Check modem status separately
|
||||
- Swap modems without reconfiguring devices
|
||||
|
||||
2. **Configuration Changes**
|
||||
- Change IP addresses system-wide
|
||||
- Move devices between modems
|
||||
- Test with backup modems
|
||||
|
||||
## Migration from Legacy System
|
||||
|
||||
### For Existing SLMs with Direct IP
|
||||
|
||||
Legacy SLMs with `slm_host` set still work:
|
||||
- System checks `deployed_with_modem_id` first
|
||||
- Falls back to `slm_host` if no modem assigned
|
||||
- Logs fallback usage for visibility
|
||||
|
||||
### Migration Steps
|
||||
|
||||
1. Create modem units for each IP address
|
||||
2. Assign SLMs to their modems
|
||||
3. System will use modem IP automatically
|
||||
4. Legacy `slm_host` can be cleared (optional)
|
||||
|
||||
Script `add_test_modems.py` demonstrates this:
|
||||
```python
|
||||
# Clear legacy field after modem assignment
|
||||
slm.slm_host = None
|
||||
slm.deployed_with_modem_id = "modem-001"
|
||||
```
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
### Near-term
|
||||
|
||||
1. **Modem Status Dashboard**
|
||||
- List all modems with connection status
|
||||
- Show which devices use each modem
|
||||
- Signal strength, data usage indicators
|
||||
|
||||
2. **Automatic IP Discovery**
|
||||
- Query cellular provider API for modem IPs
|
||||
- Auto-update IP addresses in database
|
||||
- Alert on IP changes
|
||||
|
||||
3. **Modem Health Monitoring**
|
||||
- Ping modems periodically
|
||||
- Check cellular signal quality
|
||||
- Data usage tracking
|
||||
|
||||
### Long-term
|
||||
|
||||
1. **Modem API Integration**
|
||||
- Direct modem management (Raven, Sierra APIs)
|
||||
- Remote reboot capability
|
||||
- Configuration backup/restore
|
||||
- Firmware updates
|
||||
|
||||
2. **Network Topology View**
|
||||
- Graphical view of modem-device relationships
|
||||
- Network health visualization
|
||||
- Troubleshooting tools
|
||||
|
||||
3. **Multi-Modem Support**
|
||||
- Failover between modems
|
||||
- Load balancing
|
||||
- Automatic fallback on modem failure
|
||||
|
||||
## API Reference
|
||||
|
||||
### Get Modems List
|
||||
|
||||
**Endpoint**: `GET /api/roster/modems`
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "modem-001",
|
||||
"ip_address": "192.168.1.100",
|
||||
"phone_number": "+1-555-0100",
|
||||
"hardware_model": "Raven XTV",
|
||||
"deployed": true
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
**Used By**:
|
||||
- Unit detail page (modem dropdown)
|
||||
- Future modem management dashboard
|
||||
|
||||
### Get Unit with Modem Info
|
||||
|
||||
**Endpoint**: `GET /api/roster/{unit_id}`
|
||||
|
||||
**Response** (for SLM):
|
||||
```json
|
||||
{
|
||||
"id": "nl43-001",
|
||||
"device_type": "slm",
|
||||
"deployed_with_modem_id": "modem-001",
|
||||
"slm_tcp_port": 2255,
|
||||
"slm_model": "NL-43",
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
Then fetch the modem separately, or use the dashboard endpoint, which resolves it automatically.
|
||||
|
||||
### SLM Live View (with modem resolution)
|
||||
|
||||
**Endpoint**: `GET /api/slm-dashboard/live-view/{unit_id}`
|
||||
|
||||
**Process**:
|
||||
1. Loads SLM unit
|
||||
2. Resolves `deployed_with_modem_id` to modem unit
|
||||
3. Extracts `modem.ip_address`
|
||||
4. Uses IP to connect to SLMM backend
|
||||
5. Returns live view HTML with modem info
|
||||
|
||||
## Files Modified/Created
|
||||
|
||||
### Modified
|
||||
1. `backend/models.py` - Clarified modem assignment field
|
||||
2. `templates/unit_detail.html` - Added modem selector for SLMs
|
||||
3. `backend/routers/roster_edit.py` - Added modems list endpoint
|
||||
4. `backend/routers/slm_dashboard.py` - Modem resolution logic
|
||||
5. `templates/partials/slm_live_view.html` - Display modem info
|
||||
|
||||
### Created
|
||||
1. `add_test_modems.py` - Script to create test modems and assignments
|
||||
2. `docs/MODEM_INTEGRATION.md` - This documentation
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### SLM shows "No modem assigned"
|
||||
|
||||
**Cause**: Unit has no `deployed_with_modem_id` set
|
||||
|
||||
**Solution**:
|
||||
1. Edit the SLM unit
|
||||
2. Select a modem from dropdown
|
||||
3. Save
|
||||
|
||||
### Modem dropdown is empty
|
||||
|
||||
**Cause**: No modem units in database
|
||||
|
||||
**Solution**:
|
||||
1. Create modem units first
|
||||
2. Set `device_type = "modem"`
|
||||
3. Ensure they're not retired
|
||||
|
||||
### Can't connect to SLM
|
||||
|
||||
**Possible Causes**:
|
||||
1. Modem IP is incorrect
|
||||
2. Modem is offline
|
||||
3. SLM TCP port is wrong
|
||||
4. Network routing issue
|
||||
|
||||
**Debug Steps**:
|
||||
1. Check modem unit's IP address
|
||||
2. Ping the modem IP
|
||||
3. Check SLM's `slm_tcp_port` (default 2255)
|
||||
4. Review logs for connection errors
|
||||
|
||||
---
|
||||
|
||||
**Version**: 1.0.0
|
||||
**Date**: January 5, 2026
|
||||
**Related**: SOUND_LEVEL_METERS_DASHBOARD.md, DEVICE_TYPE_SLM_SUPPORT.md
|
||||
275
docs/SLM_CONFIGURATION.md
Normal file
@@ -0,0 +1,275 @@
|
||||
# SLM Configuration Interface
|
||||
|
||||
This document describes the SLM configuration interface added to the Sound Level Meters dashboard.
|
||||
|
||||
## Overview
|
||||
|
||||
Sound Level Meters can now be configured directly from the dashboard without needing to navigate to the unit detail page. A configuration button appears on each SLM unit card on hover, opening a modal with all configurable parameters.
|
||||
|
||||
## Features
|
||||
|
||||
### 1. Quick Access Configuration Button
|
||||
|
||||
- **Location**: Appears on each SLM unit card in the unit list
|
||||
- **Behavior**: Shows on hover (desktop) or always visible (mobile)
|
||||
- **Icon**: Gear/settings icon in top-right corner of unit card
|
||||
|
||||
### 2. Configuration Modal
|
||||
|
||||
The configuration modal provides a comprehensive interface for all SLM parameters:
|
||||
|
||||
#### Device Information
|
||||
- **Model**: Dropdown selection (NL-43, NL-53)
|
||||
- **Serial Number**: Text input for device serial number
|
||||
|
||||
#### Measurement Parameters
|
||||
- **Frequency Weighting**: A, C, or Z (Linear)
|
||||
- **Time Weighting**: Fast (125ms), Slow (1s), or Impulse
|
||||
- **Measurement Range**: 30-130 dB, 40-140 dB, or 50-140 dB
|
||||
|
||||
#### Network Configuration
|
||||
- **Assigned Modem**: Dropdown list of available modems
|
||||
- Shows modem ID and IP address
|
||||
- Option for "No modem (direct connection)"
|
||||
- **Direct IP Address**: Only shown when no modem assigned
|
||||
- **TCP Port**: Only shown when no modem assigned (default: 502)
|
||||
|
||||
### 3. Actions
|
||||
|
||||
The modal provides three action buttons:
|
||||
|
||||
- **Test Connection**: Tests network connectivity to the SLM
|
||||
- Uses current form values (not saved values)
|
||||
- Shows toast notification with results
|
||||
- Green: Connection successful
|
||||
- Yellow: Connection failed or device offline
|
||||
- Red: Test error
|
||||
|
||||
- **Cancel**: Closes modal without saving changes
|
||||
|
||||
- **Save Configuration**: Saves all changes to database
|
||||
- Shows success/error toast
|
||||
- Refreshes unit list on success
|
||||
- Auto-closes modal after 2 seconds
|
||||
|
||||
## Implementation
|
||||
|
||||
### Frontend Components
|
||||
|
||||
#### Unit List Partial
|
||||
**File**: [templates/partials/slm_unit_list.html](../templates/partials/slm_unit_list.html)
|
||||
|
||||
```html
|
||||
<!-- Configure button (appears on hover) -->
|
||||
<button onclick="event.stopPropagation(); openConfigModal('{{ unit.id }}');"
|
||||
class="absolute top-2 right-2 opacity-0 group-hover:opacity-100...">
|
||||
<svg>...</svg>
|
||||
</button>
|
||||
```
|
||||
|
||||
#### Configuration Modal
|
||||
**File**: [templates/sound_level_meters.html](../templates/sound_level_meters.html#L73)
|
||||
|
||||
```html
|
||||
<!-- Configuration Modal -->
|
||||
<div id="config-modal" class="hidden fixed inset-0 bg-black bg-opacity-50...">
|
||||
<div id="config-modal-content">
|
||||
<!-- Content loaded via HTMX -->
|
||||
</div>
|
||||
</div>
|
||||
```
|
||||
|
||||
#### Configuration Form
|
||||
**File**: [templates/partials/slm_config_form.html](../templates/partials/slm_config_form.html)
|
||||
|
||||
Form fields mapped to database columns:
|
||||
- `slm_model` → `unit.slm_model`
|
||||
- `slm_serial_number` → `unit.slm_serial_number`
|
||||
- `slm_frequency_weighting` → `unit.slm_frequency_weighting`
|
||||
- `slm_time_weighting` → `unit.slm_time_weighting`
|
||||
- `slm_measurement_range` → `unit.slm_measurement_range`
|
||||
- `deployed_with_modem_id` → `unit.deployed_with_modem_id`
|
||||
- `slm_host` → `unit.slm_host` (legacy, only if no modem)
|
||||
- `slm_tcp_port` → `unit.slm_tcp_port` (legacy, only if no modem)
|
||||
|
||||
### Backend Endpoints
|
||||
|
||||
#### GET /api/slm-dashboard/config/{unit_id}
|
||||
**File**: [backend/routers/slm_dashboard.py:184](../backend/routers/slm_dashboard.py#L184)
|
||||
|
||||
Returns configuration form HTML partial with current unit values pre-populated.
|
||||
|
||||
**Response**: HTML partial (slm_config_form.html)
|
||||
|
||||
#### POST /api/slm-dashboard/config/{unit_id}
|
||||
**File**: [backend/routers/slm_dashboard.py:203](../backend/routers/slm_dashboard.py#L203)
|
||||
|
||||
Saves configuration changes to database.
|
||||
|
||||
**Request**: Form data with configuration parameters
|
||||
|
||||
**Response**: JSON
|
||||
```json
|
||||
{
|
||||
"status": "success",
|
||||
"unit_id": "nl43-001"
|
||||
}
|
||||
```
|
||||
|
||||
**Behavior** (see the sketch after this list):
|
||||
- Updates all SLM-specific fields from form data
|
||||
- If modem is assigned: clears legacy `slm_host` and `slm_tcp_port`
|
||||
- If no modem: uses direct IP fields from form
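
A hedged sketch of that save logic is shown below. The field handling follows the behavior described above; the actual handler in `slm_dashboard.py` may differ in detail:

```python
# Illustrative save logic for the configuration endpoint -- not the handler's exact code.
def apply_slm_config(unit, form):
    """Copy SLM configuration from submitted form data onto the RosterUnit."""
    unit.slm_model = form.get("slm_model")
    unit.slm_serial_number = form.get("slm_serial_number")
    unit.slm_frequency_weighting = form.get("slm_frequency_weighting")
    unit.slm_time_weighting = form.get("slm_time_weighting")
    unit.slm_measurement_range = form.get("slm_measurement_range")

    modem_id = form.get("deployed_with_modem_id") or None
    unit.deployed_with_modem_id = modem_id
    if modem_id:
        # Modem assigned: clear the legacy direct-connection fields.
        unit.slm_host = None
        unit.slm_tcp_port = None
    else:
        # No modem: fall back to the direct IP / port from the form.
        unit.slm_host = form.get("slm_host")
        unit.slm_tcp_port = int(form["slm_tcp_port"]) if form.get("slm_tcp_port") else None
```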
|
||||
|
||||
### JavaScript Functions
|
||||
|
||||
#### openConfigModal(unitId)
|
||||
**File**: [templates/sound_level_meters.html:127](../templates/sound_level_meters.html#L127)
|
||||
|
||||
Opens configuration modal and loads form via HTMX.
|
||||
|
||||
```javascript
|
||||
function openConfigModal(unitId) {
|
||||
const modal = document.getElementById('config-modal');
|
||||
modal.classList.remove('hidden');
|
||||
|
||||
htmx.ajax('GET', `/api/slm-dashboard/config/${unitId}`, {
|
||||
target: '#config-modal-content',
|
||||
swap: 'innerHTML'
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
#### closeConfigModal()
|
||||
**File**: [templates/sound_level_meters.html:136](../templates/sound_level_meters.html#L136)
|
||||
|
||||
Closes configuration modal.
|
||||
|
||||
#### handleConfigSave(event)
|
||||
**File**: [templates/partials/slm_config_form.html:109](../templates/partials/slm_config_form.html#L109)
|
||||
|
||||
Handles HTMX response after form submission:
|
||||
- Shows success/error toast
|
||||
- Refreshes unit list
|
||||
- Auto-closes modal after 2 seconds
|
||||
|
||||
#### testConnection(unitId)
|
||||
**File**: [templates/partials/slm_config_form.html:129](../templates/partials/slm_config_form.html#L129)
|
||||
|
||||
Tests connection to SLM unit:
|
||||
```javascript
|
||||
async function testConnection(unitId) {
|
||||
const response = await fetch(`/api/slmm/${unitId}/status`);
|
||||
const data = await response.json();
|
||||
|
||||
if (response.ok && data.status === 'online') {
|
||||
// Show success toast
|
||||
} else {
|
||||
// Show warning toast
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### loadModemsForConfig()
|
||||
**File**: [templates/partials/slm_config_form.html:87](../templates/partials/slm_config_form.html#L87)
|
||||
|
||||
Loads available modems from `/api/roster/modems` and populates dropdown.
|
||||
|
||||
## User Workflow
|
||||
|
||||
### Configuring an SLM
|
||||
|
||||
1. Navigate to Sound Level Meters dashboard ([/sound-level-meters](../sound-level-meters))
|
||||
2. Hover over desired SLM unit card in the list
|
||||
3. Click the gear icon that appears in top-right corner
|
||||
4. Configuration modal opens with current values pre-filled
|
||||
5. Modify desired parameters:
|
||||
- Update model/serial if needed
|
||||
- Set measurement parameters (frequency/time weighting, range)
|
||||
- Choose network configuration:
|
||||
- **Option A**: Select a modem from dropdown (recommended)
|
||||
- **Option B**: Enter direct IP address and port
|
||||
6. (Optional) Click "Test Connection" to verify network settings
|
||||
7. Click "Save Configuration"
|
||||
8. Modal shows success message and auto-closes
|
||||
9. Unit list refreshes to show updated information
|
||||
|
||||
### Network Configuration Options
|
||||
|
||||
**Modem Assignment (Recommended)**:
|
||||
- Select modem from dropdown
|
||||
- IP address automatically resolved from modem's `ip_address` field
|
||||
- Direct IP/port fields hidden
|
||||
- Enables modem tracking and management
|
||||
|
||||
**Direct Connection (Legacy)**:
|
||||
- Select "No modem (direct connection)"
|
||||
- Enter IP address and TCP port manually
|
||||
- Direct IP/port fields become visible
|
||||
- Useful for temporary setups or non-modem connections

## Database Schema

The configuration interface updates these `roster` table columns:

```sql
-- SLM-specific fields
slm_model VARCHAR               -- Device model (NL-43, NL-53)
slm_serial_number VARCHAR       -- Serial number
slm_frequency_weighting VARCHAR -- A, C, or Z weighting
slm_time_weighting VARCHAR      -- Fast, Slow, or Impulse
slm_measurement_range VARCHAR   -- Measurement range (30-130, 40-140, 50-140)

-- Network configuration
deployed_with_modem_id VARCHAR  -- FK to modem unit (preferred method)
slm_host VARCHAR                -- Legacy direct IP (only if no modem)
slm_tcp_port INTEGER            -- Legacy TCP port (only if no modem)
```

## UI/UX Design

### Modal Behavior
- **Opens**: Via configure button on unit card
- **Closes**:
  - Cancel button
  - X button in header
  - Escape key
  - Clicking outside modal (on backdrop)
- **Auto-close**: After successful save (2 second delay)

### Responsive Design
- **Desktop**: Configuration button appears on hover
- **Mobile**: Configuration button always visible
- **Modal**: Responsive width, scrollable on small screens
- **Form**: Two-column layout on desktop, single column on mobile

### Visual Feedback
- **Loading**: Skeleton loader while form loads
- **Saving**: HTMX handles form submission
- **Success**: Green toast notification
- **Error**: Red toast notification
- **Testing**: Blue toast while testing, then green/yellow/red based on result

### Accessibility
- **Keyboard**: Modal can be closed with Escape key
- **Focus**: Modal traps focus when open
- **Labels**: All form fields have proper labels
- **Colors**: Sufficient contrast in dark/light modes

## Future Enhancements

Potential improvements for future versions:

1. **Bulk Configuration**: Configure multiple SLMs at once
2. **Configuration Templates**: Save and apply configuration presets
3. **Configuration History**: Track configuration changes over time
4. **Remote Configuration**: Push configuration directly to device via SLMM
5. **Validation**: Real-time validation of IP addresses and ports
6. **Advanced Settings**: Additional NL-43/NL-53 specific parameters
7. **Configuration Import/Export**: JSON/CSV configuration files

## Related Documentation

- [SOUND_LEVEL_METERS_DASHBOARD.md](SOUND_LEVEL_METERS_DASHBOARD.md) - Main SLM dashboard
- [MODEM_INTEGRATION.md](MODEM_INTEGRATION.md) - Modem assignment architecture
- [DEVICE_TYPE_SLM_SUPPORT.md](DEVICE_TYPE_SLM_SUPPORT.md) - SLM device type implementation
333
docs/SOUND_LEVEL_METERS_DASHBOARD.md
Normal file
@@ -0,0 +1,333 @@

# Sound Level Meters Dashboard

## Overview

The Sound Level Meters dashboard is a new feature in SFM (soon to be rebranded as Terra-View) that provides real-time monitoring and control of Rion NL-43/NL-53 sound level meters through the SLMM backend integration.

## Features

### 1. Dashboard Summary Statistics
- **Total Units**: Count of all SLM devices in the system
- **Deployed Units**: Active devices currently in the field
- **Active Now**: Units that have checked in within the last hour
- **Benched Units**: Devices not currently deployed

### 2. Unit List (Sidebar)
- Searchable list of all deployed SLM units
- Real-time status indicators:
  - 🟢 Green: Active (recently checked in)
  - ⚪ Gray: No check-in data
- Quick unit information:
  - Device model (NL-43, NL-53, etc.)
  - Location/address
  - Network address (IP:port)
- Click any unit to view its live data

### 3. Live View Panel

When a unit is selected, the live view panel displays:

#### Control Buttons
- **Start**: Begin measurement
- **Pause**: Pause current measurement
- **Stop**: Stop measurement
- **Reset**: Reset measurement data
- **Start Live Stream**: Open WebSocket connection for real-time DRD data

#### Real-time Metrics
- **Lp (Current)**: Instantaneous sound level in dB
- **Leq (Average)**: Equivalent continuous sound level
- **Lmax (Peak)**: Maximum sound level recorded
- **Lmin**: Minimum sound level recorded

#### Live Chart
- Real-time line chart showing Lp and Leq over time
- 60-second rolling window (adjustable)
- Chart.js-powered visualization with dark mode support
- No animation for smooth real-time updates

#### Device Information
- Battery level and power source
- Frequency weighting (A, C, Z)
- Time weighting (F, S, I)
- SD card remaining space

## Architecture

### Frontend Components

#### Main Template
**File**: `templates/sound_level_meters.html`

The main dashboard page that includes:
- Page header and navigation integration
- Stats summary section (auto-refreshes every 10s)
- Two-column layout: unit list (left) + live view (right)
- JavaScript functions for unit selection and WebSocket streaming

#### Partial Templates

1. **slm_stats.html** - Summary statistics cards
   - Auto-loads on page load
   - Refreshes every 10 seconds via HTMX

2. **slm_unit_list.html** - Searchable unit list
   - Auto-loads on page load
   - Refreshes every 10 seconds via HTMX
   - Supports search filtering

3. **slm_live_view.html** - Live data panel for selected unit
   - Loaded on-demand when unit is selected
   - Includes Chart.js for visualization
   - WebSocket connection for streaming data

4. **slm_live_view_error.html** - Error state display

### Backend Components

#### Router: `backend/routers/slm_dashboard.py`

**Endpoints:**

```python
GET /api/slm-dashboard/stats
```
Returns HTML partial with summary statistics.

```python
GET /api/slm-dashboard/units?search={term}
```
Returns HTML partial with filtered unit list.

```python
GET /api/slm-dashboard/live-view/{unit_id}
```
Returns HTML partial with live view panel for specific unit.
- Fetches unit details from database
- Queries SLMM API for current measurement state
- Queries SLMM API for live status (DOD data)

```python
POST /api/slm-dashboard/control/{unit_id}/{action}
```
Sends control commands to SLMM backend.
- Valid actions: start, stop, pause, resume, reset
- Proxies to `http://localhost:8100/api/nl43/{unit_id}/{action}`
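
A minimal sketch of what this control proxy can look like is shown below, assuming FastAPI and httpx (both already part of the project's stack); the validation and error handling in the real `slm_dashboard.py` router may differ.

```python
# Hedged sketch of the control proxy; not the exact shipped router code.
import httpx
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/api/slm-dashboard")

SLMM_BASE_URL = "http://localhost:8100"
VALID_ACTIONS = {"start", "stop", "pause", "resume", "reset"}


@router.post("/control/{unit_id}/{action}")
async def control_unit(unit_id: str, action: str):
    if action not in VALID_ACTIONS:
        raise HTTPException(status_code=400, detail=f"Invalid action: {action}")

    # Forward the command to SLMM and relay its response.
    async with httpx.AsyncClient(timeout=10.0) as client:
        resp = await client.post(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/{action}")

    if resp.status_code >= 400:
        raise HTTPException(status_code=502, detail="SLMM rejected the command")
    return resp.json()
```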

### Integration with SLMM

The dashboard communicates with the SLMM backend service running on port 8100:

**REST API Calls:**
- `GET /api/nl43/{unit_id}/measurement-state` - Check if measuring
- `GET /api/nl43/{unit_id}/live` - Get current DOD data
- `POST /api/nl43/{unit_id}/start|stop|pause|resume|reset` - Control commands

**WebSocket Streaming:**
- `WS /api/nl43/{unit_id}/live` - Real-time DRD data stream
- Proxied through SFM at `/api/slmm/{unit_id}/live`
- Streams continuous measurement data for live charting
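
A bare-bones version of that SFM-side relay might look like the following; this is a hedged sketch assuming FastAPI's WebSocket support and the `websockets` client library, not the shipped proxy (which likely adds reconnection and error handling).

```python
# Hedged sketch of the WebSocket relay between the browser and SLMM.
import websockets
from fastapi import APIRouter, WebSocket, WebSocketDisconnect

router = APIRouter()
SLMM_WS_BASE = "ws://localhost:8100"  # assumed SLMM address


@router.websocket("/api/slmm/{unit_id}/live")
async def proxy_drd_stream(client_ws: WebSocket, unit_id: str):
    await client_ws.accept()
    upstream_url = f"{SLMM_WS_BASE}/api/nl43/{unit_id}/live"
    try:
        async with websockets.connect(upstream_url) as upstream:
            while True:
                # Forward each DRD frame from SLMM to the browser unchanged.
                message = await upstream.recv()
                if isinstance(message, bytes):
                    await client_ws.send_bytes(message)
                else:
                    await client_ws.send_text(message)
    except (WebSocketDisconnect, websockets.ConnectionClosed):
        # Either side closed the connection; context managers handle cleanup.
        pass
```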

### Database Schema

**Table**: `roster`

SLM-specific fields in the RosterUnit model:

```python
device_type = "slm"               # Distinguishes SLMs from seismographs
slm_host = String                 # Device IP or hostname
slm_tcp_port = Integer            # TCP control port (default 2255)
slm_model = String                # NL-43, NL-53, etc.
slm_serial_number = String        # Device serial number
slm_frequency_weighting = String  # A, C, or Z weighting
slm_time_weighting = String       # F (Fast), S (Slow), I (Impulse)
slm_measurement_range = String    # e.g., "30-130 dB"
slm_last_check = DateTime         # Last communication timestamp
```

## Navigation

The Sound Level Meters page is accessible from:
- **URL**: `/sound-level-meters`
- **Sidebar**: "Sound Level Meters" menu item (between Fleet Roster and Projects)
- **Icon**: Speaker/sound wave SVG icon

## Real-time Updates

The dashboard uses three mechanisms for real-time updates:

1. **HTMX Polling** (10-second intervals)
   - Summary statistics
   - Unit list
   - Ensures data freshness even without user interaction

2. **On-Demand Loading** (HTMX)
   - Live view panel loads when unit is selected
   - Control button responses

3. **WebSocket Streaming** (continuous)
   - Real-time DRD data for live charting
   - User-initiated via "Start Live Stream" button
   - Automatically closed on page unload or unit change

## Measurement Duration Tracking

**Important**: The NL-43/NL-53 devices do not expose measurement duration via their API. Elapsed time and interval counts are only visible on the device's on-screen display (OSD).

**Solution**: Track measurement start time in your application when calling the `/start` endpoint:

```javascript
// When starting measurement
const startTime = new Date();
localStorage.setItem(`slm_${unitId}_start`, startTime.toISOString());

// Later, calculate elapsed time from the stored value
const storedStart = new Date(localStorage.getItem(`slm_${unitId}_start`));
const elapsedSeconds = (new Date() - storedStart) / 1000;
```

**Future Enhancement**: SLMM backend could store measurement start times in a database table to track duration across sessions.

## Testing

### Add Test Data

Use the included script to add test SLM units:

```bash
python3 add_test_slms.py
```

This creates:
- 3 deployed test units (nl43-001, nl43-002, nl53-001)
- 1 benched unit (nl43-003)

### Running the Dashboard

1. Start SLMM backend (port 8100):

   ```bash
   cd /home/serversdown/slmm
   uvicorn main:app --host 0.0.0.0 --port 8100
   ```

2. Start SFM (port 8000):

   ```bash
   cd /home/serversdown/sfm/seismo-fleet-manager
   uvicorn backend.main:app --host 0.0.0.0 --port 8000 --reload
   ```

3. Access the dashboard:

   ```
   http://localhost:8000/sound-level-meters
   ```

### Testing Without Physical Devices

The dashboard will work without physical NL-43 devices connected:
- Unit list will display based on database records
- Live view will show connection errors (gracefully handled)
- Mock data can be added to SLMM for testing

## Future Enhancements

### Near-term
1. **Measurement Duration Tracking**
   - Add database table to track measurement sessions
   - Display elapsed time in live view
   - Store start/stop timestamps

2. **Historical Data View**
   - Chart historical Leq intervals
   - Export measurement data
   - Comparison between units

3. **Alerts & Thresholds**
   - Configurable sound level alerts
   - Email/SMS notifications when thresholds exceeded
   - Visual indicators on dashboard

### Long-term
1. **Map View**
   - Display all SLMs on a map (like seismographs)
   - Click map markers to view live data
   - Color-coded by current sound level

2. **Batch Operations**
   - Start/stop multiple units simultaneously
   - Synchronized measurements
   - Group configurations

3. **Advanced Analytics**
   - Noise compliance reports
   - Statistical summaries
   - Trend analysis

## Integration with Terra-View

When SFM is rebranded to Terra-View:

1. **Multi-Module Dashboard**
   - Sound Level Meters module (this dashboard)
   - Seismograph Fleet Manager module (existing)
   - Future monitoring modules

2. **Project Management**
   - Link SLMs to projects
   - Combined project view with seismographs + SLMs
   - Project-level reporting

3. **Unified Navigation**
   - Top-level module switcher
   - Consistent UI/UX across modules
   - Shared authentication and settings

## Technical Notes

### HTMX Integration
The dashboard extensively uses HTMX for dynamic updates without full page reloads:
- `hx-get`: Fetch and swap content
- `hx-trigger`: Auto-refresh intervals
- `hx-swap`: Content replacement strategy
- `hx-target`: Specify update target

### Dark Mode Support
All components support dark mode:
- Chart.js colors adapt to theme
- Tailwind `dark:` classes throughout
- Automatic theme detection

### Performance Considerations
- WebSocket connections are per-unit (only one active at a time)
- Chart data limited to 60 points (1 minute) to prevent memory bloat
- Polling intervals balanced for responsiveness vs. server load
- Lazy loading of live view panel (only when unit selected)

## Files Modified/Created

### New Files
- `templates/sound_level_meters.html`
- `templates/partials/slm_stats.html`
- `templates/partials/slm_unit_list.html`
- `templates/partials/slm_live_view.html`
- `templates/partials/slm_live_view_error.html`
- `backend/routers/slm_dashboard.py`
- `add_test_slms.py`
- `docs/SOUND_LEVEL_METERS_DASHBOARD.md`

### Modified Files
- `backend/main.py` - Added route and router import
- `templates/base.html` - Added navigation menu item

## Support

For issues or questions:
- Check SLMM API documentation: `/home/serversdown/slmm/docs/API.md`
- Review SFM changelog: `CHANGELOG.md`
- Submit issues to project repository

---

**Version**: 1.0.0
**Created**: January 2026
**Last Updated**: January 5, 2026
546
docs/archive/PROJECTS_SYSTEM_IMPLEMENTATION.md
Normal file
@@ -0,0 +1,546 @@

# Projects System Implementation - Terra-View

## Overview

The Projects system has been successfully scaffolded in Terra-View. This document provides a complete overview of what has been built, how it works, and what needs to be completed.

## ✅ Completed Components

### 1. Database Schema

**Location**: `/backend/models.py`

Seven new tables have been added:

- **ProjectType**: Template definitions for project types (Sound, Vibration, Combined)
- **Project**: Top-level project organization with type reference
- **MonitoringLocation**: Generic locations (NRLs for sound, monitoring points for vibration)
- **UnitAssignment**: Links devices to locations
- **ScheduledAction**: Automated recording control schedules
- **RecordingSession**: Tracks actual recording/monitoring sessions
- **DataFile**: File references for downloaded data

**Key Features**:
- Type-aware design (project_type_id determines features)
- Flexible metadata fields (JSON columns for type-specific data)
- Denormalized fields for efficient queries
- Proper indexing on foreign keys

### 2. Service Layer

#### SLMM Client (`/backend/services/slmm_client.py`)
- Clean wrapper for all SLMM API operations
- Methods for: start/stop/pause/resume recording, get status, configure devices
- Error handling with custom exceptions
- Singleton pattern for easy access
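
As a rough illustration of the shape of this wrapper (method names, the exception type, and the module-level singleton are assumptions based on the bullets above, not the exact file contents):

```python
# Hedged sketch of the SLMM client wrapper; names are illustrative.
import os
from typing import Optional

import httpx


class SLMMError(Exception):
    """Raised when SLMM returns an error or is unreachable."""


class SLMMClient:
    def __init__(self, base_url: Optional[str] = None):
        self.base_url = base_url or os.getenv("SLMM_BASE_URL", "http://localhost:8100")

    async def _post(self, path: str) -> dict:
        try:
            async with httpx.AsyncClient(timeout=10.0) as client:
                resp = await client.post(f"{self.base_url}{path}")
                resp.raise_for_status()
                return resp.json()
        except httpx.HTTPError as exc:
            raise SLMMError(str(exc)) from exc

    async def start_recording(self, unit_id: str) -> dict:
        return await self._post(f"/api/nl43/{unit_id}/start")

    async def stop_recording(self, unit_id: str) -> dict:
        return await self._post(f"/api/nl43/{unit_id}/stop")


# Module-level instance, matching the "singleton pattern" note above.
slmm_client = SLMMClient()
```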

#### Device Controller (`/backend/services/device_controller.py`)
- Routes commands to appropriate backend (SLMM for SLMs, SFM for seismographs)
- Unified interface across device types
- Ready for future SFM implementation
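
The routing idea, reduced to a sketch (the real module presumably covers more operations; the import path and function names are assumptions):

```python
# Hedged sketch of device-type routing; not the actual controller code.
from backend.services.slmm_client import slmm_client  # assumed import path


class UnsupportedDeviceError(Exception):
    pass


async def start_recording(unit_id: str, device_type: str) -> dict:
    """Dispatch a start command to the backend that owns this device type."""
    if device_type in ("slm", "sound_level_meter"):
        return await slmm_client.start_recording(unit_id)
    if device_type == "seismograph":
        # Placeholder until the SFM client exists (see "SFM Client (Future)").
        raise UnsupportedDeviceError("Seismograph control not implemented yet")
    raise UnsupportedDeviceError(f"Unknown device type: {device_type}")
```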

#### Scheduler Service (`/backend/services/scheduler.py`)
- Background task that checks for pending scheduled actions every 60 seconds
- Executes actions by calling device controller
- Creates/updates recording sessions
- Tracks execution status and errors
- Manual execution support for testing
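
The core loop can be pictured roughly as follows. This is a simplified sketch: session bookkeeping is reduced to comments, and `SessionLocal`, `ScheduledAction`, and the `execute_action` dispatcher are assumed names rather than verified ones.

```python
# Hedged sketch of the scheduler's polling loop, not the full service.
import asyncio
from datetime import datetime

from backend.models import ScheduledAction, SessionLocal  # assumed imports
from backend.services import device_controller

CHECK_INTERVAL = 60  # seconds; see "Scheduler Settings" below


async def run_scheduler():
    while True:
        db = SessionLocal()
        try:
            due = (
                db.query(ScheduledAction)
                .filter(ScheduledAction.execution_status == "pending")
                .filter(ScheduledAction.scheduled_time <= datetime.utcnow())
                .all()
            )
            for action in due:
                try:
                    # Dispatch via the device controller (hypothetical helper),
                    # then create/update the matching RecordingSession.
                    await device_controller.execute_action(action)
                    action.execution_status = "completed"
                    action.executed_at = datetime.utcnow()
                except Exception as exc:
                    action.execution_status = "failed"
                    action.error_message = str(exc)
            db.commit()
        finally:
            db.close()
        await asyncio.sleep(CHECK_INTERVAL)
```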

### 3. API Routers

#### Projects Router (`/backend/routers/projects.py`)
Endpoints:
- `GET /api/projects/list` - Project list with stats
- `GET /api/projects/stats` - Overview statistics
- `POST /api/projects/create` - Create new project
- `GET /api/projects/{id}` - Get project details
- `PUT /api/projects/{id}` - Update project
- `DELETE /api/projects/{id}` - Archive project
- `GET /api/projects/{id}/dashboard` - Project dashboard data
- `GET /api/projects/types/list` - Get project type templates

#### Project Locations Router (`/backend/routers/project_locations.py`)
Endpoints:
- `GET /api/projects/{id}/locations` - List locations
- `POST /api/projects/{id}/locations/create` - Create location
- `PUT /api/projects/{id}/locations/{location_id}` - Update location
- `DELETE /api/projects/{id}/locations/{location_id}` - Delete location
- `GET /api/projects/{id}/assignments` - List unit assignments
- `POST /api/projects/{id}/locations/{location_id}/assign` - Assign unit
- `POST /api/projects/{id}/assignments/{assignment_id}/unassign` - Unassign unit
- `GET /api/projects/{id}/available-units` - Get units available for assignment

#### Scheduler Router (`/backend/routers/scheduler.py`)
Endpoints:
- `GET /api/projects/{id}/scheduler/actions` - List scheduled actions
- `POST /api/projects/{id}/scheduler/actions/create` - Create action
- `POST /api/projects/{id}/scheduler/schedule-session` - Schedule recording session
- `PUT /api/projects/{id}/scheduler/actions/{action_id}` - Update action
- `POST /api/projects/{id}/scheduler/actions/{action_id}/cancel` - Cancel action
- `DELETE /api/projects/{id}/scheduler/actions/{action_id}` - Delete action
- `POST /api/projects/{id}/scheduler/actions/{action_id}/execute` - Manual execution
- `GET /api/projects/{id}/scheduler/status` - Scheduler status
- `POST /api/projects/{id}/scheduler/execute-pending` - Trigger pending executions

### 4. Frontend

#### Main Page
**Location**: `/templates/projects/overview.html`

Features:
- Summary statistics cards (projects, locations, assignments, sessions)
- Tabbed interface (All, Active, Completed, Archived)
- Project cards grid layout
- Create project modal with two-step flow:
  1. Select project type (Sound/Vibration/Combined)
  2. Fill project details
- HTMX-powered dynamic updates

#### Navigation
**Location**: `/templates/base.html` (updated)
- "Projects" link added to sidebar
- Active state highlighting

### 5. Application Integration

**Location**: `/backend/main.py`

- Routers registered
- Page route added (`/projects`)
- Scheduler service starts on application startup
- Scheduler stops on application shutdown

### 6. Database Initialization

**Script**: `/backend/init_projects_db.py`

- Creates all project tables
- Populates ProjectType with default templates
- ✅ Successfully executed - database is ready

---

## 📁 File Organization

```
terra-view/
├── backend/
│   ├── models.py                  [✅ Updated]
│   ├── init_projects_db.py        [✅ Created]
│   ├── main.py                    [✅ Updated]
│   ├── routers/
│   │   ├── projects.py            [✅ Created]
│   │   ├── project_locations.py   [✅ Created]
│   │   └── scheduler.py           [✅ Created]
│   └── services/
│       ├── slmm_client.py         [✅ Created]
│       ├── device_controller.py   [✅ Created]
│       └── scheduler.py           [✅ Created]
├── templates/
│   ├── base.html                  [✅ Updated]
│   ├── projects/
│   │   └── overview.html          [✅ Created]
│   └── partials/
│       └── projects/              [📁 Created, empty]
└── data/
    └── seismo_fleet.db            [✅ Tables created]
```

---

## 🔨 What Still Needs to be Built

### 1. Frontend Templates (Partials)

**Directory**: `/templates/partials/projects/`

**Required Files**:

#### `project_stats.html`
Stats cards for overview page:
- Total/Active/Completed projects
- Total locations
- Assigned units
- Active sessions

#### `project_list.html`
Project cards grid:
- Project name, type, status
- Location count, unit count
- Active session indicator
- Link to project dashboard

#### `project_dashboard.html`
Main project dashboard panel with tabs:
- Summary stats
- Active locations and assignments
- Upcoming scheduled actions
- Recent sessions

#### `location_list.html`
Location cards/table:
- Location name, type, coordinates
- Assigned unit (if any)
- Session count
- Assign/unassign button

#### `assignment_list.html`
Unit assignment table:
- Unit ID, device type
- Location name
- Assignment dates
- Status
- Unassign button

#### `scheduler_agenda.html`
Calendar/agenda view:
- Scheduled actions sorted by time
- Action type (start/stop/download)
- Location and unit
- Status indicator
- Cancel/execute buttons

### 2. Project Dashboard Page

**Location**: `/templates/projects/project_dashboard.html`

Full project detail page with:
- Header with project name, type, status
- Tab navigation (Dashboard, Scheduler, Locations, Units, Data, Settings)
- Tab content areas
- Modals for adding locations, scheduling sessions

### 3. Additional UI Components

- Project type selection cards (with icons)
- Location creation modal
- Unit assignment modal
- Schedule session modal (with date/time picker)
- Data file browser

### 4. SLMM Enhancements

**Location**: `/slmm/app/routers.py` (SLMM repo)

New endpoint needed:
```python
POST /api/nl43/{unit_id}/ftp/download
```

This should:
- Accept destination_path and files list
- Connect to SLM via FTP
- Download specified files
- Save to Terra-View's `data/Projects/` directory
- Return file list with metadata
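
One possible shape for that endpoint, assuming FastAPI plus Python's standard `ftplib` on the SLMM side; the request fields, default port, and anonymous login here are illustrative, not a settled contract:

```python
# Hedged sketch of a possible SLMM download endpoint; not implemented yet.
import os
from ftplib import FTP

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class FTPDownloadRequest(BaseModel):
    destination_path: str   # e.g. Terra-View's data/Projects/... directory
    files: list[str]        # file names on the SLM's SD card
    host: str               # SLM address (resolved by Terra-View)
    port: int = 21          # slm_ftp_port from the roster


@router.post("/api/nl43/{unit_id}/ftp/download")
def ftp_download(unit_id: str, req: FTPDownloadRequest):
    os.makedirs(req.destination_path, exist_ok=True)
    saved = []
    with FTP() as ftp:
        ftp.connect(req.host, req.port)
        ftp.login()  # anonymous login assumed here
        for name in req.files:
            local_path = os.path.join(req.destination_path, name)
            with open(local_path, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)
            saved.append({"file": name, "size_bytes": os.path.getsize(local_path)})
    return {"unit_id": unit_id, "downloaded": saved}
```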

### 5. SFM Client (Future)

**Location**: `/backend/services/sfm_client.py` (to be created)

Similar to the SLMM client, but for seismographs:
- Get seismograph status
- Start/stop recording
- Download data files
- Integrate with device controller

---

## 🚀 Testing the System

### 1. Start Terra-View

```bash
cd /home/serversdown/tmi/terra-view
# Start Terra-View (however you normally start it)
```

Verify in logs:
```
Starting scheduler service...
Scheduler service started
```

### 2. Navigate to Projects

Open browser: `http://localhost:8001/projects`

You should see:
- Summary stats cards (all zeros initially)
- Tabs (All Projects, Active, Completed, Archived)
- "New Project" button

### 3. Create a Project

1. Click "New Project"
2. Select a project type (e.g., "Sound Monitoring")
3. Fill in details:
   - Name: "Test Sound Project"
   - Client: "Test Client"
   - Start Date: Today
4. Submit

### 4. Test API Endpoints

```bash
# Get project types
curl http://localhost:8001/api/projects/types/list

# Get projects list
curl http://localhost:8001/api/projects/list

# Get project stats
curl http://localhost:8001/api/projects/stats
```

### 5. Test Scheduler Status

```bash
curl http://localhost:8001/api/projects/{project_id}/scheduler/status
```

---

## 📋 Dataflow Examples

### Creating and Scheduling a Recording Session

1. **User creates project** → Project record in DB
2. **User adds NRL** → MonitoringLocation record
3. **User assigns SLM to NRL** → UnitAssignment record
4. **User schedules recording** → 2 ScheduledAction records (start + stop; see the sketch after this list)
5. **Scheduler runs every minute** → Checks for pending actions
6. **Start action time arrives** → Scheduler calls SLMM via device controller
7. **SLMM sends TCP command to SLM** → Recording starts
8. **RecordingSession created** → Tracks the session
9. **Stop action time arrives** → Scheduler stops recording
10. **Session updated** → stopped_at, duration_seconds filled
11. **User triggers download** → Files copied to `data/Projects/{project_id}/sound/{nrl_name}/`
12. **DataFile records created** → Track file references
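
A simplified sketch of step 4 — the schedule-session endpoint creating the paired start and stop actions — assuming the ScheduledAction fields documented in the API section below; the helper name and import path are illustrative.

```python
# Hedged sketch of how schedule-session could create the paired actions.
import uuid
from datetime import datetime

from backend.models import ScheduledAction  # assumed import path


def schedule_recording_session(db, project_id: str, location_id: str,
                               unit_id: str, start: datetime, stop: datetime):
    """Create one 'start' and one 'stop' ScheduledAction for a session."""
    actions = []
    for action_type, when in (("start", start), ("stop", stop)):
        action = ScheduledAction(
            id=str(uuid.uuid4()),
            project_id=project_id,
            location_id=location_id,
            unit_id=unit_id,
            action_type=action_type,
            device_type="sound_level_meter",
            scheduled_time=when,
            execution_status="pending",
        )
        db.add(action)
        actions.append(action)
    db.commit()
    return actions
```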

---

## 🎨 UI Design Patterns

### Established Patterns (from SLM dashboard):

1. **Stats Cards**: 4-column grid, auto-refresh every 30s
2. **Sidebar Lists**: Searchable, filterable, auto-refresh
3. **Main Panel**: Large central area for details
4. **Modals**: Centered, overlay background
5. **HTMX**: All dynamic updates, minimal JavaScript
6. **Tailwind**: Consistent styling with dark mode support

### Color Scheme:

- Primary: `seismo-orange` (#f48b1c)
- Secondary: `seismo-navy` (#142a66)
- Accent: `seismo-burgundy` (#7d234d)

---

## 🔧 Configuration

### Environment Variables

- `SLMM_BASE_URL`: SLMM backend URL (default: http://localhost:8100)
- `ENVIRONMENT`: "development" or "production"

### Scheduler Settings

Located in `/backend/services/scheduler.py`:
- `check_interval`: 60 seconds (adjust as needed)

---

## 📚 Next Steps

### Immediate (Get Basic UI Working):
1. Create partial templates (stats, lists)
2. Test creating projects via UI
3. Implement project dashboard page

### Short-term (Core Features):
4. Add location management UI
5. Add unit assignment UI
6. Add scheduler UI (agenda view)

### Medium-term (Data Flow):
7. Implement SLMM download endpoint
8. Test full recording workflow
9. Add file browser for downloaded data

### Long-term (Complete System):
10. Implement SFM client for seismographs
11. Add data visualization
12. Add project reporting
13. Add user authentication

---

## 🐛 Known Issues / TODOs

1. **Partial templates missing**: Need to create HTML templates for all partials
2. **SLMM download endpoint**: Needs implementation in SLMM backend
3. **Project dashboard page**: Not yet created
4. **SFM integration**: Placeholder only, needs real implementation
5. **File download tracking**: DataFile records not yet created after downloads
6. **Error handling**: Need better user-facing error messages
7. **Validation**: Form validation could be improved
8. **Testing**: No automated tests yet

---

## 📖 API Documentation

### Project Type Object
```json
{
  "id": "sound_monitoring",
  "name": "Sound Monitoring",
  "description": "...",
  "icon": "volume-2",
  "supports_sound": true,
  "supports_vibration": false
}
```

### Project Object
```json
{
  "id": "uuid",
  "name": "Project Name",
  "description": "...",
  "project_type_id": "sound_monitoring",
  "status": "active",
  "client_name": "Client Inc",
  "site_address": "123 Main St",
  "site_coordinates": "40.7128,-74.0060",
  "start_date": "2024-01-15",
  "end_date": null,
  "created_at": "2024-01-15T10:00:00",
  "updated_at": "2024-01-15T10:00:00"
}
```

### MonitoringLocation Object
```json
{
  "id": "uuid",
  "project_id": "uuid",
  "location_type": "sound",
  "name": "NRL-001",
  "description": "...",
  "coordinates": "40.7128,-74.0060",
  "address": "123 Main St",
  "location_metadata": "{...}",
  "created_at": "2024-01-15T10:00:00"
}
```

### UnitAssignment Object
```json
{
  "id": "uuid",
  "unit_id": "nl43-001",
  "location_id": "uuid",
  "project_id": "uuid",
  "device_type": "sound_level_meter",
  "assigned_at": "2024-01-15T10:00:00",
  "assigned_until": null,
  "status": "active",
  "notes": "..."
}
```

### ScheduledAction Object
```json
{
  "id": "uuid",
  "project_id": "uuid",
  "location_id": "uuid",
  "unit_id": "nl43-001",
  "action_type": "start",
  "device_type": "sound_level_meter",
  "scheduled_time": "2024-01-16T08:00:00",
  "executed_at": null,
  "execution_status": "pending",
  "module_response": null,
  "error_message": null
}
```

---

## 🎓 Architecture Decisions

### Why Project Types?
Allows the system to scale to different monitoring scenarios (air quality, multi-hazard, etc.) without code changes. Just add a new ProjectType record and the UI adapts.

### Why Generic MonitoringLocation?
Instead of separate NRL and MonitoringPoint tables, one table with a `location_type` discriminator keeps the schema clean and allows for combined projects.

### Why Denormalized Fields?
Fields like `project_id` in UnitAssignment (already reachable via the location) enable faster queries without joins.

### Why Scheduler in Terra-View?
Terra-View is the orchestration layer. SLMM only handles device communication. Keeping scheduling logic in Terra-View allows for complex workflows across multiple device types.

### Why JSON Metadata Columns?
Type-specific fields (like ambient_conditions for sound projects) don't apply to all location types. JSON columns provide flexibility without cluttering the schema.

---

## 💡 Tips for Continuing Development

1. **Follow Existing Patterns**: Look at the SLM dashboard code for reference
2. **Use HTMX Aggressively**: Minimize JavaScript, let HTMX handle updates
3. **Keep Routers Thin**: Move business logic to service layer
4. **Return HTML Partials**: Most endpoints should return HTML, not JSON (see the sketch after this list)
5. **Test Incrementally**: Build one partial at a time and test in browser
6. **Check Logs**: Scheduler logs execution attempts
7. **Use Browser DevTools**: Network tab shows HTMX requests
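
For tip 4, the HTMX-friendly pattern is a Jinja2 `TemplateResponse` rather than JSON. A minimal hedged example, with the template name and context values as placeholders rather than the project's real implementation:

```python
# Hedged sketch of an endpoint returning an HTML partial for HTMX.
from fastapi import APIRouter, Request
from fastapi.templating import Jinja2Templates

router = APIRouter()
templates = Jinja2Templates(directory="templates")


@router.get("/api/projects/stats")
def project_stats(request: Request):
    # Compute whatever the partial needs; these values are placeholders.
    context = {"request": request, "total_projects": 0, "active_projects": 0}
    return templates.TemplateResponse("partials/projects/project_stats.html", context)
```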

---

## 📞 Support

For questions or issues:
1. Check this document first
2. Review existing dashboards (SLM, Seismographs) for patterns
3. Check logs for scheduler execution details
4. Test API endpoints with curl to isolate issues

---

## ✅ Checklist for Completion

- [x] Database schema designed
- [x] Models created
- [x] Migration script run successfully
- [x] Service layer complete (SLMM client, device controller, scheduler)
- [x] API routers created (projects, locations, scheduler)
- [x] Navigation updated
- [x] Main overview page created
- [x] Routes registered in main.py
- [x] Scheduler service integrated
- [ ] Partial templates created
- [ ] Project dashboard page created
- [ ] Location management UI
- [ ] Unit assignment UI
- [ ] Scheduler UI (agenda view)
- [ ] SLMM download endpoint implemented
- [ ] Full workflow tested end-to-end
- [ ] SFM client implemented (future)

---

**Last Updated**: 2026-01-12
**Database Status**: ✅ Initialized
**Backend Status**: ✅ Complete
**Frontend Status**: 🟡 Partial (overview page only)
**Ready for Testing**: ✅ Yes (basic functionality)
17
docs/archive/README.md
Normal file
@@ -0,0 +1,17 @@

# Terra-View Documentation Archive

This directory contains old documentation files that are no longer actively maintained but preserved for historical reference.

## Archived Documents

### PROJECTS_SYSTEM_IMPLEMENTATION.md
Early implementation notes for the projects system. Superseded by current documentation in main docs directory.

### .aider.chat.history.md
AI assistant chat history from development sessions. Contains context and decision-making process.

## Note

These documents may contain outdated information. For current documentation, see:
- [Main README](../../README.md)
- [Active Documentation](../)
10
requirements.txt
Normal file
@@ -0,0 +1,10 @@
fastapi==0.104.1
uvicorn[standard]==0.24.0
sqlalchemy==2.0.23
pydantic==2.5.0
python-multipart==0.0.6
jinja2==3.1.2
aiofiles==23.2.1
Pillow==10.1.0
httpx==0.25.2
openpyxl==3.1.2
23
sample_roster.csv
Normal file
@@ -0,0 +1,23 @@
unit_id,device_type,unit_type,deployed,retired,note,project_id,location,address,coordinates,last_calibrated,next_calibration_due,deployed_with_modem_id,ip_address,phone_number,hardware_model,slm_host,slm_tcp_port,slm_ftp_port,slm_model,slm_serial_number,slm_frequency_weighting,slm_time_weighting,slm_measurement_range
# ============================================
# SEISMOGRAPHS (device_type=seismograph)
# ============================================
BE1234,seismograph,series3,true,false,Primary unit at main site,PROJ-001,San Francisco CA,123 Market St,37.7749;-122.4194,2025-06-15,2026-06-15,MDM001,,,,,,,,,,,
BE5678,seismograph,series3,true,false,Backup sensor,PROJ-001,Los Angeles CA,456 Sunset Blvd,34.0522;-118.2437,2025-03-01,2026-03-01,MDM002,,,,,,,,,,,
BE9012,seismograph,series4,false,false,In maintenance - needs calibration,PROJ-002,Workshop,789 Industrial Way,,,,,,,,,,,,,,
BE3456,seismograph,series3,true,false,,PROJ-003,New York NY,101 Broadway,40.7128;-74.0060,2025-01-10,2026-01-10,,,,,,,,,,,
BE7890,seismograph,series3,false,true,Decommissioned 2024,,Storage,Warehouse B,,,,,,,,,,,,,,,
# ============================================
# MODEMS (device_type=modem)
# ============================================
MDM001,modem,,true,false,Cradlepoint at SF site,PROJ-001,San Francisco CA,123 Market St,37.7749;-122.4194,,,,,192.168.1.100,+1-555-0101,IBR900,,,,,,,
MDM002,modem,,true,false,Sierra Wireless at LA site,PROJ-001,Los Angeles CA,456 Sunset Blvd,34.0522;-118.2437,,,,,10.0.0.50,+1-555-0102,RV55,,,,,,,
MDM003,modem,,false,false,Spare modem in storage,,,Storage,Warehouse A,,,,,,+1-555-0103,IBR600,,,,,,,
MDM004,modem,,true,false,NYC backup modem,PROJ-003,New York NY,101 Broadway,40.7128;-74.0060,,,,,172.16.0.25,+1-555-0104,IBR1700,,,,,,,
# ============================================
# SOUND LEVEL METERS (device_type=slm)
# ============================================
SLM001,slm,,true,false,NL-43 at construction site A,PROJ-004,Downtown Site,500 Main St,40.7589;-73.9851,,,,,,,,192.168.10.101,2255,21,NL-43,12345678,A,F,30-130 dB
SLM002,slm,,true,false,NL-43 at construction site B,PROJ-004,Midtown Site,600 Park Ave,40.7614;-73.9776,,,MDM004,,,,,192.168.10.102,2255,21,NL-43,12345679,A,S,30-130 dB
SLM003,slm,,false,false,NL-53 spare unit,,,Storage,Warehouse A,,,,,,,,,,,NL-53,98765432,C,F,25-138 dB
SLM004,slm,,true,false,NL-43 nighttime monitoring,PROJ-005,Residential Area,200 Quiet Lane,40.7484;-73.9857,,,,,,,,10.0.5.50,2255,21,NL-43,11112222,A,S,30-130 dB
20
scripts/README.md
Normal file
@@ -0,0 +1,20 @@

# Terra-View Utility Scripts

This directory contains utility scripts for database operations, testing, and maintenance.

## Scripts

### create_test_db.py
Generate a realistic test database with sample data.

Usage: `python scripts/create_test_db.py`

### rename_unit.py
Rename a unit ID across all tables.

Usage: `python scripts/rename_unit.py <old_id> <new_id>`

### sync_slms_to_slmm.py
Manually sync all SLM devices from Terra-View to SLMM.

Usage: `python scripts/sync_slms_to_slmm.py`