125 Commits

Author SHA1 Message Date
73a6ff4d20 feat: Refactor project creation and management to support modular project types
- Updated project creation modal to allow selection of optional modules (Sound and Vibration Monitoring).
- Modified project dashboard and header to display active modules and provide options to add/remove them.
- Enhanced project detail view to dynamically adjust UI based on enabled modules.
- Implemented a new migration script to create a `project_modules` table and seed it based on existing project types.
- Adjusted form submissions to handle module selections and ensure proper API interactions for module management.
2026-03-30 21:44:15 +00:00
184f0ddd13 doc: update to 0.9.3 2026-03-28 01:53:13 +00:00
e7bd09418b fix: update session calendar layout and improve session labels for clarity 2026-03-28 01:44:59 +00:00
27eeb0fae6 fix: adds timeline bars to SLM calendar view, more concise and legible. 2026-03-27 22:44:53 +00:00
192e15f238 Merge pull request 'cleanup/project-locations-active-assignment' (#42) from cleanup/project-locations-active-assignment into dev
Reviewed-on: #42
2026-03-27 18:20:07 -04:00
49bc625c1a feat: add report_date to monitoring sessions and update related functionality
fix: chart properly renders centered
2026-03-27 22:18:50 +00:00
95fedca8c9 feat: monitoring session improvements — UTC fix, period hours, calendar, session detail
- Fix UTC display bug: upload_nrl_data now wraps RNH datetimes with
  local_to_utc() before storing, matching patch_session behavior.
  Period type and label are derived from local time before conversion.

- Add period_start_hour / period_end_hour to MonitoringSession model
  (nullable integers 0–23). Migration: migrate_add_session_period_hours.py

- Update patch_session to accept and store period_start_hour / period_end_hour.
  Response now includes both fields.

- Update get_project_sessions to compute "Effective: M/D H:MM AM → M/D H:MM AM"
  string from period hours and pass it to session_list.html.

- Rework period edit UI in session_list.html: clicking the period badge now
  opens an inline editor with period type selector + start/end hour inputs.
  Selecting a period type pre-fills default hours (Day: 7–19, Night: 19–7).

- Wire period hours into _build_location_data_from_sessions: uses
  period_start/end_hour when set, falls back to hardcoded defaults.

- RND viewer: inject SESSION_PERIOD_START/END_HOUR from template context.
  renderTable() dims rows outside the period window (opacity-40) with a
  tooltip; shows "(N in period window)" in the row count.

- New session detail page at /api/projects/{id}/sessions/{id}/detail:
  shows breadcrumb, files list with View/Download/Report actions,
  editable session info form (label, period type, hours, times).

- Add local_datetime_input Jinja filter for datetime-local input values.

- Monthly calendar view: new get_sessions_calendar endpoint returns
  sessions_calendar.html partial; added below sessions list in detail.html.
  Color-coded per NRL with legend, HTMX prev/next navigation, session dots
  link to detail page.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 21:52:52 +00:00
e8e155556a refactor: unify active assignment checks and add project-type guards
- Replace all UnitAssignment "active" checks from `status == "active"` to
  `assigned_until == None` in both project_locations.py and projects.py.
  This aligns with the canonical definition: active = no end date set.
  (status field is still set in sync, but is no longer the query criterion)

- Add `_require_sound_project()` helper to both routers and call it at the
  top of every sound-monitoring-specific endpoint (FTP browser, FTP downloads,
  RND file viewer, all Excel report endpoints, combined report wizard,
  upload-all, NRL live status, NRL data upload). Vibration projects hitting
  these endpoints now receive a clear 400 instead of silently failing or
  returning empty results.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 21:12:38 +00:00
33e962e73d feat: add edit session times functionality with modal for monitoring sessions 2026-03-27 20:54:04 +00:00
ac48fb2977 feat: add swap functionality for unit and modem assignments in vibration monitoring locations 2026-03-27 20:33:13 +00:00
3c4b81cf78 docs: updates to 0.9.2 2026-03-27 17:01:43 +00:00
d135727ebd feat: add in-line quick editing for seismograph details (cal date, notes, deployment status) 2026-03-26 06:10:03 +00:00
64d4423308 feat: add allocated status and project allocation to unit management
- Updated dashboard to display allocated units alongside deployed and benched units.
- Introduced a quick-info modal for units, showing detailed information including calibration status, project allocation, and upcoming jobs.
- Enhanced fleet calendar with a new quick-info modal for units, allowing users to view unit details without navigating away.
- Modified devices table to include allocated status and visual indicators for allocated units.
- Added allocated filter option in the roster view for better unit management.
- Implemented backend migration to add 'allocated' and 'allocated_to_project_id' columns to the roster table.
- Updated unit detail view to reflect allocated status and allow for project allocation input.
2026-03-26 05:05:34 +00:00
4f71d528ce feat: shows cal date in reservation planner after a unit is selected for a location. 2026-03-25 19:10:13 +00:00
4f56dea4f3 feat: adds deployment records for seismographs. 2026-03-25 17:36:51 +00:00
57a85f565b feat: add location_slots to job_reservations for full slot persistence and update version to 0.9.1
Fix: modems no longer show as "missing", cleaning up the dashboard.
2026-03-24 01:13:29 +00:00
serversdwn
e6555ba924 feat: Update series4 heartbeat to accept new source_id field with fallback to legacy source 2026-03-20 17:13:21 -04:00
8694282dd0 Update version to 0.9.0 with Job Planner 2026-03-20 04:48:22 +00:00
bc02dc9564 feat: Enhance project and reservation management
- Updated reservation list to display estimated units and improved count display.
- Added "Upcoming" status to project dashboard and header with corresponding styles.
- Implemented a dropdown for quick status updates in project header.
- Modified project list compact view to reflect new status labels.
- Updated project overview to include a tab for upcoming projects.
- Added migration script to introduce estimated_units column in job_reservations table.
2026-03-19 22:52:35 +00:00
0d01715f81 Merge pull request 'add reservation mechanic to dev' (#35) from reservation into dev
Reviewed-on: #35
2026-03-18 18:19:32 -04:00
b3ec249c5e feat: add location names to reservation slots and promote-to-project
- Each monitoring location slot can now have a named location (e.g. "North Gate")
- Location names and slot order are persisted and restored in the planner
- Location names display in the expanded reservation card view
- Added "Promote to Project" button that converts a reservation into a
  tracked project with monitoring locations and unit assignments pre-filled

Requires DB migration on prod:
  ALTER TABLE job_reservation_units ADD COLUMN location_name TEXT;
  ALTER TABLE job_reservation_units ADD COLUMN slot_index INTEGER;
2026-03-18 22:15:46 +00:00
b6e74258f1 Merge dev into reservation branch to pick up v0.8.0 watcher manager changes 2026-03-18 20:13:19 +00:00
1a87ff13c9 doc: update docs for 0.8.0 2026-03-18 19:59:34 +00:00
22c62c0729 fix: watcher manager now displays "heard from" status instead of unit status. Refreshes every 60 mins; if not heard from in >60 mins, displays missing. 2026-03-18 19:45:39 +00:00
0f47b69c92 Merge pull request 'merge watcher into dev' (#33) from watcher into dev
Reviewed-on: #33
2026-03-18 14:54:33 -04:00
76667454b3 feat: enhance agent status display with new classes and live refresh functionality 2026-03-18 18:51:54 +00:00
0e3f512203 Feat: expands project reservation system.
- Reservation list view
- Expandable project cards
2026-03-15 05:25:23 +00:00
serversdwn
15d962ba42 feat: watcher agent management system implemented. 2026-03-13 17:38:43 -04:00
e4d1f0d684 feat: start build of listed reservation system 2026-03-13 21:37:06 +00:00
serversdwn
b571dc29bc chore: make rebuild script executable 2026-03-13 17:34:32 +00:00
serversdwn
e2c841d5d7 doc: update readme v0.7.1 2026-03-12 22:41:47 +00:00
cc94493331 Merge pull request 'merge v0.7.1 dev into main.' (#31) from dev into main
Reviewed-on: #31
2026-03-12 18:40:15 -04:00
serversdwn
5a5426cceb v0.7.1 - Add out for cal status, starting to work on reservation mode, fixed a big brain fart too. 2026-03-12 22:38:22 +00:00
serversdwn
66eddd6fe2 chore: db cleanups and migration script fixes. 2026-03-12 22:09:57 +00:00
serversdwn
c77794787c remove override 2026-03-12 21:41:30 +00:00
61c84bc71d fix: merge conflict fixed 2026-03-12 21:37:09 +00:00
fbf7f2a65d chore: clean up .gitignore 2026-03-12 21:22:51 +00:00
serversdwn
202fcaf91c Merge branch 'dev' of ssh://10.0.0.2:2222/serversdown/terra-view into dev 2026-03-12 20:25:02 +00:00
serversdwn
3a411d0a89 chore: docker and git stuff 2026-03-12 20:24:01 +00:00
0c2186f5d8 feat: reservation modal now usable. 2026-03-12 20:10:42 +00:00
c138e8c6a0 feat: add new "out for cal" status for units currently being calibrated.
-retire unit button changed to be more dramatic... lol
2026-03-12 17:59:42 +00:00
1dd396acd8 Merge pull request 'Update to v0.7.0' (#30) from dev into main
Reviewed-on: #30
2026-03-08 03:13:17 -04:00
e89a04f58c fix: SLM report line graph border added, combined report wizard spacing fix. 2026-03-07 07:16:10 +00:00
e4ef065db8 Version bump to v0.7.0.
Docs: Update readme/changelog for 0.7.0
2026-03-07 01:39:19 +00:00
86010de60c Fix: combined report generation formatting fixed and cleaned up. (I think it's good now?) 2026-03-07 01:32:49 +00:00
f89f04cd6f feat: support for daytime monitoring data; combined report generation now compatible with mixed day and night types. 2026-03-07 00:16:58 +00:00
67a2faa2d3 fix: separate days are now in separate .xlsx files, NRLs still 1 per sheet.
add: rebuild script for prod.

fix: Improved data parsing, now filters out unneeded Lp files and .xlsx files.
2026-03-06 23:37:24 +00:00
14856e61ef Feat: Full combined report now working properly. Lotsa stuff fixed. 2026-03-06 22:32:54 +00:00
2b69518b33 fix: add slm start time grace_minutes=15 grace period to include data starting at start time. 2026-03-05 22:48:21 +00:00
6070d03e83 fix: nl32 data date now reads from start_time. 2026-03-05 22:28:10 +00:00
240552751c feat: enhance mass upload parsing, no longer imports tons of unneeded Lp Files. 2026-03-05 22:22:19 +00:00
015ce0a254 feat: add data collection mode to projects with UI updates and migration script 2026-03-05 21:50:41 +00:00
ef8c046f31 feat: add slm model schemas, please run migration on prod db
Feat: add complete combined sound report creation tool (wizard), add new slm schema for each model

feat: update project header link for combined report wizard

feat: add migration script to backfill device_model in monitoring_sessions

feat: implement combined report preview template with spreadsheet functionality

feat: create combined report wizard template for report generation.
2026-03-05 20:43:22 +00:00
3637cf5af8 Feat: Chart preview function now working.
fix: chart formatting and styling tweaks to match typical reports.
2026-03-05 06:56:44 +00:00
7fde14d882 feat: add support for nl32 data in webviewer and report generator. 2026-03-05 04:19:34 +00:00
bd3d937a82 feat: enhance project data handling with new Jinja filters and update UI labels for clarity 2026-02-25 21:41:51 +00:00
291fa8e862 feat: Manual sound data uploads; standalone SLM type added (no-modem mode); smart uploading with fuzzy name matching enabled. 2026-02-25 00:43:47 +00:00
8e292b1aca add: Vibration location detail template 2026-02-24 20:06:55 +00:00
7516bbea70 feat: add manual SD card data upload for offline NRLs; rename RecordingSession to MonitoringSession
- Add POST /api/projects/{project_id}/nrl/{location_id}/upload-data endpoint
  accepting a ZIP or multi-file select of .rnd/.rnh files from an SD card.
  Parses .rnh metadata for session start/stop times, serial number, and store
  name. Creates a MonitoringSession (no unit assignment required) and DataFile
  records for each measurement file.

- Add Upload Data button and collapsible upload panel to the NRL detail Data
  Files tab, with inline success/error feedback and automatic file list refresh
  via HTMX after import.

- Rename RecordingSession -> MonitoringSession throughout the codebase
  (models.py, projects.py, project_locations.py, scheduler.py, roster_rename.py,
  main.py, init_projects_db.py, scripts/rename_unit.py). DB table renamed from
  recording_sessions to monitoring_sessions; old indexes dropped and recreated.

- Update all template UI copy from Recording Sessions to Monitoring Sessions
  (nrl_detail, projects/detail, session_list, schedule_oneoff, roster).

- Add backend/migrate_rename_recording_to_monitoring_sessions.py for applying
  the table rename on production databases before deploying this build.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 19:54:40 +00:00
da4e5f66c5 chore: add dev env specifics to .gitignore 2026-02-23 17:37:50 +00:00
dae2595303 Chore: still cleaning up this crap with gitignore 2026-02-23 17:37:12 +00:00
0c4e7aa5e6 chore: remove old backup metadata files 2026-02-23 17:34:44 +00:00
229499ccf6 chore: ignored files removed from git tracking. 2026-02-23 08:42:16 +00:00
fdc4adeaee chore: ignored files removed from git. 2026-02-23 08:42:11 +00:00
b3bf91880a chore: dev-data vol added to gitignore 2026-02-23 08:39:22 +00:00
17b3f91dfc fix: dev server separated from prod deployment; now uses an override via Docker. 2026-02-23 08:15:04 +00:00
6c1d0bc467 add: vibration projects no longer show SLM project tabs. 2026-02-23 08:13:51 +00:00
serversdwn
abd059983f fix: slm modal now shows bench/deploy 2026-02-20 02:20:51 +00:00
serversdwn
0f17841218 feat: enhance project management by canceling pending actions for archived and on_hold projects 2026-02-19 18:57:59 +00:00
serversdwn
65362bab21 feat: implement project status management with 'on_hold' state and associated UI updates
-feat: ability to hard delete projects, plus a soft delete with auto-pruning.
2026-02-19 15:23:02 +00:00
serversdwn
dc77a362ce fix: add TCP_IDLE_TTL and TCP_MAX_AGE environment variables for SLMM service 2026-02-19 01:25:07 +00:00
serversdwn
28942600ab fix: Auto-downloaded files now show up in project_files data. 2026-02-18 19:51:44 +00:00
serversdwn
80861997af Fix: removed duplicate download following scheduled stop. 2026-02-18 06:44:04 +00:00
b15d434fce Merge pull request 'version bump 0.6.1' (#28) from dev into main
Reviewed-on: #28
2026-02-15 23:47:22 -05:00
serversdwn
70ef43de11 version bump to 0.6.1 2026-02-16 04:46:09 +00:00
7b4e12c127 Merge pull request 'merge dev 0.6.1 to main.' (#27) from dev into main
Reviewed-on: #27
2026-02-15 23:44:19 -05:00
serversdwn
24473c9ca3 chore: version bump to 0.6.0 2026-02-16 04:30:50 +00:00
serversdwn
caabfd0c42 fix: One-off scheduler time now set in local time rather than UTC 2026-02-13 17:02:00 +00:00
serversdwn
ebe60d2b7d feat: enhance roster unit management with bidirectional pairing sync
fix: one-off scheduler
2026-02-11 20:13:27 +00:00
serversdwn
842e9d6f61 feat: add support for one-off recording schedules with start and end datetime 2026-02-10 07:08:03 +00:00
742a98a8ed Merge pull request 'version bump to 0.6' (#26) from dev into main
Reviewed-on: #26
2026-02-06 16:42:06 -05:00
3b29c4d645 Merge branch 'main' into dev 2026-02-06 16:41:34 -05:00
serversdwn
63d9c59873 doc/chore: v0.6.0 version bump 2026-02-06 21:25:49 +00:00
serversdwn
794bfc00dc Merge branch 'dev' of ssh://10.0.0.2:2222/serversdown/terra-view into dev 2026-02-06 21:19:51 +00:00
serversdwn
89662d2fa5 Change: user sets date of previous calibration, not upcoming expiry dates.
- seismograph list page enhanced with better visibility, filtering, sorting, and color-coded calibration dates.
2026-02-06 21:17:14 +00:00
serversdwn
eb0a99796d add: Calendar and reservation mode implemented. 2026-02-06 20:40:31 +00:00
b47e69e609 Merge pull request 'Merge dev v0.5.1 before 0.6 update with calendar.' (#25) from dev into main
Reviewed-on: #25
2026-02-06 14:56:12 -05:00
1cb25b6c17 Merge branch 'main' into dev 2026-02-06 14:55:54 -05:00
serversdwn
e515bff1a9 fix: tab state persists in URL hash. Settings save no longer reloads the page. Scheduler management now cascades to individual events. 2026-02-04 18:12:18 +00:00
serversdwn
f296806fd1 add: link to modem login page in unit-detail page 2026-02-03 23:10:23 +00:00
serversdwn
24da5ab79f add: Modem model # is now its own config, allowing for different options on different model #s 2026-02-02 21:15:27 +00:00
serversdwn
305540f564 fix: SLM modal now only contains the correct fields. IP address is passed via modem pairing.
Add: Fuzzy-search modem pairing for SLMs
2026-02-01 20:39:34 +00:00
serversdwn
639b485c28 Fix: incorrect mobile type display in roster and device tables. 2026-02-01 07:21:34 +00:00
serversdwn
d78bafb76e fix: improved 24hr cycle via scheduler. Should help prevent issues with DLs. 2026-01-31 22:31:34 +00:00
serversdwn
8373cff10d added: Pairing options now available from the modem page. 2026-01-29 23:04:18 +00:00
serversdwn
4957a08198 fix: improved pair status sharing. 2026-01-29 16:37:59 +00:00
serversdwn
05482bd903 Add:
- pair_devices.html template for device pairing interface
- SLMM device control lock prevents flooding the NL-43.
Fix:
- Polling intervals for SLMM.
- modem view is now a list
- device pairing much improved.
- various other tweaks throughout the UI.
- SLMM Scheduled downloads fixed.
2026-01-29 07:50:13 +00:00
serversdwn
5ee6f5eb28 feat: Enhance dashboard with filtering options and sync SLM status
- Added a new filtering system to the dashboard for device types and statuses.
- Implemented asynchronous SLM status synchronization to update the Emitter table.
- Updated the status snapshot endpoint to sync SLM status before generating the snapshot.
- Refactored the dashboard HTML to include filter controls and JavaScript for managing filter state.
- Improved the unit detail page to handle modem associations and cascade updates to paired devices.
- Removed redundant code related to syncing start time for measuring devices.
2026-01-28 20:02:10 +00:00
7ce0f6115d Merge pull request 'Update main to 0.5.1. See changelog.' (#18) from dev into main
## [0.5.1] - 2026-01-27

### Added
- **Dashboard Schedule View**: Today's scheduled actions now display directly on the main dashboard
  - New "Today's Actions" panel showing upcoming and past scheduled events
  - Schedule list partial for project-specific schedule views
  - API endpoint for fetching today's schedule data
- **New Branding Assets**: Complete logo rework for Terra-View
  - New Terra-View logos for light and dark themes
  - Retina-ready (@2x) logo variants
  - Updated favicons (16px and 32px)
  - Refreshed PWA icons (72px through 512px)

### Changed
- **Dashboard Layout**: Reorganized to include schedule information panel
- **Base Template**: Updated to use new Terra-View logos with theme-aware switching
2026-01-27 22:29:56 -05:00
serversdwn
6492fdff82 BIG update: Update to 0.5.1. Added:
-Project management
-Modem Management
-Modem/unit pairing

and more
2026-01-28 03:27:50 +00:00
serversdwn
44d7841852 BIG update: Update to 0.5.1. Added:
-Project management
-Modem Management
-Modem/unit pairing

and more
2026-01-28 03:26:52 +00:00
serversdwn
38c600aca3 Feat: schedule added to dashboard view; logo rework 2026-01-23 19:07:42 +00:00
serversdwn
eeda94926f doc: update to 0.4.4 (again) 2026-01-23 08:46:46 +00:00
serversdwn
57be9bf1f1 Docs: update to 0.4.4 2026-01-23 08:26:02 +00:00
serversdwn
8431784708 feat: Refactor template handling, improve scheduler functions, and add timezone utilities
- Moved Jinja2 template setup to a shared configuration file (templates_config.py) for consistent usage across routers.
- Introduced timezone utilities in a new module (timezone.py) to handle UTC to local time conversions and formatting.
- Updated all relevant routers to use the new shared template configuration and timezone filters.
- Enhanced templates to utilize local time formatting for various datetime fields, improving user experience with timezone awareness.
2026-01-23 06:05:39 +00:00
serversdwn
c771a86675 Feat/Fix: Scheduler actions more strictly defined. Commands now working. 2026-01-22 20:25:19 +00:00
serversdwn
65ea0920db Feat: Scheduler implemented, WIP 2026-01-21 23:11:58 +00:00
serversdwn
1f3fa7a718 feat: Add report templates API for CRUD operations and implement SLM settings modal
- Implemented a new API router for managing report templates, including endpoints for listing, creating, retrieving, updating, and deleting templates.
- Added a new HTML partial for a unified SLM settings modal, allowing users to configure SLM settings with dynamic modem selection and FTP credentials.
- Created a report preview page with an editable data table using jspreadsheet, enabling users to modify report details and download the report as an Excel file.
2026-01-20 21:43:50 +00:00
serversdwn
a9c9b1fd48 feat: SLM project report generator added. WIP 2026-01-20 08:46:06 +00:00
serversdwn
4c213c96ee Feat: rnd file viewer built 2026-01-19 21:49:10 +00:00
serversdwn
ff38b74548 feat: added collapsible view in project data files. 2026-01-19 21:31:22 +00:00
serversdwn
c8a030a3ba fixed project view title appearing as JSON string 2026-01-18 07:48:10 +00:00
serversdwn
d8a8330427 chore: docs/scripts cleaned up 2026-01-16 19:07:08 +00:00
serversdwn
1ef0557ccb feat: standardize device type for Sound Level Meters (SLM)
- Updated all instances of device_type from "sound_level_meter" to "slm" across the codebase.
- Enhanced documentation to reflect the new device type standardization.
- Added migration script to convert legacy device types in the database.
- Updated relevant API endpoints, models, and frontend templates to use the new device type.
- Ensured backward compatibility by deprecating the old device type without data loss.
2026-01-16 18:31:27 +00:00
serversdwn
6c7ce5aad0 Project data management phase 1. Files can be downloaded to server and downloaded locally. 2026-01-16 07:39:22 +00:00
serversdwn
54754e2279 chore: ignore aider cache 2026-01-14 22:20:05 +00:00
serversdwn
8787a2dbb8 doc update for 0.4.3 2026-01-14 22:12:48 +00:00
serversdwn
7971092509 update FTP browser, enable local folder downloads, reimplement timer, enhance project view 2026-01-14 21:59:22 +00:00
serversdwn
d349af9444 Add Sound Level Meter support to roster management
- Updated roster.html to include a new option for Sound Level Meter in the device type selection.
- Added specific fields for Sound Level Meter information, including model, host/IP address, TCP and FTP ports, serial number, frequency weighting, and time weighting.
- Enhanced JavaScript to handle the visibility and state of Sound Level Meter fields based on the selected device type.
- Modified the unit editing functionality to populate Sound Level Meter fields with existing data when editing a unit.
- Updated settings.html to change the deployment status display from badges to radio buttons for better user interaction.
- Adjusted the toggleDeployed function to accept the new state directly instead of the current state.
- Changed the edit button in unit_detail.html to redirect to the roster edit page with the appropriate unit ID.
2026-01-14 21:59:13 +00:00
serversdwn
be83cb3fe7 feat: Add Rename Unit functionality and improve navigation in SLM dashboard
- Implemented a modal for renaming units with validation and confirmation prompts.
- Added JavaScript functions to handle opening, closing, and submitting the rename unit form.
- Enhanced the back navigation in the SLM detail page to check referrer history.
- Updated breadcrumb navigation in the legacy dashboard to accommodate NRL locations.
- Improved the sound level meters page with a more informative header and device list.
- Introduced a live measurement chart with WebSocket support for real-time data streaming.
- Added functionality to manage active devices and projects with auto-refresh capabilities.
2026-01-14 01:44:30 +00:00
serversdwn
e9216b9abc SLM return to project button added. 2026-01-13 18:57:31 +00:00
serversdwn
d93785c230 Add schedule and unit list templates for project management
- Created `schedule_list.html` to display scheduled actions with execution status, location, and timestamps.
- Implemented buttons for executing and canceling schedules, along with a details view placeholder.
- Created `unit_list.html` to show assigned units with their status, location, model, and session/file counts.
- Added conditional rendering for active sessions and links to view unit and location details.
2026-01-13 08:37:02 +00:00
serversdwn
98ee9d7cea Add file and session lists to project dashboard
- Created a new template for displaying a list of data files in `file_list.html`, including file details and actions for downloading and viewing file details.
- Added a new template for displaying recording sessions in `session_list.html`, featuring session status, details, and action buttons for stopping recordings and viewing session details.
- Introduced a legacy dashboard template `slm_legacy_dashboard.html` for sound level meter control, including a live view panel and configuration modal with dynamic content loading.
2026-01-13 01:32:03 +00:00
serversdwn
04c66bdf9c Refactor project dashboard and device list templates; add modals for editing projects and locations
- Updated project_dashboard.html to conditionally display NRLs or Locations based on project type, and added a button to open a modal for adding locations.
- Enhanced slm_device_list.html with a configuration button for each unit, allowing users to open a modal for device configuration.
- Modified detail.html to include an edit project modal with a form for updating project details, including client name, status, and dates.
- Improved sound_level_meters.html by restructuring the layout and adding a configuration modal for SLM devices.
- Implemented JavaScript functions for handling modal interactions, including opening, closing, and submitting forms for project and location management.
2026-01-12 23:07:25 +00:00
serversdwn
8a5fadb5df Move SLM control center groundwork onto dev 2026-01-12 18:07:26 +00:00
247 changed files with 43960 additions and 7524 deletions


@@ -1,3 +1,5 @@
docker-compose.override.yml
# Python cache / compiled
__pycache__
*.pyc
@@ -28,6 +30,7 @@ ENV/
# Runtime data (mounted volumes)
data/
data-dev/
# Editors / OS junk
.vscode/

.gitignore (vendored)

@@ -1,3 +1,17 @@
# Terra-View Specifics
# Dev build counter (local only, never commit)
build_number.txt
docker-compose.override.yml
# SQLite database files
*.db
*.db-journal
data/
data-dev/
.aider*
.aider*
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
@@ -206,8 +220,14 @@ marimo/_static/
marimo/_lsp/
__marimo__/
<<<<<<< HEAD
# Seismo Fleet Manager
# SQLite database files
*.db
*.db-journal
data/
/data/
/data-dev/
.aider*
.aider*
=======
>>>>>>> 0c2186f5d89d948b0357d674c0773a67a67d8027


@@ -1,10 +1,303 @@
# Changelog
All notable changes to Seismo Fleet Manager will be documented in this file.
All notable changes to Terra-View will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.9.3] - 2026-03-28
### Added
- **Monitoring Session Detail Page**: New dedicated page for each session showing session info, data files (with View/Report/Download actions), an editable session panel, and report actions.
- **Session Calendar with Gantt Bars**: Monthly calendar view below the session list, showing each session as a Gantt-style bar. The dim bar represents the full device on/off window; the bright bar highlights the effective recording window. Bars extend edge-to-edge across day cells for sessions spanning midnight.
- **Configurable Period Windows**: Sessions now store `period_start_hour` and `period_end_hour` to define the exact hours that count toward reports, replacing hardcoded day/night defaults. The session edit panel shows a "Required Recording Window" section with a live preview (e.g. "7:00 AM → 7:00 PM") and a Defaults button that auto-fills based on period type. The window logic is sketched after the Migration Notes below.
- **Report Date Field**: Sessions can now store an explicit `report_date` to override the automatic target-date heuristic — useful when a device ran across multiple days but only one specific day's data is needed for the report.
- **Effective Window on Session Info**: Session detail and session cards now show an "Effective" row displaying the computed recording window dates and times in local time.
- **Vibration Project Redesign**: Vibration project detail page is stripped back to project details and monitoring locations only. Each location supports assigning a seismograph and optional modem. Sound-specific tabs (Schedules, Sessions, Data Files, Assigned Units) are hidden for vibration projects.
- **Modem Assignment on Locations**: Vibration monitoring locations now support an optional paired modem alongside the seismograph. The swap endpoint handles both assignments atomically, updating bidirectional pairing fields on both units.
- **Available Modems Endpoint**: New `GET /api/projects/{project_id}/available-modems` endpoint returning all deployed, non-retired modems for use in assignment dropdowns.
### Fixed
- **Active Assignment Checks**: Unified all `UnitAssignment` "active" checks from `status == "active"` to `assigned_until IS NULL` throughout `project_locations.py` and `projects.py` for consistency with the canonical active definition.
### Changed
- **Sound-Only Endpoint Guards**: FTP browser, RND viewer, Excel report generation, combined report wizard, and data upload endpoints now return HTTP 400 if called on a non-sound-monitoring project.
### Migration Notes
Run on each database before deploying:
```bash
docker compose exec terra-view python3 backend/migrate_add_session_period_hours.py
docker compose exec terra-view python3 backend/migrate_add_session_report_date.py
```
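A minimal sketch of how these period windows could drive the in-window checks, assuming the nullable `period_start_hour` / `period_end_hour` columns and the Day 7–19 / Night 19–7 defaults from the session editor (helper names are illustrative):

```python
from datetime import datetime

# Defaults used when a session has no explicit hours set
# (Day: 7-19, Night: 19-7, matching the editor's Defaults button).
DEFAULT_HOURS = {"day": (7, 19), "night": (19, 7)}

def effective_hours(period_type, start_hour=None, end_hour=None):
    """Prefer the session's stored hours; fall back to period-type defaults."""
    if start_hour is not None and end_hour is not None:
        return start_hour, end_hour
    return DEFAULT_HOURS.get(period_type, (7, 19))

def in_period_window(ts: datetime, start_hour: int, end_hour: int) -> bool:
    """True if a local timestamp counts toward the report window.
    A window that wraps midnight (e.g. 19 -> 7) spans two days."""
    if start_hour < end_hour:          # same-day window, e.g. 7-19
        return start_hour <= ts.hour < end_hour
    return ts.hour >= start_hour or ts.hour < end_hour
```

Rows failing this check are what the RND viewer dims (opacity-40) and excludes from the "(N in period window)" count.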
---
## [0.9.2] - 2026-03-27
### Added
- **Deployment Records**: Seismographs now track a full deployment history (location, project, dates). Each deployment is logged on the unit detail page with start/end dates, and the fleet calendar service uses this history for availability calculations.
- **Allocated Unit Status**: New `allocated` status for units reserved for an upcoming job but not yet deployed. Allocated units appear in the dashboard summary, roster filters, and devices table with visual indicators.
- **Project Allocation**: Units can be linked to a project via `allocated_to_project_id`. Allocation is shown on the unit detail page and in a new quick-info modal accessible from the fleet calendar and roster.
- **Quick-Info Unit Modal**: Click any unit in the fleet calendar or roster to open a modal showing cal status, project allocation, upcoming jobs, and deployment state — without leaving the page.
- **Cal Date in Planner**: When a unit is selected for a monitoring location slot in the Job Planner, its calibration expiry date is now shown inline so you can spot near-expiry units before committing.
- **Inline Seismograph Editing**: Unit rows in the seismograph dashboard now support inline editing of cal date, notes, and deployment status without navigating to the full detail page.
### Migration Notes
Run on each database before deploying:
```bash
docker compose exec terra-view python3 backend/migrate_add_allocated.py
docker compose exec terra-view python3 backend/migrate_add_deployment_records.py
```
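The commit behind this release notes that the allocated migration adds `allocated` and `allocated_to_project_id` columns to the roster table; a rough sketch of what such a script might do (the real one may differ in details):

```python
import sqlite3

conn = sqlite3.connect("data/seismo_fleet.db")
existing = {row[1] for row in conn.execute("PRAGMA table_info(roster)")}

# Add columns only if missing, so the script is safe to re-run.
if "allocated" not in existing:
    conn.execute("ALTER TABLE roster ADD COLUMN allocated INTEGER DEFAULT 0")
if "allocated_to_project_id" not in existing:
    conn.execute("ALTER TABLE roster ADD COLUMN allocated_to_project_id TEXT")

conn.commit()
conn.close()
```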
---
## [0.9.1] - 2026-03-20
### Fixed
- **Location slots not persisting**: Empty monitoring location slots (no unit assigned yet) were lost on save/reload. Added `location_slots` JSON column to `job_reservations` to store the full slot list including empty slots. A sketch of the slot persistence appears after the Migration Notes below.
- **Modems in Recent Alerts**: Modems no longer appear in the dashboard Recent Alerts panel — alerts are for seismographs and SLMs only. Modem status is still tracked internally via paired device inheritance.
- **Series 4 heartbeat `source_id`**: Updated heartbeat endpoint to accept the new `source_id` field from Series 4 units with fallback to the legacy field for backwards compatibility.
### Migration Notes
Run on each database before deploying:
```bash
docker compose exec terra-view python3 backend/migrate_add_location_slots.py
```
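A sketch of the slot-persistence fix described above, assuming the slot list is serialized into the new `location_slots` JSON column (helper names and the exact slot shape are illustrative):

```python
import json

def save_slots(conn, reservation_id, slots):
    """Persist the full ordered slot list, including empty slots,
    so unassigned locations survive a save/reload."""
    conn.execute(
        "UPDATE job_reservations SET location_slots = ? WHERE id = ?",
        (json.dumps(slots), reservation_id),
    )

def load_slots(row):
    """Restore slots; older rows without the column yield an empty list."""
    return json.loads(row["location_slots"]) if row["location_slots"] else []

# Example slot shape with one filled and one still-empty slot:
slots = [
    {"location_name": "North Gate", "unit_id": "BE1234"},
    {"location_name": "South Wall", "unit_id": None},  # kept even when empty
]
```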
---
## [0.9.0] - 2026-03-19
### Added
- **Job Planner**: Full redesign of the Fleet Calendar into a two-tab Job Planner / Calendar interface
- **Planner tab**: Create and manage job reservations with name, device type, dates, color, estimated units, and monitoring locations
- **Calendar tab**: 12-month rolling heatmap with colored job bars per day; confirmed jobs solid, planned jobs dashed
- **Monitoring Locations**: Each job has named location slots (filled = unit assigned, empty = needs a unit); progress shown as `2/5` with colored squares that fill as units are assigned
- **Estimated Units**: Separate planning number independent of actual location count; shown prominently on job cards
- **Fleet Summary panel**: Unit counts as clickable filter buttons; unit list shows reservation badges with job name, dates, and color
- **Available Units panel**: Shows units available for the job's date range when assigning
- **Smart color picker**: 18-swatch palette + custom color wheel; new jobs auto-pick a color maximally distant in hue from existing jobs (hue selection sketched after the Migration Notes below)
- **Job card progress**: `est. N · X/Y (Z more)` with filled/empty squares; amber → green when fully assigned
- **Promote to Project**: Promote a planned job to a tracked project directly from the planner form
- **Collapsible job details**: Name, dates, device type, color, project link, and estimated units collapse into a summary header
- **Calendar bar tooltips**: Hover any job bar to see job name and date range
- **Hash-based tab persistence**: `#cal` in URL restores Calendar tab on refresh; device type toggle preserves active tab
- **Auto-scroll to today**: Switching to Calendar tab smooth-scrolls to the current month
- **Upcoming project status**: New `upcoming` status for projects promoted from reservations
- **Job device type**: Reservations carry a device type so they only appear on the correct calendar
- **Project filtering by device type**: Projects only appear on the calendar matching their type (vibration → seismograph, sound → SLM, combined → both)
- **Confirmed/Planned toggles**: Independent show/hide toggles for job bar layers on the calendar
- **Cal expire dots toggle**: Calibration expiry dots off by default, togglable
### Changed
- **Renamed**: "Fleet Calendar" / "Reservation Planner" → **"Job Planner"** throughout UI and sidebar
- **Project status dropdown**: Inline `<select>` in project header for quick status changes
- **"All Projects" tab**: Shows everything except deleted; default view excludes archived/completed
- **Toast notifications**: All `alert()` dialogs replaced with non-blocking toasts (green = success, red = error)
### Migration Notes
Run on each database before deploying:
```bash
docker compose exec terra-view python3 -c "
import sqlite3
conn = sqlite3.connect('/app/data/seismo_fleet.db')
conn.execute('ALTER TABLE job_reservations ADD COLUMN estimated_units INTEGER')
conn.commit()
conn.close()
"
```
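The "maximally distant in hue" auto-pick could be as simple as maximizing circular hue distance; a sketch assuming hues in 0–360 degrees:

```python
def pick_new_hue(existing_hues, candidates=range(0, 360, 5)):
    """Pick the candidate hue whose nearest existing job hue is farthest
    away, using circular distance (350 and 10 are only 20 degrees apart)."""
    def min_dist(h):
        return min(
            (min(abs(h - e), 360 - abs(h - e)) for e in existing_hues),
            default=360,  # first job: any hue works
        )
    return max(candidates, key=min_dist)

# With existing jobs at red (0) and green (120), the pick lands at blue:
print(pick_new_hue([0, 120]))  # -> 240
```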
---
## [0.8.0] - 2026-03-18
### Added
- **Watcher Manager**: New admin page (`/admin/watchers`) for monitoring field watcher agents
- Live status cards per agent showing connectivity, version, IP, last-seen age, and log tail
- Trigger Update button to queue a self-update on the agent's next heartbeat
- Expand/collapse log tail with full-log expand mode
- Live surgical refresh every 30 seconds via `/api/admin/watchers` — no full page reload, open logs stay open
### Changed
- **Watcher status logic**: Agent status now reflects whether Terra-View is hearing from the watcher (ok if seen within 60 minutes, missing otherwise) — previously reflected the worst unit status from the last heartbeat payload, which caused false alarms when units went missing (sketched below)
### Fixed
- **Watcher Manager meta row**: Dark mode background was white due to invalid `dark:bg-slate-850` Tailwind class; corrected to `dark:bg-slate-800`
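A sketch of the new status rule, assuming each agent record carries a `last_seen` UTC timestamp (the 60-minute threshold is from the changelog; names are illustrative):

```python
from datetime import datetime, timedelta, timezone

HEARTBEAT_TTL = timedelta(minutes=60)

def watcher_status(last_seen):
    """ok if Terra-View has heard from the agent within the TTL,
    missing otherwise; independent of unit statuses in the payload."""
    if last_seen is None:
        return "missing"
    age = datetime.now(timezone.utc) - last_seen
    return "ok" if age <= HEARTBEAT_TTL else "missing"
```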
---
## [0.7.1] - 2026-03-12
### Added
- **"Out for Calibration" Unit Status**: New `out_for_cal` status for units currently away for calibration, with visual indicators in the roster, unit list, and seismograph stats panel
- **Reservation Modal**: Fleet calendar reservation modal is now fully functional for creating and managing device reservations
### Changed
- **Retire Unit Button**: Redesigned to be more visually prominent/destructive to reduce accidental clicks
### Fixed
- **Migration Scripts**: Fixed database path references in several migration scripts
- **Docker Compose**: Removed dev override file from the repository; dev environment config kept separate
### Migration Notes
Run the following migration script once per database before deploying:
```bash
python backend/migrate_add_out_for_calibration.py
```
---
## [0.7.0] - 2026-03-07
### Added
- **Project Status Management**: Projects can now be placed `on_hold` or `archived`, with automatic cancellation of pending scheduled actions
- **Hard Delete Projects**: Support for permanently deleting projects, in addition to soft-delete with auto-pruning
- **Vibration Location Detail**: New dedicated template for vibration project location detail views
- **Vibration Project Isolation**: Vibration projects no longer show SLM-specific project tabs
- **Manual SD Card Data Upload**: Upload offline NRL data directly from SD card via ZIP or multi-file select
- Accepts `.rnd`/`.rnh` files; parses `.rnh` metadata for session start/stop times, serial number, and store name
- Creates `MonitoringSession` and `DataFile` records automatically; no unit assignment required
- Upload panel on NRL detail Data Files tab with inline feedback and auto-refresh via HTMX
- **Standalone SLM Type**: New SLM device mode that operates without a modem (direct IP connection)
- **NL32 Data Support**: Report generator and web viewer now support NL32 measurement data format
- **Combined Report Wizard**: Multi-session combined Excel report generation tool
- Wizard UI grouped by location with period type badges (day/night)
- Each selected session produces one `.xlsx` in a ZIP archive
- Period type filtering: day sessions keep the last calendar date (7:00 AM → 6:59 PM); night sessions span both days (7:00 PM → 6:59 AM)
- **Combined Report Preview**: Interactive spreadsheet-style preview before generating combined reports
- **Chart Preview**: Live chart preview in the report generator matching final report styling
- **SLM Model Schemas**: Per-model configuration schemas for NL32, NL43, NL53 devices
- **Data Collection Mode**: Projects now store a data collection mode field with UI controls and migration
### Changed
- **MonitoringSession rename**: `RecordingSession` renamed to `MonitoringSession` throughout codebase; DB table renamed from `recording_sessions` to `monitoring_sessions`
- Migration: `backend/migrate_rename_recording_to_monitoring_sessions.py`
- **Combined Report Split Logic**: Separate days now generate separate `.xlsx` files; NRLs remain one per sheet
- **Mass Upload Parsing**: Smarter file filtering — no longer imports unneeded Lp files or `.xlsx` files
- **SLM Start Time Grace Period**: 15-minute grace window added so data starting at session start time is included (sketched after the Migration Notes below)
- **NL32 Date Parsing**: Date now read from `start_time` field instead of file metadata
- **Project Data Labels**: Improved Jinja filters and UI label clarity for project data views
### Fixed
- **Dev/Prod Separation**: Dev server now uses Docker Compose override; production deployment no longer affected by dev config
- **SLM Modal**: Bench/deploy toggle now correctly shown in SLM unit modal
- **Auto-Downloaded Files**: Files downloaded by scheduler now appear in project file listings
- **Duplicate Download**: Removed duplicate file download that occurred following a scheduled stop
- **SLMM Environment Variables**: `TCP_IDLE_TTL` and `TCP_MAX_AGE` now correctly passed to SLMM service via docker-compose
### Technical Details
- `session_label` and `period_type` stored on `monitoring_sessions` table (migration: `migrate_add_session_period_type.py`)
- `device_model` stored on `monitoring_sessions` table (migration: `migrate_add_session_device_model.py`)
- Upload endpoint: `POST /api/projects/{project_id}/nrl/{location_id}/upload-data`
- ZIP filename format: `{session_label}_{project_name}_report.xlsx` (label first)
### Migration Notes
Run the following migration scripts once per database before deploying:
```bash
python backend/migrate_rename_recording_to_monitoring_sessions.py
python backend/migrate_add_session_period_type.py
python backend/migrate_add_session_device_model.py
```
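For the 15-minute grace window, a minimal sketch of the inclusion test (assuming rows carry parsed datetimes; names are illustrative):

```python
from datetime import datetime, timedelta

def rows_for_session(rows, session_start, session_end, grace_minutes=15):
    """Accept rows from `grace_minutes` before the nominal start, so a
    measurement stamped exactly at the start time is never dropped."""
    cutoff = session_start - timedelta(minutes=grace_minutes)
    return [r for r in rows if cutoff <= r["timestamp"] <= session_end]
```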
---
## [0.6.1] - 2026-02-16
### Added
- **One-Off Recording Schedules**: Support for scheduling single recordings with specific start and end datetimes
- **Bidirectional Pairing Sync**: Pairing a device with a modem now automatically updates both sides, clearing stale pairings when reassigned (sketched at the end of this section)
- **Auto-Fill Notes from Modem**: Notes are now copied from modem to paired device when fields are empty
- **SLMM Download Requests**: New `_download_request` method in SLMM client for binary file downloads with local save
### Fixed
- **Scheduler Timezone**: One-off scheduler times now use local time instead of UTC
- **Pairing Consistency**: Old device references are properly cleared when a modem is re-paired to a new device
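A sketch of the bidirectional sync. `deployed_with_modem_id` is the documented device-side field; the mirrored modem-side field name here is an assumption:

```python
def pair(units, device_id, modem_id):
    """Pair a device with a modem, clearing stale pairings on both sides.
    `units` maps unit id -> record (plain dicts here for illustration)."""
    device, modem = units[device_id], units[modem_id]

    # Clear the modem's previous device, if any.
    old_device = units.get(modem.get("paired_device_id"))  # assumed field
    if old_device:
        old_device["deployed_with_modem_id"] = None

    # Clear the device's previous modem, if any.
    old_modem = units.get(device.get("deployed_with_modem_id"))
    if old_modem:
        old_modem["paired_device_id"] = None

    device["deployed_with_modem_id"] = modem_id
    modem["paired_device_id"] = device_id
```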
## [0.6.0] - 2026-02-06
### Added
- **Calendar & Reservation Mode**: Fleet calendar view with reservation system for scheduling device deployments
- **Device Pairing Interface**: New two-column pairing page (`/pair-devices`) for linking recorders (seismographs/SLMs) with modems
- Visual pairing interface with drag-and-drop style interactions
- Fuzzy-search modem pairing for SLMs (sketched at the end of this section)
- Pairing options now accessible from modem page
- Improved pair status sharing across views
- **Modem Dashboard Enhancements**:
- Modem model number now a dedicated configuration field with per-model options
- Direct link to modem login page from unit detail view
- Modem view converted to list format
- **Seismograph List Improvements**:
- Enhanced visibility with better filtering and sorting
- Calibration dates now color-coded for quick status assessment
- User sets date of previous calibration (not expiry) for clearer workflow
- **SLMM Device Control Lock**: Prevents command flooding to NL-43 devices
### Changed
- **Calibration Date UX**: Users now set the date of the previous calibration rather than upcoming expiry dates - more intuitive workflow
- **Settings Persistence**: Settings save no longer reloads the page
- **Tab State**: Tab state now persists in URL hash for better navigation
- **Scheduler Management**: Schedule changes now cascade to individual events
- **Dashboard Filtering**: Enhanced dashboard with additional filtering options and SLM status sync
- **SLMM Polling Intervals**: Fixed and improved polling intervals for better responsiveness
- **24-Hour Scheduler Cycle**: Improved cycle handling to prevent issues with scheduled downloads
### Fixed
- **SLM Modal Fields**: Modal now only contains correct device-specific fields
- **IP Address Handling**: IP address correctly passed via modem pairing
- **Mobile Type Display**: Fixed incorrect device type display in roster and device tables
- **SLMM Scheduled Downloads**: Fixed issues with scheduled download operations
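The fuzzy-search pairing could be a simple `difflib` ranking over modem ids; a sketch (threshold and record shapes are illustrative):

```python
from difflib import SequenceMatcher

def rank_modems(query, modems, limit=5, threshold=0.3):
    """Return the modems whose ids best match the typed query,
    best match first."""
    scored = sorted(
        ((SequenceMatcher(None, query.lower(), m["id"].lower()).ratio(), m)
         for m in modems),
        key=lambda sm: sm[0],
        reverse=True,
    )
    return [m for score, m in scored[:limit] if score >= threshold]

print(rank_modems("mdm1", [{"id": "MDM001"}, {"id": "MODEM-2025-01"}]))
```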
## [0.5.1] - 2026-01-27
### Added
- **Dashboard Schedule View**: Today's scheduled actions now display directly on the main dashboard
- New "Today's Actions" panel showing upcoming and past scheduled events
- Schedule list partial for project-specific schedule views
- API endpoint for fetching today's schedule data
- **New Branding Assets**: Complete logo rework for Terra-View
- New Terra-View logos for light and dark themes
- Retina-ready (@2x) logo variants
- Updated favicons (16px and 32px)
- Refreshed PWA icons (72px through 512px)
### Changed
- **Dashboard Layout**: Reorganized to include schedule information panel
- **Base Template**: Updated to use new Terra-View logos with theme-aware switching
## [0.5.0] - 2026-01-23
_Note: This version was not formally released; changes were included in v0.5.1._
## [0.4.4] - 2026-01-23
### Added
- **Recurring schedules**: New scheduler service, recurring schedule APIs, and schedule templates (calendar/interval/list).
- **Alerts UI + backend**: Alerting service plus dropdown/list templates for surfacing notifications.
- **Report templates + viewers**: CRUD API for report templates, report preview screen, and RND file viewer.
- **SLM tooling**: SLM settings modal and SLM project report generator workflow.
### Changed
- **Project data management**: Unified files view, refreshed FTP browser, and new project header/templates for file/session/unit/assignment lists.
- **Device/SLM sync**: Standardized SLM device types and tightened SLMM sync paths.
- **Docs/scripts**: Cleanup pass and expanded device-type documentation.
### Fixed
- **Scheduler actions**: Strict command definitions so actions run reliably.
- **Project view title**: Resolved JSON string rendering in project headers.
## [0.4.3] - 2026-01-14
### Added
- **Sound Level Meter roster tooling**: Roster manager surfaces SLM metadata, supports rename unit flows, and adds return-to-project navigation to keep SLM dashboard users oriented.
- **Project management templates**: New schedule and unit list templates plus file/session lists show what each project stores before teams dive into deployments.
### Changed
- **Project view refresh**: FTP browser now downloads folders locally, the countdown timer was rebuilt, and project/device templates gained edit modals for projects and locations so navigation feels smoother.
- **SLM control sync & accuracy**: Control center groundwork now runs inside the dev UI, configuration edits propagate to SLMM (which caches configs for faster responses), and the SLM live view reads the correct DRD fields after the refactor.
### Fixed
- **SLM UI syntax bug**: Resolved the unexpected token error that appeared in the refreshed SLM components.
## [0.4.2] - 2026-01-05
### Added
@@ -348,6 +641,12 @@ No database migration required for v0.4.0. All new features use existing databas
- Photo management per unit
- Automated status categorization (OK/Pending/Missing)
[0.7.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.6.1...v0.7.0
[0.6.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.5.1...v0.6.0
[0.5.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.4...v0.5.0
[0.4.4]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.3...v0.4.4
[0.4.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.2...v0.4.3
[0.4.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.1...v0.4.2
[0.4.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.3...v0.4.0


@@ -1,5 +1,9 @@
FROM python:3.11-slim
# Build number for dev builds (injected via --build-arg)
ARG BUILD_NUMBER=0
ENV BUILD_NUMBER=${BUILD_NUMBER}
# Set working directory
WORKDIR /app


@@ -1,26 +0,0 @@
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends iputils-ping curl && \
rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose SFM port
EXPOSE 8002
# Run SFM backend (API only)
# For now: runs same app on different port
# Future: will run SFM-specific entry point
CMD ["python3", "-m", "app.main"]


@@ -1,21 +0,0 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY app /app/app
# Expose port
EXPOSE 8100
# Run the SLM application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8100"]


@@ -1,24 +0,0 @@
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends iputils-ping curl && \
rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose Terra-View UI port
EXPOSE 8001
# Run Terra-View (UI + orchestration)
CMD ["python3", "-m", "app.main"]


@@ -1,141 +0,0 @@
# Terra-View Modular Monolith - Known-Good Baseline
**Date:** 2026-01-09
**Status:** ✅ IMPORT MIGRATION COMPLETE
## What We've Achieved
Successfully restructured the application into a modular monolith architecture with the new folder structure working end-to-end.
## New Structure
```
/home/serversdown/sfm/seismo-fleet-manager/
├── app/
│ ├── main.py # NEW: Entry point with Terra-View branding
│ ├── core/ # Shared infrastructure
│ │ ├── config.py # NEW: Centralized configuration
│ │ └── database.py # Shared DB utilities
│ ├── ui/ # UI Layer (device-agnostic)
│ │ ├── routes.py # NEW: HTML page routes
│ │ ├── templates/ # All HTML templates (copied from old location)
│ │ └── static/ # All static files (copied from old location)
│ ├── seismo/ # Seismograph Feature Module
│ │ ├── models.py # ✅ Updated to use app.seismo.database
│ │ ├── database.py # NEW: Seismo-specific DB connection
│ │ ├── routers/ # API routers (copied from backend/routers/)
│ │ └── services/ # Business logic (copied from backend/services/)
│ ├── slm/ # Sound Level Meter Feature Module
│ │ ├── models.py # NEW: Placeholder for SLM models
│ │ ├── database.py # NEW: SLM-specific DB connection
│ │ └── routers/ # SLM routers (copied from backend/routers/)
│ └── api/ # API Aggregation Layer (placeholder)
│ ├── dashboard.py # NEW: Future aggregation endpoints
│ └── roster.py # NEW: Future aggregation endpoints
└── data/
└── seismo_fleet.db # Still using shared DB (migration pending)
```
## What's Working
- ✅ **Application starts successfully** on port 9999
- ✅ **Health endpoint works**: `/health` returns Terra-View v1.0.0
- ✅ **UI renders**: Main dashboard loads with proper templates
- ✅ **API endpoints work**: `/api/status-snapshot` returns seismograph data
- ✅ **Database access works**: Models properly connected
- ✅ **Static files serve**: CSS, JS, icons all accessible
## Critical Changes Made
### 1. Fixed Import in models.py
**File:** `app/seismo/models.py`
**Change:** `from backend.database import Base` → `from app.seismo.database import Base`
**Reason:** Avoid duplicate Base instances causing SQLAlchemy errors
### 2. Created New Entry Point
**File:** `app/main.py`
**Features:**
- Terra-View branding (title, version, health check)
- Imports from new `app.*` structure
- Registers all seismo and SLM routers
- Middleware for environment context
### 3. Created UI Routes Module
**File:** `app/ui/routes.py`
**Purpose:** Centralize all HTML page routes (device-agnostic)
### 4. Created Module-Specific Databases
**Files:** `app/seismo/database.py`, `app/slm/database.py`
**Status:** Both currently point to shared `seismo_fleet.db` (migration pending)
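As a sketch of what these module database files likely look like while both still target the shared DB (illustrative, not the actual file contents):

```python
# app/seismo/database.py (sketch; app/slm/database.py would mirror it)
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# Both modules point at the shared database until the Phase 2 split.
DATABASE_URL = "sqlite:///data/seismo_fleet.db"

engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine, autoflush=False)

# Module-local Base: importing a single shared Base from the old backend
# package was the source of the duplicate-Base SQLAlchemy errors.
Base = declarative_base()
```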
## Recent Updates (2026-01-09)
- ✅ **ALL imports updated** - Changed all `backend.*` imports to `app.seismo.*` or `app.slm.*`
- ✅ **Old structure deleted** - `backend/` and `templates/` directories removed
- ✅ **Containers rebuilt** - All three containers (Terra-View, SFM, SLMM) working with new imports
- ✅ **Verified working** - Tested health endpoints and UI after migration
## What's NOT Yet Done
- **Partial routes missing** - `/partials/*` endpoints not yet added
- **Database not split** - Still using shared `seismo_fleet.db`
## How to Run
```bash
# Start on custom port to avoid conflicts
PORT=9999 python3 -m app.main
# Test health endpoint
curl http://localhost:9999/health
# Test API endpoint
curl http://localhost:9999/api/status-snapshot
# Access UI
open http://localhost:9999/
```
## Next Steps (Recommended Order)
1. **Add partial routes** to app/main.py or create separate router
2. **Test all endpoints thoroughly** - Verify roster CRUD, photos, settings
3. **Split databases** (Phase 2 of plan)
4. **Implement API aggregation layer** (Phase 3 of plan)
## Known Issues
None currently - app starts and serves requests successfully!
## Testing Checklist
- [x] App starts without errors
- [x] Health endpoint returns correct version
- [x] Main dashboard loads
- [x] Status snapshot API works
- [ ] All seismo endpoints work
- [ ] All SLM endpoints work
- [ ] Roster CRUD operations work
- [ ] Photos upload/download works
- [ ] Settings page works
## Rollback Instructions
~~The old structure has been deleted.~~ To rollback, restore from your backup:
```bash
# Restore from your backup
# The old backend/ and templates/ directories were removed on 2026-01-09
```
## Important Notes
- **MIGRATION COMPLETE**: Old `backend/` and `templates/` directories removed
- **ALL IMPORTS UPDATED**: All Python files now use `app.*` imports
- **NO DATA LOSS**: Database untouched, only code structure changed
- **CONTAINERS WORKING**: All three containers (Terra-View, SFM, SLMM) healthy
- **FULLY SELF-CONTAINED**: Application runs entirely from `app/` directory
---
**Congratulations!** 🎉 Import migration complete! The modular monolith is now self-contained and production-ready.


@@ -1,4 +1,4 @@
# Seismo Fleet Manager v0.4.2
# Terra-View v0.9.3
Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard.
## Features
@@ -308,7 +308,7 @@ print(response.json())
|-------|------|-------------|
| id | string | Unit identifier (primary key) |
| unit_type | string | Hardware model name (default: `series3`) |
| device_type | string | `seismograph` or `modem` discriminator |
| device_type | string | Device type: `"seismograph"`, `"modem"`, or `"slm"` (sound level meter) |
| deployed | boolean | Whether the unit is in the field |
| retired | boolean | Removes the unit from deployments but preserves history |
| note | string | Notes about the unit |
@@ -334,6 +334,39 @@ print(response.json())
| phone_number | string | Cellular number for the modem |
| hardware_model | string | Modem hardware reference |
**Sound Level Meter (SLM) fields**
| Field | Type | Description |
|-------|------|-------------|
| slm_host | string | Direct IP address for SLM (if not using modem) |
| slm_tcp_port | integer | TCP control port (default: 2255) |
| slm_ftp_port | integer | FTP file transfer port (default: 21) |
| slm_model | string | Device model (NL-43, NL-53) |
| slm_serial_number | string | Manufacturer serial number |
| slm_frequency_weighting | string | Frequency weighting setting (A, C, Z) |
| slm_time_weighting | string | Time weighting setting (F=Fast, S=Slow) |
| slm_measurement_range | string | Measurement range setting |
| slm_last_check | datetime | Last status check timestamp |
| deployed_with_modem_id | string | Modem pairing (shared with seismographs) |
### Device Type Schema
Terra-View supports three device types with the following standardized `device_type` values:
- **`"seismograph"`** (default) - Seismic monitoring devices (Series 3, Series 4, Micromate)
- Uses: calibration dates, modem pairing
- Examples: BE1234, UM12345 (Series 3/4 units)
- **`"modem"`** - Field modems and network equipment
- Uses: IP address, phone number, hardware model
- Examples: MDM001, MODEM-2025-01
- **`"slm"`** - Sound level meters (Rion NL-43/NL-53)
- Uses: TCP/FTP configuration, measurement settings, modem pairing
- Examples: SLM-43-01, NL43-001
**Important**: All `device_type` values must be lowercase. The legacy value `"sound_level_meter"` has been deprecated in favor of the shorter `"slm"`. Run `backend/migrate_standardize_device_types.py` to update existing databases.
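The conversion itself is a one-line update; a sketch of what the migration effectively performs (the real script may add checks and logging):

```python
import sqlite3

conn = sqlite3.connect("data/seismo_fleet.db")
changed = conn.execute(
    "UPDATE roster SET device_type = 'slm' "
    "WHERE device_type = 'sound_level_meter'"
).rowcount
conn.commit()
conn.close()
print(f"Converted {changed} legacy 'sound_level_meter' rows to 'slm'")
```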
### Emitter Table (Device Check-ins)
| Field | Type | Description |
@@ -463,6 +496,40 @@ docker compose down -v
## Release Highlights
### v0.8.0 — 2026-03-18
- **Watcher Manager**: Admin page for monitoring field watcher agents with live status cards, log tails, and one-click update triggering
- **Watcher Status Fix**: Agent status now reflects heartbeat connectivity (missing if not heard from in >60 min) rather than unit-level data staleness
- **Live Refresh**: Watcher Manager surgically patches status, last-seen, and pending indicators every 30s without a full page reload
### v0.7.0 — 2026-03-07
- **Project Status Management**: On-hold and archived project states with automatic cancellation of pending actions
- **Manual SD Card Upload**: Upload offline NRL/SLM data directly from SD card (ZIP or multi-file); auto-creates monitoring sessions from `.rnh` metadata
- **Combined Report Wizard**: Multi-session Excel report generation with location grouping, period type filtering, and ZIP download
- **NL32 Support**: Report generator and web viewer now handle NL32 measurement data
- **Chart Preview**: Live chart preview in the report generator matching final output styling
- **Standalone SLM Mode**: SLMs can now be configured without a paired modem (direct IP)
- **Vibration Project Isolation**: Vibration project views no longer show SLM-specific tabs
- **MonitoringSession Rename**: `RecordingSession` renamed to `MonitoringSession` throughout; run migration before deploying
### v0.6.1 — 2026-02-16
- **One-Off Recording Schedules**: Schedule single recordings with specific start/end datetimes
- **Bidirectional Pairing Sync**: Device-modem pairing now updates both sides automatically
- **Scheduler Timezone Fix**: One-off schedule times use local time instead of UTC
### v0.6.0 — 2026-02-06
- **Calendar & Reservation Mode**: Fleet calendar view with device deployment scheduling and reservation system
- **Device Pairing Interface**: New `/pair-devices` page with two-column layout for linking recorders with modems, fuzzy-search, and visual pairing workflow
- **Calibration UX Overhaul**: Users now set date of previous calibration (not expiry); seismograph list enhanced with color-coded calibration status, filtering, and sorting
- **Modem Dashboard**: Model number as dedicated config, modem login links, list view format, and pairing options accessible from modem page
- **SLMM Improvements**: Device control lock prevents command flooding, fixed polling intervals and scheduled downloads
- **UI Polish**: Tab state persists in URL hash, settings save without reload, scheduler changes cascade to events, fixed mobile type display
### v0.4.3 — 2026-01-14
- **Sound Level Meter workflow**: Roster manager surfaces SLM metadata, supports rename actions, and adds return-to-project navigation plus schedule/unit templates for project planning.
- **Project insight panels**: Project dashboards now expose file and session lists so teams can see what each project stores before diving into units.
- **Project view polish**: FTP browser supports folder downloads, the timer display was reimplemented, and the project/device templates gained edit modals for projects and locations to streamline navigation.
- **SLM sync & accuracy**: Configuration edits now propagate to SLMM (which caches configs for faster responses) and the live view uses the correct DRD fields so telemetry aligns with the control center.
### v0.4.0 — 2025-12-16
- **Database Management System**: Complete backup and restore functionality with manual snapshots, restore operations, and upload/download capabilities
- **Remote Database Cloning**: New `clone_db_to_dev.py` script for copying production database to remote dev servers over WAN
@@ -532,9 +599,29 @@ MIT
## Version
**Current: 0.4.0** — Database management system with backup/restore and remote cloning (2025-12-16)
**Current: 0.8.0** — Watcher Manager admin page, live agent status refresh, watcher connectivity-based status (2026-03-18)
Previous: 0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)
Previous: 0.7.1 — Out-for-calibration status, reservation modal, migration fixes (2026-03-12)
0.7.0 — Project status management, manual SD card upload, combined report wizard, NL32 support, MonitoringSession rename (2026-03-07)
0.6.1 — One-off recording schedules, bidirectional pairing sync, scheduler timezone fix (2026-02-16)
0.6.0 — Calendar & reservation mode, device pairing interface, calibration UX overhaul, modem dashboard enhancements (2026-02-06)
0.5.1 — Dashboard schedule view with today's actions panel, new Terra-View branding and logo rework (2026-01-27)
0.4.4 — Recurring schedules, alerting UI, report templates + RND viewer, and SLM workflow polish (2026-01-23)
0.4.3 — SLM roster/project view refresh, project insight panels, FTP browser folder downloads, and SLMM sync (2026-01-14)
0.4.2 — SLM configuration interface with TCP/FTP controls, modem diagnostics, and dashboard endpoints for Sound Level Meters (2026-01-05)
0.4.1 — Sound Level Meter integration with full management UI for SLM units (2026-01-05)
0.4.0 — Database management system with backup/restore and remote cloning (2025-12-16)
0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)
0.3.2 — Progressive Web App with mobile optimization (2025-12-12)

@@ -1,13 +0,0 @@
"""
API Aggregation Layer - Dashboard endpoints
Composes data from multiple feature modules
"""
from fastapi import APIRouter
router = APIRouter(prefix="/api/dashboard", tags=["dashboard-aggregation"])
# TODO: Implement aggregation endpoints that combine data from
# app.seismo and app.slm modules
# For now, individual feature modules expose their own APIs directly
# Future: Add cross-feature aggregation here

@@ -1,13 +0,0 @@
"""
API Aggregation Layer - Roster endpoints
Aggregates roster data from all feature modules
"""
from fastapi import APIRouter
router = APIRouter(prefix="/api/roster-aggregation", tags=["roster-aggregation"])
# TODO: Implement unified roster endpoints that combine data from
# app.seismo and app.slm modules
# For now, individual feature modules expose their own roster APIs
# Future: Add cross-feature roster aggregation here

@@ -1,20 +0,0 @@
"""
Core configuration for Terra-View application
"""
import os
# Application
APP_NAME = "Terra-View"
VERSION = "1.0.0"
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
# Ports
PORT = int(os.getenv("PORT", 8001))
# External Services
SLMM_API_URL = os.getenv("SLMM_API_URL", "http://localhost:8100")
# Database URLs (feature-specific)
SEISMO_DATABASE_URL = "sqlite:///./data/seismo.db"
SLM_DATABASE_URL = "sqlite:///./data/slm.db"
MODEM_DATABASE_URL = "sqlite:///./data/modem.db"

@@ -1,205 +0,0 @@
"""
Terra-View - Unified monitoring platform for device fleets
Modular monolith architecture with strict feature boundaries
"""
import os
import logging
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import JSONResponse
from fastapi.exceptions import RequestValidationError
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
# Import configuration
from app.core.config import APP_NAME, VERSION, ENVIRONMENT
# Import UI routes
from app.ui import routes as ui_routes
# Import feature module routers (seismo)
from app.seismo.routers import (
roster as seismo_roster,
units as seismo_units,
photos as seismo_photos,
roster_edit as seismo_roster_edit,
dashboard as seismo_dashboard,
dashboard_tabs as seismo_dashboard_tabs,
activity as seismo_activity,
seismo_dashboard as seismo_seismo_dashboard,
settings as seismo_settings,
)
from app.seismo import routes as seismo_legacy_routes
# Import feature module routers (SLM)
from app.slm.routers import router as slm_router
# Import API aggregation layer (placeholder for now)
from app.api import dashboard as api_dashboard
from app.api import roster as api_roster
# Initialize database tables
from app.seismo.database import engine as seismo_engine, Base as SeismoBase
SeismoBase.metadata.create_all(bind=seismo_engine)
# Initialize FastAPI app
app = FastAPI(
title=APP_NAME,
description="Unified monitoring platform for seismograph, modem, and sound level meter fleets",
version=VERSION
)
# Add validation error handler to log details
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
logger.error(f"Validation error on {request.url}: {exc.errors()}")
logger.error(f"Body: {await request.body()}")
return JSONResponse(
status_code=400,
content={"detail": exc.errors()}
)
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Mount static files
app.mount("/static", StaticFiles(directory="app/ui/static"), name="static")
# Middleware to add environment to request state
@app.middleware("http")
async def add_environment_to_context(request: Request, call_next):
"""Middleware to add environment variable to request state"""
request.state.environment = ENVIRONMENT
response = await call_next(request)
return response
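# Downstream routes and templates can then read the flag, e.g.
#     env = request.state.environment  # "production" unless ENVIRONMENT is set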
# ===== INCLUDE ROUTERS =====
# UI Layer (HTML pages)
app.include_router(ui_routes.router)
# Seismograph Feature Module APIs
app.include_router(seismo_roster.router)
app.include_router(seismo_units.router)
app.include_router(seismo_photos.router)
app.include_router(seismo_roster_edit.router)
app.include_router(seismo_dashboard.router)
app.include_router(seismo_dashboard_tabs.router)
app.include_router(seismo_activity.router)
app.include_router(seismo_seismo_dashboard.router)
app.include_router(seismo_settings.router)
app.include_router(seismo_legacy_routes.router)
# SLM Feature Module APIs
app.include_router(slm_router)
# API Aggregation Layer (future cross-feature endpoints)
# app.include_router(api_dashboard.router) # TODO: Implement aggregation
# app.include_router(api_roster.router) # TODO: Implement aggregation
# ===== ADDITIONAL ROUTES FROM OLD MAIN.PY =====
# These will need to be migrated to appropriate modules
from fastapi.templating import Jinja2Templates
from typing import List, Dict
from pydantic import BaseModel
from sqlalchemy.orm import Session
from fastapi import Depends
from app.seismo.database import get_db
from app.seismo.services.snapshot import emit_status_snapshot
from app.seismo.models import IgnoredUnit
# TODO: Move these to appropriate feature modules or UI layer
@app.post("/api/sync-edits")
async def sync_edits(request: dict, db: Session = Depends(get_db)):
"""Process offline edit queue and sync to database"""
# TODO: Move to seismo module
from app.seismo.models import RosterUnit
class EditItem(BaseModel):
id: int
unitId: str
changes: Dict
timestamp: int
class SyncEditsRequest(BaseModel):
edits: List[EditItem]
sync_request = SyncEditsRequest(**request)
results = []
synced_ids = []
for edit in sync_request.edits:
try:
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
if not unit:
results.append({
"id": edit.id,
"status": "error",
"reason": f"Unit {edit.unitId} not found"
})
continue
for key, value in edit.changes.items():
if hasattr(unit, key):
if key in ['deployed', 'retired']:
setattr(unit, key, value in ['true', True, 'True', '1', 1])
else:
setattr(unit, key, value if value != '' else None)
db.commit()
results.append({
"id": edit.id,
"status": "success"
})
synced_ids.append(edit.id)
except Exception as e:
db.rollback()
results.append({
"id": edit.id,
"status": "error",
"reason": str(e)
})
synced_count = len(synced_ids)
return JSONResponse({
"synced": synced_count,
"total": len(sync_request.edits),
"synced_ids": synced_ids,
"results": results
})
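# Example request body for /api/sync-edits (shape follows the EditItem model
# above; values illustrative):
# {"edits": [{"id": 1, "unitId": "BE1234",
#             "changes": {"note": "moved to site B", "deployed": "true"},
#             "timestamp": 1767225600}]}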
@app.get("/health")
def health_check():
"""Health check endpoint"""
return {
"message": f"{APP_NAME} v{VERSION}",
"status": "running",
"version": VERSION,
"modules": ["seismo", "slm"]
}
if __name__ == "__main__":
import uvicorn
from app.core.config import PORT
uvicorn.run(app, host="0.0.0.0", port=PORT)

@@ -1,36 +0,0 @@
"""
Seismograph feature module database connection
"""
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os
# Ensure data directory exists
os.makedirs("data", exist_ok=True)
# For now, we'll use the old database (seismo_fleet.db) until we migrate
# TODO: Migrate to seismo.db
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
engine = create_engine(
SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
def get_db():
"""Dependency for database sessions"""
db = SessionLocal()
try:
yield db
finally:
db.close()
def get_db_session():
"""Get a database session directly (not as a dependency)"""
return SessionLocal()
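# Usage sketch: FastAPI routes take `db: Session = Depends(get_db)` (see the
# routers in this changeset), while scripts and background jobs call
# get_db_session() and must close the session themselves.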

@@ -1,110 +0,0 @@
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer
from datetime import datetime
from app.seismo.database import Base
class Emitter(Base):
__tablename__ = "emitters"
id = Column(String, primary_key=True, index=True)
unit_type = Column(String, nullable=False)
last_seen = Column(DateTime, default=datetime.utcnow)
last_file = Column(String, nullable=False)
status = Column(String, nullable=False)
notes = Column(String, nullable=True)
class RosterUnit(Base):
"""
Roster table: represents our *intended assignment* of a unit.
This is editable from the GUI.
Supports multiple device types (seismograph, modem, sound_level_meter) with type-specific fields.
"""
__tablename__ = "roster"
# Core fields (all device types)
id = Column(String, primary_key=True, index=True)
unit_type = Column(String, default="series3") # Backward compatibility
device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "sound_level_meter"
deployed = Column(Boolean, default=True)
retired = Column(Boolean, default=False)
note = Column(String, nullable=True)
project_id = Column(String, nullable=True)
location = Column(String, nullable=True) # Legacy field - use address/coordinates instead
address = Column(String, nullable=True) # Human-readable address
coordinates = Column(String, nullable=True) # Lat,Lon format: "34.0522,-118.2437"
last_updated = Column(DateTime, default=datetime.utcnow)
# Seismograph-specific fields (nullable for modems and SLMs)
last_calibrated = Column(Date, nullable=True)
next_calibration_due = Column(Date, nullable=True)
# Modem assignment (shared by seismographs and SLMs)
deployed_with_modem_id = Column(String, nullable=True) # FK to another RosterUnit (device_type=modem)
# Modem-specific fields (nullable for seismographs and SLMs)
ip_address = Column(String, nullable=True)
phone_number = Column(String, nullable=True)
hardware_model = Column(String, nullable=True)
# Sound Level Meter-specific fields (nullable for seismographs and modems)
slm_host = Column(String, nullable=True) # Device IP or hostname
slm_tcp_port = Column(Integer, nullable=True) # TCP control port (default 2255)
slm_ftp_port = Column(Integer, nullable=True) # FTP data retrieval port (default 21)
slm_model = Column(String, nullable=True) # NL-43, NL-53, etc.
slm_serial_number = Column(String, nullable=True) # Device serial number
slm_frequency_weighting = Column(String, nullable=True) # A, C, Z
slm_time_weighting = Column(String, nullable=True) # F (Fast), S (Slow), I (Impulse)
slm_measurement_range = Column(String, nullable=True) # e.g., "30-130 dB"
slm_last_check = Column(DateTime, nullable=True) # Last communication check
class IgnoredUnit(Base):
"""
Ignored units: units that report but should be filtered out from unknown emitters.
Used to suppress noise from old projects.
"""
__tablename__ = "ignored_units"
id = Column(String, primary_key=True, index=True)
reason = Column(String, nullable=True)
ignored_at = Column(DateTime, default=datetime.utcnow)
class UnitHistory(Base):
"""
Unit history: complete timeline of changes to each unit.
Tracks note changes, status changes, deployment/benched events, and more.
"""
__tablename__ = "unit_history"
id = Column(Integer, primary_key=True, autoincrement=True)
unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit.id
change_type = Column(String, nullable=False) # note_change, deployed_change, retired_change, etc.
field_name = Column(String, nullable=True) # Which field changed
old_value = Column(Text, nullable=True) # Previous value
new_value = Column(Text, nullable=True) # New value
changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
source = Column(String, default="manual") # manual, csv_import, telemetry, offline_sync
notes = Column(Text, nullable=True) # Optional reason/context for the change
class UserPreferences(Base):
"""
User preferences: persistent storage for application settings.
Single-row table (id=1) to store global user preferences.
"""
__tablename__ = "user_preferences"
id = Column(Integer, primary_key=True, default=1)
timezone = Column(String, default="America/New_York")
theme = Column(String, default="auto") # auto, light, dark
auto_refresh_interval = Column(Integer, default=10) # seconds
date_format = Column(String, default="MM/DD/YYYY")
table_rows_per_page = Column(Integer, default=25)
calibration_interval_days = Column(Integer, default=365)
calibration_warning_days = Column(Integer, default=30)
status_ok_threshold_hours = Column(Integer, default=12)
status_pending_threshold_hours = Column(Integer, default=24)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
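# Access-pattern sketch (not part of this file): user_preferences is a
# single-row table, so callers typically get-or-create the id=1 row, e.g.:
#
#     prefs = db.query(UserPreferences).filter_by(id=1).first()
#     if prefs is None:
#         prefs = UserPreferences(id=1)
#         db.add(prefs)
#         db.commit()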

@@ -1,25 +0,0 @@
from fastapi import APIRouter, Request, Depends
from fastapi.templating import Jinja2Templates
from app.seismo.services.snapshot import emit_status_snapshot
router = APIRouter()
templates = Jinja2Templates(directory="templates")
@router.get("/dashboard/active")
def dashboard_active(request: Request):
snapshot = emit_status_snapshot()
return templates.TemplateResponse(
"partials/active_table.html",
{"request": request, "units": snapshot["active"]}
)
@router.get("/dashboard/benched")
def dashboard_benched(request: Request):
snapshot = emit_status_snapshot()
return templates.TemplateResponse(
"partials/benched_table.html",
{"request": request, "units": snapshot["benched"]}
)

@@ -1,720 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException, Form, UploadFile, File, Request
from fastapi.exceptions import RequestValidationError
from sqlalchemy.orm import Session
from datetime import datetime, date
import csv
import io
import logging
import httpx
import os
from app.seismo.database import get_db
from app.seismo.models import RosterUnit, IgnoredUnit, Emitter, UnitHistory
router = APIRouter(prefix="/api/roster", tags=["roster-edit"])
logger = logging.getLogger(__name__)
# SLMM backend URL for syncing device configs to cache
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
def record_history(db: Session, unit_id: str, change_type: str, field_name: str = None,
old_value: str = None, new_value: str = None, source: str = "manual", notes: str = None):
"""Helper function to record a change in unit history"""
history_entry = UnitHistory(
unit_id=unit_id,
change_type=change_type,
field_name=field_name,
old_value=old_value,
new_value=new_value,
changed_at=datetime.utcnow(),
source=source,
notes=notes
)
db.add(history_entry)
# Note: caller is responsible for db.commit()
def get_or_create_roster_unit(db: Session, unit_id: str):
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
unit = RosterUnit(id=unit_id)
db.add(unit)
db.commit()
db.refresh(unit)
return unit
async def sync_slm_to_slmm_cache(
unit_id: str,
host: str = None,
tcp_port: int = None,
ftp_port: int = None,
ftp_username: str = None,
ftp_password: str = None,
deployed_with_modem_id: str = None,
db: Session = None
) -> dict:
"""
Sync SLM device configuration to SLMM backend cache.
Terra-View is the source of truth for device configs. This function updates
SLMM's config cache (NL43Config table) so SLMM can look up device connection
info by unit_id without Terra-View passing host:port with every request.
Args:
unit_id: Unique identifier for the SLM device
host: Direct IP address/hostname OR will be resolved from modem
tcp_port: TCP control port (default: 2255)
ftp_port: FTP port (default: 21)
ftp_username: FTP username (optional)
ftp_password: FTP password (optional)
deployed_with_modem_id: If set, resolve modem IP as host
db: Database session for modem lookup
Returns:
dict: {"success": bool, "message": str}
"""
# Resolve host from modem if assigned
if deployed_with_modem_id and db:
modem = db.query(RosterUnit).filter_by(
id=deployed_with_modem_id,
device_type="modem"
).first()
if modem and modem.ip_address:
host = modem.ip_address
logger.info(f"Resolved host from modem {deployed_with_modem_id}: {host}")
# Validate required fields
if not host:
logger.warning(f"Cannot sync SLM {unit_id} to SLMM: no host/IP address provided")
return {"success": False, "message": "No host IP address available"}
# Set defaults
tcp_port = tcp_port or 2255
ftp_port = ftp_port or 21
# Build SLMM cache payload
config_payload = {
"host": host,
"tcp_port": tcp_port,
"tcp_enabled": True,
"ftp_enabled": bool(ftp_username and ftp_password),
"web_enabled": False
}
if ftp_username and ftp_password:
config_payload["ftp_username"] = ftp_username
config_payload["ftp_password"] = ftp_password
# Call SLMM cache update API
slmm_url = f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config"
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.put(slmm_url, json=config_payload)
if response.status_code in [200, 201]:
logger.info(f"Successfully synced SLM {unit_id} to SLMM cache")
return {"success": True, "message": "Device config cached in SLMM"}
else:
logger.error(f"SLMM cache sync failed for {unit_id}: HTTP {response.status_code}")
return {"success": False, "message": f"SLMM returned status {response.status_code}"}
except httpx.ConnectError:
logger.error(f"Cannot connect to SLMM service at {SLMM_BASE_URL}")
return {"success": False, "message": "SLMM service unavailable"}
except Exception as e:
logger.error(f"Error syncing SLM {unit_id} to SLMM: {e}")
return {"success": False, "message": str(e)}
@router.post("/add")
async def add_roster_unit(
id: str = Form(...),
device_type: str = Form("seismograph"),
unit_type: str = Form("series3"),
deployed: str = Form(None),
retired: str = Form(None),
note: str = Form(""),
project_id: str = Form(None),
location: str = Form(None),
address: str = Form(None),
coordinates: str = Form(None),
# Seismograph-specific fields
last_calibrated: str = Form(None),
next_calibration_due: str = Form(None),
deployed_with_modem_id: str = Form(None),
# Modem-specific fields
ip_address: str = Form(None),
phone_number: str = Form(None),
hardware_model: str = Form(None),
# Sound Level Meter-specific fields
slm_host: str = Form(None),
slm_tcp_port: str = Form(None),
slm_ftp_port: str = Form(None),
slm_model: str = Form(None),
slm_serial_number: str = Form(None),
slm_frequency_weighting: str = Form(None),
slm_time_weighting: str = Form(None),
slm_measurement_range: str = Form(None),
db: Session = Depends(get_db)
):
logger.info(f"Adding unit: id={id}, device_type={device_type}, deployed={deployed}, retired={retired}")
# Convert boolean strings to actual booleans
deployed_bool = deployed in ['true', 'True', '1', 'yes'] if deployed else False
retired_bool = retired in ['true', 'True', '1', 'yes'] if retired else False
# Convert port strings to integers
slm_tcp_port_int = int(slm_tcp_port) if slm_tcp_port and slm_tcp_port.strip() else None
slm_ftp_port_int = int(slm_ftp_port) if slm_ftp_port and slm_ftp_port.strip() else None
if db.query(RosterUnit).filter(RosterUnit.id == id).first():
raise HTTPException(status_code=400, detail="Unit already exists")
# Parse date fields if provided
last_cal_date = None
if last_calibrated:
try:
last_cal_date = datetime.strptime(last_calibrated, "%Y-%m-%d").date()
except ValueError:
raise HTTPException(status_code=400, detail="Invalid last_calibrated date format. Use YYYY-MM-DD")
next_cal_date = None
if next_calibration_due:
try:
next_cal_date = datetime.strptime(next_calibration_due, "%Y-%m-%d").date()
except ValueError:
raise HTTPException(status_code=400, detail="Invalid next_calibration_due date format. Use YYYY-MM-DD")
unit = RosterUnit(
id=id,
device_type=device_type,
unit_type=unit_type,
deployed=deployed_bool,
retired=retired_bool,
note=note,
project_id=project_id,
location=location,
address=address,
coordinates=coordinates,
last_updated=datetime.utcnow(),
# Seismograph-specific fields
last_calibrated=last_cal_date,
next_calibration_due=next_cal_date,
deployed_with_modem_id=deployed_with_modem_id if deployed_with_modem_id else None,
# Modem-specific fields
ip_address=ip_address if ip_address else None,
phone_number=phone_number if phone_number else None,
hardware_model=hardware_model if hardware_model else None,
# Sound Level Meter-specific fields
slm_host=slm_host if slm_host else None,
slm_tcp_port=slm_tcp_port_int,
slm_ftp_port=slm_ftp_port_int,
slm_model=slm_model if slm_model else None,
slm_serial_number=slm_serial_number if slm_serial_number else None,
slm_frequency_weighting=slm_frequency_weighting if slm_frequency_weighting else None,
slm_time_weighting=slm_time_weighting if slm_time_weighting else None,
slm_measurement_range=slm_measurement_range if slm_measurement_range else None,
)
db.add(unit)
db.commit()
# If sound level meter, sync config to SLMM cache
if device_type == "sound_level_meter":
logger.info(f"Syncing SLM {id} config to SLMM cache...")
result = await sync_slm_to_slmm_cache(
unit_id=id,
host=slm_host,
tcp_port=slm_tcp_port_int,
ftp_port=slm_ftp_port_int,
deployed_with_modem_id=deployed_with_modem_id,
db=db
)
if not result["success"]:
logger.warning(f"SLMM cache sync warning for {id}: {result['message']}")
# Don't fail the operation - device is still added to Terra-View roster
# User can manually sync later or SLMM will be synced on next config update
return {"message": "Unit added", "id": id, "device_type": device_type}
@router.get("/modems")
def get_modems_list(db: Session = Depends(get_db)):
"""Get list of all modem units for dropdown selection"""
modems = db.query(RosterUnit).filter_by(device_type="modem", retired=False).order_by(RosterUnit.id).all()
return [
{
"id": modem.id,
"ip_address": modem.ip_address,
"phone_number": modem.phone_number,
"hardware_model": modem.hardware_model,
"deployed": modem.deployed
}
for modem in modems
]
@router.get("/{unit_id}")
def get_roster_unit(unit_id: str, db: Session = Depends(get_db)):
"""Get a single roster unit by ID"""
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail="Unit not found")
return {
"id": unit.id,
"device_type": unit.device_type or "seismograph",
"unit_type": unit.unit_type,
"deployed": unit.deployed,
"retired": unit.retired,
"note": unit.note or "",
"project_id": unit.project_id or "",
"location": unit.location or "",
"address": unit.address or "",
"coordinates": unit.coordinates or "",
"last_calibrated": unit.last_calibrated.isoformat() if unit.last_calibrated else "",
"next_calibration_due": unit.next_calibration_due.isoformat() if unit.next_calibration_due else "",
"deployed_with_modem_id": unit.deployed_with_modem_id or "",
"ip_address": unit.ip_address or "",
"phone_number": unit.phone_number or "",
"hardware_model": unit.hardware_model or "",
"slm_host": unit.slm_host or "",
"slm_tcp_port": unit.slm_tcp_port or "",
"slm_ftp_port": unit.slm_ftp_port or "",
"slm_model": unit.slm_model or "",
"slm_serial_number": unit.slm_serial_number or "",
"slm_frequency_weighting": unit.slm_frequency_weighting or "",
"slm_time_weighting": unit.slm_time_weighting or "",
"slm_measurement_range": unit.slm_measurement_range or "",
}
@router.post("/edit/{unit_id}")
def edit_roster_unit(
unit_id: str,
device_type: str = Form("seismograph"),
unit_type: str = Form("series3"),
deployed: str = Form(None),
retired: str = Form(None),
note: str = Form(""),
project_id: str = Form(None),
location: str = Form(None),
address: str = Form(None),
coordinates: str = Form(None),
# Seismograph-specific fields
last_calibrated: str = Form(None),
next_calibration_due: str = Form(None),
deployed_with_modem_id: str = Form(None),
# Modem-specific fields
ip_address: str = Form(None),
phone_number: str = Form(None),
hardware_model: str = Form(None),
# Sound Level Meter-specific fields
slm_host: str = Form(None),
slm_tcp_port: str = Form(None),
slm_ftp_port: str = Form(None),
slm_model: str = Form(None),
slm_serial_number: str = Form(None),
slm_frequency_weighting: str = Form(None),
slm_time_weighting: str = Form(None),
slm_measurement_range: str = Form(None),
db: Session = Depends(get_db)
):
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail="Unit not found")
# Convert boolean strings to actual booleans
deployed_bool = deployed in ['true', 'True', '1', 'yes'] if deployed else False
retired_bool = retired in ['true', 'True', '1', 'yes'] if retired else False
# Convert port strings to integers
slm_tcp_port_int = int(slm_tcp_port) if slm_tcp_port and slm_tcp_port.strip() else None
slm_ftp_port_int = int(slm_ftp_port) if slm_ftp_port and slm_ftp_port.strip() else None
# Parse date fields if provided
last_cal_date = None
if last_calibrated:
try:
last_cal_date = datetime.strptime(last_calibrated, "%Y-%m-%d").date()
except ValueError:
raise HTTPException(status_code=400, detail="Invalid last_calibrated date format. Use YYYY-MM-DD")
next_cal_date = None
if next_calibration_due:
try:
next_cal_date = datetime.strptime(next_calibration_due, "%Y-%m-%d").date()
except ValueError:
raise HTTPException(status_code=400, detail="Invalid next_calibration_due date format. Use YYYY-MM-DD")
# Track changes for history
old_note = unit.note
old_deployed = unit.deployed
old_retired = unit.retired
# Update all fields
unit.device_type = device_type
unit.unit_type = unit_type
unit.deployed = deployed_bool
unit.retired = retired_bool
unit.note = note
unit.project_id = project_id
unit.location = location
unit.address = address
unit.coordinates = coordinates
unit.last_updated = datetime.utcnow()
# Seismograph-specific fields
unit.last_calibrated = last_cal_date
unit.next_calibration_due = next_cal_date
unit.deployed_with_modem_id = deployed_with_modem_id if deployed_with_modem_id else None
# Modem-specific fields
unit.ip_address = ip_address if ip_address else None
unit.phone_number = phone_number if phone_number else None
unit.hardware_model = hardware_model if hardware_model else None
# Sound Level Meter-specific fields
unit.slm_host = slm_host if slm_host else None
unit.slm_tcp_port = slm_tcp_port_int
unit.slm_ftp_port = slm_ftp_port_int
unit.slm_model = slm_model if slm_model else None
unit.slm_serial_number = slm_serial_number if slm_serial_number else None
unit.slm_frequency_weighting = slm_frequency_weighting if slm_frequency_weighting else None
unit.slm_time_weighting = slm_time_weighting if slm_time_weighting else None
unit.slm_measurement_range = slm_measurement_range if slm_measurement_range else None
# Record history entries for changed fields
if old_note != note:
record_history(db, unit_id, "note_change", "note", old_note, note, "manual")
    # Compare against the parsed booleans, not the raw form strings
    if old_deployed != deployed_bool:
        status_text = "deployed" if deployed_bool else "benched"
        old_status_text = "deployed" if old_deployed else "benched"
        record_history(db, unit_id, "deployed_change", "deployed", old_status_text, status_text, "manual")
    if old_retired != retired_bool:
        status_text = "retired" if retired_bool else "active"
        old_status_text = "retired" if old_retired else "active"
        record_history(db, unit_id, "retired_change", "retired", old_status_text, status_text, "manual")
db.commit()
return {"message": "Unit updated", "id": unit_id, "device_type": device_type}
@router.post("/set-deployed/{unit_id}")
def set_deployed(unit_id: str, deployed: bool = Form(...), db: Session = Depends(get_db)):
unit = get_or_create_roster_unit(db, unit_id)
old_deployed = unit.deployed
unit.deployed = deployed
unit.last_updated = datetime.utcnow()
# Record history entry for deployed status change
if old_deployed != deployed:
status_text = "deployed" if deployed else "benched"
old_status_text = "deployed" if old_deployed else "benched"
record_history(
db=db,
unit_id=unit_id,
change_type="deployed_change",
field_name="deployed",
old_value=old_status_text,
new_value=status_text,
source="manual"
)
db.commit()
return {"message": "Updated", "id": unit_id, "deployed": deployed}
@router.post("/set-retired/{unit_id}")
def set_retired(unit_id: str, retired: bool = Form(...), db: Session = Depends(get_db)):
unit = get_or_create_roster_unit(db, unit_id)
old_retired = unit.retired
unit.retired = retired
unit.last_updated = datetime.utcnow()
# Record history entry for retired status change
if old_retired != retired:
status_text = "retired" if retired else "active"
old_status_text = "retired" if old_retired else "active"
record_history(
db=db,
unit_id=unit_id,
change_type="retired_change",
field_name="retired",
old_value=old_status_text,
new_value=status_text,
source="manual"
)
db.commit()
return {"message": "Updated", "id": unit_id, "retired": retired}
@router.delete("/{unit_id}")
def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)):
"""
Permanently delete a unit from the database.
Checks roster, emitters, and ignored_units tables and deletes from any table where the unit exists.
"""
deleted = False
# Try to delete from roster table
roster_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if roster_unit:
db.delete(roster_unit)
deleted = True
# Try to delete from emitters table
emitter = db.query(Emitter).filter(Emitter.id == unit_id).first()
if emitter:
db.delete(emitter)
deleted = True
# Try to delete from ignored_units table
ignored_unit = db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first()
if ignored_unit:
db.delete(ignored_unit)
deleted = True
# If not found in any table, return error
if not deleted:
raise HTTPException(status_code=404, detail="Unit not found")
db.commit()
return {"message": "Unit deleted", "id": unit_id}
@router.post("/set-note/{unit_id}")
def set_note(unit_id: str, note: str = Form(""), db: Session = Depends(get_db)):
unit = get_or_create_roster_unit(db, unit_id)
old_note = unit.note
unit.note = note
unit.last_updated = datetime.utcnow()
# Record history entry for note change
if old_note != note:
record_history(
db=db,
unit_id=unit_id,
change_type="note_change",
field_name="note",
old_value=old_note,
new_value=note,
source="manual"
)
db.commit()
return {"message": "Updated", "id": unit_id, "note": note}
@router.post("/import-csv")
async def import_csv(
file: UploadFile = File(...),
update_existing: bool = Form(True),
db: Session = Depends(get_db)
):
"""
Import roster units from CSV file.
Expected CSV columns (unit_id is required, others are optional):
- unit_id: Unique identifier for the unit
- unit_type: Type of unit (default: "series3")
- deployed: Boolean for deployment status (default: False)
- retired: Boolean for retirement status (default: False)
- note: Notes about the unit
- project_id: Project identifier
- location: Location description
Args:
file: CSV file upload
update_existing: If True, update existing units; if False, skip them
"""
if not file.filename.endswith('.csv'):
raise HTTPException(status_code=400, detail="File must be a CSV")
# Read file content
contents = await file.read()
csv_text = contents.decode('utf-8')
csv_reader = csv.DictReader(io.StringIO(csv_text))
results = {
"added": [],
"updated": [],
"skipped": [],
"errors": []
}
for row_num, row in enumerate(csv_reader, start=2): # Start at 2 to account for header
try:
# Validate required field
unit_id = row.get('unit_id', '').strip()
if not unit_id:
results["errors"].append({
"row": row_num,
"error": "Missing required field: unit_id"
})
continue
# Check if unit exists
existing_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if existing_unit:
if not update_existing:
results["skipped"].append(unit_id)
continue
# Update existing unit
existing_unit.unit_type = row.get('unit_type', existing_unit.unit_type or 'series3')
existing_unit.deployed = row.get('deployed', '').lower() in ('true', '1', 'yes') if row.get('deployed') else existing_unit.deployed
existing_unit.retired = row.get('retired', '').lower() in ('true', '1', 'yes') if row.get('retired') else existing_unit.retired
existing_unit.note = row.get('note', existing_unit.note or '')
existing_unit.project_id = row.get('project_id', existing_unit.project_id)
existing_unit.location = row.get('location', existing_unit.location)
existing_unit.address = row.get('address', existing_unit.address)
existing_unit.coordinates = row.get('coordinates', existing_unit.coordinates)
existing_unit.last_updated = datetime.utcnow()
results["updated"].append(unit_id)
else:
# Create new unit
new_unit = RosterUnit(
id=unit_id,
unit_type=row.get('unit_type', 'series3'),
deployed=row.get('deployed', '').lower() in ('true', '1', 'yes'),
retired=row.get('retired', '').lower() in ('true', '1', 'yes'),
note=row.get('note', ''),
project_id=row.get('project_id'),
location=row.get('location'),
address=row.get('address'),
coordinates=row.get('coordinates'),
last_updated=datetime.utcnow()
)
db.add(new_unit)
results["added"].append(unit_id)
except Exception as e:
results["errors"].append({
"row": row_num,
"unit_id": row.get('unit_id', 'unknown'),
"error": str(e)
})
# Commit all changes
try:
db.commit()
except Exception as e:
db.rollback()
raise HTTPException(status_code=500, detail=f"Database error: {str(e)}")
return {
"message": "CSV import completed",
"summary": {
"added": len(results["added"]),
"updated": len(results["updated"]),
"skipped": len(results["skipped"]),
"errors": len(results["errors"])
},
"details": results
}
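# Example CSV accepted by this endpoint (columns match the docstring above;
# only unit_id is required, rows are illustrative):
#
#   unit_id,unit_type,deployed,retired,note,project_id,location
#   BE1234,series3,true,false,North wall array,PRJ-001,Site A
#   UM12345,series4,false,false,,,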
@router.post("/ignore/{unit_id}")
def ignore_unit(unit_id: str, reason: str = Form(""), db: Session = Depends(get_db)):
"""
Add a unit to the ignore list to suppress it from unknown emitters.
"""
# Check if already ignored
if db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first():
raise HTTPException(status_code=400, detail="Unit already ignored")
ignored = IgnoredUnit(
id=unit_id,
reason=reason,
ignored_at=datetime.utcnow()
)
db.add(ignored)
db.commit()
return {"message": "Unit ignored", "id": unit_id}
@router.delete("/ignore/{unit_id}")
def unignore_unit(unit_id: str, db: Session = Depends(get_db)):
"""
Remove a unit from the ignore list.
"""
ignored = db.query(IgnoredUnit).filter(IgnoredUnit.id == unit_id).first()
if not ignored:
raise HTTPException(status_code=404, detail="Unit not in ignore list")
db.delete(ignored)
db.commit()
return {"message": "Unit unignored", "id": unit_id}
@router.get("/ignored")
def list_ignored_units(db: Session = Depends(get_db)):
"""
Get list of all ignored units.
"""
ignored_units = db.query(IgnoredUnit).all()
return {
"ignored": [
{
"id": unit.id,
"reason": unit.reason,
"ignored_at": unit.ignored_at.isoformat()
}
for unit in ignored_units
]
}
@router.get("/history/{unit_id}")
def get_unit_history(unit_id: str, db: Session = Depends(get_db)):
"""
Get complete history timeline for a unit.
Returns all historical changes ordered by most recent first.
"""
history_entries = db.query(UnitHistory).filter(
UnitHistory.unit_id == unit_id
).order_by(UnitHistory.changed_at.desc()).all()
return {
"unit_id": unit_id,
"history": [
{
"id": entry.id,
"change_type": entry.change_type,
"field_name": entry.field_name,
"old_value": entry.old_value,
"new_value": entry.new_value,
"changed_at": entry.changed_at.isoformat(),
"source": entry.source,
"notes": entry.notes
}
for entry in history_entries
]
}
@router.delete("/history/{history_id}")
def delete_history_entry(history_id: int, db: Session = Depends(get_db)):
"""
Delete a specific history entry by ID.
Allows manual cleanup of old history entries.
"""
history_entry = db.query(UnitHistory).filter(UnitHistory.id == history_id).first()
if not history_entry:
raise HTTPException(status_code=404, detail="History entry not found")
db.delete(history_entry)
db.commit()
return {"message": "History entry deleted", "id": history_id}

@@ -1,81 +0,0 @@
"""
Seismograph Dashboard API Router
Provides endpoints for the seismograph-specific dashboard
"""
from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from sqlalchemy.orm import Session
from app.seismo.database import get_db
from app.seismo.models import RosterUnit
router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"])
templates = Jinja2Templates(directory="templates")
@router.get("/stats", response_class=HTMLResponse)
async def get_seismo_stats(request: Request, db: Session = Depends(get_db)):
"""
Returns HTML partial with seismograph statistics summary
"""
# Get all seismograph units
seismos = db.query(RosterUnit).filter_by(
device_type="seismograph",
retired=False
).all()
total = len(seismos)
deployed = sum(1 for s in seismos if s.deployed)
benched = sum(1 for s in seismos if not s.deployed)
# Count modems assigned to deployed seismographs
with_modem = sum(1 for s in seismos if s.deployed and s.deployed_with_modem_id)
without_modem = deployed - with_modem
return templates.TemplateResponse(
"partials/seismo_stats.html",
{
"request": request,
"total": total,
"deployed": deployed,
"benched": benched,
"with_modem": with_modem,
"without_modem": without_modem
}
)
@router.get("/units", response_class=HTMLResponse)
async def get_seismo_units(
request: Request,
db: Session = Depends(get_db),
search: str = Query(None)
):
"""
Returns HTML partial with filterable seismograph unit list
"""
query = db.query(RosterUnit).filter_by(
device_type="seismograph",
retired=False
)
# Apply search filter
    if search:
        # ilike is already case-insensitive, so no manual lowercasing is needed
        pattern = f"%{search}%"
        query = query.filter(
            (RosterUnit.id.ilike(pattern)) |
            (RosterUnit.note.ilike(pattern)) |
            (RosterUnit.address.ilike(pattern))
        )
seismos = query.order_by(RosterUnit.id).all()
return templates.TemplateResponse(
"partials/seismo_unit_list.html",
{
"request": request,
"units": seismos,
"search": search or ""
}
)

@@ -1 +0,0 @@
# SLMM addon package for NL43 integration.

@@ -1,27 +0,0 @@
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os
# Ensure data directory exists for the SLMM addon
os.makedirs("data", exist_ok=True)
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/slmm.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
def get_db():
"""Dependency for database sessions."""
db = SessionLocal()
try:
yield db
finally:
db.close()
def get_db_session():
"""Get a database session directly (not as a dependency)."""
return SessionLocal()

@@ -1,116 +0,0 @@
import os
import logging
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from app.slm.database import Base, engine
from app.slm import routers
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
handlers=[
logging.StreamHandler(),
logging.FileHandler("data/slmm.log"),
],
)
logger = logging.getLogger(__name__)
# Ensure database tables exist for the addon
Base.metadata.create_all(bind=engine)
logger.info("Database tables initialized")
app = FastAPI(
title="SLMM NL43 Addon",
description="Standalone module for NL43 configuration and status APIs",
version="0.1.0",
)
# CORS configuration - use environment variable for allowed origins
# Default to "*" for development, but should be restricted in production
allowed_origins = os.getenv("CORS_ORIGINS", "*").split(",")
logger.info(f"CORS allowed origins: {allowed_origins}")
app.add_middleware(
CORSMiddleware,
allow_origins=allowed_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
templates = Jinja2Templates(directory="templates")
app.include_router(routers.router)
@app.get("/", response_class=HTMLResponse)
def index(request: Request):
return templates.TemplateResponse("index.html", {"request": request})
@app.get("/health")
async def health():
"""Basic health check endpoint."""
return {"status": "ok", "service": "slmm-nl43-addon"}
@app.get("/health/devices")
async def health_devices():
"""Enhanced health check that tests device connectivity."""
from sqlalchemy.orm import Session
from app.slm.database import SessionLocal
from app.slm.services import NL43Client
from app.slm.models import NL43Config
db: Session = SessionLocal()
device_status = []
try:
configs = db.query(NL43Config).filter_by(tcp_enabled=True).all()
for cfg in configs:
client = NL43Client(cfg.host, cfg.tcp_port, timeout=2.0, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password)
status = {
"unit_id": cfg.unit_id,
"host": cfg.host,
"port": cfg.tcp_port,
"reachable": False,
"error": None,
}
try:
# Try to connect (don't send command to avoid rate limiting issues)
import asyncio
reader, writer = await asyncio.wait_for(
asyncio.open_connection(cfg.host, cfg.tcp_port), timeout=2.0
)
writer.close()
await writer.wait_closed()
status["reachable"] = True
except Exception as e:
status["error"] = str(type(e).__name__)
logger.warning(f"Device {cfg.unit_id} health check failed: {e}")
device_status.append(status)
finally:
db.close()
all_reachable = all(d["reachable"] for d in device_status) if device_status else True
return {
"status": "ok" if all_reachable else "degraded",
"devices": device_status,
"total_devices": len(device_status),
"reachable_devices": sum(1 for d in device_status if d["reachable"]),
}
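# Example /health/devices response (values illustrative):
# {"status": "ok",
#  "devices": [{"unit_id": "SLM-43-01", "host": "192.168.1.50", "port": 2255,
#               "reachable": true, "error": null}],
#  "total_devices": 1, "reachable_devices": 1}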
if __name__ == "__main__":
import uvicorn
uvicorn.run("app.main:app", host="0.0.0.0", port=int(os.getenv("PORT", "8100")), reload=True)

@@ -1,43 +0,0 @@
from sqlalchemy import Column, String, DateTime, Boolean, Integer, Text, func
from app.slm.database import Base
class NL43Config(Base):
"""
NL43 connection/config metadata for the standalone SLMM addon.
"""
__tablename__ = "nl43_config"
unit_id = Column(String, primary_key=True, index=True)
host = Column(String, default="127.0.0.1")
tcp_port = Column(Integer, default=80) # NL43 TCP control port (via RX55)
tcp_enabled = Column(Boolean, default=True)
ftp_enabled = Column(Boolean, default=False)
ftp_username = Column(String, nullable=True) # FTP login username
ftp_password = Column(String, nullable=True) # FTP login password
web_enabled = Column(Boolean, default=False)
class NL43Status(Base):
"""
Latest NL43 status snapshot for quick dashboard/API access.
"""
__tablename__ = "nl43_status"
unit_id = Column(String, primary_key=True, index=True)
last_seen = Column(DateTime, default=func.now())
    measurement_state = Column(String, default="unknown")  # "Start" while measuring, "Stop" when stopped
measurement_start_time = Column(DateTime, nullable=True) # When measurement started (UTC)
counter = Column(String, nullable=True) # d0: Measurement interval counter (1-600)
lp = Column(String, nullable=True) # Instantaneous sound pressure level
leq = Column(String, nullable=True) # Equivalent continuous sound level
lmax = Column(String, nullable=True) # Maximum level
lmin = Column(String, nullable=True) # Minimum level
lpeak = Column(String, nullable=True) # Peak level
battery_level = Column(String, nullable=True)
power_source = Column(String, nullable=True)
sd_remaining_mb = Column(String, nullable=True)
sd_free_ratio = Column(String, nullable=True)
raw_payload = Column(Text, nullable=True)

File diff suppressed because it is too large

@@ -1,828 +0,0 @@
"""
NL43 TCP connector and snapshot persistence.
Implements simple per-request TCP calls to avoid long-lived socket complexity.
Extend to pooled connections/DRD streaming later.
"""
import asyncio
import contextlib
import logging
import time
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, List
from sqlalchemy.orm import Session
from ftplib import FTP
from pathlib import Path
from app.slm.models import NL43Status
logger = logging.getLogger(__name__)
@dataclass
class NL43Snapshot:
unit_id: str
measurement_state: str = "unknown"
counter: Optional[str] = None # d0: Measurement interval counter (1-600)
lp: Optional[str] = None # Instantaneous sound pressure level
leq: Optional[str] = None # Equivalent continuous sound level
lmax: Optional[str] = None # Maximum level
lmin: Optional[str] = None # Minimum level
lpeak: Optional[str] = None # Peak level
battery_level: Optional[str] = None
power_source: Optional[str] = None
sd_remaining_mb: Optional[str] = None
sd_free_ratio: Optional[str] = None
raw_payload: Optional[str] = None
def persist_snapshot(s: NL43Snapshot, db: Session):
"""Persist the latest snapshot for API/dashboard use."""
try:
row = db.query(NL43Status).filter_by(unit_id=s.unit_id).first()
if not row:
row = NL43Status(unit_id=s.unit_id)
db.add(row)
row.last_seen = datetime.utcnow()
# Track measurement start time by detecting state transition
previous_state = row.measurement_state
new_state = s.measurement_state
logger.info(f"State transition check for {s.unit_id}: '{previous_state}' -> '{new_state}'")
# Device returns "Start" when measuring, "Stop" when stopped
# Normalize to previous behavior for backward compatibility
is_measuring = new_state == "Start"
was_measuring = previous_state == "Start"
if not was_measuring and is_measuring:
# Measurement just started - record the start time
row.measurement_start_time = datetime.utcnow()
logger.info(f"✓ Measurement started on {s.unit_id} at {row.measurement_start_time}")
elif was_measuring and not is_measuring:
# Measurement stopped - clear the start time
row.measurement_start_time = None
logger.info(f"✓ Measurement stopped on {s.unit_id}")
row.measurement_state = new_state
row.counter = s.counter
row.lp = s.lp
row.leq = s.leq
row.lmax = s.lmax
row.lmin = s.lmin
row.lpeak = s.lpeak
row.battery_level = s.battery_level
row.power_source = s.power_source
row.sd_remaining_mb = s.sd_remaining_mb
row.sd_free_ratio = s.sd_free_ratio
row.raw_payload = s.raw_payload
db.commit()
except Exception as e:
db.rollback()
logger.error(f"Failed to persist snapshot for unit {s.unit_id}: {e}")
raise
# Rate limiting: NL43 requires ≥1 second between commands
_last_command_time = {}
_rate_limit_lock = asyncio.Lock()
class NL43Client:
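    """Per-request TCP client for a Rion NL-43 sound level meter.

    Usage sketch (host and port illustrative):
        client = NL43Client("192.168.1.50", 2255)
        snap = await client.request_dod()
    """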
def __init__(self, host: str, port: int, timeout: float = 5.0, ftp_username: str = None, ftp_password: str = None):
self.host = host
self.port = port
self.timeout = timeout
self.ftp_username = ftp_username or "anonymous"
self.ftp_password = ftp_password or ""
self.device_key = f"{host}:{port}"
async def _enforce_rate_limit(self):
"""Ensure ≥1 second between commands to the same device."""
async with _rate_limit_lock:
last_time = _last_command_time.get(self.device_key, 0)
elapsed = time.time() - last_time
if elapsed < 1.0:
wait_time = 1.0 - elapsed
logger.debug(f"Rate limiting: waiting {wait_time:.2f}s for {self.device_key}")
await asyncio.sleep(wait_time)
_last_command_time[self.device_key] = time.time()
async def _send_command(self, cmd: str) -> str:
"""Send ASCII command to NL43 device via TCP.
NL43 protocol returns two lines for query commands:
Line 1: Result code (R+0000 for success, error codes otherwise)
Line 2: Actual data (for query commands ending with '?')
"""
await self._enforce_rate_limit()
logger.info(f"Sending command to {self.device_key}: {cmd.strip()}")
try:
reader, writer = await asyncio.wait_for(
asyncio.open_connection(self.host, self.port), timeout=self.timeout
)
except asyncio.TimeoutError:
logger.error(f"Connection timeout to {self.device_key}")
raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
except Exception as e:
logger.error(f"Connection failed to {self.device_key}: {e}")
raise ConnectionError(f"Failed to connect to device: {str(e)}")
try:
writer.write(cmd.encode("ascii"))
await writer.drain()
# Read first line (result code)
first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
result_code = first_line_data.decode(errors="ignore").strip()
# Remove leading $ prompt if present
if result_code.startswith("$"):
result_code = result_code[1:].strip()
logger.info(f"Result code from {self.device_key}: {result_code}")
# Check result code
if result_code == "R+0000":
# Success - for query commands, read the second line with actual data
is_query = cmd.strip().endswith("?")
if is_query:
data_line = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
response = data_line.decode(errors="ignore").strip()
logger.debug(f"Data line from {self.device_key}: {response}")
return response
else:
# Setting command - return success code
return result_code
elif result_code == "R+0001":
raise ValueError("Command error - device did not recognize command")
elif result_code == "R+0002":
raise ValueError("Parameter error - invalid parameter value")
elif result_code == "R+0003":
raise ValueError("Spec/type error - command not supported by this device model")
elif result_code == "R+0004":
raise ValueError("Status error - device is in wrong state for this command")
else:
raise ValueError(f"Unknown result code: {result_code}")
except asyncio.TimeoutError:
logger.error(f"Response timeout from {self.device_key}")
raise TimeoutError(f"Device did not respond within {self.timeout}s")
except Exception as e:
logger.error(f"Communication error with {self.device_key}: {e}")
raise
finally:
writer.close()
with contextlib.suppress(Exception):
await writer.wait_closed()
async def request_dod(self) -> NL43Snapshot:
"""Request DOD (Data Output Display) snapshot from device.
Returns parsed measurement data from the device display.
"""
# _send_command now handles result code validation and returns the data line
resp = await self._send_command("DOD?\r\n")
# Validate response format
if not resp:
logger.warning(f"Empty data response from DOD command on {self.device_key}")
raise ValueError("Device returned empty data for DOD? command")
# Remove leading $ prompt if present (shouldn't be there after _send_command, but be safe)
if resp.startswith("$"):
resp = resp[1:].strip()
parts = [p.strip() for p in resp.split(",") if p.strip() != ""]
# DOD should return at least some data points
if len(parts) < 2:
logger.error(f"Malformed DOD data from {self.device_key}: {resp}")
raise ValueError(f"Malformed DOD data: expected comma-separated values, got: {resp}")
logger.info(f"Parsed {len(parts)} data points from DOD response")
# Query actual measurement state (DOD doesn't include this information)
try:
measurement_state = await self.get_measurement_state()
except Exception as e:
logger.warning(f"Failed to get measurement state, defaulting to 'Measure': {e}")
measurement_state = "Measure"
snap = NL43Snapshot(unit_id="", raw_payload=resp, measurement_state=measurement_state)
# Parse known positions (based on NL43 communication guide - DRD format)
# DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
try:
# Capture d0 (counter) for timer synchronization
if len(parts) >= 1:
snap.counter = parts[0] # d0: Measurement interval counter (1-600)
if len(parts) >= 2:
snap.lp = parts[1] # d1: Instantaneous sound pressure level
if len(parts) >= 3:
snap.leq = parts[2] # d2: Equivalent continuous sound level
if len(parts) >= 4:
snap.lmax = parts[3] # d3: Maximum level
if len(parts) >= 5:
snap.lmin = parts[4] # d4: Minimum level
if len(parts) >= 6:
snap.lpeak = parts[5] # d5: Peak level
except (IndexError, ValueError) as e:
logger.warning(f"Error parsing DOD data points: {e}")
return snap
async def start(self):
"""Start measurement on the device.
According to NL43 protocol: Measure,Start (no $ prefix, capitalized param)
"""
await self._send_command("Measure,Start\r\n")
async def stop(self):
"""Stop measurement on the device.
According to NL43 protocol: Measure,Stop (no $ prefix, capitalized param)
"""
await self._send_command("Measure,Stop\r\n")
async def set_store_mode_manual(self):
"""Set the device to Manual Store mode.
According to NL43 protocol: Store Mode,Manual sets manual storage mode
"""
await self._send_command("Store Mode,Manual\r\n")
logger.info(f"Store mode set to Manual on {self.device_key}")
async def manual_store(self):
"""Manually store the current measurement data.
According to NL43 protocol: Manual Store,Start executes storing
Parameter p1="Start" executes the storage operation
Device must be in Manual Store mode first
"""
await self._send_command("Manual Store,Start\r\n")
logger.info(f"Manual store executed on {self.device_key}")
async def pause(self):
"""Pause the current measurement."""
await self._send_command("Pause,On\r\n")
logger.info(f"Measurement paused on {self.device_key}")
async def resume(self):
"""Resume a paused measurement."""
await self._send_command("Pause,Off\r\n")
logger.info(f"Measurement resumed on {self.device_key}")
async def reset(self):
"""Reset the measurement data."""
await self._send_command("Reset\r\n")
logger.info(f"Measurement data reset on {self.device_key}")
async def get_measurement_state(self) -> str:
"""Get the current measurement state.
Returns: "Start" if measuring, "Stop" if stopped
"""
resp = await self._send_command("Measure?\r\n")
state = resp.strip()
logger.info(f"Measurement state on {self.device_key}: {state}")
return state
async def get_battery_level(self) -> str:
"""Get the battery level."""
resp = await self._send_command("Battery Level?\r\n")
logger.info(f"Battery level on {self.device_key}: {resp}")
return resp.strip()
async def get_clock(self) -> str:
"""Get the device clock time."""
resp = await self._send_command("Clock?\r\n")
logger.info(f"Clock on {self.device_key}: {resp}")
return resp.strip()
async def set_clock(self, datetime_str: str):
"""Set the device clock time.
Args:
datetime_str: Time in format YYYY/MM/DD,HH:MM:SS or YYYY/MM/DD HH:MM:SS
"""
# Device expects format: Clock,YYYY/MM/DD HH:MM:SS (space between date and time)
# Replace comma with space if present to normalize format
normalized = datetime_str.replace(',', ' ', 1)
await self._send_command(f"Clock,{normalized}\r\n")
logger.info(f"Clock set on {self.device_key} to {normalized}")
async def get_frequency_weighting(self, channel: str = "Main") -> str:
"""Get frequency weighting (A, C, Z, etc.).
Args:
channel: Main, Sub1, Sub2, or Sub3
"""
resp = await self._send_command(f"Frequency Weighting ({channel})?\r\n")
logger.info(f"Frequency weighting ({channel}) on {self.device_key}: {resp}")
return resp.strip()
async def set_frequency_weighting(self, weighting: str, channel: str = "Main"):
"""Set frequency weighting.
Args:
weighting: A, C, or Z
channel: Main, Sub1, Sub2, or Sub3
"""
await self._send_command(f"Frequency Weighting ({channel}),{weighting}\r\n")
logger.info(f"Frequency weighting ({channel}) set to {weighting} on {self.device_key}")
async def get_time_weighting(self, channel: str = "Main") -> str:
"""Get time weighting (F, S, I).
Args:
channel: Main, Sub1, Sub2, or Sub3
"""
resp = await self._send_command(f"Time Weighting ({channel})?\r\n")
logger.info(f"Time weighting ({channel}) on {self.device_key}: {resp}")
return resp.strip()
async def set_time_weighting(self, weighting: str, channel: str = "Main"):
"""Set time weighting.
Args:
weighting: F (Fast), S (Slow), or I (Impulse)
channel: Main, Sub1, Sub2, or Sub3
"""
await self._send_command(f"Time Weighting ({channel}),{weighting}\r\n")
logger.info(f"Time weighting ({channel}) set to {weighting} on {self.device_key}")
async def request_dlc(self) -> dict:
"""Request DLC (Data Last Calculation) - final stored measurement results.
This retrieves the complete calculation results from the last/current measurement,
including all statistical data. Similar to DOD but for final results.
Returns:
Dict with parsed DLC data
"""
resp = await self._send_command("DLC?\r\n")
logger.info(f"DLC data received from {self.device_key}: {resp[:100]}...")
# Parse DLC response - similar format to DOD
# The exact format depends on device configuration
# For now, return raw data - can be enhanced based on actual response format
return {
"raw_data": resp.strip(),
"device_key": self.device_key,
}
async def sleep(self):
"""Put the device into sleep mode to conserve battery.
Sleep mode is useful for battery conservation between scheduled measurements.
Device can be woken up remotely via TCP command or by pressing a button.
"""
await self._send_command("Sleep Mode,On\r\n")
logger.info(f"Device {self.device_key} entering sleep mode")
async def wake(self):
"""Wake the device from sleep mode.
Note: This may not work if the device is in deep sleep.
Physical button press might be required in some cases.
"""
await self._send_command("Sleep Mode,Off\r\n")
logger.info(f"Device {self.device_key} waking from sleep mode")
async def get_sleep_status(self) -> str:
"""Get the current sleep mode status."""
resp = await self._send_command("Sleep Mode?\r\n")
logger.info(f"Sleep mode status on {self.device_key}: {resp}")
return resp.strip()
async def stream_drd(self, callback):
"""Stream continuous DRD output from the device.
Opens a persistent connection and streams DRD data lines.
Calls the provided callback function with each parsed snapshot.
Args:
callback: Async function that receives NL43Snapshot objects
The stream continues until an exception occurs or the connection is closed.
Send SUB character (0x1A) to stop the stream.
"""
await self._enforce_rate_limit()
logger.info(f"Starting DRD stream for {self.device_key}")
try:
reader, writer = await asyncio.wait_for(
asyncio.open_connection(self.host, self.port), timeout=self.timeout
)
except asyncio.TimeoutError:
logger.error(f"DRD stream connection timeout to {self.device_key}")
raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
except Exception as e:
logger.error(f"DRD stream connection failed to {self.device_key}: {e}")
raise ConnectionError(f"Failed to connect to device: {str(e)}")
try:
# Start DRD streaming
writer.write(b"DRD?\r\n")
await writer.drain()
# Read initial result code
first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
result_code = first_line_data.decode(errors="ignore").strip()
if result_code.startswith("$"):
result_code = result_code[1:].strip()
logger.debug(f"DRD stream result code from {self.device_key}: {result_code}")
if result_code != "R+0000":
raise ValueError(f"DRD stream failed to start: {result_code}")
logger.info(f"DRD stream started successfully for {self.device_key}")
# Continuously read data lines
while True:
try:
line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=30.0)
line = line_data.decode(errors="ignore").strip()
if not line:
continue
# Remove leading $ if present
if line.startswith("$"):
line = line[1:].strip()
# Parse the DRD data (same format as DOD)
parts = [p.strip() for p in line.split(",") if p.strip() != ""]
if len(parts) < 2:
logger.warning(f"Malformed DRD data from {self.device_key}: {line}")
continue
snap = NL43Snapshot(unit_id="", raw_payload=line, measurement_state="Measure")
# Parse known positions (DRD format - same as DOD)
# DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
try:
# Capture d0 (counter) for timer synchronization
if len(parts) >= 1:
snap.counter = parts[0] # d0: Measurement interval counter (1-600)
if len(parts) >= 2:
snap.lp = parts[1] # d1: Instantaneous sound pressure level
if len(parts) >= 3:
snap.leq = parts[2] # d2: Equivalent continuous sound level
if len(parts) >= 4:
snap.lmax = parts[3] # d3: Maximum level
if len(parts) >= 5:
snap.lmin = parts[4] # d4: Minimum level
if len(parts) >= 6:
snap.lpeak = parts[5] # d5: Peak level
except (IndexError, ValueError) as e:
logger.warning(f"Error parsing DRD data points: {e}")
# Call the callback with the snapshot
await callback(snap)
except asyncio.TimeoutError:
logger.warning(f"DRD stream timeout (no data for 30s) from {self.device_key}")
break
except asyncio.IncompleteReadError:
logger.info(f"DRD stream closed by device {self.device_key}")
break
finally:
# Send SUB character to stop streaming
try:
writer.write(b"\x1A")
await writer.drain()
except Exception:
pass
writer.close()
with contextlib.suppress(Exception):
await writer.wait_closed()
logger.info(f"DRD stream ended for {self.device_key}")
async def set_measurement_time(self, preset: str):
"""Set measurement time preset.
Args:
preset: Time preset (10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
"""
await self._send_command(f"Measurement Time Preset Manual,{preset}\r\n")
logger.info(f"Set measurement time to {preset} on {self.device_key}")
async def get_measurement_time(self) -> str:
"""Get current measurement time preset.
Returns: Current time preset setting
"""
resp = await self._send_command("Measurement Time Preset Manual?\r\n")
return resp.strip()
async def set_leq_interval(self, preset: str):
"""Set Leq calculation interval preset.
Args:
preset: Interval preset (Off, 10s, 1m, 5m, 10m, 15m, 30m, 1h, 8h, 24h, or custom like "00:05:30")
"""
await self._send_command(f"Leq Calculation Interval Preset,{preset}\r\n")
logger.info(f"Set Leq interval to {preset} on {self.device_key}")
async def get_leq_interval(self) -> str:
"""Get current Leq calculation interval preset.
Returns: Current interval preset setting
"""
resp = await self._send_command("Leq Calculation Interval Preset?\r\n")
return resp.strip()
async def set_lp_interval(self, preset: str):
"""Set Lp store interval.
Args:
preset: Store interval (Off, 10ms, 25ms, 100ms, 200ms, 1s)
"""
await self._send_command(f"Lp Store Interval,{preset}\r\n")
logger.info(f"Set Lp interval to {preset} on {self.device_key}")
async def get_lp_interval(self) -> str:
"""Get current Lp store interval.
Returns: Current store interval setting
"""
resp = await self._send_command("Lp Store Interval?\r\n")
return resp.strip()
async def set_index_number(self, index: int):
"""Set index number for file numbering (Store Name).
Args:
index: Index number (0000-9999)
"""
if not 0 <= index <= 9999:
raise ValueError("Index must be between 0000 and 9999")
await self._send_command(f"Store Name,{index:04d}\r\n")
logger.info(f"Set store name (index) to {index:04d} on {self.device_key}")
async def get_index_number(self) -> str:
"""Get current index number (Store Name).
Returns: Current index number
"""
resp = await self._send_command("Store Name?\r\n")
return resp.strip()
async def get_overwrite_status(self) -> str:
"""Check if saved data exists at current store target.
This command checks whether saved data exists in the set store target
(store mode / store name / store address). Use this before storing
to prevent accidentally overwriting data.
Returns:
"None" - No data exists (safe to store)
"Exist" - Data exists (would overwrite)
"""
resp = await self._send_command("Overwrite?\r\n")
return resp.strip()
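# Sketch of a safe manual-store flow built on the check above (client object
# and the error-handling choice are assumptions):
#   await client.set_store_mode_manual()
#   if await client.get_overwrite_status() == "Exist":
#       raise RuntimeError("Store target already holds data; change the index first")
#   await client.manual_store()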
async def get_all_settings(self) -> dict:
"""Query all device settings for verification.
Returns: Dictionary with all current device settings
"""
settings = {}
# Measurement settings
try:
settings["measurement_state"] = await self.get_measurement_state()
except Exception as e:
settings["measurement_state"] = f"Error: {e}"
try:
settings["frequency_weighting"] = await self.get_frequency_weighting()
except Exception as e:
settings["frequency_weighting"] = f"Error: {e}"
try:
settings["time_weighting"] = await self.get_time_weighting()
except Exception as e:
settings["time_weighting"] = f"Error: {e}"
# Timing/interval settings
try:
settings["measurement_time"] = await self.get_measurement_time()
except Exception as e:
settings["measurement_time"] = f"Error: {e}"
try:
settings["leq_interval"] = await self.get_leq_interval()
except Exception as e:
settings["leq_interval"] = f"Error: {e}"
try:
settings["lp_interval"] = await self.get_lp_interval()
except Exception as e:
settings["lp_interval"] = f"Error: {e}"
try:
settings["index_number"] = await self.get_index_number()
except Exception as e:
settings["index_number"] = f"Error: {e}"
# Device info
try:
settings["battery_level"] = await self.get_battery_level()
except Exception as e:
settings["battery_level"] = f"Error: {e}"
try:
settings["clock"] = await self.get_clock()
except Exception as e:
settings["clock"] = f"Error: {e}"
# Sleep mode
try:
settings["sleep_mode"] = await self.get_sleep_status()
except Exception as e:
settings["sleep_mode"] = f"Error: {e}"
# FTP status
try:
settings["ftp_status"] = await self.get_ftp_status()
except Exception as e:
settings["ftp_status"] = f"Error: {e}"
logger.info(f"Retrieved all settings for {self.device_key}")
return settings
async def enable_ftp(self):
"""Enable FTP server on the device.
According to NL43 protocol: FTP,On enables the FTP server
"""
await self._send_command("FTP,On\r\n")
logger.info(f"FTP enabled on {self.device_key}")
async def disable_ftp(self):
"""Disable FTP server on the device.
According to NL43 protocol: FTP,Off disables the FTP server
"""
await self._send_command("FTP,Off\r\n")
logger.info(f"FTP disabled on {self.device_key}")
async def get_ftp_status(self) -> str:
"""Query FTP server status on the device.
Returns: "On" or "Off"
"""
resp = await self._send_command("FTP?\r\n")
logger.info(f"FTP status on {self.device_key}: {resp}")
return resp.strip()
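# Sketch of a remote-download flow combining the FTP helpers below (the local
# destination path is illustrative):
#   await client.enable_ftp()
#   for f in await client.list_ftp_files("/"):
#       if not f["is_dir"]:
#           await client.download_ftp_file(f["path"], f"/tmp/{f['name']}")
#   await client.disable_ftp()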
async def list_ftp_files(self, remote_path: str = "/") -> List[dict]:
"""List files on the device via FTP.
Args:
remote_path: Directory path on the device (default: root)
Returns:
List of file info dicts with 'name', 'size', 'modified', 'is_dir'
"""
logger.info(f"Listing FTP files on {self.device_key} at {remote_path}")
def _list_ftp_sync():
"""Synchronous FTP listing using ftplib (supports active mode)."""
ftp = FTP()
ftp.set_debuglevel(0)
try:
# Connect and login
ftp.connect(self.host, 21, timeout=10)
ftp.login(self.ftp_username, self.ftp_password)
ftp.set_pasv(False) # Force active mode
# Change to target directory
if remote_path != "/":
ftp.cwd(remote_path)
# Get directory listing with details
files = []
lines = []
ftp.retrlines('LIST', lines.append)
for line in lines:
# Parse Unix-style ls output
parts = line.split(None, 8)
if len(parts) < 9:
continue
is_dir = parts[0].startswith('d')
size = int(parts[4]) if not is_dir else 0
name = parts[8]
# Skip . and ..
if name in ('.', '..'):
continue
# Parse modification time
# Format: "Jan 07 14:23" or "Dec 25 2025"
modified_str = f"{parts[5]} {parts[6]} {parts[7]}"
modified_timestamp = None
try:
from datetime import datetime
# Try parsing with time (recent files: "Jan 07 14:23")
try:
dt = datetime.strptime(modified_str, "%b %d %H:%M")
# Add current year since it's not in the format
dt = dt.replace(year=datetime.now().year)
# If the resulting date is in the future, it's actually from last year
if dt > datetime.now():
dt = dt.replace(year=dt.year - 1)
modified_timestamp = dt.isoformat()
except ValueError:
# Try parsing with year (older files: "Dec 25 2025")
dt = datetime.strptime(modified_str, "%b %d %Y")
modified_timestamp = dt.isoformat()
except Exception as e:
logger.warning(f"Failed to parse timestamp '{modified_str}': {e}")
file_info = {
"name": name,
"path": f"{remote_path.rstrip('/')}/{name}",
"size": size,
"modified": modified_str, # Keep original string
"modified_timestamp": modified_timestamp, # Add parsed timestamp
"is_dir": is_dir,
}
files.append(file_info)
logger.debug(f"Found file: {file_info}")
logger.info(f"Found {len(files)} files/directories on {self.device_key}")
return files
finally:
try:
ftp.quit()
except Exception:
pass
try:
# Run synchronous FTP in thread pool
return await asyncio.to_thread(_list_ftp_sync)
except Exception as e:
logger.error(f"Failed to list FTP files on {self.device_key}: {e}")
raise ConnectionError(f"FTP connection failed: {str(e)}")
async def download_ftp_file(self, remote_path: str, local_path: str):
"""Download a file from the device via FTP.
Args:
remote_path: Full path to file on the device
local_path: Local path where file will be saved
"""
logger.info(f"Downloading {remote_path} from {self.device_key} to {local_path}")
def _download_ftp_sync():
"""Synchronous FTP download using ftplib (supports active mode)."""
ftp = FTP()
ftp.set_debuglevel(0)
try:
# Connect and login
ftp.connect(self.host, 21, timeout=10)
ftp.login(self.ftp_username, self.ftp_password)
ftp.set_pasv(False) # Force active mode
# Download file
with open(local_path, 'wb') as f:
ftp.retrbinary(f'RETR {remote_path}', f.write)
logger.info(f"Successfully downloaded {remote_path} to {local_path}")
finally:
try:
ftp.quit()
except Exception:
pass
try:
# Run synchronous FTP in thread pool
await asyncio.to_thread(_download_ftp_sync)
except Exception as e:
logger.error(f"Failed to download {remote_path} from {self.device_key}: {e}")
raise ConnectionError(f"FTP download failed: {str(e)}")

@@ -1,92 +0,0 @@
"""
UI Layer Routes - HTML page routes only (no business logic)
"""
from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse, FileResponse
from fastapi.templating import Jinja2Templates
import os
router = APIRouter()
# Setup Jinja2 templates
templates = Jinja2Templates(directory="app/ui/templates")
# Read environment (development or production)
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
VERSION = "1.0.0" # Terra-View version
# Override TemplateResponse to include environment and version in context
original_template_response = templates.TemplateResponse
def custom_template_response(name, context=None, *args, **kwargs):
if context is None:
context = {}
context["environment"] = ENVIRONMENT
context["version"] = VERSION
return original_template_response(name, context, *args, **kwargs)
templates.TemplateResponse = custom_template_response
# ===== HTML PAGE ROUTES =====
@router.get("/", response_class=HTMLResponse)
async def dashboard(request: Request):
"""Dashboard home page"""
return templates.TemplateResponse("dashboard.html", {"request": request})
@router.get("/roster", response_class=HTMLResponse)
async def roster_page(request: Request):
"""Fleet roster page"""
return templates.TemplateResponse("roster.html", {"request": request})
@router.get("/unit/{unit_id}", response_class=HTMLResponse)
async def unit_detail_page(request: Request, unit_id: str):
"""Unit detail page"""
return templates.TemplateResponse("unit_detail.html", {
"request": request,
"unit_id": unit_id
})
@router.get("/settings", response_class=HTMLResponse)
async def settings_page(request: Request):
"""Settings page for roster management"""
return templates.TemplateResponse("settings.html", {"request": request})
@router.get("/sound-level-meters", response_class=HTMLResponse)
async def sound_level_meters_page(request: Request):
"""Sound Level Meters management dashboard"""
return templates.TemplateResponse("sound_level_meters.html", {"request": request})
@router.get("/seismographs", response_class=HTMLResponse)
async def seismographs_page(request: Request):
"""Seismographs management dashboard"""
return templates.TemplateResponse("seismographs.html", {"request": request})
# ===== PWA ROUTES =====
@router.get("/sw.js")
async def service_worker():
"""Serve service worker with proper headers for PWA"""
return FileResponse(
"app/ui/static/sw.js",
media_type="application/javascript",
headers={
"Service-Worker-Allowed": "/",
"Cache-Control": "no-cache"
}
)
@router.get("/offline-db.js")
async def offline_db_script():
"""Serve offline database script"""
return FileResponse(
"app/ui/static/offline-db.js",
media_type="application/javascript",
headers={"Cache-Control": "no-cache"}
)

8 binary image files deleted (1.1–7.8 KiB each); contents not shown.
@@ -1,97 +0,0 @@
{% if units %}
<div class="overflow-x-auto">
<table class="w-full">
<thead class="bg-gray-50 dark:bg-slate-700 border-b border-gray-200 dark:border-gray-600">
<tr>
<th class="px-4 py-3 text-left text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Unit ID</th>
<th class="px-4 py-3 text-left text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Status</th>
<th class="px-4 py-3 text-left text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Modem</th>
<th class="px-4 py-3 text-left text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Location</th>
<th class="px-4 py-3 text-left text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Notes</th>
<th class="px-4 py-3 text-right text-xs font-medium text-gray-700 dark:text-gray-300 uppercase tracking-wider">Actions</th>
</tr>
</thead>
<tbody class="divide-y divide-gray-200 dark:divide-gray-700">
{% for unit in units %}
<tr class="hover:bg-gray-50 dark:hover:bg-slate-700 transition-colors">
<td class="px-4 py-3 whitespace-nowrap">
<a href="/unit/{{ unit.id }}" class="font-medium text-blue-600 dark:text-blue-400 hover:underline">
{{ unit.id }}
</a>
</td>
<td class="px-4 py-3 whitespace-nowrap">
{% if unit.deployed %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200">
<svg class="w-3 h-3 mr-1" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zm3.707-9.293a1 1 0 00-1.414-1.414L9 10.586 7.707 9.293a1 1 0 00-1.414 1.414l2 2a1 1 0 001.414 0l4-4z" clip-rule="evenodd"></path>
</svg>
Deployed
</span>
{% else %}
<span class="inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium bg-gray-100 text-gray-800 dark:bg-gray-700 dark:text-gray-300">
<svg class="w-3 h-3 mr-1" fill="currentColor" viewBox="0 0 20 20">
<path fill-rule="evenodd" d="M10 18a8 8 0 100-16 8 8 0 000 16zM8 7a1 1 0 00-1 1v4a1 1 0 001 1h4a1 1 0 001-1V8a1 1 0 00-1-1H8z" clip-rule="evenodd"></path>
</svg>
Benched
</span>
{% endif %}
</td>
<td class="px-4 py-3 whitespace-nowrap text-sm text-gray-900 dark:text-gray-300">
{% if unit.deployed_with_modem_id %}
<a href="/unit/{{ unit.deployed_with_modem_id }}" class="text-blue-600 dark:text-blue-400 hover:underline">
{{ unit.deployed_with_modem_id }}
</a>
{% else %}
<span class="text-gray-400 dark:text-gray-600">None</span>
{% endif %}
</td>
<td class="px-4 py-3 text-sm text-gray-900 dark:text-gray-300">
{% if unit.address %}
<span class="truncate max-w-xs inline-block" title="{{ unit.address }}">{{ unit.address }}</span>
{% elif unit.coordinates %}
<span class="text-gray-500 dark:text-gray-400">{{ unit.coordinates }}</span>
{% else %}
<span class="text-gray-400 dark:text-gray-600"></span>
{% endif %}
</td>
<td class="px-4 py-3 text-sm text-gray-700 dark:text-gray-400">
{% if unit.note %}
<span class="truncate max-w-xs inline-block" title="{{ unit.note }}">{{ unit.note }}</span>
{% else %}
<span class="text-gray-400 dark:text-gray-600"></span>
{% endif %}
</td>
<td class="px-4 py-3 whitespace-nowrap text-right text-sm">
<a href="/unit/{{ unit.id }}" class="text-blue-600 dark:text-blue-400 hover:underline">
View Details →
</a>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% if search %}
<div class="mt-4 text-sm text-gray-600 dark:text-gray-400">
Found {{ units|length }} seismograph(s) matching "{{ search }}"
</div>
{% endif %}
{% else %}
<div class="text-center py-12">
<svg class="mx-auto h-12 w-12 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 19v-6a2 2 0 00-2-2H5a2 2 0 00-2 2v6a2 2 0 002 2h2a2 2 0 002-2zm0 0V9a2 2 0 012-2h2a2 2 0 012 2v10m-6 0a2 2 0 002 2h2a2 2 0 002-2m0 0V5a2 2 0 012-2h2a2 2 0 012 2v14a2 2 0 01-2 2h-2a2 2 0 01-2-2z"></path>
</svg>
<h3 class="mt-2 text-sm font-medium text-gray-900 dark:text-white">No seismographs found</h3>
{% if search %}
<p class="mt-1 text-sm text-gray-500 dark:text-gray-400">No seismographs match "{{ search }}"</p>
<button onclick="document.getElementById('seismo-search').value = ''; htmx.trigger('#seismo-search', 'keyup');"
class="mt-3 text-blue-600 dark:text-blue-400 hover:underline">
Clear search
</button>
{% else %}
<p class="mt-1 text-sm text-gray-500 dark:text-gray-400">Get started by adding a seismograph unit from the roster page.</p>
{% endif %}
</div>
{% endif %}

File diff suppressed because it is too large.

@@ -1,76 +0,0 @@
{% extends "base.html" %}
{% block title %}Seismographs - Seismo Fleet Manager{% endblock %}
{% block content %}
<div class="mb-8">
<h1 class="text-3xl font-bold text-gray-900 dark:text-white">Seismographs</h1>
<p class="text-gray-600 dark:text-gray-400 mt-1">Manage and monitor seismograph units</p>
</div>
<!-- Auto-refresh stats every 30 seconds -->
<div hx-get="/api/seismo-dashboard/stats"
hx-trigger="load, every 30s"
hx-swap="innerHTML"
class="mb-8">
<div class="grid grid-cols-1 md:grid-cols-4 gap-4">
<div class="rounded-lg shadow bg-white dark:bg-slate-800 p-6">
<div class="flex items-center justify-between">
<div>
<p class="text-gray-600 dark:text-gray-400 text-sm">Loading...</p>
<p class="text-3xl font-bold text-gray-900 dark:text-white mt-2">--</p>
</div>
</div>
</div>
</div>
</div>
<!-- Seismograph List -->
<div class="rounded-xl shadow-lg bg-white dark:bg-slate-800 p-6">
<div class="mb-4 flex flex-col sm:flex-row sm:items-center sm:justify-between gap-4">
<h2 class="text-xl font-semibold text-gray-900 dark:text-white">All Seismographs</h2>
<!-- Search Box -->
<div class="relative">
<input
type="text"
id="seismo-search"
placeholder="Search seismographs..."
class="w-full sm:w-64 px-4 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-slate-700 text-gray-900 dark:text-white focus:ring-2 focus:ring-blue-500 focus:border-transparent"
hx-get="/api/seismo-dashboard/units"
hx-trigger="keyup changed delay:300ms"
hx-target="#seismo-units-list"
hx-include="[name='search']"
name="search"
/>
<svg class="absolute right-3 top-2.5 w-5 h-5 text-gray-400" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M21 21l-6-6m2-5a7 7 0 11-14 0 7 7 0 0114 0z"></path>
</svg>
</div>
</div>
<!-- Units List (loaded via HTMX) -->
<div id="seismo-units-list"
hx-get="/api/seismo-dashboard/units"
hx-trigger="load"
hx-swap="innerHTML">
<p class="text-gray-500 dark:text-gray-400">Loading seismographs...</p>
</div>
</div>
<script>
// Clear search input on escape key
document.addEventListener('DOMContentLoaded', function() {
const searchInput = document.getElementById('seismo-search');
if (searchInput) {
searchInput.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
this.value = '';
htmx.trigger(this, 'keyup');
}
});
}
});
</script>
{% endblock %}

@@ -1,195 +0,0 @@
{% extends "base.html" %}
{% block title %}{{ unit_id }} - Sound Level Meter{% endblock %}
{% block content %}
<div class="mb-6">
<a href="/roster" class="text-seismo-orange hover:text-seismo-orange-dark flex items-center">
<svg class="w-4 h-4 mr-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15 19l-7-7 7-7"></path>
</svg>
Back to Roster
</a>
</div>
<div class="mb-8">
<div class="flex justify-between items-start">
<div>
<h1 class="text-3xl font-bold text-gray-900 dark:text-white flex items-center">
<svg class="w-8 h-8 mr-3 text-seismo-orange" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M9 19V6l12-3v13M9 19c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zm12-3c0 1.105-1.343 2-3 2s-3-.895-3-2 1.343-2 3-2 3 .895 3 2zM9 10l12-3">
</path>
</svg>
{{ unit_id }}
</h1>
<p class="text-gray-600 dark:text-gray-400 mt-1">
{{ unit.slm_model or 'NL-43' }} Sound Level Meter
</p>
</div>
<div class="flex gap-2">
<span class="px-3 py-1 rounded-full text-sm font-medium
{% if unit.deployed %}bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200
{% else %}bg-gray-100 text-gray-800 dark:bg-gray-700 dark:text-gray-200{% endif %}">
{% if unit.deployed %}Deployed{% else %}Benched{% endif %}
</span>
</div>
</div>
</div>
<!-- Control Panel -->
<div class="mb-8">
<h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Control Panel</h2>
<div hx-get="/slm/partials/{{ unit_id }}/controls"
hx-trigger="load, every 5s"
hx-swap="innerHTML">
<div class="text-center py-8 text-gray-500">Loading controls...</div>
</div>
</div>
<!-- Real-time Data Stream -->
<div class="mb-8">
<h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Real-time Measurements</h2>
<div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
<div id="slm-stream-container">
<div class="text-center py-8">
<button onclick="startStream()"
id="stream-start-btn"
class="px-6 py-3 bg-seismo-orange text-white rounded-lg hover:bg-seismo-orange-dark transition-colors">
Start Real-time Stream
</button>
<p class="text-sm text-gray-500 mt-2">Click to begin streaming live measurement data</p>
</div>
<div id="stream-data" class="hidden">
<div class="grid grid-cols-2 md:grid-cols-4 gap-4 mb-4">
<div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
<div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lp (Instant)</div>
<div id="stream-lp" class="text-3xl font-bold text-gray-900 dark:text-white">--</div>
<div class="text-xs text-gray-500">dB</div>
</div>
<div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
<div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Leq (Average)</div>
<div id="stream-leq" class="text-3xl font-bold text-blue-600 dark:text-blue-400">--</div>
<div class="text-xs text-gray-500">dB</div>
</div>
<div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
<div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lmax</div>
<div id="stream-lmax" class="text-3xl font-bold text-red-600 dark:text-red-400">--</div>
<div class="text-xs text-gray-500">dB</div>
</div>
<div class="bg-gray-50 dark:bg-gray-900 rounded-lg p-4">
<div class="text-sm text-gray-600 dark:text-gray-400 mb-1">Lmin</div>
<div id="stream-lmin" class="text-3xl font-bold text-green-600 dark:text-green-400">--</div>
<div class="text-xs text-gray-500">dB</div>
</div>
</div>
<div class="flex justify-between items-center">
<div class="text-xs text-gray-500">
<span class="inline-block w-2 h-2 bg-green-500 rounded-full mr-2 animate-pulse"></span>
Streaming
</div>
<button onclick="stopStream()"
class="px-4 py-2 bg-red-600 text-white text-sm rounded-lg hover:bg-red-700 transition-colors">
Stop Stream
</button>
</div>
</div>
</div>
</div>
</div>
<!-- Device Information -->
<div class="mb-8">
<h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Device Information</h2>
<div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
<div class="grid grid-cols-1 md:grid-cols-2 gap-4">
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">Model</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_model or 'NL-43' }}</div>
</div>
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">Serial Number</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_serial_number or 'N/A' }}</div>
</div>
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">Host</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_host or 'Not configured' }}</div>
</div>
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">TCP Port</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_tcp_port or 'N/A' }}</div>
</div>
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">Frequency Weighting</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_frequency_weighting or 'A' }}</div>
</div>
<div>
<div class="text-sm text-gray-600 dark:text-gray-400">Time Weighting</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.slm_time_weighting or 'F (Fast)' }}</div>
</div>
<div class="md:col-span-2">
<div class="text-sm text-gray-600 dark:text-gray-400">Location</div>
<div class="text-lg font-medium text-gray-900 dark:text-white">{{ unit.address or unit.location or 'Not specified' }}</div>
</div>
{% if unit.note %}
<div class="md:col-span-2">
<div class="text-sm text-gray-600 dark:text-gray-400">Notes</div>
<div class="text-gray-900 dark:text-white">{{ unit.note }}</div>
</div>
{% endif %}
</div>
</div>
</div>
<script>
let ws = null;
function startStream() {
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const wsUrl = `${protocol}//${window.location.host}/api/slmm/{{ unit_id }}/stream`;
ws = new WebSocket(wsUrl);
ws.onopen = () => {
document.getElementById('stream-start-btn').classList.add('hidden');
document.getElementById('stream-data').classList.remove('hidden');
console.log('WebSocket connected');
};
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.error) {
console.error('Stream error:', data.error);
stopStream();
alert('Error: ' + data.error);
return;
}
// Update values
document.getElementById('stream-lp').textContent = data.lp || '--';
document.getElementById('stream-leq').textContent = data.leq || '--';
document.getElementById('stream-lmax').textContent = data.lmax || '--';
document.getElementById('stream-lmin').textContent = data.lmin || '--';
};
ws.onerror = (error) => {
console.error('WebSocket error:', error);
stopStream();
};
ws.onclose = () => {
console.log('WebSocket closed');
};
}
function stopStream() {
if (ws) {
ws.close();
ws = null;
}
document.getElementById('stream-start-btn').classList.remove('hidden');
document.getElementById('stream-data').classList.add('hidden');
}
</script>
{% endblock %}

@@ -1,249 +0,0 @@
{% extends "base.html" %}
{% block title %}Sound Level Meters - Seismo Fleet Manager{% endblock %}
{% block content %}
<div class="mb-8">
<h1 class="text-3xl font-bold text-gray-900 dark:text-white">Sound Level Meters</h1>
<p class="text-gray-600 dark:text-gray-400 mt-1">Monitor and manage sound level measurement devices</p>
</div>
<!-- Summary Stats -->
<div class="grid grid-cols-1 md:grid-cols-4 gap-6 mb-8"
hx-get="/api/slm-dashboard/stats"
hx-trigger="load, every 10s"
hx-swap="innerHTML">
<!-- Stats will be loaded here -->
<div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
<div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
<div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
<div class="animate-pulse bg-gray-200 dark:bg-gray-700 h-24 rounded-xl"></div>
</div>
<!-- Main Content Grid -->
<div class="grid grid-cols-1 lg:grid-cols-3 gap-6">
<!-- SLM List -->
<div class="lg:col-span-1">
<div class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
<h2 class="text-xl font-semibold text-gray-900 dark:text-white mb-4">Active Units</h2>
<!-- Search/Filter -->
<div class="mb-4">
<input type="text"
placeholder="Search units..."
class="w-full px-4 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-white"
hx-get="/api/slm-dashboard/units"
hx-trigger="keyup changed delay:300ms"
hx-target="#slm-list"
hx-include="this"
name="search">
</div>
<!-- SLM List -->
<div id="slm-list"
class="space-y-2 max-h-[600px] overflow-y-auto"
hx-get="/api/slm-dashboard/units"
hx-trigger="load, every 10s"
hx-swap="innerHTML">
<!-- Loading skeleton -->
<div class="animate-pulse space-y-2">
<div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
<div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
<div class="bg-gray-200 dark:bg-gray-700 h-20 rounded-lg"></div>
</div>
</div>
</div>
</div>
<!-- Live View Panel -->
<div class="lg:col-span-2">
<div id="live-view-panel" class="bg-white dark:bg-slate-800 rounded-xl shadow-lg p-6">
<!-- Initial state - no unit selected -->
<div class="flex flex-col items-center justify-center h-[600px] text-gray-400 dark:text-gray-500">
<svg class="w-24 h-24 mb-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M15.536 8.464a5 5 0 010 7.072m2.828-9.9a9 9 0 010 12.728M5.586 15H4a1 1 0 01-1-1v-4a1 1 0 011-1h1.586l4.707-4.707C10.923 3.663 12 4.109 12 5v14c0 .891-1.077 1.337-1.707.707L5.586 15z"></path>
</svg>
<p class="text-lg font-medium">No unit selected</p>
<p class="text-sm mt-2">Select a sound level meter from the list to view live data</p>
</div>
</div>
</div>
</div>
<!-- Configuration Modal -->
<div id="config-modal" class="hidden fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
<div class="bg-white dark:bg-slate-800 rounded-xl p-6 max-w-2xl w-full mx-4 max-h-[90vh] overflow-y-auto">
<div class="flex items-center justify-between mb-6">
<h3 class="text-2xl font-bold text-gray-900 dark:text-white">Configure SLM</h3>
<button onclick="closeConfigModal()" class="text-gray-500 hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-200">
<svg class="w-6 h-6" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12"></path>
</svg>
</button>
</div>
<div id="config-modal-content">
<!-- Content loaded via HTMX -->
<div class="animate-pulse space-y-4">
<div class="h-4 bg-gray-200 dark:bg-gray-700 rounded w-3/4"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-700 rounded"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-700 rounded w-5/6"></div>
</div>
</div>
</div>
</div>
<script>
// Function to select a unit and load live view
function selectUnit(unitId) {
// Remove active state from all items
document.querySelectorAll('.slm-unit-item').forEach(item => {
item.classList.remove('bg-seismo-orange', 'text-white');
item.classList.add('bg-gray-100', 'dark:bg-gray-700');
});
// Add active state to clicked item
event.currentTarget.classList.remove('bg-gray-100', 'dark:bg-gray-700');
event.currentTarget.classList.add('bg-seismo-orange', 'text-white');
// Load live view for this unit
htmx.ajax('GET', `/api/slm-dashboard/live-view/${unitId}`, {
target: '#live-view-panel',
swap: 'innerHTML'
});
}
// Configuration modal functions
function openConfigModal(unitId) {
const modal = document.getElementById('config-modal');
modal.classList.remove('hidden');
// Load configuration form via HTMX
htmx.ajax('GET', `/api/slm-dashboard/config/${unitId}`, {
target: '#config-modal-content',
swap: 'innerHTML'
});
}
function closeConfigModal() {
document.getElementById('config-modal').classList.add('hidden');
}
// Close modal on escape key
document.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
closeConfigModal();
}
});
// Close modal when clicking outside
document.getElementById('config-modal')?.addEventListener('click', function(e) {
if (e.target === this) {
closeConfigModal();
}
});
// Initialize WebSocket for selected unit
let currentWebSocket = null;
function initLiveDataStream(unitId) {
// Close existing connection if any
if (currentWebSocket) {
currentWebSocket.close();
}
// WebSocket URL for SLMM backend via proxy
const wsProtocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const wsUrl = `${wsProtocol}//${window.location.host}/api/slmm/${unitId}/live`;
currentWebSocket = new WebSocket(wsUrl);
currentWebSocket.onopen = function() {
console.log('WebSocket connected');
// Toggle button visibility
const startBtn = document.getElementById('start-stream-btn');
const stopBtn = document.getElementById('stop-stream-btn');
if (startBtn) startBtn.style.display = 'none';
if (stopBtn) stopBtn.style.display = 'flex';
};
currentWebSocket.onmessage = function(event) {
const data = JSON.parse(event.data);
updateLiveChart(data);
updateLiveMetrics(data);
};
currentWebSocket.onerror = function(error) {
console.error('WebSocket error:', error);
};
currentWebSocket.onclose = function() {
console.log('WebSocket closed');
// Toggle button visibility
const startBtn = document.getElementById('start-stream-btn');
const stopBtn = document.getElementById('stop-stream-btn');
if (startBtn) startBtn.style.display = 'flex';
if (stopBtn) stopBtn.style.display = 'none';
};
}
function stopLiveDataStream() {
if (currentWebSocket) {
currentWebSocket.close();
currentWebSocket = null;
}
}
// Update live chart with new data point
let chartData = {
timestamps: [],
lp: [],
leq: []
};
function updateLiveChart(data) {
const now = new Date();
chartData.timestamps.push(now.toLocaleTimeString());
chartData.lp.push(parseFloat(data.lp || 0));
chartData.leq.push(parseFloat(data.leq || 0));
// Keep only last 60 data points (1 minute at 1 sample/sec)
if (chartData.timestamps.length > 60) {
chartData.timestamps.shift();
chartData.lp.shift();
chartData.leq.shift();
}
// Update chart (using Chart.js if available)
if (window.liveChart) {
window.liveChart.data.labels = chartData.timestamps;
window.liveChart.data.datasets[0].data = chartData.lp;
window.liveChart.data.datasets[1].data = chartData.leq;
window.liveChart.update('none'); // Update without animation for smooth real-time
}
}
function updateLiveMetrics(data) {
// Update metric displays
if (document.getElementById('live-lp')) {
document.getElementById('live-lp').textContent = data.lp || '--';
}
if (document.getElementById('live-leq')) {
document.getElementById('live-leq').textContent = data.leq || '--';
}
if (document.getElementById('live-lmax')) {
document.getElementById('live-lmax').textContent = data.lmax || '--';
}
if (document.getElementById('live-lmin')) {
document.getElementById('live-lmin').textContent = data.lmin || '--';
}
}
// Cleanup on page unload
window.addEventListener('beforeunload', function() {
if (currentWebSocket) {
currentWebSocket.close();
}
});
</script>
{% endblock %}

File diff suppressed because it is too large.

1 binary image file added (36 KiB); contents not shown.

backend/init_projects_db.py (new file, 108 lines)

@@ -0,0 +1,108 @@
#!/usr/bin/env python3
"""
Database initialization script for Projects system.
This script creates the new project management tables and populates
the project_types table with default templates.
Usage:
python -m backend.init_projects_db
"""
from sqlalchemy.orm import Session
from backend.database import engine, SessionLocal
from backend.models import (
Base,
ProjectType,
Project,
MonitoringLocation,
UnitAssignment,
ScheduledAction,
MonitoringSession,
DataFile,
)
from datetime import datetime
def init_project_types(db: Session):
"""Initialize default project types."""
project_types = [
{
"id": "sound_monitoring",
"name": "Sound Monitoring",
"description": "Noise monitoring projects with sound level meters and NRLs (Noise Recording Locations)",
"icon": "volume-2", # Lucide icon name
"supports_sound": True,
"supports_vibration": False,
},
{
"id": "vibration_monitoring",
"name": "Vibration Monitoring",
"description": "Seismic/vibration monitoring projects with seismographs and monitoring points",
"icon": "activity", # Lucide icon name
"supports_sound": False,
"supports_vibration": True,
},
{
"id": "combined",
"name": "Combined Monitoring",
"description": "Full-spectrum monitoring with both sound and vibration capabilities",
"icon": "layers", # Lucide icon name
"supports_sound": True,
"supports_vibration": True,
},
]
for pt_data in project_types:
existing = db.query(ProjectType).filter_by(id=pt_data["id"]).first()
if not existing:
pt = ProjectType(**pt_data)
db.add(pt)
print(f"✓ Created project type: {pt_data['name']}")
else:
print(f" Project type already exists: {pt_data['name']}")
db.commit()
def create_tables():
"""Create all tables defined in models."""
print("Creating project management tables...")
Base.metadata.create_all(bind=engine)
print("✓ Tables created successfully")
def main():
print("=" * 60)
print("Terra-View Projects System - Database Initialization")
print("=" * 60)
print()
# Create tables
create_tables()
print()
# Initialize project types
db = SessionLocal()
try:
print("Initializing project types...")
init_project_types(db)
print()
print("=" * 60)
print("✓ Database initialization complete!")
print("=" * 60)
print()
print("Next steps:")
print(" 1. Restart Terra-View to load new routes")
print(" 2. Navigate to /projects to create your first project")
print(" 3. Check documentation for API endpoints")
except Exception as e:
print(f"✗ Error during initialization: {e}")
db.rollback()
raise
finally:
db.close()
if __name__ == "__main__":
main()

backend/main.py (new file, 854 lines)

@@ -0,0 +1,854 @@
import os
import logging
from fastapi import FastAPI, Request, Depends, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, FileResponse, JSONResponse
from fastapi.exceptions import RequestValidationError
from sqlalchemy.orm import Session
from typing import List, Dict, Optional
from pydantic import BaseModel
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
from backend.database import engine, Base, get_db
from backend.routers import roster, units, photos, roster_edit, roster_rename, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard, projects, project_locations, scheduler, modem_dashboard
from backend.services.snapshot import emit_status_snapshot
from backend.models import IgnoredUnit
from backend.utils.timezone import get_user_timezone
# Create database tables
Base.metadata.create_all(bind=engine)
# Read environment (development or production)
ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
# Initialize FastAPI app
VERSION = "0.9.3"
if ENVIRONMENT == "development":
_build = os.getenv("BUILD_NUMBER", "0")
if _build and _build != "0":
VERSION = f"{VERSION}-{_build}"
app = FastAPI(
title="Seismo Fleet Manager",
description="Backend API for managing seismograph fleet status",
version=VERSION
)
# Add validation error handler to log details
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
logger.error(f"Validation error on {request.url}: {exc.errors()}")
logger.error(f"Body: {await request.body()}")
return JSONResponse(
status_code=400,
content={"detail": exc.errors()}
)
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Mount static files
app.mount("/static", StaticFiles(directory="backend/static"), name="static")
# Use shared templates configuration with timezone filters
from backend.templates_config import templates
# Add custom context processor to inject environment variable into all templates
@app.middleware("http")
async def add_environment_to_context(request: Request, call_next):
"""Middleware to add environment variable to request state"""
request.state.environment = ENVIRONMENT
response = await call_next(request)
return response
# Override TemplateResponse to include environment and version in context
original_template_response = templates.TemplateResponse
def custom_template_response(name, context=None, *args, **kwargs):
if context is None:
context = {}
context["environment"] = ENVIRONMENT
context["version"] = VERSION
return original_template_response(name, context, *args, **kwargs)
templates.TemplateResponse = custom_template_response
# Include API routers
app.include_router(roster.router)
app.include_router(units.router)
app.include_router(photos.router)
app.include_router(roster_edit.router)
app.include_router(roster_rename.router)
app.include_router(dashboard.router)
app.include_router(dashboard_tabs.router)
app.include_router(activity.router)
app.include_router(slmm.router)
app.include_router(slm_ui.router)
app.include_router(slm_dashboard.router)
app.include_router(seismo_dashboard.router)
app.include_router(modem_dashboard.router)
from backend.routers import settings
app.include_router(settings.router)
from backend.routers import watcher_manager
app.include_router(watcher_manager.router)
# Projects system routers
app.include_router(projects.router)
app.include_router(project_locations.router)
app.include_router(scheduler.router)
# Report templates router
from backend.routers import report_templates
app.include_router(report_templates.router)
# Alerts router
from backend.routers import alerts
app.include_router(alerts.router)
# Recurring schedules router
from backend.routers import recurring_schedules
app.include_router(recurring_schedules.router)
# Fleet Calendar router
from backend.routers import fleet_calendar
app.include_router(fleet_calendar.router)
# Deployment Records router
from backend.routers import deployments
app.include_router(deployments.router)
# Start scheduler service and device status monitor on application startup
from backend.services.scheduler import start_scheduler, stop_scheduler
from backend.services.device_status_monitor import start_device_status_monitor, stop_device_status_monitor
@app.on_event("startup")
async def startup_event():
"""Initialize services on app startup"""
logger.info("Starting scheduler service...")
await start_scheduler()
logger.info("Scheduler service started")
logger.info("Starting device status monitor...")
await start_device_status_monitor()
logger.info("Device status monitor started")
@app.on_event("shutdown")
def shutdown_event():
"""Clean up services on app shutdown"""
logger.info("Stopping device status monitor...")
stop_device_status_monitor()
logger.info("Device status monitor stopped")
logger.info("Stopping scheduler service...")
stop_scheduler()
logger.info("Scheduler service stopped")
# Legacy routes from the original backend
from backend import routes as legacy_routes
app.include_router(legacy_routes.router)
# HTML page routes
@app.get("/", response_class=HTMLResponse)
async def dashboard(request: Request):
"""Dashboard home page"""
return templates.TemplateResponse("dashboard.html", {"request": request})
@app.get("/roster", response_class=HTMLResponse)
async def roster_page(request: Request):
"""Fleet roster page"""
return templates.TemplateResponse("roster.html", {"request": request})
@app.get("/unit/{unit_id}", response_class=HTMLResponse)
async def unit_detail_page(request: Request, unit_id: str):
"""Unit detail page"""
return templates.TemplateResponse("unit_detail.html", {
"request": request,
"unit_id": unit_id
})
@app.get("/settings", response_class=HTMLResponse)
async def settings_page(request: Request):
"""Settings page for roster management"""
return templates.TemplateResponse("settings.html", {"request": request})
@app.get("/sound-level-meters", response_class=HTMLResponse)
async def sound_level_meters_page(request: Request):
"""Sound Level Meters management dashboard"""
return templates.TemplateResponse("sound_level_meters.html", {"request": request})
@app.get("/slm/{unit_id}", response_class=HTMLResponse)
async def slm_legacy_dashboard(
request: Request,
unit_id: str,
from_project: Optional[str] = None,
from_nrl: Optional[str] = None,
db: Session = Depends(get_db)
):
"""Legacy SLM control center dashboard for a specific unit"""
# Get project details if from_project is provided
project = None
if from_project:
from backend.models import Project
project = db.query(Project).filter_by(id=from_project).first()
# Get NRL location details if from_nrl is provided
nrl_location = None
if from_nrl:
from backend.models import NRLLocation
nrl_location = db.query(NRLLocation).filter_by(id=from_nrl).first()
return templates.TemplateResponse("slm_legacy_dashboard.html", {
"request": request,
"unit_id": unit_id,
"from_project": from_project,
"from_nrl": from_nrl,
"project": project,
"nrl_location": nrl_location
})
@app.get("/seismographs", response_class=HTMLResponse)
async def seismographs_page(request: Request):
"""Seismographs management dashboard"""
return templates.TemplateResponse("seismographs.html", {"request": request})
@app.get("/modems", response_class=HTMLResponse)
async def modems_page(request: Request):
"""Field modems management dashboard"""
return templates.TemplateResponse("modems.html", {"request": request})
@app.get("/pair-devices", response_class=HTMLResponse)
async def pair_devices_page(request: Request, db: Session = Depends(get_db)):
"""
Device pairing page - two-column layout for pairing recorders with modems.
"""
from backend.models import RosterUnit
# Get all non-retired recorders (seismographs and SLMs).
# SQL IN never matches NULL, so a NULL device_type (which defaults to
# seismograph) needs an explicit IS NULL check rather than None in the list.
from sqlalchemy import or_
recorders = db.query(RosterUnit).filter(
RosterUnit.retired == False,
or_(RosterUnit.device_type.in_(["seismograph", "slm"]), RosterUnit.device_type.is_(None))
).order_by(RosterUnit.id).all()
# Get all non-retired modems
modems = db.query(RosterUnit).filter(
RosterUnit.retired == False,
RosterUnit.device_type == "modem"
).order_by(RosterUnit.id).all()
# Build existing pairings list
pairings = []
for recorder in recorders:
if recorder.deployed_with_modem_id:
modem = next((m for m in modems if m.id == recorder.deployed_with_modem_id), None)
pairings.append({
"recorder_id": recorder.id,
"recorder_type": (recorder.device_type or "seismograph").upper(),
"modem_id": recorder.deployed_with_modem_id,
"modem_ip": modem.ip_address if modem else None
})
# Convert to dicts for template
recorders_data = [
{
"id": r.id,
"device_type": r.device_type or "seismograph",
"deployed": r.deployed,
"deployed_with_modem_id": r.deployed_with_modem_id
}
for r in recorders
]
modems_data = [
{
"id": m.id,
"deployed": m.deployed,
"deployed_with_unit_id": m.deployed_with_unit_id,
"ip_address": m.ip_address,
"phone_number": m.phone_number
}
for m in modems
]
return templates.TemplateResponse("pair_devices.html", {
"request": request,
"recorders": recorders_data,
"modems": modems_data,
"pairings": pairings
})
@app.get("/projects", response_class=HTMLResponse)
async def projects_page(request: Request):
"""Projects management and overview"""
return templates.TemplateResponse("projects/overview.html", {"request": request})
@app.get("/projects/{project_id}", response_class=HTMLResponse)
async def project_detail_page(request: Request, project_id: str):
"""Project detail dashboard"""
return templates.TemplateResponse("projects/detail.html", {
"request": request,
"project_id": project_id
})
@app.get("/projects/{project_id}/nrl/{location_id}", response_class=HTMLResponse)
async def nrl_detail_page(
request: Request,
project_id: str,
location_id: str,
db: Session = Depends(get_db)
):
"""NRL (Noise Recording Location) detail page with tabs"""
from backend.models import Project, MonitoringLocation, UnitAssignment, RosterUnit, MonitoringSession, DataFile
from sqlalchemy import and_
# Get project
project = db.query(Project).filter_by(id=project_id).first()
if not project:
return templates.TemplateResponse("404.html", {
"request": request,
"message": "Project not found"
}, status_code=404)
# Get location
location = db.query(MonitoringLocation).filter_by(
id=location_id,
project_id=project_id
).first()
if not location:
return templates.TemplateResponse("404.html", {
"request": request,
"message": "Location not found"
}, status_code=404)
# Get active assignment
assignment = db.query(UnitAssignment).filter(
and_(
UnitAssignment.location_id == location_id,
UnitAssignment.status == "active"
)
).first()
assigned_unit = None
assigned_modem = None
if assignment:
assigned_unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
if assigned_unit and assigned_unit.deployed_with_modem_id:
assigned_modem = db.query(RosterUnit).filter_by(id=assigned_unit.deployed_with_modem_id).first()
# Get session count
session_count = db.query(MonitoringSession).filter_by(location_id=location_id).count()
# Get file count (DataFile links to session, not directly to location)
file_count = db.query(DataFile).join(
MonitoringSession,
DataFile.session_id == MonitoringSession.id
).filter(MonitoringSession.location_id == location_id).count()
# Check for active session
active_session = db.query(MonitoringSession).filter(
and_(
MonitoringSession.location_id == location_id,
MonitoringSession.status == "recording"
)
).first()
# Parse connection_mode from location_metadata JSON
import json as _json
connection_mode = "connected"
try:
meta = _json.loads(location.location_metadata or "{}")
connection_mode = meta.get("connection_mode", "connected")
except Exception:
pass
template = "vibration_location_detail.html" if location.location_type == "vibration" else "nrl_detail.html"
return templates.TemplateResponse(template, {
"request": request,
"project_id": project_id,
"location_id": location_id,
"project": project,
"location": location,
"assignment": assignment,
"assigned_unit": assigned_unit,
"assigned_modem": assigned_modem,
"session_count": session_count,
"file_count": file_count,
"active_session": active_session,
"connection_mode": connection_mode,
})
# ===== PWA ROUTES =====
@app.get("/sw.js")
async def service_worker():
"""Serve service worker with proper headers for PWA"""
return FileResponse(
"backend/static/sw.js",
media_type="application/javascript",
headers={
"Service-Worker-Allowed": "/",
"Cache-Control": "no-cache"
}
)
@app.get("/offline-db.js")
async def offline_db_script():
"""Serve offline database script"""
return FileResponse(
"backend/static/offline-db.js",
media_type="application/javascript",
headers={"Cache-Control": "no-cache"}
)
# Pydantic model for sync edits
class EditItem(BaseModel):
id: int
unitId: str
changes: Dict
timestamp: int
class SyncEditsRequest(BaseModel):
edits: List[EditItem]
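# Example /api/sync-edits payload (shape follows the models above; the unit id
# and field values here are purely illustrative):
#   {"edits": [{"id": 1, "unitId": "BE1234",
#               "changes": {"note": "swapped battery", "deployed": "true"},
#               "timestamp": 1711500000000}]}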
@app.post("/api/sync-edits")
async def sync_edits(request: SyncEditsRequest, db: Session = Depends(get_db)):
"""Process offline edit queue and sync to database"""
from backend.models import RosterUnit
results = []
synced_ids = []
for edit in request.edits:
try:
# Find the unit
unit = db.query(RosterUnit).filter_by(id=edit.unitId).first()
if not unit:
results.append({
"id": edit.id,
"status": "error",
"reason": f"Unit {edit.unitId} not found"
})
continue
# Apply changes
for key, value in edit.changes.items():
if hasattr(unit, key):
# Handle boolean conversions
if key in ['deployed', 'retired']:
setattr(unit, key, value in ['true', True, 'True', '1', 1])
else:
setattr(unit, key, value if value != '' else None)
db.commit()
results.append({
"id": edit.id,
"status": "success"
})
synced_ids.append(edit.id)
except Exception as e:
db.rollback()
results.append({
"id": edit.id,
"status": "error",
"reason": str(e)
})
synced_count = len(synced_ids)
return JSONResponse({
"synced": synced_count,
"total": len(request.edits),
"synced_ids": synced_ids,
"results": results
})
@app.get("/partials/roster-deployed", response_class=HTMLResponse)
async def roster_deployed_partial(request: Request):
"""Partial template for deployed units tab"""
from datetime import datetime
snapshot = emit_status_snapshot()
units_list = []
for unit_id, unit_data in snapshot["active"].items():
units_list.append({
"id": unit_id,
"status": unit_data.get("status", "Unknown"),
"age": unit_data.get("age", "N/A"),
"last_seen": unit_data.get("last", "Never"),
"deployed": unit_data.get("deployed", False),
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Sort by status priority (Missing > Pending > OK) then by ID
status_priority = {"Missing": 0, "Pending": 1, "OK": 2}
units_list.sort(key=lambda x: (status_priority.get(x["status"], 3), x["id"]))
return templates.TemplateResponse("partials/roster_table.html", {
"request": request,
"units": units_list,
"timestamp": datetime.now().strftime("%H:%M:%S")
})
@app.get("/partials/roster-benched", response_class=HTMLResponse)
async def roster_benched_partial(request: Request):
"""Partial template for benched units tab"""
from datetime import datetime
snapshot = emit_status_snapshot()
units_list = []
for unit_id, unit_data in snapshot["benched"].items():
units_list.append({
"id": unit_id,
"status": unit_data.get("status", "N/A"),
"age": unit_data.get("age", "N/A"),
"last_seen": unit_data.get("last", "Never"),
"deployed": unit_data.get("deployed", False),
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Sort by ID
units_list.sort(key=lambda x: x["id"])
return templates.TemplateResponse("partials/roster_table.html", {
"request": request,
"units": units_list,
"timestamp": datetime.now().strftime("%H:%M:%S")
})
@app.get("/partials/roster-retired", response_class=HTMLResponse)
async def roster_retired_partial(request: Request):
"""Partial template for retired units tab"""
from datetime import datetime
snapshot = emit_status_snapshot()
units_list = []
for unit_id, unit_data in snapshot["retired"].items():
units_list.append({
"id": unit_id,
"status": unit_data["status"],
"age": unit_data["age"],
"last_seen": unit_data["last"],
"deployed": unit_data["deployed"],
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Sort by ID
units_list.sort(key=lambda x: x["id"])
return templates.TemplateResponse("partials/retired_table.html", {
"request": request,
"units": units_list,
"timestamp": datetime.now().strftime("%H:%M:%S")
})
@app.get("/partials/roster-ignored", response_class=HTMLResponse)
async def roster_ignored_partial(request: Request, db: Session = Depends(get_db)):
"""Partial template for ignored units tab"""
from datetime import datetime
ignored = db.query(IgnoredUnit).all()
ignored_list = []
for unit in ignored:
ignored_list.append({
"id": unit.id,
"reason": unit.reason or "",
"ignored_at": unit.ignored_at.strftime("%Y-%m-%d %H:%M:%S") if unit.ignored_at else "Unknown"
})
# Sort by ID
ignored_list.sort(key=lambda x: x["id"])
return templates.TemplateResponse("partials/ignored_table.html", {
"request": request,
"ignored_units": ignored_list,
"timestamp": datetime.now().strftime("%H:%M:%S")
})
@app.get("/partials/unknown-emitters", response_class=HTMLResponse)
async def unknown_emitters_partial(request: Request):
"""Partial template for unknown emitters (HTMX)"""
snapshot = emit_status_snapshot()
unknown_list = []
for unit_id, unit_data in snapshot.get("unknown", {}).items():
unknown_list.append({
"id": unit_id,
"status": unit_data["status"],
"age": unit_data["age"],
"fname": unit_data.get("fname", ""),
})
# Sort by ID
unknown_list.sort(key=lambda x: x["id"])
return templates.TemplateResponse("partials/unknown_emitters.html", {
"request": request,
"unknown_units": unknown_list
})
@app.get("/partials/devices-all", response_class=HTMLResponse)
async def devices_all_partial(request: Request):
"""Unified partial template for ALL devices with comprehensive filtering support"""
from datetime import datetime
snapshot = emit_status_snapshot()
units_list = []
# Add deployed/active units
for unit_id, unit_data in snapshot["active"].items():
units_list.append({
"id": unit_id,
"status": unit_data.get("status", "Unknown"),
"age": unit_data.get("age", "N/A"),
"last_seen": unit_data.get("last", "Never"),
"deployed": True,
"retired": False,
"out_for_calibration": False,
"ignored": False,
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Add benched units
for unit_id, unit_data in snapshot["benched"].items():
units_list.append({
"id": unit_id,
"status": unit_data.get("status", "N/A"),
"age": unit_data.get("age", "N/A"),
"last_seen": unit_data.get("last", "Never"),
"deployed": False,
"retired": False,
"out_for_calibration": False,
"ignored": False,
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Add allocated units
for unit_id, unit_data in snapshot.get("allocated", {}).items():
units_list.append({
"id": unit_id,
"status": "Allocated",
"age": "N/A",
"last_seen": "N/A",
"deployed": False,
"retired": False,
"out_for_calibration": False,
"allocated": True,
"allocated_to_project_id": unit_data.get("allocated_to_project_id", ""),
"ignored": False,
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Add out-for-calibration units
for unit_id, unit_data in snapshot["out_for_calibration"].items():
units_list.append({
"id": unit_id,
"status": "Out for Calibration",
"age": "N/A",
"last_seen": "N/A",
"deployed": False,
"retired": False,
"out_for_calibration": True,
"ignored": False,
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Add retired units
for unit_id, unit_data in snapshot["retired"].items():
units_list.append({
"id": unit_id,
"status": "Retired",
"age": "N/A",
"last_seen": "N/A",
"deployed": False,
"retired": True,
"out_for_calibration": False,
"ignored": False,
"note": unit_data.get("note", ""),
"device_type": unit_data.get("device_type", "seismograph"),
"address": unit_data.get("address", ""),
"coordinates": unit_data.get("coordinates", ""),
"project_id": unit_data.get("project_id", ""),
"last_calibrated": unit_data.get("last_calibrated"),
"next_calibration_due": unit_data.get("next_calibration_due"),
"deployed_with_modem_id": unit_data.get("deployed_with_modem_id"),
"deployed_with_unit_id": unit_data.get("deployed_with_unit_id"),
"ip_address": unit_data.get("ip_address"),
"phone_number": unit_data.get("phone_number"),
"hardware_model": unit_data.get("hardware_model"),
})
# Add ignored units
for unit_id, unit_data in snapshot.get("ignored", {}).items():
units_list.append({
"id": unit_id,
"status": "Ignored",
"age": "N/A",
"last_seen": "N/A",
"deployed": False,
"retired": False,
"out_for_calibration": False,
"ignored": True,
"note": unit_data.get("note", unit_data.get("reason", "")),
"device_type": unit_data.get("device_type", "unknown"),
"address": "",
"coordinates": "",
"project_id": "",
"last_calibrated": None,
"next_calibration_due": None,
"deployed_with_modem_id": None,
"deployed_with_unit_id": None,
"ip_address": None,
"phone_number": None,
"hardware_model": None,
})
# Sort by status category, then by ID
def sort_key(unit):
# Priority: deployed (active) -> allocated -> benched -> out_for_calibration -> retired -> ignored
if unit["deployed"]:
return (0, unit["id"])
elif unit.get("allocated"):
return (1, unit["id"])
elif not unit["retired"] and not unit["out_for_calibration"] and not unit["ignored"]:
return (2, unit["id"])
elif unit["out_for_calibration"]:
return (3, unit["id"])
elif unit["retired"]:
return (4, unit["id"])
else:
return (5, unit["id"])
units_list.sort(key=sort_key)
return templates.TemplateResponse("partials/devices_table.html", {
"request": request,
"units": units_list,
"timestamp": datetime.now().strftime("%H:%M:%S"),
"user_timezone": get_user_timezone()
})
@app.get("/health")
def health_check():
"""Health check endpoint"""
return {
"message": f"Seismo Fleet Manager v{VERSION}",
"status": "running",
"version": VERSION
}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8001)

View File

@@ -0,0 +1,35 @@
"""
Migration: Add allocated and allocated_to_project_id columns to roster table.
Run once: python backend/migrate_add_allocated.py
"""
import sqlite3
import os
DB_PATH = os.path.join(os.path.dirname(__file__), '..', 'data', 'seismo_fleet.db')
def run():
conn = sqlite3.connect(DB_PATH)
cur = conn.cursor()
# Check existing columns
cur.execute("PRAGMA table_info(roster)")
cols = {row[1] for row in cur.fetchall()}
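    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk),
    # so row[1] is the column name.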
if 'allocated' not in cols:
cur.execute("ALTER TABLE roster ADD COLUMN allocated BOOLEAN DEFAULT 0 NOT NULL")
print("Added column: allocated")
else:
print("Column already exists: allocated")
if 'allocated_to_project_id' not in cols:
cur.execute("ALTER TABLE roster ADD COLUMN allocated_to_project_id VARCHAR")
print("Added column: allocated_to_project_id")
else:
print("Column already exists: allocated_to_project_id")
conn.commit()
conn.close()
print("Migration complete.")
if __name__ == '__main__':
run()

View File

@@ -0,0 +1,67 @@
"""
Migration: Add auto_increment_index column to recurring_schedules table
This migration adds the auto_increment_index column that controls whether
the scheduler should automatically find an unused store index before starting
a new measurement.
Run this script once to update existing databases:
python -m backend.migrate_add_auto_increment_index
"""
import sqlite3
import os
DB_PATH = "data/seismo_fleet.db"
def migrate():
"""Add auto_increment_index column to recurring_schedules table."""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
return False
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
try:
# Check if recurring_schedules table exists
cursor.execute("""
SELECT name FROM sqlite_master
WHERE type='table' AND name='recurring_schedules'
""")
if not cursor.fetchone():
print("recurring_schedules table does not exist yet. Will be created on app startup.")
conn.close()
return True
# Check if auto_increment_index column already exists
cursor.execute("PRAGMA table_info(recurring_schedules)")
columns = [row[1] for row in cursor.fetchall()]
if "auto_increment_index" in columns:
print("auto_increment_index column already exists in recurring_schedules table.")
conn.close()
return True
# Add the column
print("Adding auto_increment_index column to recurring_schedules table...")
cursor.execute("""
ALTER TABLE recurring_schedules
ADD COLUMN auto_increment_index BOOLEAN DEFAULT 1
""")
conn.commit()
print("Successfully added auto_increment_index column.")
conn.close()
return True
except Exception as e:
print(f"Migration failed: {e}")
conn.close()
return False
if __name__ == "__main__":
success = migrate()
exit(0 if success else 1)

View File

@@ -0,0 +1,79 @@
"""
Migration: Add deployment_records table.
Tracks each time a unit is sent to the field and returned.
The active deployment is the row with actual_removal_date IS NULL.
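Illustrative lookup for a unit's current deployment:
    SELECT * FROM deployment_records
    WHERE unit_id = ? AND actual_removal_date IS NULL;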
Run once per database:
python backend/migrate_add_deployment_records.py
"""
import sqlite3
import os
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
return
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
try:
# Check if table already exists
cursor.execute("""
SELECT name FROM sqlite_master
WHERE type='table' AND name='deployment_records'
""")
if cursor.fetchone():
print("✓ deployment_records table already exists, skipping")
return
print("Creating deployment_records table...")
cursor.execute("""
CREATE TABLE deployment_records (
id TEXT PRIMARY KEY,
unit_id TEXT NOT NULL,
deployed_date DATE,
estimated_removal_date DATE,
actual_removal_date DATE,
project_ref TEXT,
project_id TEXT,
location_name TEXT,
notes TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
cursor.execute("""
CREATE INDEX idx_deployment_records_unit_id
ON deployment_records(unit_id)
""")
cursor.execute("""
CREATE INDEX idx_deployment_records_project_id
ON deployment_records(project_id)
""")
# Index for finding active deployments quickly
cursor.execute("""
CREATE INDEX idx_deployment_records_active
ON deployment_records(unit_id, actual_removal_date)
""")
conn.commit()
print("✓ deployment_records table created successfully")
print("✓ Indexes created")
except Exception as e:
conn.rollback()
print(f"✗ Migration failed: {e}")
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,84 @@
"""
Migration script to add deployment_type and deployed_with_unit_id fields to roster table.
deployment_type: tracks what type of device a modem is deployed with:
- "seismograph" - Modem is connected to a seismograph
- "slm" - Modem is connected to a sound level meter
- NULL/empty - Not assigned or unknown
deployed_with_unit_id: stores the ID of the seismograph/SLM this modem is deployed with
(reverse relationship of deployed_with_modem_id)
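Illustrative pairing (ids are examples only): a modem with
deployment_type='seismograph' and deployed_with_unit_id='BE1234' mirrors
seismograph BE1234 having deployed_with_modem_id set to that modem's id.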
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Add deployment_type and deployed_with_unit_id columns to roster table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if roster table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='roster'")
table_exists = cursor.fetchone()
if not table_exists:
print("Roster table does not exist yet - will be created when app runs")
conn.close()
return
# Check existing columns
cursor.execute("PRAGMA table_info(roster)")
columns = [col[1] for col in cursor.fetchall()]
try:
# Add deployment_type if not exists
if 'deployment_type' not in columns:
print("Adding deployment_type column to roster table...")
cursor.execute("ALTER TABLE roster ADD COLUMN deployment_type TEXT")
print(" Added deployment_type column")
cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployment_type ON roster(deployment_type)")
print(" Created index on deployment_type")
else:
print("deployment_type column already exists")
# Add deployed_with_unit_id if not exists
if 'deployed_with_unit_id' not in columns:
print("Adding deployed_with_unit_id column to roster table...")
cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_unit_id TEXT")
print(" Added deployed_with_unit_id column")
cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployed_with_unit_id ON roster(deployed_with_unit_id)")
print(" Created index on deployed_with_unit_id")
else:
print("deployed_with_unit_id column already exists")
conn.commit()
print("\nMigration completed successfully!")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,84 @@
"""
Migration script to add device type support to the roster table.
This adds columns for:
- device_type (seismograph/modem discriminator)
- Seismograph-specific fields (calibration dates, modem pairing)
- Modem-specific fields (IP address, phone number, hardware model)
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Add new columns to the roster table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if device_type column already exists
cursor.execute("PRAGMA table_info(roster)")
columns = [col[1] for col in cursor.fetchall()]
if "device_type" in columns:
print("Migration already applied - device_type column exists")
conn.close()
return
print("Adding new columns to roster table...")
try:
# Add device type discriminator
cursor.execute("ALTER TABLE roster ADD COLUMN device_type TEXT DEFAULT 'seismograph'")
print(" ✓ Added device_type column")
# Add seismograph-specific fields
cursor.execute("ALTER TABLE roster ADD COLUMN last_calibrated DATE")
print(" ✓ Added last_calibrated column")
cursor.execute("ALTER TABLE roster ADD COLUMN next_calibration_due DATE")
print(" ✓ Added next_calibration_due column")
cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_modem_id TEXT")
print(" ✓ Added deployed_with_modem_id column")
# Add modem-specific fields
cursor.execute("ALTER TABLE roster ADD COLUMN ip_address TEXT")
print(" ✓ Added ip_address column")
cursor.execute("ALTER TABLE roster ADD COLUMN phone_number TEXT")
print(" ✓ Added phone_number column")
cursor.execute("ALTER TABLE roster ADD COLUMN hardware_model TEXT")
print(" ✓ Added hardware_model column")
# Set all existing units to seismograph type
cursor.execute("UPDATE roster SET device_type = 'seismograph' WHERE device_type IS NULL")
print(" ✓ Set existing units to seismograph type")
conn.commit()
print("\nMigration completed successfully!")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,62 @@
"""
Migration: Add estimated_units to job_reservations
Adds column:
- job_reservations.estimated_units: Estimated number of units for the reservation (nullable integer)
"""
import sqlite3
import sys
from pathlib import Path
# Default database path (matches production pattern)
DB_PATH = "./data/seismo_fleet.db"
def migrate(db_path: str):
"""Run the migration."""
print(f"Migrating database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
try:
# Check if job_reservations table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='job_reservations'")
if not cursor.fetchone():
print("job_reservations table does not exist. Skipping migration.")
return
# Get existing columns in job_reservations
cursor.execute("PRAGMA table_info(job_reservations)")
existing_cols = {row[1] for row in cursor.fetchall()}
# Add estimated_units column if it doesn't exist
if 'estimated_units' not in existing_cols:
print("Adding estimated_units column to job_reservations...")
cursor.execute("ALTER TABLE job_reservations ADD COLUMN estimated_units INTEGER")
else:
print("estimated_units column already exists. Skipping.")
conn.commit()
print("Migration completed successfully!")
except Exception as e:
print(f"Migration failed: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
db_path = DB_PATH
if len(sys.argv) > 1:
db_path = sys.argv[1]
if not Path(db_path).exists():
print(f"Database not found: {db_path}")
sys.exit(1)
migrate(db_path)

View File

@@ -0,0 +1,103 @@
"""
Migration script to add job reservations for the Fleet Calendar feature.
This creates two tables:
- job_reservations: Track future unit assignments for jobs/projects
- job_reservation_units: Link specific units to reservations
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Create the job_reservations and job_reservation_units tables"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if job_reservations table already exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='job_reservations'")
if cursor.fetchone():
print("Migration already applied - job_reservations table exists")
conn.close()
return
print("Creating job_reservations table...")
try:
# Create job_reservations table
cursor.execute("""
CREATE TABLE job_reservations (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
project_id TEXT,
start_date DATE NOT NULL,
end_date DATE NOT NULL,
assignment_type TEXT NOT NULL DEFAULT 'quantity',
device_type TEXT DEFAULT 'seismograph',
quantity_needed INTEGER,
notes TEXT,
color TEXT DEFAULT '#3B82F6',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
print(" Created job_reservations table")
# Create indexes for job_reservations
cursor.execute("CREATE INDEX idx_job_reservations_project_id ON job_reservations(project_id)")
print(" Created index on project_id")
cursor.execute("CREATE INDEX idx_job_reservations_dates ON job_reservations(start_date, end_date)")
print(" Created index on dates")
# Create job_reservation_units table
print("Creating job_reservation_units table...")
cursor.execute("""
CREATE TABLE job_reservation_units (
id TEXT PRIMARY KEY,
reservation_id TEXT NOT NULL,
unit_id TEXT NOT NULL,
assignment_source TEXT DEFAULT 'specific',
assigned_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (reservation_id) REFERENCES job_reservations(id),
FOREIGN KEY (unit_id) REFERENCES roster(id)
)
""")
print(" Created job_reservation_units table")
# Create indexes for job_reservation_units
cursor.execute("CREATE INDEX idx_job_reservation_units_reservation_id ON job_reservation_units(reservation_id)")
print(" Created index on reservation_id")
cursor.execute("CREATE INDEX idx_job_reservation_units_unit_id ON job_reservation_units(unit_id)")
print(" Created index on unit_id")
conn.commit()
print("\nMigration completed successfully!")
print("You can now use the Fleet Calendar to manage unit reservations.")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,24 @@
"""
Migration: Add location_slots column to job_reservations table.
Stores the full ordered slot list (including empty/unassigned slots) as JSON.
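Illustrative value (ids are examples; null marks an empty/unassigned slot):
    ["BE1234", null, "BE5678"]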
Run once per database.
"""
import sqlite3
import os
DB_PATH = os.environ.get("DB_PATH", "/app/data/seismo_fleet.db")
def run():
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
existing = [r[1] for r in cursor.execute("PRAGMA table_info(job_reservations)").fetchall()]
if "location_slots" not in existing:
cursor.execute("ALTER TABLE job_reservations ADD COLUMN location_slots TEXT")
conn.commit()
print("Added location_slots column to job_reservations.")
else:
print("location_slots column already exists, skipping.")
conn.close()
if __name__ == "__main__":
run()

View File

@@ -0,0 +1,73 @@
"""
Migration: Add one-off schedule fields to recurring_schedules table
Adds start_datetime and end_datetime columns for one-off recording schedules.
Run this script once to update existing databases:
python -m backend.migrate_add_oneoff_schedule_fields
"""
import sqlite3
import os
DB_PATH = "data/seismo_fleet.db"
def migrate():
"""Add one-off schedule columns to recurring_schedules table."""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
return False
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
try:
cursor.execute("""
SELECT name FROM sqlite_master
WHERE type='table' AND name='recurring_schedules'
""")
if not cursor.fetchone():
print("recurring_schedules table does not exist yet. Will be created on app startup.")
conn.close()
return True
cursor.execute("PRAGMA table_info(recurring_schedules)")
columns = [row[1] for row in cursor.fetchall()]
added = False
if "start_datetime" not in columns:
print("Adding start_datetime column to recurring_schedules table...")
cursor.execute("""
ALTER TABLE recurring_schedules
ADD COLUMN start_datetime DATETIME NULL
""")
added = True
if "end_datetime" not in columns:
print("Adding end_datetime column to recurring_schedules table...")
cursor.execute("""
ALTER TABLE recurring_schedules
ADD COLUMN end_datetime DATETIME NULL
""")
added = True
if added:
conn.commit()
print("Successfully added one-off schedule columns.")
else:
print("One-off schedule columns already exist.")
conn.close()
return True
except Exception as e:
print(f"Migration failed: {e}")
conn.close()
return False
if __name__ == "__main__":
success = migrate()
exit(0 if success else 1)

View File

@@ -0,0 +1,54 @@
"""
Database Migration: Add out_for_calibration field to roster table
Changes:
- Adds out_for_calibration BOOLEAN column (default FALSE) to roster table
- Safe to run multiple times (idempotent)
- No data loss
Usage:
python backend/migrate_add_out_for_calibration.py
"""
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
def migrate():
db = SessionLocal()
try:
print("=" * 60)
print("Migration: Add out_for_calibration to roster")
print("=" * 60)
# Check if column already exists
result = db.execute(text("PRAGMA table_info(roster)")).fetchall()
columns = [row[1] for row in result]
if "out_for_calibration" in columns:
print("Column out_for_calibration already exists. Skipping.")
else:
db.execute(text("ALTER TABLE roster ADD COLUMN out_for_calibration BOOLEAN DEFAULT FALSE"))
db.commit()
print("Added out_for_calibration column to roster table.")
print("Migration complete.")
except Exception as e:
db.rollback()
print(f"Error: {e}")
raise
finally:
db.close()
if __name__ == "__main__":
migrate()

View File

@@ -0,0 +1,53 @@
#!/usr/bin/env python3
"""
Migration: Add data_collection_mode column to projects table.
Values:
"remote" — units have modems; data pulled via FTP/scheduler automatically
"manual" — no modem; SD cards retrieved daily and uploaded by hand
All existing projects are backfilled to "manual" (safe conservative default).
Run once inside the Docker container:
docker exec terra-view python3 backend/migrate_add_project_data_collection_mode.py
"""
from pathlib import Path
DB_PATH = Path("data/seismo_fleet.db")
def migrate():
import sqlite3
if not DB_PATH.exists():
print(f"Database not found at {DB_PATH}. Are you running from /home/serversdown/terra-view?")
return
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# ── 1. Add column (idempotent) ───────────────────────────────────────────
cur.execute("PRAGMA table_info(projects)")
existing_cols = {row["name"] for row in cur.fetchall()}
if "data_collection_mode" not in existing_cols:
cur.execute("ALTER TABLE projects ADD COLUMN data_collection_mode TEXT DEFAULT 'manual'")
conn.commit()
print("✓ Added column data_collection_mode to projects")
else:
print("○ Column data_collection_mode already exists — skipping ALTER TABLE")
# ── 2. Backfill NULLs to 'manual' ────────────────────────────────────────
cur.execute("UPDATE projects SET data_collection_mode = 'manual' WHERE data_collection_mode IS NULL")
updated = cur.rowcount
conn.commit()
conn.close()
if updated:
print(f"✓ Backfilled {updated} project(s) to data_collection_mode='manual'.")
print("Migration complete.")
if __name__ == "__main__":
migrate()

View File

@@ -0,0 +1,56 @@
"""
Migration: Add deleted_at column to projects table
Adds columns:
- projects.deleted_at: Timestamp set when status='deleted'; data hard-deleted after 60 days
"""
import sqlite3
import sys
from pathlib import Path
def migrate(db_path: str):
"""Run the migration."""
print(f"Migrating database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
try:
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='projects'")
if not cursor.fetchone():
print("projects table does not exist. Skipping migration.")
return
cursor.execute("PRAGMA table_info(projects)")
existing_cols = {row[1] for row in cursor.fetchall()}
if 'deleted_at' not in existing_cols:
print("Adding deleted_at column to projects...")
cursor.execute("ALTER TABLE projects ADD COLUMN deleted_at DATETIME")
else:
print("deleted_at column already exists. Skipping.")
conn.commit()
print("Migration completed successfully!")
except Exception as e:
print(f"Migration failed: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
db_path = "./data/seismo_fleet.db"
if len(sys.argv) > 1:
db_path = sys.argv[1]
if not Path(db_path).exists():
print(f"Database not found: {db_path}")
sys.exit(1)
migrate(db_path)

View File

@@ -0,0 +1,71 @@
"""
Migration: Add project_modules table and seed from existing project_type_id values.
Safe to run multiple times — idempotent.
"""
import sqlite3
import uuid
import os
DB_PATH = os.path.join(os.path.dirname(__file__), "..", "data", "seismo_fleet.db")
DB_PATH = os.path.abspath(DB_PATH)
def run():
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# 1. Create project_modules table if not exists
cur.execute("""
CREATE TABLE IF NOT EXISTS project_modules (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
module_type TEXT NOT NULL,
enabled INTEGER NOT NULL DEFAULT 1,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
UNIQUE(project_id, module_type)
)
""")
print(" Table 'project_modules' ready.")
# 2. Seed modules from existing project_type_id values
cur.execute("SELECT id, project_type_id FROM projects WHERE project_type_id IS NOT NULL")
projects = cur.fetchall()
seeded = 0
for p in projects:
pid = p["id"]
ptype = p["project_type_id"]
modules_to_add = []
if ptype == "sound_monitoring":
modules_to_add = ["sound_monitoring"]
elif ptype == "vibration_monitoring":
modules_to_add = ["vibration_monitoring"]
elif ptype == "combined":
modules_to_add = ["sound_monitoring", "vibration_monitoring"]
for module_type in modules_to_add:
# INSERT OR IGNORE — skip if already exists
cur.execute("""
INSERT OR IGNORE INTO project_modules (id, project_id, module_type, enabled)
VALUES (?, ?, ?, 1)
""", (str(uuid.uuid4()), pid, module_type))
if cur.rowcount > 0:
seeded += 1
conn.commit()
print(f" Seeded {seeded} module record(s) from existing project_type_id values.")
    # 3. project_type_id is now a legacy field. SQLite has no ALTER COLUMN to
    #    drop a constraint, but the column already accepts NULL in practice,
    #    so there is nothing to do here.
    print(" project_type_id column is now treated as nullable (legacy field).")
conn.close()
print("Migration complete.")
if __name__ == "__main__":
run()

View File

@@ -0,0 +1,80 @@
"""
Migration script to add project_number field to projects table.
This adds a new column for TMI internal project numbering:
- Format: xxxx-YY (e.g., "2567-23")
- xxxx = incremental project number
- YY = year project was started
Combined with client_name and name (project/site name), this enables
smart searching across all project identifiers.
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Add project_number column to projects table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if projects table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='projects'")
table_exists = cursor.fetchone()
if not table_exists:
print("Projects table does not exist yet - will be created when app runs")
conn.close()
return
# Check if project_number column already exists
cursor.execute("PRAGMA table_info(projects)")
columns = [col[1] for col in cursor.fetchall()]
if 'project_number' in columns:
print("Migration already applied - project_number column exists")
conn.close()
return
print("Adding project_number column to projects table...")
try:
cursor.execute("ALTER TABLE projects ADD COLUMN project_number TEXT")
print(" Added project_number column")
# Create index for faster searching
cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_project_number ON projects(project_number)")
print(" Created index on project_number")
# Also add index on client_name if it doesn't exist
cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_client_name ON projects(client_name)")
print(" Created index on client_name")
conn.commit()
print("\nMigration completed successfully!")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,88 @@
"""
Migration script to add report_templates table.
This creates a new table for storing report generation configurations:
- Template name and project association
- Time filtering settings (start/end time)
- Date range filtering (optional)
- Report title defaults
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Create report_templates table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if report_templates table already exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='report_templates'")
table_exists = cursor.fetchone()
if table_exists:
print("Migration already applied - report_templates table exists")
conn.close()
return
print("Creating report_templates table...")
try:
cursor.execute("""
CREATE TABLE report_templates (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
project_id TEXT,
report_title TEXT DEFAULT 'Background Noise Study',
start_time TEXT,
end_time TEXT,
start_date TEXT,
end_date TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
print(" ✓ Created report_templates table")
# Insert default templates
import uuid
default_templates = [
(str(uuid.uuid4()), "Nighttime (7PM-7AM)", None, "Background Noise Study", "19:00", "07:00", None, None),
(str(uuid.uuid4()), "Daytime (7AM-7PM)", None, "Background Noise Study", "07:00", "19:00", None, None),
(str(uuid.uuid4()), "Full Day (All Data)", None, "Background Noise Study", None, None, None, None),
]
cursor.executemany("""
INSERT INTO report_templates (id, name, project_id, report_title, start_time, end_time, start_date, end_date)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", default_templates)
print(" ✓ Inserted default templates (Nighttime, Daytime, Full Day)")
conn.commit()
print("\nMigration completed successfully!")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,127 @@
#!/usr/bin/env python3
"""
Migration: Add device_model column to monitoring_sessions table.
Records which physical SLM model produced each session's data (e.g. "NL-43",
"NL-53", "NL-32"). Used by report generation to apply the correct parsing
logic without re-opening files to detect format.
Run once inside the Docker container:
docker exec terra-view python3 backend/migrate_add_session_device_model.py
Backfill strategy for existing rows:
1. If session.unit_id is set, use roster.slm_model for that unit.
2. Else, peek at the first .rnd file in the session: presence of the 'LAeq'
column header identifies AU2 / NL-32 format.
Sessions where neither hint is available remain NULL — the file-content
fallback in report code handles them transparently.
"""
import csv
import io
from pathlib import Path
DB_PATH = Path("data/seismo_fleet.db")
def _peek_first_row(abs_path: Path) -> dict:
"""Read only the header + first data row of an RND file. Very cheap."""
try:
with open(abs_path, "r", encoding="utf-8", errors="replace") as f:
reader = csv.DictReader(f)
return next(reader, None) or {}
except Exception:
return {}
def _detect_model_from_rnd(abs_path: Path) -> str | None:
"""Return 'NL-32' if file uses AU2 column format, else None."""
row = _peek_first_row(abs_path)
if "LAeq" in row:
return "NL-32"
return None
def migrate():
import sqlite3
if not DB_PATH.exists():
print(f"Database not found at {DB_PATH}. Are you running from /home/serversdown/terra-view?")
return
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# ── 1. Add column (idempotent) ───────────────────────────────────────────
cur.execute("PRAGMA table_info(monitoring_sessions)")
existing_cols = {row["name"] for row in cur.fetchall()}
if "device_model" not in existing_cols:
cur.execute("ALTER TABLE monitoring_sessions ADD COLUMN device_model TEXT")
conn.commit()
print("✓ Added column device_model to monitoring_sessions")
else:
print("○ Column device_model already exists — skipping ALTER TABLE")
# ── 2. Backfill existing NULL rows ───────────────────────────────────────
cur.execute(
"SELECT id, unit_id FROM monitoring_sessions WHERE device_model IS NULL"
)
sessions = cur.fetchall()
print(f"Backfilling {len(sessions)} session(s) with device_model=NULL...")
updated = skipped = 0
for row in sessions:
session_id = row["id"]
unit_id = row["unit_id"]
device_model = None
# Strategy A: look up unit's slm_model from the roster
if unit_id:
cur.execute(
"SELECT slm_model FROM roster WHERE id = ?", (unit_id,)
)
unit_row = cur.fetchone()
if unit_row and unit_row["slm_model"]:
device_model = unit_row["slm_model"]
# Strategy B: detect from first .rnd file in the session
if device_model is None:
cur.execute(
"""SELECT file_path FROM data_files
WHERE session_id = ?
AND lower(file_path) LIKE '%.rnd'
LIMIT 1""",
(session_id,),
)
file_row = cur.fetchone()
if file_row:
abs_path = Path("data") / file_row["file_path"]
device_model = _detect_model_from_rnd(abs_path)
# None here means NL-43/NL-53 format (or unreadable file) —
# leave as NULL so the existing fallback applies.
if device_model:
cur.execute(
"UPDATE monitoring_sessions SET device_model = ? WHERE id = ?",
(device_model, session_id),
)
updated += 1
else:
skipped += 1
conn.commit()
conn.close()
print(f"✓ Backfilled {updated} session(s) with a device_model.")
if skipped:
print(
f" {skipped} session(s) left as NULL "
"(no unit link and no AU2 file hint — NL-43/NL-53 or unknown; "
"file-content detection applies at report time)."
)
print("Migration complete.")
if __name__ == "__main__":
migrate()

View File

@@ -0,0 +1,42 @@
"""
Migration: add period_start_hour and period_end_hour to monitoring_sessions.
Run once:
python backend/migrate_add_session_period_hours.py
Or inside the container:
docker exec terra-view python3 backend/migrate_add_session_period_hours.py
"""
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
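# The sys.path insert above puts the repo root on sys.path so the "backend"
# package resolves when this file is run directly as a script.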
from backend.database import engine
from sqlalchemy import text
def run():
with engine.connect() as conn:
# Check which columns already exist
result = conn.execute(text("PRAGMA table_info(monitoring_sessions)"))
existing = {row[1] for row in result}
added = []
for col, definition in [
("period_start_hour", "INTEGER"),
("period_end_hour", "INTEGER"),
]:
if col not in existing:
conn.execute(text(f"ALTER TABLE monitoring_sessions ADD COLUMN {col} {definition}"))
added.append(col)
else:
print(f" Column '{col}' already exists — skipping.")
conn.commit()
if added:
print(f" Added columns: {', '.join(added)}")
print("Migration complete.")
if __name__ == "__main__":
run()

View File

@@ -0,0 +1,131 @@
#!/usr/bin/env python3
"""
Migration: Add session_label and period_type columns to monitoring_sessions.
session_label - user-editable display name, e.g. "NRL-1 Sun 2/23 Night"
period_type - one of: weekday_day | weekday_night | weekend_day | weekend_night
Auto-derived from started_at when NULL.
Period definitions (used in report stats table):
weekday_day Mon-Fri 07:00-22:00 -> Daytime (7AM-10PM)
weekday_night Mon-Fri 22:00-07:00 -> Nighttime (10PM-7AM)
weekend_day Sat-Sun 07:00-22:00 -> Daytime (7AM-10PM)
weekend_night Sat-Sun 22:00-07:00 -> Nighttime (10PM-7AM)
Run once inside the Docker container:
docker exec terra-view python3 backend/migrate_add_session_period_type.py
"""
from pathlib import Path
from datetime import datetime
DB_PATH = Path("data/seismo_fleet.db")
def _derive_period_type(started_at_str: str) -> str | None:
"""Derive period_type from a started_at ISO datetime string."""
if not started_at_str:
return None
try:
dt = datetime.fromisoformat(started_at_str)
except ValueError:
return None
is_weekend = dt.weekday() >= 5 # 5=Sat, 6=Sun
is_night = dt.hour >= 22 or dt.hour < 7
if is_weekend:
return "weekend_night" if is_night else "weekend_day"
else:
return "weekday_night" if is_night else "weekday_day"
def _build_label(started_at_str: str, location_name: str | None, period_type: str | None) -> str | None:
"""Build a human-readable session label."""
if not started_at_str:
return None
try:
dt = datetime.fromisoformat(started_at_str)
except ValueError:
return None
day_abbr = dt.strftime("%a") # Mon, Tue, Sun, etc.
    date_str = dt.strftime("%-m/%-d")  # 2/23 (POSIX strftime; "%-d" is unsupported on Windows)
period_labels = {
"weekday_day": "Day",
"weekday_night": "Night",
"weekend_day": "Day",
"weekend_night": "Night",
}
period_str = period_labels.get(period_type or "", "")
parts = []
if location_name:
parts.append(location_name)
parts.append(f"{day_abbr} {date_str}")
if period_str:
parts.append(period_str)
return "".join(parts)
def migrate():
import sqlite3
if not DB_PATH.exists():
print(f"Database not found at {DB_PATH}. Are you running from /home/serversdown/terra-view?")
return
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# 1. Add columns (idempotent)
cur.execute("PRAGMA table_info(monitoring_sessions)")
existing_cols = {row["name"] for row in cur.fetchall()}
for col, typedef in [("session_label", "TEXT"), ("period_type", "TEXT")]:
if col not in existing_cols:
cur.execute(f"ALTER TABLE monitoring_sessions ADD COLUMN {col} {typedef}")
conn.commit()
print(f"✓ Added column {col} to monitoring_sessions")
else:
print(f"○ Column {col} already exists — skipping ALTER TABLE")
# 2. Backfill existing rows
cur.execute(
"""SELECT ms.id, ms.started_at, ms.location_id
FROM monitoring_sessions ms
WHERE ms.period_type IS NULL OR ms.session_label IS NULL"""
)
sessions = cur.fetchall()
print(f"Backfilling {len(sessions)} session(s)...")
updated = 0
for row in sessions:
session_id = row["id"]
started_at = row["started_at"]
location_id = row["location_id"]
# Look up location name
location_name = None
if location_id:
cur.execute("SELECT name FROM monitoring_locations WHERE id = ?", (location_id,))
loc_row = cur.fetchone()
if loc_row:
location_name = loc_row["name"]
period_type = _derive_period_type(started_at)
label = _build_label(started_at, location_name, period_type)
cur.execute(
"UPDATE monitoring_sessions SET period_type = ?, session_label = ? WHERE id = ?",
(period_type, label, session_id),
)
updated += 1
conn.commit()
conn.close()
print(f"✓ Backfilled {updated} session(s).")
print("Migration complete.")
if __name__ == "__main__":
migrate()

View File

@@ -0,0 +1,41 @@
"""
Migration: add report_date to monitoring_sessions.
Run once:
python backend/migrate_add_session_report_date.py
Or inside the container:
docker exec terra-view-terra-view-1 python3 backend/migrate_add_session_report_date.py
"""
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from backend.database import engine
from sqlalchemy import text
def run():
with engine.connect() as conn:
# Check which columns already exist
result = conn.execute(text("PRAGMA table_info(monitoring_sessions)"))
existing = {row[1] for row in result}
added = []
for col, definition in [
("report_date", "DATE"),
]:
if col not in existing:
conn.execute(text(f"ALTER TABLE monitoring_sessions ADD COLUMN {col} {definition}"))
added.append(col)
else:
print(f" Column '{col}' already exists — skipping.")
conn.commit()
if added:
print(f" Added columns: {', '.join(added)}")
print("Migration complete.")
if __name__ == "__main__":
run()

View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Database migration: Add sound level meter fields to roster table.
Adds columns for sound_level_meter device type support.
"""
import sqlite3
from pathlib import Path
def migrate():
"""Add SLM fields to roster table if they don't exist."""
# Try multiple possible database locations
possible_paths = [
Path("data/seismo_fleet.db"),
Path("data/sfm.db"),
Path("data/seismo.db"),
]
db_path = None
for path in possible_paths:
if path.exists():
db_path = path
break
if db_path is None:
print(f"Database not found in any of: {[str(p) for p in possible_paths]}")
print("Creating database with models.py will include new fields automatically.")
return
print(f"Using database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
# Check if columns already exist
cursor.execute("PRAGMA table_info(roster)")
existing_columns = {row[1] for row in cursor.fetchall()}
new_columns = {
"slm_host": "TEXT",
"slm_tcp_port": "INTEGER",
"slm_model": "TEXT",
"slm_serial_number": "TEXT",
"slm_frequency_weighting": "TEXT",
"slm_time_weighting": "TEXT",
"slm_measurement_range": "TEXT",
"slm_last_check": "DATETIME",
}
migrations_applied = []
for column_name, column_type in new_columns.items():
if column_name not in existing_columns:
try:
cursor.execute(f"ALTER TABLE roster ADD COLUMN {column_name} {column_type}")
migrations_applied.append(column_name)
print(f"✓ Added column: {column_name} ({column_type})")
except sqlite3.OperationalError as e:
print(f"✗ Failed to add column {column_name}: {e}")
else:
print(f"○ Column already exists: {column_name}")
conn.commit()
conn.close()
if migrations_applied:
print(f"\n✓ Migration complete! Added {len(migrations_applied)} new columns.")
else:
print("\n○ No migration needed - all columns already exist.")
print("\nSound level meter fields are now available in the roster table.")
print("Note: Use device_type='slm' for Sound Level Meters. Legacy 'sound_level_meter' has been deprecated.")
if __name__ == "__main__":
migrate()

View File

@@ -0,0 +1,89 @@
"""
Migration: Add TBD date support to job reservations
Adds columns:
- job_reservations.estimated_end_date: For planning when end is TBD
- job_reservations.end_date_tbd: Boolean flag for TBD end dates
- job_reservation_units.unit_start_date: Unit-specific start (for swaps)
- job_reservation_units.unit_end_date: Unit-specific end (for swaps)
- job_reservation_units.unit_end_tbd: Unit-specific TBD flag
- job_reservation_units.notes: Notes for the assignment
Also makes job_reservations.end_date nullable.
"""
import sqlite3
import sys
from pathlib import Path
def migrate(db_path: str):
"""Run the migration."""
print(f"Migrating database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
try:
# Check if job_reservations table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='job_reservations'")
if not cursor.fetchone():
print("job_reservations table does not exist. Skipping migration.")
return
# Get existing columns in job_reservations
cursor.execute("PRAGMA table_info(job_reservations)")
existing_cols = {row[1] for row in cursor.fetchall()}
# Add new columns to job_reservations if they don't exist
if 'estimated_end_date' not in existing_cols:
print("Adding estimated_end_date column to job_reservations...")
cursor.execute("ALTER TABLE job_reservations ADD COLUMN estimated_end_date DATE")
if 'end_date_tbd' not in existing_cols:
print("Adding end_date_tbd column to job_reservations...")
cursor.execute("ALTER TABLE job_reservations ADD COLUMN end_date_tbd BOOLEAN DEFAULT 0")
# Get existing columns in job_reservation_units
cursor.execute("PRAGMA table_info(job_reservation_units)")
unit_cols = {row[1] for row in cursor.fetchall()}
# Add new columns to job_reservation_units if they don't exist
if 'unit_start_date' not in unit_cols:
print("Adding unit_start_date column to job_reservation_units...")
cursor.execute("ALTER TABLE job_reservation_units ADD COLUMN unit_start_date DATE")
if 'unit_end_date' not in unit_cols:
print("Adding unit_end_date column to job_reservation_units...")
cursor.execute("ALTER TABLE job_reservation_units ADD COLUMN unit_end_date DATE")
if 'unit_end_tbd' not in unit_cols:
print("Adding unit_end_tbd column to job_reservation_units...")
cursor.execute("ALTER TABLE job_reservation_units ADD COLUMN unit_end_tbd BOOLEAN DEFAULT 0")
if 'notes' not in unit_cols:
print("Adding notes column to job_reservation_units...")
cursor.execute("ALTER TABLE job_reservation_units ADD COLUMN notes TEXT")
conn.commit()
print("Migration completed successfully!")
except Exception as e:
print(f"Migration failed: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
# Default to dev database
db_path = "./data-dev/seismo_fleet.db"
if len(sys.argv) > 1:
db_path = sys.argv[1]
if not Path(db_path).exists():
print(f"Database not found: {db_path}")
sys.exit(1)
migrate(db_path)

View File

@@ -0,0 +1,78 @@
"""
Migration script to add unit history timeline support.
This creates the unit_history table to track all changes to units:
- Note changes (archived old notes, new notes)
- Deployment status changes (deployed/benched)
- Retired status changes
- Other field changes
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Create the unit_history table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if unit_history table already exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='unit_history'")
if cursor.fetchone():
print("Migration already applied - unit_history table exists")
conn.close()
return
print("Creating unit_history table...")
try:
cursor.execute("""
CREATE TABLE unit_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
unit_id TEXT NOT NULL,
change_type TEXT NOT NULL,
field_name TEXT,
old_value TEXT,
new_value TEXT,
changed_at TIMESTAMP NOT NULL,
source TEXT DEFAULT 'manual',
notes TEXT
)
""")
print(" ✓ Created unit_history table")
# Create indexes for better query performance
cursor.execute("CREATE INDEX idx_unit_history_unit_id ON unit_history(unit_id)")
print(" ✓ Created index on unit_id")
cursor.execute("CREATE INDEX idx_unit_history_changed_at ON unit_history(changed_at)")
print(" ✓ Created index on changed_at")
conn.commit()
print("\nMigration completed successfully!")
print("Units will now track their complete history of changes.")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()

View File

@@ -0,0 +1,80 @@
"""
Migration script to add user_preferences table.
This creates a new table for storing persistent user preferences:
- Display settings (timezone, theme, date format)
- Auto-refresh configuration
- Calibration defaults
- Status threshold customization
Run this script once to migrate an existing database.
"""
import sqlite3
import os
# Database path
DB_PATH = "./data/seismo_fleet.db"
def migrate_database():
"""Create user_preferences table"""
if not os.path.exists(DB_PATH):
print(f"Database not found at {DB_PATH}")
print("The database will be created automatically when you run the application.")
return
print(f"Migrating database: {DB_PATH}")
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
# Check if user_preferences table already exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='user_preferences'")
table_exists = cursor.fetchone()
if table_exists:
print("Migration already applied - user_preferences table exists")
conn.close()
return
print("Creating user_preferences table...")
try:
cursor.execute("""
CREATE TABLE user_preferences (
id INTEGER PRIMARY KEY DEFAULT 1,
timezone TEXT DEFAULT 'America/New_York',
theme TEXT DEFAULT 'auto',
auto_refresh_interval INTEGER DEFAULT 10,
date_format TEXT DEFAULT 'MM/DD/YYYY',
table_rows_per_page INTEGER DEFAULT 25,
calibration_interval_days INTEGER DEFAULT 365,
calibration_warning_days INTEGER DEFAULT 30,
status_ok_threshold_hours INTEGER DEFAULT 12,
status_pending_threshold_hours INTEGER DEFAULT 24,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")
print(" ✓ Created user_preferences table")
# Insert default preferences
cursor.execute("""
INSERT INTO user_preferences (id) VALUES (1)
""")
print(" ✓ Inserted default preferences")
conn.commit()
print("\nMigration completed successfully!")
except sqlite3.Error as e:
print(f"\nError during migration: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
migrate_database()
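
Because user_preferences is a single-row table, reads and writes always target id=1 — a sketch (illustrative):

    import sqlite3

    conn = sqlite3.connect("./data/seismo_fleet.db")
    timezone, theme = conn.execute(
        "SELECT timezone, theme FROM user_preferences WHERE id = 1"
    ).fetchone()
    conn.execute(
        "UPDATE user_preferences SET theme = ?, updated_at = CURRENT_TIMESTAMP WHERE id = 1",
        ("dark",),
    )
    conn.commit()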


@@ -0,0 +1,105 @@
"""
Migration: Make job_reservations.end_date nullable for TBD support
SQLite doesn't support ALTER COLUMN, so we need to:
1. Create a new table with the correct schema
2. Copy data
3. Drop old table
4. Rename new table
"""
import sqlite3
import sys
from pathlib import Path
def migrate(db_path: str):
"""Run the migration."""
print(f"Migrating database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
try:
# Check if job_reservations table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='job_reservations'")
if not cursor.fetchone():
print("job_reservations table does not exist. Skipping migration.")
return
# Check current schema
cursor.execute("PRAGMA table_info(job_reservations)")
columns = cursor.fetchall()
col_info = {row[1]: row for row in columns}
# Check if end_date is already nullable (notnull=0)
if 'end_date' in col_info and col_info['end_date'][3] == 0:
print("end_date is already nullable. Skipping table recreation.")
return
print("Recreating job_reservations table with nullable end_date...")
# Create new table with correct schema
cursor.execute("""
CREATE TABLE job_reservations_new (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
project_id TEXT,
start_date DATE NOT NULL,
end_date DATE,
estimated_end_date DATE,
end_date_tbd BOOLEAN DEFAULT 0,
assignment_type TEXT NOT NULL DEFAULT 'quantity',
device_type TEXT DEFAULT 'seismograph',
quantity_needed INTEGER,
notes TEXT,
color TEXT DEFAULT '#3B82F6',
created_at DATETIME,
updated_at DATETIME
)
""")
# Copy existing data
cursor.execute("""
INSERT INTO job_reservations_new
SELECT
id, name, project_id, start_date, end_date,
estimated_end_date,
COALESCE(end_date_tbd, 0) as end_date_tbd,
assignment_type, device_type, quantity_needed, notes, color,
created_at, updated_at
FROM job_reservations
""")
# Drop old table
cursor.execute("DROP TABLE job_reservations")
# Rename new table
cursor.execute("ALTER TABLE job_reservations_new RENAME TO job_reservations")
# Recreate index
cursor.execute("CREATE INDEX IF NOT EXISTS ix_job_reservations_id ON job_reservations (id)")
cursor.execute("CREATE INDEX IF NOT EXISTS ix_job_reservations_project_id ON job_reservations (project_id)")
conn.commit()
print("Migration completed successfully!")
except Exception as e:
print(f"Migration failed: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
# Default to dev database
db_path = "./data-dev/seismo_fleet.db"
if len(sys.argv) > 1:
db_path = sys.argv[1]
if not Path(db_path).exists():
print(f"Database not found: {db_path}")
sys.exit(1)
migrate(db_path)


@@ -0,0 +1,54 @@
"""
Migration: Rename recording_sessions table to monitoring_sessions
Renames the table and updates the model name from RecordingSession to MonitoringSession.
Run once per database: python backend/migrate_rename_recording_to_monitoring_sessions.py
"""
import sqlite3
import sys
from pathlib import Path
def migrate(db_path: str):
"""Run the migration."""
print(f"Migrating database: {db_path}")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
try:
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='recording_sessions'")
if not cursor.fetchone():
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='monitoring_sessions'")
if cursor.fetchone():
print("monitoring_sessions table already exists. Skipping migration.")
else:
print("recording_sessions table does not exist. Skipping migration.")
return
print("Renaming recording_sessions -> monitoring_sessions...")
cursor.execute("ALTER TABLE recording_sessions RENAME TO monitoring_sessions")
conn.commit()
print("Migration completed successfully!")
except Exception as e:
print(f"Migration failed: {e}")
conn.rollback()
raise
finally:
conn.close()
if __name__ == "__main__":
db_path = "./data/seismo_fleet.db"
if len(sys.argv) > 1:
db_path = sys.argv[1]
if not Path(db_path).exists():
print(f"Database not found: {db_path}")
sys.exit(1)
migrate(db_path)


@@ -0,0 +1,106 @@
"""
Database Migration: Standardize device_type values
This migration ensures all device_type values follow the official schema:
- "seismograph" - Seismic monitoring devices
- "modem" - Field modems and network equipment
- "slm" - Sound level meters (NL-43/NL-53)
Changes:
- Converts "sound_level_meter""slm"
- Safe to run multiple times (idempotent)
- No data loss
Usage:
python backend/migrate_standardize_device_types.py
"""
import sys
import os
# Add parent directory to path so we can import backend modules
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
# Database configuration
SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
def migrate():
"""Standardize device_type values in the database"""
db = SessionLocal()
try:
print("=" * 70)
print("Database Migration: Standardize device_type values")
print("=" * 70)
print()
# Check for existing "sound_level_meter" values
result = db.execute(
text("SELECT COUNT(*) as count FROM roster WHERE device_type = 'sound_level_meter'")
).fetchone()
count_to_migrate = result[0] if result else 0
if count_to_migrate == 0:
print("✓ No records need migration - all device_type values are already standardized")
print()
print("Current device_type distribution:")
# Show distribution
distribution = db.execute(
text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
).fetchall()
for row in distribution:
device_type, count = row
print(f" - {device_type}: {count} units")
print()
print("Migration not needed.")
return
print(f"Found {count_to_migrate} record(s) with device_type='sound_level_meter'")
print()
print("Converting 'sound_level_meter''slm'...")
# Perform the migration
db.execute(
text("UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'")
)
db.commit()
print(f"✓ Successfully migrated {count_to_migrate} record(s)")
print()
# Show final distribution
print("Updated device_type distribution:")
distribution = db.execute(
text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC")
).fetchall()
for row in distribution:
device_type, count = row
print(f" - {device_type}: {count} units")
print()
print("=" * 70)
print("Migration completed successfully!")
print("=" * 70)
except Exception as e:
db.rollback()
print(f"\n❌ Error during migration: {e}")
print("\nRolling back changes...")
raise
finally:
db.close()
if __name__ == "__main__":
migrate()

backend/models.py (new file, 593 lines)

@@ -0,0 +1,593 @@
from sqlalchemy import Column, String, DateTime, Boolean, Text, Date, Integer, UniqueConstraint
from datetime import datetime
import uuid
from backend.database import Base
class Emitter(Base):
__tablename__ = "emitters"
id = Column(String, primary_key=True, index=True)
unit_type = Column(String, nullable=False)
last_seen = Column(DateTime, default=datetime.utcnow)
last_file = Column(String, nullable=False)
status = Column(String, nullable=False)
notes = Column(String, nullable=True)
class RosterUnit(Base):
"""
Roster table: represents our *intended assignment* of a unit.
This is editable from the GUI.
Supports multiple device types with type-specific fields:
- "seismograph" - Seismic monitoring devices (default)
- "modem" - Field modems and network equipment
- "slm" - Sound level meters (NL-43/NL-53)
"""
__tablename__ = "roster"
# Core fields (all device types)
id = Column(String, primary_key=True, index=True)
unit_type = Column(String, default="series3") # Backward compatibility
device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "slm"
deployed = Column(Boolean, default=True)
retired = Column(Boolean, default=False)
out_for_calibration = Column(Boolean, default=False)
allocated = Column(Boolean, default=False) # Staged for an upcoming job, not yet deployed
allocated_to_project_id = Column(String, nullable=True) # Which project it's allocated to
note = Column(String, nullable=True)
project_id = Column(String, nullable=True)
location = Column(String, nullable=True) # Legacy field - use address/coordinates instead
address = Column(String, nullable=True) # Human-readable address
coordinates = Column(String, nullable=True) # Lat,Lon format: "34.0522,-118.2437"
last_updated = Column(DateTime, default=datetime.utcnow)
# Seismograph-specific fields (nullable for modems and SLMs)
last_calibrated = Column(Date, nullable=True)
next_calibration_due = Column(Date, nullable=True)
# Modem assignment (shared by seismographs and SLMs)
deployed_with_modem_id = Column(String, nullable=True) # FK to another RosterUnit (device_type=modem)
# Modem-specific fields (nullable for seismographs and SLMs)
ip_address = Column(String, nullable=True)
phone_number = Column(String, nullable=True)
hardware_model = Column(String, nullable=True)
deployment_type = Column(String, nullable=True) # "seismograph" | "slm" - what type of device this modem is deployed with
deployed_with_unit_id = Column(String, nullable=True) # ID of seismograph/SLM this modem is deployed with
# Sound Level Meter-specific fields (nullable for seismographs and modems)
slm_host = Column(String, nullable=True) # Device IP or hostname
slm_tcp_port = Column(Integer, nullable=True) # TCP control port (default 2255)
slm_ftp_port = Column(Integer, nullable=True) # FTP data retrieval port (default 21)
slm_model = Column(String, nullable=True) # NL-43, NL-53, etc.
slm_serial_number = Column(String, nullable=True) # Device serial number
slm_frequency_weighting = Column(String, nullable=True) # A, C, Z
slm_time_weighting = Column(String, nullable=True) # F (Fast), S (Slow), I (Impulse)
slm_measurement_range = Column(String, nullable=True) # e.g., "30-130 dB"
slm_last_check = Column(DateTime, nullable=True) # Last communication check
class WatcherAgent(Base):
"""
Watcher agents: tracks the watcher processes (series3-watcher, thor-watcher)
that run on field machines and report unit heartbeats.
Updated on every heartbeat received from each source_id.
"""
__tablename__ = "watcher_agents"
id = Column(String, primary_key=True, index=True) # source_id (hostname)
source_type = Column(String, nullable=False) # series3_watcher | series4_watcher
version = Column(String, nullable=True) # e.g. "1.4.0"
last_seen = Column(DateTime, default=datetime.utcnow)
status = Column(String, nullable=False, default="unknown") # ok | pending | missing | error | unknown
ip_address = Column(String, nullable=True)
log_tail = Column(Text, nullable=True) # last N log lines (JSON array of strings)
update_pending = Column(Boolean, default=False) # set True to trigger remote update
update_version = Column(String, nullable=True) # target version to update to
class IgnoredUnit(Base):
"""
Ignored units: units that report but should be filtered out from unknown emitters.
Used to suppress noise from old projects.
"""
__tablename__ = "ignored_units"
id = Column(String, primary_key=True, index=True)
reason = Column(String, nullable=True)
ignored_at = Column(DateTime, default=datetime.utcnow)
class UnitHistory(Base):
"""
Unit history: complete timeline of changes to each unit.
Tracks note changes, status changes, deployment/benched events, and more.
"""
__tablename__ = "unit_history"
id = Column(Integer, primary_key=True, autoincrement=True)
unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit.id
change_type = Column(String, nullable=False) # note_change, deployed_change, retired_change, etc.
field_name = Column(String, nullable=True) # Which field changed
old_value = Column(Text, nullable=True) # Previous value
new_value = Column(Text, nullable=True) # New value
changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True)
source = Column(String, default="manual") # manual, csv_import, telemetry, offline_sync
notes = Column(Text, nullable=True) # Optional reason/context for the change
class UserPreferences(Base):
"""
User preferences: persistent storage for application settings.
Single-row table (id=1) to store global user preferences.
"""
__tablename__ = "user_preferences"
id = Column(Integer, primary_key=True, default=1)
timezone = Column(String, default="America/New_York")
theme = Column(String, default="auto") # auto, light, dark
auto_refresh_interval = Column(Integer, default=10) # seconds
date_format = Column(String, default="MM/DD/YYYY")
table_rows_per_page = Column(Integer, default=25)
calibration_interval_days = Column(Integer, default=365)
calibration_warning_days = Column(Integer, default=30)
status_ok_threshold_hours = Column(Integer, default=12)
status_pending_threshold_hours = Column(Integer, default=24)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
# ============================================================================
# Project Management System
# ============================================================================
class ProjectType(Base):
"""
Project type templates: defines available project types and their capabilities.
Pre-populated with: sound_monitoring, vibration_monitoring, combined.
"""
__tablename__ = "project_types"
id = Column(String, primary_key=True) # sound_monitoring, vibration_monitoring, combined
name = Column(String, nullable=False, unique=True) # "Sound Monitoring", "Vibration Monitoring"
description = Column(Text, nullable=True)
icon = Column(String, nullable=True) # Icon identifier for UI
supports_sound = Column(Boolean, default=False) # Enables SLM features
supports_vibration = Column(Boolean, default=False) # Enables seismograph features
created_at = Column(DateTime, default=datetime.utcnow)
class Project(Base):
"""
Projects: top-level organization for monitoring work.
Type-aware to enable/disable features based on project_type_id.
Project naming convention:
- project_number: TMI internal ID format xxxx-YY (e.g., "2567-23")
- client_name: Client/contractor name (e.g., "PJ Dick")
- name: Project/site name (e.g., "RKM Hall", "CMU Campus")
Display format: "2567-23 - PJ Dick - RKM Hall"
Users can search by any of these fields.
"""
__tablename__ = "projects"
id = Column(String, primary_key=True, index=True) # UUID
project_number = Column(String, nullable=True, index=True) # TMI ID: xxxx-YY format (e.g., "2567-23")
name = Column(String, nullable=False, unique=True) # Project/site name (e.g., "RKM Hall")
description = Column(Text, nullable=True)
project_type_id = Column(String, nullable=True) # Legacy FK to ProjectType.id; use ProjectModule for feature flags
status = Column(String, default="active") # active, on_hold, completed, archived, deleted
# Data collection mode: how field data reaches Terra-View.
# "remote" — units have modems; data pulled via FTP/scheduler automatically
# "manual" — no modem; SD cards retrieved daily and uploaded by hand
data_collection_mode = Column(String, default="manual") # remote | manual
# Project metadata
client_name = Column(String, nullable=True, index=True) # Client name (e.g., "PJ Dick")
site_address = Column(String, nullable=True)
site_coordinates = Column(String, nullable=True) # "lat,lon"
start_date = Column(Date, nullable=True)
end_date = Column(Date, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
deleted_at = Column(DateTime, nullable=True) # Set when status='deleted'; hard delete scheduled after 60 days
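# Illustrative helper for the display format documented above (a sketch, not
# part of the model):
#   def project_display(p: "Project") -> str:
#       parts = [p.project_number, p.client_name, p.name]
#       return " - ".join(part for part in parts if part)
#   # e.g. "2567-23 - PJ Dick - RKM Hall"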
class ProjectModule(Base):
"""
Modules enabled on a project. Each module unlocks a set of features/tabs.
A project can have zero or more modules (sound_monitoring, vibration_monitoring, etc.).
"""
__tablename__ = "project_modules"
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
project_id = Column(String, nullable=False, index=True) # FK to projects.id
module_type = Column(String, nullable=False) # sound_monitoring | vibration_monitoring | ...
enabled = Column(Boolean, default=True, nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
__table_args__ = (UniqueConstraint("project_id", "module_type", name="uq_project_module"),)
class MonitoringLocation(Base):
"""
Monitoring locations: generic location for monitoring activities.
Can be NRL (Noise Recording Location) for sound projects,
or monitoring point for vibration projects.
"""
__tablename__ = "monitoring_locations"
id = Column(String, primary_key=True, index=True) # UUID
project_id = Column(String, nullable=False, index=True) # FK to Project.id
location_type = Column(String, nullable=False) # "sound" | "vibration"
name = Column(String, nullable=False) # NRL-001, VP-North, etc.
description = Column(Text, nullable=True)
coordinates = Column(String, nullable=True) # "lat,lon"
address = Column(String, nullable=True)
# Type-specific metadata stored as JSON
# For sound: {"ambient_conditions": "urban", "expected_sources": ["traffic"]}
# For vibration: {"ground_type": "bedrock", "depth": "10m"}
location_metadata = Column(Text, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
class UnitAssignment(Base):
"""
Unit assignments: links devices (SLMs or seismographs) to monitoring locations.
Supports temporary assignments with assigned_until.
"""
__tablename__ = "unit_assignments"
id = Column(String, primary_key=True, index=True) # UUID
unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit.id
location_id = Column(String, nullable=False, index=True) # FK to MonitoringLocation.id
assigned_at = Column(DateTime, default=datetime.utcnow)
assigned_until = Column(DateTime, nullable=True) # Null = indefinite
status = Column(String, default="active") # active, completed, cancelled
notes = Column(Text, nullable=True)
# Denormalized for efficient queries
device_type = Column(String, nullable=False) # "slm" | "seismograph"
project_id = Column(String, nullable=False, index=True) # FK to Project.id
created_at = Column(DateTime, default=datetime.utcnow)
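# Illustrative lookup of a location's active assignment (a sketch; "active"
# here means no end date set, per the assigned_until comment above):
#   db.query(UnitAssignment).filter(
#       UnitAssignment.location_id == location_id,
#       UnitAssignment.assigned_until == None,
#   ).first()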
class ScheduledAction(Base):
"""
Scheduled actions: automation for recording start/stop/download.
Terra-View executes these by calling SLMM or SFM endpoints.
"""
__tablename__ = "scheduled_actions"
id = Column(String, primary_key=True, index=True) # UUID
project_id = Column(String, nullable=False, index=True) # FK to Project.id
location_id = Column(String, nullable=False, index=True) # FK to MonitoringLocation.id
unit_id = Column(String, nullable=True, index=True) # FK to RosterUnit.id (nullable if location-based)
action_type = Column(String, nullable=False) # start, stop, download, cycle, calibrate
device_type = Column(String, nullable=False) # "slm" | "seismograph"
scheduled_time = Column(DateTime, nullable=False, index=True)
executed_at = Column(DateTime, nullable=True)
execution_status = Column(String, default="pending") # pending, completed, failed, cancelled
# Response from device module (SLMM or SFM)
module_response = Column(Text, nullable=True) # JSON
error_message = Column(Text, nullable=True)
notes = Column(Text, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
class MonitoringSession(Base):
"""
Monitoring sessions: tracks actual monitoring sessions.
Created when monitoring starts, updated when it stops.
"""
__tablename__ = "monitoring_sessions"
id = Column(String, primary_key=True, index=True) # UUID
project_id = Column(String, nullable=False, index=True) # FK to Project.id
location_id = Column(String, nullable=False, index=True) # FK to MonitoringLocation.id
unit_id = Column(String, nullable=True, index=True) # FK to RosterUnit.id (nullable for offline uploads)
# Physical device model that produced this session's data (e.g. "NL-43", "NL-53", "NL-32").
# Null for older records; report code falls back to file-content detection when null.
device_model = Column(String, nullable=True)
session_type = Column(String, nullable=False) # sound | vibration
started_at = Column(DateTime, nullable=False)
stopped_at = Column(DateTime, nullable=True)
duration_seconds = Column(Integer, nullable=True)
status = Column(String, default="recording") # recording, completed, failed
# Human-readable label auto-derived from date/location, editable by user.
# e.g. "NRL-1 — Sun 2/23 — Night"
session_label = Column(String, nullable=True)
# Period classification for report stats columns.
# weekday_day | weekday_night | weekend_day | weekend_night
period_type = Column(String, nullable=True)
# Effective monitoring window (hours 0–23). Night sessions cross midnight
# (period_end_hour < period_start_hour). NULL = no filtering applied.
# e.g. Day: start=7, end=19; Night: start=19, end=7
period_start_hour = Column(Integer, nullable=True)
period_end_hour = Column(Integer, nullable=True)
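# Sketch of the membership test implied by this window (assumes a half-open
# [start, end) window; hypothetical helper, not defined in this file):
#   def hour_in_window(h: int, start: int, end: int) -> bool:
#       if start < end:
#           return start <= h < end       # day window, e.g. 7 → 19
#       return h >= start or h < end      # night window wraps midnight, e.g. 19 → 7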
# For day sessions: the specific calendar date to use for report filtering.
# Overrides the automatic "last date with daytime rows" heuristic.
# Null = use heuristic.
report_date = Column(Date, nullable=True)
# Snapshot of device configuration at recording time
session_metadata = Column(Text, nullable=True) # JSON
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
class DataFile(Base):
"""
Data files: references to recorded data files.
Terra-View tracks file metadata; actual files stored in data/Projects/ directory.
"""
__tablename__ = "data_files"
id = Column(String, primary_key=True, index=True) # UUID
session_id = Column(String, nullable=False, index=True) # FK to MonitoringSession.id
file_path = Column(String, nullable=False) # Relative to data/Projects/
file_type = Column(String, nullable=False) # wav, csv, mseed, json
file_size_bytes = Column(Integer, nullable=True)
downloaded_at = Column(DateTime, nullable=True)
checksum = Column(String, nullable=True) # SHA256 or MD5
# Additional file metadata
file_metadata = Column(Text, nullable=True) # JSON
created_at = Column(DateTime, default=datetime.utcnow)
class ReportTemplate(Base):
"""
Report templates: saved configurations for generating Excel reports.
Allows users to save time filter presets, titles, etc. for reuse.
"""
__tablename__ = "report_templates"
id = Column(String, primary_key=True, index=True) # UUID
name = Column(String, nullable=False) # "Nighttime Report", "Full Day Report"
project_id = Column(String, nullable=True) # Optional: project-specific template
# Template settings
report_title = Column(String, default="Background Noise Study")
start_time = Column(String, nullable=True) # "19:00" format
end_time = Column(String, nullable=True) # "07:00" format
start_date = Column(String, nullable=True) # "2025-01-15" format (optional)
end_date = Column(String, nullable=True) # "2025-01-20" format (optional)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
# ============================================================================
# Sound Monitoring Scheduler
# ============================================================================
class RecurringSchedule(Base):
"""
Recurring schedule definitions for automated sound monitoring.
Supports three schedule types:
- "weekly_calendar": Select specific days with start/end times (e.g., Mon/Wed/Fri 7pm-7am)
- "simple_interval": For 24/7 monitoring with daily stop/download/restart cycles
- "one_off": Single recording session with specific start and end date/time
"""
__tablename__ = "recurring_schedules"
id = Column(String, primary_key=True, index=True) # UUID
project_id = Column(String, nullable=False, index=True) # FK to Project.id
location_id = Column(String, nullable=False, index=True) # FK to MonitoringLocation.id
unit_id = Column(String, nullable=True, index=True) # FK to RosterUnit.id (optional, can use assignment)
name = Column(String, nullable=False) # "Weeknight Monitoring", "24/7 Continuous"
schedule_type = Column(String, nullable=False) # "weekly_calendar" | "simple_interval" | "one_off"
device_type = Column(String, nullable=False) # "slm" | "seismograph"
# Weekly Calendar fields (schedule_type = "weekly_calendar")
# JSON format: {
# "monday": {"enabled": true, "start": "19:00", "end": "07:00"},
# "tuesday": {"enabled": false},
# ...
# }
weekly_pattern = Column(Text, nullable=True)
# Simple Interval fields (schedule_type = "simple_interval")
interval_type = Column(String, nullable=True) # "daily" | "hourly"
cycle_time = Column(String, nullable=True) # "00:00" - time to run stop/download/restart
include_download = Column(Boolean, default=True) # Download data before restart
# One-Off fields (schedule_type = "one_off")
start_datetime = Column(DateTime, nullable=True) # Exact start date+time (stored as UTC)
end_datetime = Column(DateTime, nullable=True) # Exact end date+time (stored as UTC)
# Automation options (applies to all schedule types)
auto_increment_index = Column(Boolean, default=True) # Auto-increment store/index number before start
# When True: prevents "overwrite data?" prompts by using a new index each time
# Shared configuration
enabled = Column(Boolean, default=True)
timezone = Column(String, default="America/New_York")
# Tracking
last_generated_at = Column(DateTime, nullable=True) # When actions were last generated
next_occurrence = Column(DateTime, nullable=True) # Computed next action time
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
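# Illustrative weekly_calendar schedule (a sketch; id and FK fields elided):
#   import json
#   schedule = RecurringSchedule(
#       name="Weeknight Monitoring",
#       schedule_type="weekly_calendar",
#       device_type="slm",
#       weekly_pattern=json.dumps({
#           "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
#           "tuesday": {"enabled": False},
#       }),
#   )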
class Alert(Base):
"""
In-app alerts for device status changes and system events.
Designed for future expansion to email/webhook notifications.
Currently supports:
- device_offline: Device became unreachable
- device_online: Device came back online
- schedule_failed: Scheduled action failed to execute
- schedule_completed: Scheduled action completed successfully
"""
__tablename__ = "alerts"
id = Column(String, primary_key=True, index=True) # UUID
# Alert classification
alert_type = Column(String, nullable=False) # "device_offline" | "device_online" | "schedule_failed" | "schedule_completed"
severity = Column(String, default="warning") # "info" | "warning" | "critical"
# Related entities (nullable - may not all apply)
project_id = Column(String, nullable=True, index=True)
location_id = Column(String, nullable=True, index=True)
unit_id = Column(String, nullable=True, index=True)
schedule_id = Column(String, nullable=True) # RecurringSchedule or ScheduledAction id
# Alert content
title = Column(String, nullable=False) # "NRL-001 Device Offline"
message = Column(Text, nullable=True) # Detailed description
alert_metadata = Column(Text, nullable=True) # JSON: additional context data
# Status tracking
status = Column(String, default="active") # "active" | "acknowledged" | "resolved" | "dismissed"
acknowledged_at = Column(DateTime, nullable=True)
resolved_at = Column(DateTime, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
expires_at = Column(DateTime, nullable=True) # Auto-dismiss after this time
# ============================================================================
# Deployment Records
# ============================================================================
class DeploymentRecord(Base):
"""
Deployment records: tracks each time a unit is sent to the field and returned.
Each row represents one deployment. The active deployment is the record
with actual_removal_date IS NULL. The fleet calendar uses this to show
units as "In Field" and surface their expected return date.
project_ref is a freeform string for legacy/vibration jobs like "Fay I-80".
project_id will be populated once those jobs are migrated to proper Project records.
"""
__tablename__ = "deployment_records"
id = Column(String, primary_key=True, index=True) # UUID
unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit.id
deployed_date = Column(Date, nullable=True) # When unit left the yard
estimated_removal_date = Column(Date, nullable=True) # Expected return date
actual_removal_date = Column(Date, nullable=True) # Filled in when returned; NULL = still out
# Project linkage: freeform for legacy jobs, FK for proper project records
project_ref = Column(String, nullable=True) # e.g. "Fay I-80" (vibration jobs)
project_id = Column(String, nullable=True, index=True) # FK to Project.id (when available)
location_name = Column(String, nullable=True) # e.g. "North Gate", "VP-001"
notes = Column(Text, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
# ============================================================================
# Fleet Calendar & Job Reservations
# ============================================================================
class JobReservation(Base):
"""
Job reservations: reserve units for future jobs/projects.
Supports two assignment modes:
- "specific": Pick exact units (SN-001, SN-002, etc.)
- "quantity": Reserve a number of units (e.g., "need 8 seismographs")
Used by the Fleet Calendar to visualize unit availability over time.
"""
__tablename__ = "job_reservations"
id = Column(String, primary_key=True, index=True) # UUID
name = Column(String, nullable=False) # "Job A - March deployment"
project_id = Column(String, nullable=True, index=True) # Optional FK to Project
# Date range for the reservation
start_date = Column(Date, nullable=False)
end_date = Column(Date, nullable=True) # Nullable = TBD / ongoing
estimated_end_date = Column(Date, nullable=True) # For planning when end is TBD
end_date_tbd = Column(Boolean, default=False) # True = end date unknown
# Assignment type: "specific" or "quantity"
assignment_type = Column(String, nullable=False, default="quantity")
# For quantity reservations
device_type = Column(String, default="seismograph") # seismograph | slm
quantity_needed = Column(Integer, nullable=True) # e.g., 8 units
estimated_units = Column(Integer, nullable=True)
# Full slot list as JSON: [{"location_name": "North Gate", "unit_id": null}, ...]
# Includes empty slots (no unit assigned yet). Filled slots are authoritative in JobReservationUnit.
location_slots = Column(Text, nullable=True)
# Metadata
notes = Column(Text, nullable=True)
color = Column(String, default="#3B82F6") # For calendar display (blue default)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
class JobReservationUnit(Base):
"""
Links specific units to job reservations.
Used when:
- assignment_type="specific": Units are directly assigned
- assignment_type="quantity": Units can be filled in later
Supports unit swaps: same reservation can have multiple units with
different date ranges (e.g., BE17353 Feb-Jun, then BE18438 Jun-Nov).
"""
__tablename__ = "job_reservation_units"
id = Column(String, primary_key=True, index=True) # UUID
reservation_id = Column(String, nullable=False, index=True) # FK to JobReservation
unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit
# Unit-specific date range (for swaps) - defaults to reservation dates if null
unit_start_date = Column(Date, nullable=True) # When this specific unit starts
unit_end_date = Column(Date, nullable=True) # When this unit ends (swap out date)
unit_end_tbd = Column(Boolean, default=False) # True = end unknown (until cal expires or job ends)
# Track how this assignment was made
assignment_source = Column(String, default="specific") # "specific" | "filled" | "swap"
assigned_at = Column(DateTime, default=datetime.utcnow)
notes = Column(Text, nullable=True) # "Replacing BE17353" etc.
# Power requirements for this deployment slot
power_type = Column(String, nullable=True) # "ac" | "solar" | None
# Location identity
location_name = Column(String, nullable=True) # e.g. "North Gate", "Main Entrance"
slot_index = Column(Integer, nullable=True) # Order within reservation (0-based)
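# Illustrative swap on one reservation (a sketch; unit IDs from the docstring,
# dates and other required fields assumed):
#   JobReservationUnit(reservation_id=res.id, unit_id="BE17353",
#                      unit_start_date=date(2026, 2, 1), unit_end_date=date(2026, 6, 1),
#                      assignment_source="specific")
#   JobReservationUnit(reservation_id=res.id, unit_id="BE18438",
#                      unit_start_date=date(2026, 6, 1), unit_end_date=date(2026, 11, 1),
#                      assignment_source="swap", notes="Replacing BE17353")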


@@ -4,8 +4,8 @@ from sqlalchemy import desc
 from pathlib import Path
 from datetime import datetime, timedelta, timezone
 from typing import List, Dict, Any
-from app.seismo.database import get_db
-from app.seismo.models import UnitHistory, Emitter, RosterUnit
+from backend.database import get_db
+from backend.models import UnitHistory, Emitter, RosterUnit
 router = APIRouter(prefix="/api", tags=["activity"])

backend/routers/alerts.py (new file, 326 lines)

@@ -0,0 +1,326 @@
"""
Alerts Router
API endpoints for managing in-app alerts.
"""
from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from typing import Optional
from datetime import datetime, timedelta
from backend.database import get_db
from backend.models import Alert, RosterUnit
from backend.services.alert_service import get_alert_service
from backend.templates_config import templates
router = APIRouter(prefix="/api/alerts", tags=["alerts"])
# ============================================================================
# Alert List and Count
# ============================================================================
@router.get("/")
async def list_alerts(
db: Session = Depends(get_db),
status: Optional[str] = Query(None, description="Filter by status: active, acknowledged, resolved, dismissed"),
project_id: Optional[str] = Query(None),
unit_id: Optional[str] = Query(None),
alert_type: Optional[str] = Query(None, description="Filter by type: device_offline, device_online, schedule_failed"),
limit: int = Query(50, le=100),
offset: int = Query(0, ge=0),
):
"""
List alerts with optional filters.
"""
alert_service = get_alert_service(db)
alerts = alert_service.get_all_alerts(
status=status,
project_id=project_id,
unit_id=unit_id,
alert_type=alert_type,
limit=limit,
offset=offset,
)
return {
"alerts": [
{
"id": a.id,
"alert_type": a.alert_type,
"severity": a.severity,
"title": a.title,
"message": a.message,
"status": a.status,
"unit_id": a.unit_id,
"project_id": a.project_id,
"location_id": a.location_id,
"created_at": a.created_at.isoformat() if a.created_at else None,
"acknowledged_at": a.acknowledged_at.isoformat() if a.acknowledged_at else None,
"resolved_at": a.resolved_at.isoformat() if a.resolved_at else None,
}
for a in alerts
],
"count": len(alerts),
"limit": limit,
"offset": offset,
}
@router.get("/active")
async def list_active_alerts(
db: Session = Depends(get_db),
project_id: Optional[str] = Query(None),
unit_id: Optional[str] = Query(None),
alert_type: Optional[str] = Query(None),
min_severity: Optional[str] = Query(None, description="Minimum severity: info, warning, critical"),
limit: int = Query(50, le=100),
):
"""
List only active alerts.
"""
alert_service = get_alert_service(db)
alerts = alert_service.get_active_alerts(
project_id=project_id,
unit_id=unit_id,
alert_type=alert_type,
min_severity=min_severity,
limit=limit,
)
return {
"alerts": [
{
"id": a.id,
"alert_type": a.alert_type,
"severity": a.severity,
"title": a.title,
"message": a.message,
"unit_id": a.unit_id,
"project_id": a.project_id,
"created_at": a.created_at.isoformat() if a.created_at else None,
}
for a in alerts
],
"count": len(alerts),
}
@router.get("/active/count")
async def get_active_alert_count(db: Session = Depends(get_db)):
"""
Get count of active alerts (for navbar badge).
"""
alert_service = get_alert_service(db)
count = alert_service.get_active_alert_count()
return {"count": count}
# ============================================================================
# Single Alert Operations
# ============================================================================
@router.get("/{alert_id}")
async def get_alert(
alert_id: str,
db: Session = Depends(get_db),
):
"""
Get a specific alert.
"""
alert = db.query(Alert).filter_by(id=alert_id).first()
if not alert:
raise HTTPException(status_code=404, detail="Alert not found")
# Get related unit info
unit = None
if alert.unit_id:
unit = db.query(RosterUnit).filter_by(id=alert.unit_id).first()
return {
"id": alert.id,
"alert_type": alert.alert_type,
"severity": alert.severity,
"title": alert.title,
"message": alert.message,
"metadata": alert.alert_metadata,
"status": alert.status,
"unit_id": alert.unit_id,
"unit_name": unit.id if unit else None,
"project_id": alert.project_id,
"location_id": alert.location_id,
"schedule_id": alert.schedule_id,
"created_at": alert.created_at.isoformat() if alert.created_at else None,
"acknowledged_at": alert.acknowledged_at.isoformat() if alert.acknowledged_at else None,
"resolved_at": alert.resolved_at.isoformat() if alert.resolved_at else None,
"expires_at": alert.expires_at.isoformat() if alert.expires_at else None,
}
@router.post("/{alert_id}/acknowledge")
async def acknowledge_alert(
alert_id: str,
db: Session = Depends(get_db),
):
"""
Mark alert as acknowledged.
"""
alert_service = get_alert_service(db)
alert = alert_service.acknowledge_alert(alert_id)
if not alert:
raise HTTPException(status_code=404, detail="Alert not found")
return {
"success": True,
"alert_id": alert.id,
"status": alert.status,
}
@router.post("/{alert_id}/dismiss")
async def dismiss_alert(
alert_id: str,
db: Session = Depends(get_db),
):
"""
Dismiss alert.
"""
alert_service = get_alert_service(db)
alert = alert_service.dismiss_alert(alert_id)
if not alert:
raise HTTPException(status_code=404, detail="Alert not found")
return {
"success": True,
"alert_id": alert.id,
"status": alert.status,
}
@router.post("/{alert_id}/resolve")
async def resolve_alert(
alert_id: str,
db: Session = Depends(get_db),
):
"""
Manually resolve an alert.
"""
alert_service = get_alert_service(db)
alert = alert_service.resolve_alert(alert_id)
if not alert:
raise HTTPException(status_code=404, detail="Alert not found")
return {
"success": True,
"alert_id": alert.id,
"status": alert.status,
}
# ============================================================================
# HTML Partials for HTMX
# ============================================================================
@router.get("/partials/dropdown", response_class=HTMLResponse)
async def get_alert_dropdown(
request: Request,
db: Session = Depends(get_db),
):
"""
Return HTML partial for alert dropdown in navbar.
"""
alert_service = get_alert_service(db)
alerts = alert_service.get_active_alerts(limit=10)
# Calculate relative time for each alert
now = datetime.utcnow()
alerts_data = []
for alert in alerts:
delta = now - alert.created_at
if delta.days > 0:
time_ago = f"{delta.days}d ago"
elif delta.seconds >= 3600:
time_ago = f"{delta.seconds // 3600}h ago"
elif delta.seconds >= 60:
time_ago = f"{delta.seconds // 60}m ago"
else:
time_ago = "just now"
alerts_data.append({
"alert": alert,
"time_ago": time_ago,
})
return templates.TemplateResponse("partials/alerts/alert_dropdown.html", {
"request": request,
"alerts": alerts_data,
"total_count": alert_service.get_active_alert_count(),
})
@router.get("/partials/list", response_class=HTMLResponse)
async def get_alert_list(
request: Request,
db: Session = Depends(get_db),
status: Optional[str] = Query(None),
limit: int = Query(20),
):
"""
Return HTML partial for alert list page.
"""
alert_service = get_alert_service(db)
if status:
alerts = alert_service.get_all_alerts(status=status, limit=limit)
else:
alerts = alert_service.get_all_alerts(limit=limit)
# Calculate relative time for each alert
now = datetime.utcnow()
alerts_data = []
for alert in alerts:
delta = now - alert.created_at
if delta.days > 0:
time_ago = f"{delta.days}d ago"
elif delta.seconds >= 3600:
time_ago = f"{delta.seconds // 3600}h ago"
elif delta.seconds >= 60:
time_ago = f"{delta.seconds // 60}m ago"
else:
time_ago = "just now"
alerts_data.append({
"alert": alert,
"time_ago": time_ago,
})
return templates.TemplateResponse("partials/alerts/alert_list.html", {
"request": request,
"alerts": alerts_data,
"status_filter": status,
})
# ============================================================================
# Cleanup
# ============================================================================
@router.post("/cleanup-expired")
async def cleanup_expired_alerts(db: Session = Depends(get_db)):
"""
Cleanup expired alerts (admin/maintenance endpoint).
"""
alert_service = get_alert_service(db)
count = alert_service.cleanup_expired_alerts()
return {
"success": True,
"cleaned_up": count,
}
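
A quick way to exercise these endpoints once the app is running — a sketch using requests; the host and port are assumptions:

    import requests

    base = "http://localhost:8000/api/alerts"
    print(requests.get(f"{base}/active/count").json())   # {"count": ...}
    # Acknowledge the first active alert, if any
    active = requests.get(f"{base}/active").json()["alerts"]
    if active:
        print(requests.post(f"{base}/{active[0]['id']}/acknowledge").json())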


@@ -0,0 +1,106 @@
from fastapi import APIRouter, Request, Depends
from sqlalchemy.orm import Session
from sqlalchemy import and_
from datetime import datetime, timedelta
from backend.database import get_db
from backend.models import ScheduledAction, MonitoringLocation, Project
from backend.services.snapshot import emit_status_snapshot
from backend.templates_config import templates
from backend.utils.timezone import utc_to_local, local_to_utc, get_user_timezone
router = APIRouter()
@router.get("/dashboard/active")
def dashboard_active(request: Request):
snapshot = emit_status_snapshot()
return templates.TemplateResponse(
"partials/active_table.html",
{"request": request, "units": snapshot["active"]}
)
@router.get("/dashboard/benched")
def dashboard_benched(request: Request):
snapshot = emit_status_snapshot()
return templates.TemplateResponse(
"partials/benched_table.html",
{"request": request, "units": snapshot["benched"]}
)
@router.get("/dashboard/todays-actions")
def dashboard_todays_actions(request: Request, db: Session = Depends(get_db)):
"""
Get today's scheduled actions for the dashboard card.
Shows upcoming, completed, and failed actions for today.
"""
import json
from zoneinfo import ZoneInfo
# Get today's date range in local timezone
tz = ZoneInfo(get_user_timezone())
now_local = datetime.now(tz)
today_start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
today_end_local = today_start_local + timedelta(days=1)
# Convert to UTC for database query
today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
# Exclude actions from paused/removed projects
paused_project_ids = [
p.id for p in db.query(Project.id).filter(
Project.status.in_(["on_hold", "archived", "deleted"])
).all()
]
# Query today's actions
actions = db.query(ScheduledAction).filter(
ScheduledAction.scheduled_time >= today_start_utc,
ScheduledAction.scheduled_time < today_end_utc,
ScheduledAction.project_id.notin_(paused_project_ids),
).order_by(ScheduledAction.scheduled_time.asc()).all()
# Enrich with location/project info and parse results
enriched_actions = []
for action in actions:
location = None
project = None
if action.location_id:
location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
if action.project_id:
project = db.query(Project).filter_by(id=action.project_id).first()
# Parse module_response for result details
result_data = None
if action.module_response:
try:
result_data = json.loads(action.module_response)
except json.JSONDecodeError:
pass
enriched_actions.append({
"action": action,
"location": location,
"project": project,
"result": result_data,
})
# Count by status
pending_count = sum(1 for a in actions if a.execution_status == "pending")
completed_count = sum(1 for a in actions if a.execution_status == "completed")
failed_count = sum(1 for a in actions if a.execution_status == "failed")
return templates.TemplateResponse(
"partials/dashboard/todays_actions.html",
{
"request": request,
"actions": enriched_actions,
"pending_count": pending_count,
"completed_count": completed_count,
"failed_count": failed_count,
"total_count": len(actions),
}
)


@@ -2,8 +2,8 @@
 from fastapi import APIRouter, Depends
 from sqlalchemy.orm import Session
-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot
 router = APIRouter(prefix="/dashboard", tags=["dashboard-tabs"])


@@ -0,0 +1,154 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from datetime import datetime, date
from typing import Optional
import uuid
from backend.database import get_db
from backend.models import DeploymentRecord, RosterUnit
router = APIRouter(prefix="/api", tags=["deployments"])
def _serialize(record: DeploymentRecord) -> dict:
return {
"id": record.id,
"unit_id": record.unit_id,
"deployed_date": record.deployed_date.isoformat() if record.deployed_date else None,
"estimated_removal_date": record.estimated_removal_date.isoformat() if record.estimated_removal_date else None,
"actual_removal_date": record.actual_removal_date.isoformat() if record.actual_removal_date else None,
"project_ref": record.project_ref,
"project_id": record.project_id,
"location_name": record.location_name,
"notes": record.notes,
"created_at": record.created_at.isoformat() if record.created_at else None,
"updated_at": record.updated_at.isoformat() if record.updated_at else None,
"is_active": record.actual_removal_date is None,
}
@router.get("/deployments/{unit_id}")
def get_deployments(unit_id: str, db: Session = Depends(get_db)):
"""Get all deployment records for a unit, newest first."""
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
records = (
db.query(DeploymentRecord)
.filter_by(unit_id=unit_id)
.order_by(DeploymentRecord.deployed_date.desc(), DeploymentRecord.created_at.desc())
.all()
)
return {"deployments": [_serialize(r) for r in records]}
@router.get("/deployments/{unit_id}/active")
def get_active_deployment(unit_id: str, db: Session = Depends(get_db)):
"""Get the current active deployment (actual_removal_date is NULL), or null."""
record = (
db.query(DeploymentRecord)
.filter(
DeploymentRecord.unit_id == unit_id,
DeploymentRecord.actual_removal_date == None
)
.order_by(DeploymentRecord.created_at.desc())
.first()
)
return {"deployment": _serialize(record) if record else None}
@router.post("/deployments/{unit_id}")
def create_deployment(unit_id: str, payload: dict, db: Session = Depends(get_db)):
"""
Create a new deployment record for a unit.
Body fields (all optional):
deployed_date (YYYY-MM-DD)
estimated_removal_date (YYYY-MM-DD)
project_ref (freeform string)
project_id (UUID if linked to Project)
location_name
notes
"""
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
def parse_date(val) -> Optional[date]:
if not val:
return None
if isinstance(val, date):
return val
return date.fromisoformat(str(val))
record = DeploymentRecord(
id=str(uuid.uuid4()),
unit_id=unit_id,
deployed_date=parse_date(payload.get("deployed_date")),
estimated_removal_date=parse_date(payload.get("estimated_removal_date")),
actual_removal_date=None,
project_ref=payload.get("project_ref"),
project_id=payload.get("project_id"),
location_name=payload.get("location_name"),
notes=payload.get("notes"),
created_at=datetime.utcnow(),
updated_at=datetime.utcnow(),
)
db.add(record)
db.commit()
db.refresh(record)
return _serialize(record)
@router.put("/deployments/{unit_id}/{deployment_id}")
def update_deployment(unit_id: str, deployment_id: str, payload: dict, db: Session = Depends(get_db)):
"""
Update a deployment record. Used for:
- Setting/changing estimated_removal_date
- Closing a deployment (set actual_removal_date to mark unit returned)
- Editing project_ref, location_name, notes
"""
record = db.query(DeploymentRecord).filter_by(id=deployment_id, unit_id=unit_id).first()
if not record:
raise HTTPException(status_code=404, detail="Deployment record not found")
def parse_date(val) -> Optional[date]:
if val is None:
return None
if val == "":
return None
if isinstance(val, date):
return val
return date.fromisoformat(str(val))
if "deployed_date" in payload:
record.deployed_date = parse_date(payload["deployed_date"])
if "estimated_removal_date" in payload:
record.estimated_removal_date = parse_date(payload["estimated_removal_date"])
if "actual_removal_date" in payload:
record.actual_removal_date = parse_date(payload["actual_removal_date"])
if "project_ref" in payload:
record.project_ref = payload["project_ref"]
if "project_id" in payload:
record.project_id = payload["project_id"]
if "location_name" in payload:
record.location_name = payload["location_name"]
if "notes" in payload:
record.notes = payload["notes"]
record.updated_at = datetime.utcnow()
db.commit()
db.refresh(record)
return _serialize(record)
@router.delete("/deployments/{unit_id}/{deployment_id}")
def delete_deployment(unit_id: str, deployment_id: str, db: Session = Depends(get_db)):
"""Delete a deployment record."""
record = db.query(DeploymentRecord).filter_by(id=deployment_id, unit_id=unit_id).first()
if not record:
raise HTTPException(status_code=404, detail="Deployment record not found")
db.delete(record)
db.commit()
return {"ok": True}


@@ -0,0 +1,928 @@
"""
Fleet Calendar Router
API endpoints for the Fleet Calendar feature:
- Calendar page and data
- Job reservation CRUD
- Unit assignment management
- Availability checking
"""
from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from datetime import datetime, date, timedelta
from typing import Optional, List
import uuid
import logging
from backend.database import get_db
from backend.models import (
RosterUnit, JobReservation, JobReservationUnit,
UserPreferences, Project, MonitoringLocation, UnitAssignment
)
from backend.templates_config import templates
from backend.services.fleet_calendar_service import (
get_day_summary,
get_calendar_year_data,
get_rolling_calendar_data,
check_calibration_conflicts,
get_available_units_for_period,
get_calibration_status
)
router = APIRouter(tags=["fleet-calendar"])
logger = logging.getLogger(__name__)
# ============================================================================
# Calendar Page
# ============================================================================
@router.get("/fleet-calendar", response_class=HTMLResponse)
async def fleet_calendar_page(
request: Request,
year: Optional[int] = None,
month: Optional[int] = None,
device_type: str = "seismograph",
db: Session = Depends(get_db)
):
"""Main Fleet Calendar page with rolling 12-month view."""
today = date.today()
# Default to current month as the start
if year is None:
year = today.year
if month is None:
month = today.month
# Get calendar data for 12 months starting from year/month
calendar_data = get_rolling_calendar_data(db, year, month, device_type)
# Get projects for the reservation form dropdown
projects = db.query(Project).filter(
Project.status.in_(["active", "upcoming", "on_hold"])
).order_by(Project.name).all()
# Build a serializable list of items with dates for calendar bars
# Includes both tracked Projects (with dates) and Job Reservations (matching device_type)
project_colors = ['#3B82F6', '#10B981', '#F59E0B', '#EF4444', '#8B5CF6', '#EC4899', '#06B6D4', '#F97316']
# Map calendar device_type to project_type_ids
device_type_to_project_types = {
"seismograph": ["vibration_monitoring", "combined"],
"slm": ["sound_monitoring", "combined"],
}
relevant_project_types = device_type_to_project_types.get(device_type, [])
calendar_projects = []
for i, p in enumerate(projects):
if p.start_date and p.project_type_id in relevant_project_types:
calendar_projects.append({
"id": p.id,
"name": p.name,
"start_date": p.start_date.isoformat(),
"end_date": p.end_date.isoformat() if p.end_date else None,
"color": project_colors[i % len(project_colors)],
"confirmed": True,
})
# Add job reservations for this device_type as bars
from sqlalchemy import or_ as _or
# First day of the last month in the rolling 12-month window (month + 11, overflow-safe)
cal_window_end = date(year + ((month + 10) // 12), ((month + 10) % 12) + 1, 1)
reservations_for_cal = db.query(JobReservation).filter(
JobReservation.device_type == device_type,
JobReservation.start_date <= cal_window_end,
_or(
JobReservation.end_date >= date(year, month, 1),
JobReservation.end_date == None,
)
).all()
for res in reservations_for_cal:
end = res.end_date or res.estimated_end_date
calendar_projects.append({
"id": res.id,
"name": res.name,
"start_date": res.start_date.isoformat(),
"end_date": end.isoformat() if end else None,
"color": res.color,
"confirmed": bool(res.project_id),
})
# Calculate prev/next month navigation
prev_year, prev_month = (year - 1, 12) if month == 1 else (year, month - 1)
next_year, next_month = (year + 1, 1) if month == 12 else (year, month + 1)
return templates.TemplateResponse(
"fleet_calendar.html",
{
"request": request,
"start_year": year,
"start_month": month,
"prev_year": prev_year,
"prev_month": prev_month,
"next_year": next_year,
"next_month": next_month,
"device_type": device_type,
"calendar_data": calendar_data,
"projects": projects,
"calendar_projects": calendar_projects,
"today": today.isoformat()
}
)
# ============================================================================
# Calendar Data API
# ============================================================================
@router.get("/api/fleet-calendar/data", response_class=JSONResponse)
async def get_calendar_data(
year: int,
device_type: str = "seismograph",
db: Session = Depends(get_db)
):
"""Get calendar data for a specific year."""
return get_calendar_year_data(db, year, device_type)
@router.get("/api/fleet-calendar/day/{date_str}", response_class=HTMLResponse)
async def get_day_detail(
request: Request,
date_str: str,
device_type: str = "seismograph",
db: Session = Depends(get_db)
):
"""Get detailed view for a specific day (HTMX partial)."""
try:
check_date = date.fromisoformat(date_str)
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format. Use YYYY-MM-DD")
day_data = get_day_summary(db, check_date, device_type)
# Get projects for display names
projects = {p.id: p for p in db.query(Project).all()}
return templates.TemplateResponse(
"partials/fleet_calendar/day_detail.html",
{
"request": request,
"day_data": day_data,
"date_str": date_str,
"date_display": check_date.strftime("%B %d, %Y"),
"device_type": device_type,
"projects": projects
}
)
# ============================================================================
# Reservation CRUD
# ============================================================================
@router.post("/api/fleet-calendar/reservations", response_class=JSONResponse)
async def create_reservation(
request: Request,
db: Session = Depends(get_db)
):
"""Create a new job reservation."""
data = await request.json()
# Validate required fields
required = ["name", "start_date", "assignment_type"]
for field in required:
if field not in data:
raise HTTPException(status_code=400, detail=f"Missing required field: {field}")
# Need either end_date or end_date_tbd
end_date_tbd = data.get("end_date_tbd", False)
if not end_date_tbd and not data.get("end_date"):
raise HTTPException(status_code=400, detail="End date is required unless marked as TBD")
try:
start_date = date.fromisoformat(data["start_date"])
end_date = date.fromisoformat(data["end_date"]) if data.get("end_date") else None
estimated_end_date = date.fromisoformat(data["estimated_end_date"]) if data.get("estimated_end_date") else None
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format. Use YYYY-MM-DD")
if end_date and end_date < start_date:
raise HTTPException(status_code=400, detail="End date must be after start date")
if estimated_end_date and estimated_end_date < start_date:
raise HTTPException(status_code=400, detail="Estimated end date must be after start date")
import json as _json
reservation = JobReservation(
id=str(uuid.uuid4()),
name=data["name"],
project_id=data.get("project_id"),
start_date=start_date,
end_date=end_date,
estimated_end_date=estimated_end_date,
end_date_tbd=end_date_tbd,
assignment_type=data["assignment_type"],
device_type=data.get("device_type", "seismograph"),
quantity_needed=data.get("quantity_needed"),
estimated_units=data.get("estimated_units"),
location_slots=_json.dumps(data["location_slots"]) if data.get("location_slots") is not None else None,
notes=data.get("notes"),
color=data.get("color", "#3B82F6")
)
db.add(reservation)
# If specific units were provided, assign them
if data.get("unit_ids") and data["assignment_type"] == "specific":
for unit_id in data["unit_ids"]:
assignment = JobReservationUnit(
id=str(uuid.uuid4()),
reservation_id=reservation.id,
unit_id=unit_id,
assignment_source="specific"
)
db.add(assignment)
db.commit()
logger.info(f"Created reservation: {reservation.name} ({reservation.id})")
return {
"success": True,
"reservation_id": reservation.id,
"message": f"Created reservation: {reservation.name}"
}
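A minimal client sketch for this endpoint (host and field values are hypothetical; payload keys mirror the validation above):

import requests

payload = {
    "name": "Main St Pile Driving",       # required
    "start_date": "2026-04-01",           # required, YYYY-MM-DD
    "assignment_type": "quantity",        # required: "quantity" or "specific"
    "end_date_tbd": True,                 # allowed in place of end_date
    "estimated_end_date": "2026-06-30",
    "quantity_needed": 4,
}
resp = requests.post("http://localhost:8000/api/fleet-calendar/reservations", json=payload)
resp.raise_for_status()
print(resp.json()["reservation_id"])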
@router.get("/api/fleet-calendar/reservations/{reservation_id}", response_class=JSONResponse)
async def get_reservation(
reservation_id: str,
db: Session = Depends(get_db)
):
"""Get a specific reservation with its assigned units."""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
# Get assigned units
assignments = db.query(JobReservationUnit).filter_by(
reservation_id=reservation_id
).all()
# Sort assignments by slot_index so order is preserved
assignments_sorted = sorted(assignments, key=lambda a: (a.slot_index if a.slot_index is not None else 999))
unit_ids = [a.unit_id for a in assignments_sorted]
units = db.query(RosterUnit).filter(RosterUnit.id.in_(unit_ids)).all() if unit_ids else []
units_by_id = {u.id: u for u in units}
# Build per-unit lookups from assignments
assignment_map = {a.unit_id: a for a in assignments_sorted}
import json as _json
stored_slots = _json.loads(reservation.location_slots) if reservation.location_slots else None
return {
"id": reservation.id,
"name": reservation.name,
"project_id": reservation.project_id,
"start_date": reservation.start_date.isoformat(),
"end_date": reservation.end_date.isoformat() if reservation.end_date else None,
"estimated_end_date": reservation.estimated_end_date.isoformat() if reservation.estimated_end_date else None,
"end_date_tbd": reservation.end_date_tbd,
"assignment_type": reservation.assignment_type,
"device_type": reservation.device_type,
"quantity_needed": reservation.quantity_needed,
"estimated_units": reservation.estimated_units,
"location_slots": stored_slots,
"notes": reservation.notes,
"color": reservation.color,
"assigned_units": [
{
"id": uid,
"last_calibrated": units_by_id[uid].last_calibrated.isoformat() if uid in units_by_id and units_by_id[uid].last_calibrated else None,
"deployed": units_by_id[uid].deployed if uid in units_by_id else False,
"power_type": assignment_map[uid].power_type,
"notes": assignment_map[uid].notes,
"location_name": assignment_map[uid].location_name,
"slot_index": assignment_map[uid].slot_index,
}
for uid in unit_ids
]
}
@router.put("/api/fleet-calendar/reservations/{reservation_id}", response_class=JSONResponse)
async def update_reservation(
reservation_id: str,
request: Request,
db: Session = Depends(get_db)
):
"""Update an existing reservation."""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
data = await request.json()
# Update fields if provided
if "name" in data:
reservation.name = data["name"]
if "project_id" in data:
reservation.project_id = data["project_id"]
if "start_date" in data:
reservation.start_date = date.fromisoformat(data["start_date"])
if "end_date" in data:
reservation.end_date = date.fromisoformat(data["end_date"]) if data["end_date"] else None
if "estimated_end_date" in data:
reservation.estimated_end_date = date.fromisoformat(data["estimated_end_date"]) if data["estimated_end_date"] else None
if "end_date_tbd" in data:
reservation.end_date_tbd = data["end_date_tbd"]
if "assignment_type" in data:
reservation.assignment_type = data["assignment_type"]
if "quantity_needed" in data:
reservation.quantity_needed = data["quantity_needed"]
if "estimated_units" in data:
reservation.estimated_units = data["estimated_units"]
if "location_slots" in data:
import json as _json
reservation.location_slots = _json.dumps(data["location_slots"]) if data["location_slots"] is not None else None
if "notes" in data:
reservation.notes = data["notes"]
if "color" in data:
reservation.color = data["color"]
reservation.updated_at = datetime.utcnow()
db.commit()
logger.info(f"Updated reservation: {reservation.name} ({reservation.id})")
return {
"success": True,
"message": f"Updated reservation: {reservation.name}"
}
@router.delete("/api/fleet-calendar/reservations/{reservation_id}", response_class=JSONResponse)
async def delete_reservation(
reservation_id: str,
db: Session = Depends(get_db)
):
"""Delete a reservation and its unit assignments."""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
# Delete unit assignments first
db.query(JobReservationUnit).filter_by(reservation_id=reservation_id).delete()
# Delete the reservation
db.delete(reservation)
db.commit()
logger.info(f"Deleted reservation: {reservation.name} ({reservation_id})")
return {
"success": True,
"message": "Reservation deleted"
}
# ============================================================================
# Unit Assignment
# ============================================================================
@router.post("/api/fleet-calendar/reservations/{reservation_id}/assign-units", response_class=JSONResponse)
async def assign_units_to_reservation(
reservation_id: str,
request: Request,
db: Session = Depends(get_db)
):
"""Assign specific units to a reservation."""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
data = await request.json()
unit_ids = data.get("unit_ids", [])
# Optional per-unit dicts keyed by unit_id
power_types = data.get("power_types", {})
location_notes = data.get("location_notes", {})
location_names = data.get("location_names", {})
# slot_indices: {"BE17354": 0, "BE9441": 1, ...}
slot_indices = data.get("slot_indices", {})
# Verify units exist (allow empty list to clear all assignments)
if unit_ids:
units = db.query(RosterUnit).filter(RosterUnit.id.in_(unit_ids)).all()
found_ids = {u.id for u in units}
missing = set(unit_ids) - found_ids
if missing:
raise HTTPException(status_code=404, detail=f"Units not found: {', '.join(missing)}")
# Full replace: delete all existing assignments for this reservation first
db.query(JobReservationUnit).filter_by(reservation_id=reservation_id).delete()
db.flush()
# Check for conflicts with other reservations and insert new assignments
conflicts = []
for unit_id in unit_ids:
# Check overlapping reservations
if reservation.end_date:
overlapping = db.query(JobReservation).join(
JobReservationUnit, JobReservation.id == JobReservationUnit.reservation_id
).filter(
JobReservationUnit.unit_id == unit_id,
JobReservation.id != reservation_id,
JobReservation.start_date <= reservation.end_date,
JobReservation.end_date >= reservation.start_date
).first()
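# Note: this range filter never matches reservations whose end_date is NULL
# (end date TBD), so overlaps with open-ended reservations are not flagged here.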
if overlapping:
conflicts.append({
"unit_id": unit_id,
"conflict_reservation": overlapping.name,
"conflict_dates": f"{overlapping.start_date} - {overlapping.end_date}"
})
continue
# Add assignment
assignment = JobReservationUnit(
id=str(uuid.uuid4()),
reservation_id=reservation_id,
unit_id=unit_id,
assignment_source="filled" if reservation.assignment_type == "quantity" else "specific",
power_type=power_types.get(unit_id),
notes=location_notes.get(unit_id),
location_name=location_names.get(unit_id),
slot_index=slot_indices.get(unit_id),
)
db.add(assignment)
db.commit()
# Check for calibration conflicts
cal_conflicts = check_calibration_conflicts(db, reservation_id)
assigned_count = db.query(JobReservationUnit).filter_by(
reservation_id=reservation_id
).count()
return {
"success": True,
"assigned_count": assigned_count,
"conflicts": conflicts,
"calibration_warnings": cal_conflicts,
"message": f"Assigned {len(unit_ids) - len(conflicts)} units"
}
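A request sketch for this full-replace assignment call (unit IDs, reservation ID, and host are hypothetical):

import requests

reservation_id = "res-uuid"  # hypothetical reservation ID
payload = {
    "unit_ids": ["BE17354", "BE9441"],
    "power_types": {"BE17354": "battery"},         # optional, keyed by unit_id
    "location_names": {"BE17354": "North Fence"},  # optional
    "slot_indices": {"BE17354": 0, "BE9441": 1},   # optional, preserves ordering
}
resp = requests.post(
    f"http://localhost:8000/api/fleet-calendar/reservations/{reservation_id}/assign-units",
    json=payload,
)
print(resp.json()["message"])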
@router.delete("/api/fleet-calendar/reservations/{reservation_id}/units/{unit_id}", response_class=JSONResponse)
async def remove_unit_from_reservation(
reservation_id: str,
unit_id: str,
db: Session = Depends(get_db)
):
"""Remove a unit from a reservation."""
assignment = db.query(JobReservationUnit).filter_by(
reservation_id=reservation_id,
unit_id=unit_id
).first()
if not assignment:
raise HTTPException(status_code=404, detail="Unit assignment not found")
db.delete(assignment)
db.commit()
return {
"success": True,
"message": f"Removed {unit_id} from reservation"
}
# ============================================================================
# Availability & Conflicts
# ============================================================================
@router.get("/api/fleet-calendar/availability", response_class=JSONResponse)
async def check_availability(
start_date: str,
end_date: str,
device_type: str = "seismograph",
exclude_reservation_id: Optional[str] = None,
db: Session = Depends(get_db)
):
"""Get units available for a specific date range."""
try:
start = date.fromisoformat(start_date)
end = date.fromisoformat(end_date)
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format. Use YYYY-MM-DD")
available = get_available_units_for_period(
db, start, end, device_type, exclude_reservation_id
)
return {
"start_date": start_date,
"end_date": end_date,
"device_type": device_type,
"available_units": available,
"count": len(available)
}
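Example query against this availability endpoint (host assumed to be localhost):

import requests

resp = requests.get(
    "http://localhost:8000/api/fleet-calendar/availability",
    params={"start_date": "2026-04-01", "end_date": "2026-04-15", "device_type": "seismograph"},
)
data = resp.json()
print(f"{data['count']} unit(s) available")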
@router.get("/api/fleet-calendar/reservations/{reservation_id}/conflicts", response_class=JSONResponse)
async def get_reservation_conflicts(
reservation_id: str,
db: Session = Depends(get_db)
):
"""Check for calibration conflicts in a reservation."""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
conflicts = check_calibration_conflicts(db, reservation_id)
return {
"reservation_id": reservation_id,
"reservation_name": reservation.name,
"conflicts": conflicts,
"has_conflicts": len(conflicts) > 0
}
# ============================================================================
# HTMX Partials
# ============================================================================
@router.get("/api/fleet-calendar/reservations-list", response_class=HTMLResponse)
async def get_reservations_list(
request: Request,
year: Optional[int] = None,
month: Optional[int] = None,
device_type: str = "seismograph",
db: Session = Depends(get_db)
):
"""Get list of reservations as HTMX partial."""
from sqlalchemy import or_
today = date.today()
if year is None:
year = today.year
if month is None:
month = today.month
# Calculate 12-month window
start_date = date(year, month, 1)
# The window ends on the last day of the 12th month
end_year = year + ((month + 10) // 12)
end_month = ((month + 10) % 12) + 1
if end_month == 12:
end_date = date(end_year, 12, 31)
else:
end_date = date(end_year, end_month + 1, 1) - timedelta(days=1)
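# Example: year=2026, month=2 yields a window of 2026-02-01 through 2027-01-31,
# i.e. the first day of the start month through the last day of the 12th month.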
# Filter by device_type and date window
reservations = db.query(JobReservation).filter(
JobReservation.device_type == device_type,
JobReservation.start_date <= end_date,
or_(
JobReservation.end_date >= start_date,
JobReservation.end_date == None # TBD reservations
)
).order_by(JobReservation.start_date).all()
# Get assignment counts
reservation_data = []
for res in reservations:
assignments = db.query(JobReservationUnit).filter_by(
reservation_id=res.id
).all()
assigned_count = len(assignments)
# Enrich assignments with unit details, sorted by slot_index
assignments_sorted = sorted(assignments, key=lambda a: (a.slot_index if a.slot_index is not None else 999))
unit_ids = [a.unit_id for a in assignments_sorted]
units = db.query(RosterUnit).filter(RosterUnit.id.in_(unit_ids)).all() if unit_ids else []
units_by_id = {u.id: u for u in units}
assigned_units = [
{
"id": a.unit_id,
"power_type": a.power_type,
"notes": a.notes,
"location_name": a.location_name,
"slot_index": a.slot_index,
"deployed": units_by_id[a.unit_id].deployed if a.unit_id in units_by_id else False,
"last_calibrated": units_by_id[a.unit_id].last_calibrated if a.unit_id in units_by_id else None,
}
for a in assignments_sorted
]
# Check for calibration conflicts
conflicts = check_calibration_conflicts(db, res.id)
location_count = res.quantity_needed or assigned_count
reservation_data.append({
"reservation": res,
"assigned_count": assigned_count,
"location_count": location_count,
"assigned_units": assigned_units,
"has_conflicts": len(conflicts) > 0,
"conflict_count": len(conflicts)
})
return templates.TemplateResponse(
"partials/fleet_calendar/reservations_list.html",
{
"request": request,
"reservations": reservation_data,
"year": year,
"device_type": device_type
}
)
@router.get("/api/fleet-calendar/planner-availability", response_class=JSONResponse)
async def get_planner_availability(
device_type: str = "seismograph",
start_date: Optional[str] = None,
end_date: Optional[str] = None,
exclude_reservation_id: Optional[str] = None,
db: Session = Depends(get_db)
):
"""Get available units for the reservation planner split-panel UI.
Dates are optional — if omitted, returns all non-retired units regardless of reservations.
"""
if start_date and end_date:
try:
start = date.fromisoformat(start_date)
end = date.fromisoformat(end_date)
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format. Use YYYY-MM-DD")
units = get_available_units_for_period(db, start, end, device_type, exclude_reservation_id)
else:
# No dates: return all non-retired units of this type, with current reservation info
from backend.models import RosterUnit as RU
today = date.today()
all_units = db.query(RU).filter(
RU.device_type == device_type,
RU.retired == False
).all()
# Build a map: unit_id -> list of active/upcoming reservations
active_assignments = db.query(JobReservationUnit).join(
JobReservation, JobReservationUnit.reservation_id == JobReservation.id
).filter(
JobReservation.device_type == device_type,
JobReservation.end_date >= today
).all()
unit_reservations = {}
for assignment in active_assignments:
res = db.query(JobReservation).filter(JobReservation.id == assignment.reservation_id).first()
if not res:
continue
unit_reservations.setdefault(assignment.unit_id, []).append({
"reservation_id": res.id,
"reservation_name": res.name,
"start_date": res.start_date.isoformat() if res.start_date else None,
"end_date": res.end_date.isoformat() if res.end_date else None,
"color": res.color or "#3B82F6"
})
units = []
for u in all_units:
expiry = (u.last_calibrated + timedelta(days=365)) if u.last_calibrated else None
units.append({
"id": u.id,
"last_calibrated": u.last_calibrated.isoformat() if u.last_calibrated else None,
"expiry_date": expiry.isoformat() if expiry else None,
"calibration_status": "needs_calibration" if not u.last_calibrated else "valid",
"deployed": u.deployed,
"out_for_calibration": u.out_for_calibration or False,
"allocated": getattr(u, 'allocated', False) or False,
"allocated_to_project_id": getattr(u, 'allocated_to_project_id', None) or "",
"note": u.note or "",
"reservations": unit_reservations.get(u.id, [])
})
# Sort: benched first (easier to assign), then deployed, then by ID
units.sort(key=lambda u: (1 if u["deployed"] else 0, u["id"]))
return {
"units": units,
"start_date": start_date,
"end_date": end_date,
"count": len(units)
}
@router.get("/api/fleet-calendar/unit-quick-info/{unit_id}", response_class=JSONResponse)
async def get_unit_quick_info(unit_id: str, db: Session = Depends(get_db)):
"""Return at-a-glance info for the planner quick-view modal."""
from backend.models import Emitter
u = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not u:
raise HTTPException(status_code=404, detail="Unit not found")
today = date.today()
expiry = (u.last_calibrated + timedelta(days=365)) if u.last_calibrated else None
# Active/upcoming reservations
assignments = db.query(JobReservationUnit).filter(JobReservationUnit.unit_id == unit_id).all()
reservations = []
for a in assignments:
res = db.query(JobReservation).filter(
JobReservation.id == a.reservation_id,
JobReservation.end_date >= today
).first()
if res:
reservations.append({
"name": res.name,
"start_date": res.start_date.isoformat() if res.start_date else None,
"end_date": res.end_date.isoformat() if res.end_date else None,
"end_date_tbd": res.end_date_tbd,
"color": res.color or "#3B82F6",
"location_name": a.location_name,
})
# Last seen from emitter
emitter = db.query(Emitter).filter(Emitter.id == unit_id).first()  # emitters are keyed by unit ID (see the rename router below)
return {
"id": u.id,
"unit_type": u.unit_type,
"deployed": u.deployed,
"out_for_calibration": u.out_for_calibration or False,
"note": u.note or "",
"project_id": u.project_id or "",
"address": u.address or u.location or "",
"coordinates": u.coordinates or "",
"deployed_with_modem_id": u.deployed_with_modem_id or "",
"last_calibrated": u.last_calibrated.isoformat() if u.last_calibrated else None,
"next_calibration_due": u.next_calibration_due.isoformat() if u.next_calibration_due else (expiry.isoformat() if expiry else None),
"cal_expired": not u.last_calibrated or (expiry and expiry < today),
"last_seen": emitter.last_seen.isoformat() if emitter and emitter.last_seen else None,
"reservations": reservations,
}
@router.get("/api/fleet-calendar/available-units", response_class=HTMLResponse)
async def get_available_units_partial(
request: Request,
start_date: str,
end_date: str,
device_type: str = "seismograph",
reservation_id: Optional[str] = None,
db: Session = Depends(get_db)
):
"""Get available units as HTMX partial for the assignment modal."""
try:
start = date.fromisoformat(start_date)
end = date.fromisoformat(end_date)
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format")
available = get_available_units_for_period(
db, start, end, device_type, reservation_id
)
return templates.TemplateResponse(
"partials/fleet_calendar/available_units.html",
{
"request": request,
"units": available,
"start_date": start_date,
"end_date": end_date,
"device_type": device_type,
"reservation_id": reservation_id
}
)
@router.get("/api/fleet-calendar/month/{year}/{month}", response_class=HTMLResponse)
async def get_month_partial(
request: Request,
year: int,
month: int,
device_type: str = "seismograph",
db: Session = Depends(get_db)
):
"""Get a single month calendar as HTMX partial."""
calendar_data = get_calendar_year_data(db, year, device_type)
month_data = calendar_data["months"].get(month)
if not month_data:
raise HTTPException(status_code=404, detail="Invalid month")
return templates.TemplateResponse(
"partials/fleet_calendar/month_grid.html",
{
"request": request,
"year": year,
"month": month,
"month_data": month_data,
"device_type": device_type,
"today": date.today().isoformat()
}
)
# ============================================================================
# Promote Reservation to Project
# ============================================================================
@router.post("/api/fleet-calendar/reservations/{reservation_id}/promote-to-project", response_class=JSONResponse)
async def promote_reservation_to_project(
reservation_id: str,
request: Request,
db: Session = Depends(get_db)
):
"""
Promote a job reservation to a full project in the projects DB.
Creates: Project + MonitoringLocations + UnitAssignments.
"""
reservation = db.query(JobReservation).filter_by(id=reservation_id).first()
if not reservation:
raise HTTPException(status_code=404, detail="Reservation not found")
data = await request.json()
project_number = data.get("project_number") or None
client_name = data.get("client_name") or None
# Map device_type to project_type_id
if reservation.device_type == "slm":
project_type_id = "sound_monitoring"
location_type = "sound"
else:
project_type_id = "vibration_monitoring"
location_type = "vibration"
# Check for duplicate project name
existing = db.query(Project).filter_by(name=reservation.name).first()
if existing:
raise HTTPException(status_code=409, detail=f"A project named '{reservation.name}' already exists.")
# Create the project
project_id = str(uuid.uuid4())
project = Project(
id=project_id,
name=reservation.name,
project_number=project_number,
client_name=client_name,
project_type_id=project_type_id,
status="upcoming",
start_date=reservation.start_date,
end_date=reservation.end_date,
description=reservation.notes,
)
db.add(project)
db.flush()
# Load assignments sorted by slot_index
assignments = db.query(JobReservationUnit).filter_by(reservation_id=reservation_id).all()
assignments_sorted = sorted(assignments, key=lambda a: (a.slot_index if a.slot_index is not None else 999))
locations_created = 0
units_assigned = 0
for i, assignment in enumerate(assignments_sorted):
loc_name = assignment.location_name or f"Location {i + 1}"
location = MonitoringLocation(
id=str(uuid.uuid4()),
project_id=project_id,
location_type=location_type,
name=loc_name,
description=assignment.notes,
)
db.add(location)
db.flush()
locations_created += 1
if assignment.unit_id:
unit_assignment = UnitAssignment(
id=str(uuid.uuid4()),
unit_id=assignment.unit_id,
location_id=location.id,
project_id=project_id,
device_type=reservation.device_type or "seismograph",
status="active",
notes=f"Power: {assignment.power_type}" if assignment.power_type else None,
)
db.add(unit_assignment)
units_assigned += 1
db.commit()
logger.info(f"Promoted reservation '{reservation.name}' to project {project_id}")
return {
"success": True,
"project_id": project_id,
"project_name": reservation.name,
"locations_created": locations_created,
"units_assigned": units_assigned,
}
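A promotion call sketch (reservation ID and host are hypothetical; both body fields are optional):

import requests

reservation_id = "res-uuid"  # hypothetical
resp = requests.post(
    f"http://localhost:8000/api/fleet-calendar/reservations/{reservation_id}/promote-to-project",
    json={"project_number": "P-2026-014", "client_name": "Acme Construction"},
)
result = resp.json()
print(f"Created project {result['project_id']} with {result['locations_created']} location(s)")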

@@ -0,0 +1,429 @@
"""
Modem Dashboard Router
Provides API endpoints for the Field Modems management page.
"""
from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from datetime import datetime
import subprocess
import time
import logging
from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/modem-dashboard", tags=["modem-dashboard"])
@router.get("/stats", response_class=HTMLResponse)
async def get_modem_stats(request: Request, db: Session = Depends(get_db)):
"""
Get summary statistics for modem dashboard.
Returns HTML partial with stat cards.
"""
# Query all modems
all_modems = db.query(RosterUnit).filter_by(device_type="modem").all()
# Get IDs of modems that have devices paired to them
paired_modem_ids = set()
devices_with_modems = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id.isnot(None),
RosterUnit.retired == False
).all()
for device in devices_with_modems:
if device.deployed_with_modem_id:
paired_modem_ids.add(device.deployed_with_modem_id)
# Count categories
total_count = len(all_modems)
retired_count = sum(1 for m in all_modems if m.retired)
# In use = deployed AND paired with a device
in_use_count = sum(1 for m in all_modems
if m.deployed and not m.retired and m.id in paired_modem_ids)
# Spare = deployed but NOT paired (available for assignment)
spare_count = sum(1 for m in all_modems
if m.deployed and not m.retired and m.id not in paired_modem_ids)
# Benched = not deployed and not retired
benched_count = sum(1 for m in all_modems if not m.deployed and not m.retired)
return templates.TemplateResponse("partials/modem_stats.html", {
"request": request,
"total_count": total_count,
"in_use_count": in_use_count,
"spare_count": spare_count,
"benched_count": benched_count,
"retired_count": retired_count
})
@router.get("/units", response_class=HTMLResponse)
async def get_modem_units(
request: Request,
db: Session = Depends(get_db),
search: str = Query(None),
filter_status: str = Query(None), # "in_use", "spare", "benched", "retired"
):
"""
Get list of modem units for the dashboard.
Returns HTML partial with modem cards.
"""
query = db.query(RosterUnit).filter_by(device_type="modem")
# Filter by search term if provided
if search:
search_term = f"%{search}%"
query = query.filter(
(RosterUnit.id.ilike(search_term)) |
(RosterUnit.ip_address.ilike(search_term)) |
(RosterUnit.hardware_model.ilike(search_term)) |
(RosterUnit.phone_number.ilike(search_term)) |
(RosterUnit.location.ilike(search_term))
)
modems = query.order_by(
RosterUnit.retired.asc(),
RosterUnit.deployed.desc(),
RosterUnit.id.asc()
).all()
# Get paired device info for each modem
paired_devices = {}
devices_with_modems = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id.isnot(None),
RosterUnit.retired == False
).all()
for device in devices_with_modems:
if device.deployed_with_modem_id:
paired_devices[device.deployed_with_modem_id] = {
"id": device.id,
"device_type": device.device_type,
"deployed": device.deployed
}
# Annotate modems with paired device info
modem_list = []
for modem in modems:
paired = paired_devices.get(modem.id)
# Determine status category
if modem.retired:
status = "retired"
elif not modem.deployed:
status = "benched"
elif paired:
status = "in_use"
else:
status = "spare"
# Apply filter if specified
if filter_status and status != filter_status:
continue
modem_list.append({
"id": modem.id,
"ip_address": modem.ip_address,
"phone_number": modem.phone_number,
"hardware_model": modem.hardware_model,
"deployed": modem.deployed,
"retired": modem.retired,
"location": modem.location,
"project_id": modem.project_id,
"paired_device": paired,
"status": status
})
return templates.TemplateResponse("partials/modem_list.html", {
"request": request,
"modems": modem_list
})
@router.get("/{modem_id}/paired-device")
async def get_paired_device(modem_id: str, db: Session = Depends(get_db)):
"""
Get the device (SLM/seismograph) that is paired with this modem.
Returns JSON with device info or null if not paired.
"""
# Check modem exists
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
# Find device paired with this modem
device = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id == modem_id,
RosterUnit.retired == False
).first()
if device:
return {
"paired": True,
"device": {
"id": device.id,
"device_type": device.device_type,
"deployed": device.deployed,
"project_id": device.project_id,
"location": device.location or device.address
}
}
return {"paired": False, "device": None}
@router.get("/{modem_id}/paired-device-html", response_class=HTMLResponse)
async def get_paired_device_html(modem_id: str, request: Request, db: Session = Depends(get_db)):
"""
Get HTML partial showing the device paired with this modem.
Used by unit_detail.html for modems.
"""
# Check modem exists
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return HTMLResponse('<p class="text-red-500">Modem not found</p>')
# Find device paired with this modem
device = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id == modem_id,
RosterUnit.retired == False
).first()
return templates.TemplateResponse("partials/modem_paired_device.html", {
"request": request,
"modem_id": modem_id,
"device": device
})
@router.get("/{modem_id}/ping")
async def ping_modem(modem_id: str, db: Session = Depends(get_db)):
"""
Test modem connectivity with a simple ping.
Returns response time and connection status.
"""
# Get modem from database
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
if not modem.ip_address:
return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}
try:
# Ping the modem (1 packet, 2 second timeout)
start_time = time.time()
result = subprocess.run(
["ping", "-c", "1", "-W", "2", modem.ip_address],
capture_output=True,
text=True,
timeout=3
)
response_time = int((time.time() - start_time) * 1000) # Convert to milliseconds
if result.returncode == 0:
return {
"status": "success",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"response_time_ms": response_time,
"message": "Modem is responding"
}
else:
return {
"status": "error",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"detail": "Modem not responding to ping"
}
except subprocess.TimeoutExpired:
return {
"status": "error",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"detail": "Ping timeout"
}
except Exception as e:
logger.error(f"Failed to ping modem {modem_id}: {e}")
return {
"status": "error",
"modem_id": modem_id,
"detail": str(e)
}
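Client-side handling sketch for the ping result (modem ID and host are hypothetical):

import requests

resp = requests.get("http://localhost:8000/api/modem-dashboard/MODEM-01/ping")
result = resp.json()
if result["status"] == "success":
    print(f"{result['modem_id']} responded in {result['response_time_ms']} ms")
else:
    print(f"Ping failed: {result['detail']}")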
@router.get("/{modem_id}/diagnostics")
async def get_modem_diagnostics(modem_id: str, db: Session = Depends(get_db)):
"""
Get modem diagnostics (signal strength, data usage, uptime).
Currently returns placeholders. When ModemManager is available,
this endpoint will query it for real diagnostics.
"""
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
# TODO: Query ModemManager backend when available
return {
"status": "unavailable",
"message": "ModemManager integration not yet available",
"modem_id": modem_id,
"signal_strength_dbm": None,
"data_usage_mb": None,
"uptime_seconds": None,
"carrier": None,
"connection_type": None # LTE, 5G, etc.
}
@router.get("/{modem_id}/pairable-devices")
async def get_pairable_devices(
modem_id: str,
db: Session = Depends(get_db),
search: str = Query(None),
hide_paired: bool = Query(True)
):
"""
Get list of devices (seismographs and SLMs) that can be paired with this modem.
Used by the device picker modal in unit_detail.html.
"""
# Check modem exists
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
# Query seismographs and SLMs
query = db.query(RosterUnit).filter(
RosterUnit.device_type.in_(["seismograph", "sound_level_meter"]),
RosterUnit.retired == False
)
# Filter by search term if provided
if search:
search_term = f"%{search}%"
query = query.filter(
(RosterUnit.id.ilike(search_term)) |
(RosterUnit.project_id.ilike(search_term)) |
(RosterUnit.location.ilike(search_term)) |
(RosterUnit.address.ilike(search_term)) |
(RosterUnit.note.ilike(search_term))
)
devices = query.order_by(
RosterUnit.deployed.desc(),
RosterUnit.device_type.asc(),
RosterUnit.id.asc()
).all()
# Build device list
device_list = []
for device in devices:
# Skip already paired devices if hide_paired is True
is_paired_to_other = (
device.deployed_with_modem_id is not None and
device.deployed_with_modem_id != modem_id
)
is_paired_to_this = device.deployed_with_modem_id == modem_id
if hide_paired and is_paired_to_other:
continue
device_list.append({
"id": device.id,
"device_type": device.device_type,
"deployed": device.deployed,
"project_id": device.project_id,
"location": device.location or device.address,
"note": device.note,
"paired_modem_id": device.deployed_with_modem_id,
"is_paired_to_this": is_paired_to_this,
"is_paired_to_other": is_paired_to_other
})
return {"devices": device_list, "modem_id": modem_id}
@router.post("/{modem_id}/pair")
async def pair_device_to_modem(
modem_id: str,
db: Session = Depends(get_db),
device_id: str = Query(..., description="ID of the device to pair")
):
"""
Pair a device (seismograph or SLM) to this modem.
Updates the device's deployed_with_modem_id field.
"""
# Check modem exists
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
# Find the device
device = db.query(RosterUnit).filter(
RosterUnit.id == device_id,
RosterUnit.device_type.in_(["seismograph", "sound_level_meter"]),
RosterUnit.retired == False
).first()
if not device:
return {"status": "error", "detail": f"Device {device_id} not found"}
# Unpair any device currently paired to this modem
currently_paired = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id == modem_id
).all()
for paired_device in currently_paired:
paired_device.deployed_with_modem_id = None
# Pair the new device
device.deployed_with_modem_id = modem_id
db.commit()
return {
"status": "success",
"modem_id": modem_id,
"device_id": device_id,
"message": f"Device {device_id} paired to modem {modem_id}"
}
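Pairing is a POST with the device ID passed as a query parameter, e.g. (IDs and host hypothetical):

import requests

resp = requests.post(
    "http://localhost:8000/api/modem-dashboard/MODEM-01/pair",
    params={"device_id": "BE17354"},
)
print(resp.json()["message"])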
@router.post("/{modem_id}/unpair")
async def unpair_device_from_modem(modem_id: str, db: Session = Depends(get_db)):
"""
Unpair any device currently paired to this modem.
"""
# Check modem exists
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
# Find and unpair device
device = db.query(RosterUnit).filter(
RosterUnit.deployed_with_modem_id == modem_id
).first()
if device:
old_device_id = device.id
device.deployed_with_modem_id = None
db.commit()
return {
"status": "success",
"modem_id": modem_id,
"unpaired_device_id": old_device_id,
"message": f"Device {old_device_id} unpaired from modem {modem_id}"
}
return {
"status": "success",
"modem_id": modem_id,
"message": "No device was paired to this modem"
}

@@ -8,8 +8,8 @@ import shutil
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS
from sqlalchemy.orm import Session
-from app.seismo.database import get_db
-from app.seismo.models import RosterUnit
+from backend.database import get_db
+from backend.models import RosterUnit
router = APIRouter(prefix="/api", tags=["photos"])

backend/routers/projects.py (4806 lines)
File diff suppressed because it is too large

@@ -0,0 +1,522 @@
"""
Recurring Schedules Router
API endpoints for managing recurring monitoring schedules.
"""
from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from typing import Optional
from datetime import datetime
import json
from backend.database import get_db
from backend.models import RecurringSchedule, MonitoringLocation, Project, RosterUnit
from backend.services.recurring_schedule_service import get_recurring_schedule_service
from backend.templates_config import templates
router = APIRouter(prefix="/api/projects/{project_id}/recurring-schedules", tags=["recurring-schedules"])
# ============================================================================
# List and Get
# ============================================================================
@router.get("/")
async def list_recurring_schedules(
project_id: str,
db: Session = Depends(get_db),
enabled_only: bool = Query(False),
):
"""
List all recurring schedules for a project.
"""
project = db.query(Project).filter_by(id=project_id).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
query = db.query(RecurringSchedule).filter_by(project_id=project_id)
if enabled_only:
query = query.filter_by(enabled=True)
schedules = query.order_by(RecurringSchedule.created_at.desc()).all()
return {
"schedules": [
{
"id": s.id,
"name": s.name,
"schedule_type": s.schedule_type,
"device_type": s.device_type,
"location_id": s.location_id,
"unit_id": s.unit_id,
"enabled": s.enabled,
"weekly_pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
"interval_type": s.interval_type,
"cycle_time": s.cycle_time,
"include_download": s.include_download,
"timezone": s.timezone,
"next_occurrence": s.next_occurrence.isoformat() if s.next_occurrence else None,
"last_generated_at": s.last_generated_at.isoformat() if s.last_generated_at else None,
"created_at": s.created_at.isoformat() if s.created_at else None,
}
for s in schedules
],
"count": len(schedules),
}
@router.get("/{schedule_id}")
async def get_recurring_schedule(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
):
"""
Get a specific recurring schedule.
"""
schedule = db.query(RecurringSchedule).filter_by(
id=schedule_id,
project_id=project_id,
).first()
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
# Get related location and unit info
location = db.query(MonitoringLocation).filter_by(id=schedule.location_id).first()
unit = None
if schedule.unit_id:
unit = db.query(RosterUnit).filter_by(id=schedule.unit_id).first()
return {
"id": schedule.id,
"name": schedule.name,
"schedule_type": schedule.schedule_type,
"device_type": schedule.device_type,
"location_id": schedule.location_id,
"location_name": location.name if location else None,
"unit_id": schedule.unit_id,
"unit_name": unit.id if unit else None,
"enabled": schedule.enabled,
"weekly_pattern": json.loads(schedule.weekly_pattern) if schedule.weekly_pattern else None,
"interval_type": schedule.interval_type,
"cycle_time": schedule.cycle_time,
"include_download": schedule.include_download,
"timezone": schedule.timezone,
"next_occurrence": schedule.next_occurrence.isoformat() if schedule.next_occurrence else None,
"last_generated_at": schedule.last_generated_at.isoformat() if schedule.last_generated_at else None,
"created_at": schedule.created_at.isoformat() if schedule.created_at else None,
"updated_at": schedule.updated_at.isoformat() if schedule.updated_at else None,
}
# ============================================================================
# Create
# ============================================================================
@router.post("/")
async def create_recurring_schedule(
project_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Create recurring schedules for one or more locations.
Body for weekly_calendar (supports multiple locations):
{
"name": "Weeknight Monitoring",
"schedule_type": "weekly_calendar",
"location_ids": ["uuid1", "uuid2"], // Array of location IDs
"weekly_pattern": {
"monday": {"enabled": true, "start": "19:00", "end": "07:00"},
"tuesday": {"enabled": false},
...
},
"include_download": true,
"auto_increment_index": true,
"timezone": "America/New_York"
}
Body for simple_interval (supports multiple locations):
{
"name": "24/7 Continuous",
"schedule_type": "simple_interval",
"location_ids": ["uuid1", "uuid2"], // Array of location IDs
"interval_type": "daily",
"cycle_time": "00:00",
"include_download": true,
"auto_increment_index": true,
"timezone": "America/New_York"
}
Legacy single location support (backwards compatible):
{
"name": "...",
"location_id": "uuid", // Single location ID
...
}
"""
project = db.query(Project).filter_by(id=project_id).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
data = await request.json()
# Support both location_ids (array) and location_id (single) for backwards compatibility
location_ids = data.get("location_ids", [])
if not location_ids and data.get("location_id"):
location_ids = [data.get("location_id")]
if not location_ids:
raise HTTPException(status_code=400, detail="At least one location is required")
# Validate all locations exist
locations = db.query(MonitoringLocation).filter(
MonitoringLocation.id.in_(location_ids),
MonitoringLocation.project_id == project_id,
).all()
if len(locations) != len(location_ids):
raise HTTPException(status_code=404, detail="One or more locations not found")
service = get_recurring_schedule_service(db)
created_schedules = []
base_name = data.get("name", "Unnamed Schedule")
# Parse one-off datetime fields if applicable
one_off_start = None
one_off_end = None
if data.get("schedule_type") == "one_off":
from zoneinfo import ZoneInfo
tz = ZoneInfo(data.get("timezone", "America/New_York"))
start_dt_str = data.get("start_datetime")
end_dt_str = data.get("end_datetime")
if not start_dt_str or not end_dt_str:
raise HTTPException(status_code=400, detail="One-off schedules require start and end date/time")
try:
start_local = datetime.fromisoformat(start_dt_str).replace(tzinfo=tz)
end_local = datetime.fromisoformat(end_dt_str).replace(tzinfo=tz)
except ValueError:
raise HTTPException(status_code=400, detail="Invalid datetime format")
duration = end_local - start_local
if duration.total_seconds() < 900:
raise HTTPException(status_code=400, detail="Duration must be at least 15 minutes")
if duration.total_seconds() > 86400:
raise HTTPException(status_code=400, detail="Duration cannot exceed 24 hours")
now_local = datetime.now(tz)
if start_local <= now_local:
raise HTTPException(status_code=400, detail="Start time must be in the future")
# Convert to UTC for storage
one_off_start = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
one_off_end = end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
# Create a schedule for each location
for location in locations:
# Determine device type from location
device_type = "slm" if location.location_type == "sound" else "seismograph"
# Append location name if multiple locations
schedule_name = f"{base_name} - {location.name}" if len(locations) > 1 else base_name
schedule = service.create_schedule(
project_id=project_id,
location_id=location.id,
name=schedule_name,
schedule_type=data.get("schedule_type", "weekly_calendar"),
device_type=device_type,
unit_id=data.get("unit_id"),
weekly_pattern=data.get("weekly_pattern"),
interval_type=data.get("interval_type"),
cycle_time=data.get("cycle_time"),
include_download=data.get("include_download", True),
auto_increment_index=data.get("auto_increment_index", True),
timezone=data.get("timezone", "America/New_York"),
start_datetime=one_off_start,
end_datetime=one_off_end,
)
# Generate actions immediately so they appear right away
generated_actions = service.generate_actions_for_schedule(schedule, horizon_days=7)
created_schedules.append({
"schedule_id": schedule.id,
"location_id": location.id,
"location_name": location.name,
"actions_generated": len(generated_actions),
})
total_actions = sum(s.get("actions_generated", 0) for s in created_schedules)
return JSONResponse({
"success": True,
"schedules": created_schedules,
"count": len(created_schedules),
"actions_generated": total_actions,
"message": f"Created {len(created_schedules)} recurring schedule(s) with {total_actions} upcoming actions",
})
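A creation sketch for a weekly schedule (project and location IDs are hypothetical; the body shape follows the docstring above):

import requests

payload = {
    "name": "Weeknight Monitoring",
    "schedule_type": "weekly_calendar",
    "location_ids": ["loc-uuid-1"],
    "weekly_pattern": {
        "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
        "tuesday": {"enabled": False},
    },
    "include_download": True,
    "timezone": "America/New_York",
}
resp = requests.post(
    "http://localhost:8000/api/projects/proj-uuid/recurring-schedules/",
    json=payload,
)
print(resp.json()["message"])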
# ============================================================================
# Update
# ============================================================================
@router.put("/{schedule_id}")
async def update_recurring_schedule(
project_id: str,
schedule_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Update a recurring schedule.
"""
schedule = db.query(RecurringSchedule).filter_by(
id=schedule_id,
project_id=project_id,
).first()
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
data = await request.json()
service = get_recurring_schedule_service(db)
# Build update kwargs
update_kwargs = {}
for field in ["name", "weekly_pattern", "interval_type", "cycle_time",
"include_download", "auto_increment_index", "timezone", "unit_id"]:
if field in data:
update_kwargs[field] = data[field]
updated = service.update_schedule(schedule_id, **update_kwargs)
return {
"success": True,
"schedule_id": updated.id,
"message": "Schedule updated successfully",
}
# ============================================================================
# Delete
# ============================================================================
@router.delete("/{schedule_id}")
async def delete_recurring_schedule(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
):
"""
Delete a recurring schedule.
"""
service = get_recurring_schedule_service(db)
deleted = service.delete_schedule(schedule_id)
if not deleted:
raise HTTPException(status_code=404, detail="Schedule not found")
return {
"success": True,
"message": "Schedule deleted successfully",
}
# ============================================================================
# Enable/Disable
# ============================================================================
@router.post("/{schedule_id}/enable")
async def enable_schedule(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
):
"""
Enable a disabled schedule.
"""
service = get_recurring_schedule_service(db)
schedule = service.enable_schedule(schedule_id)
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
return {
"success": True,
"schedule_id": schedule.id,
"enabled": schedule.enabled,
"message": "Schedule enabled",
}
@router.post("/{schedule_id}/disable")
async def disable_schedule(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
):
"""
Disable a schedule and cancel all its pending actions.
"""
service = get_recurring_schedule_service(db)
# Count pending actions before disabling (for response message)
from sqlalchemy import and_
from backend.models import ScheduledAction
pending_count = db.query(ScheduledAction).filter(
and_(
ScheduledAction.execution_status == "pending",
ScheduledAction.notes.like(f'%"schedule_id": "{schedule_id}"%'),
)
).count()
schedule = service.disable_schedule(schedule_id)
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
message = "Schedule disabled"
if pending_count > 0:
message += f" and {pending_count} pending action(s) cancelled"
return {
"success": True,
"schedule_id": schedule.id,
"enabled": schedule.enabled,
"cancelled_actions": pending_count,
"message": message,
}
# ============================================================================
# Preview Generated Actions
# ============================================================================
@router.post("/{schedule_id}/generate-preview")
async def preview_generated_actions(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
days: int = Query(7, ge=1, le=30),
):
"""
Preview what actions would be generated without saving them.
"""
schedule = db.query(RecurringSchedule).filter_by(
id=schedule_id,
project_id=project_id,
).first()
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
service = get_recurring_schedule_service(db)
actions = service.generate_actions_for_schedule(
schedule,
horizon_days=days,
preview_only=True,
)
return {
"schedule_id": schedule_id,
"schedule_name": schedule.name,
"preview_days": days,
"actions": [
{
"action_type": a.action_type,
"scheduled_time": a.scheduled_time.isoformat(),
"notes": a.notes,
}
for a in actions
],
"action_count": len(actions),
}
# ============================================================================
# Manual Generation Trigger
# ============================================================================
@router.post("/{schedule_id}/generate")
async def generate_actions_now(
project_id: str,
schedule_id: str,
db: Session = Depends(get_db),
days: int = Query(7, ge=1, le=30),
):
"""
Manually trigger action generation for a schedule.
"""
schedule = db.query(RecurringSchedule).filter_by(
id=schedule_id,
project_id=project_id,
).first()
if not schedule:
raise HTTPException(status_code=404, detail="Schedule not found")
if not schedule.enabled:
raise HTTPException(status_code=400, detail="Schedule is disabled")
service = get_recurring_schedule_service(db)
actions = service.generate_actions_for_schedule(
schedule,
horizon_days=days,
preview_only=False,
)
return {
"success": True,
"schedule_id": schedule_id,
"generated_count": len(actions),
"message": f"Generated {len(actions)} scheduled actions",
}
# ============================================================================
# HTML Partials
# ============================================================================
@router.get("/partials/list", response_class=HTMLResponse)
async def get_schedule_list_partial(
project_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Return HTML partial for schedule list.
"""
project = db.query(Project).filter_by(id=project_id).first()
project_status = project.status if project else "active"
schedules = db.query(RecurringSchedule).filter_by(
project_id=project_id
).order_by(RecurringSchedule.created_at.desc()).all()
# Enrich with location info
schedule_data = []
for s in schedules:
location = db.query(MonitoringLocation).filter_by(id=s.location_id).first()
schedule_data.append({
"schedule": s,
"location": location,
"pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None,
})
return templates.TemplateResponse("partials/projects/recurring_schedule_list.html", {
"request": request,
"project_id": project_id,
"schedules": schedule_data,
"project_status": project_status,
})

View File

@@ -0,0 +1,187 @@
"""
Report Templates Router
CRUD operations for report template management.
Templates store time filter presets and report configuration for reuse.
"""
from fastapi import APIRouter, Depends, HTTPException
from fastapi.responses import JSONResponse
from sqlalchemy.orm import Session
from datetime import datetime
from typing import Optional
import uuid
from backend.database import get_db
from backend.models import ReportTemplate
router = APIRouter(prefix="/api/report-templates", tags=["report-templates"])
@router.get("")
async def list_templates(
project_id: Optional[str] = None,
db: Session = Depends(get_db),
):
"""
List all report templates.
Optionally filter by project_id (includes global templates with project_id=None).
"""
query = db.query(ReportTemplate)
if project_id:
# Include global templates (project_id=None) AND project-specific templates
query = query.filter(
(ReportTemplate.project_id == None) | (ReportTemplate.project_id == project_id)
)
templates = query.order_by(ReportTemplate.name).all()
return [
{
"id": t.id,
"name": t.name,
"project_id": t.project_id,
"report_title": t.report_title,
"start_time": t.start_time,
"end_time": t.end_time,
"start_date": t.start_date,
"end_date": t.end_date,
"created_at": t.created_at.isoformat() if t.created_at else None,
"updated_at": t.updated_at.isoformat() if t.updated_at else None,
}
for t in templates
]
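Listing with a project filter returns that project's templates plus global ones, e.g. (project ID and host hypothetical):

import requests

resp = requests.get(
    "http://localhost:8000/api/report-templates",
    params={"project_id": "proj-uuid"},
)
for t in resp.json():
    scope = "global" if t["project_id"] is None else "project"
    print(f"{t['name']} ({scope})")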
@router.post("")
async def create_template(
data: dict,
db: Session = Depends(get_db),
):
"""
Create a new report template.
Request body:
- name: Template name (required)
- project_id: Optional project ID for project-specific template
- report_title: Default report title
- start_time: Start time filter (HH:MM format)
- end_time: End time filter (HH:MM format)
- start_date: Start date filter (YYYY-MM-DD format)
- end_date: End date filter (YYYY-MM-DD format)
"""
name = data.get("name")
if not name:
raise HTTPException(status_code=400, detail="Template name is required")
template = ReportTemplate(
id=str(uuid.uuid4()),
name=name,
project_id=data.get("project_id"),
report_title=data.get("report_title", "Background Noise Study"),
start_time=data.get("start_time"),
end_time=data.get("end_time"),
start_date=data.get("start_date"),
end_date=data.get("end_date"),
)
db.add(template)
db.commit()
db.refresh(template)
return {
"id": template.id,
"name": template.name,
"project_id": template.project_id,
"report_title": template.report_title,
"start_time": template.start_time,
"end_time": template.end_time,
"start_date": template.start_date,
"end_date": template.end_date,
"created_at": template.created_at.isoformat() if template.created_at else None,
}
@router.get("/{template_id}")
async def get_template(
template_id: str,
db: Session = Depends(get_db),
):
"""Get a specific report template by ID."""
template = db.query(ReportTemplate).filter_by(id=template_id).first()
if not template:
raise HTTPException(status_code=404, detail="Template not found")
return {
"id": template.id,
"name": template.name,
"project_id": template.project_id,
"report_title": template.report_title,
"start_time": template.start_time,
"end_time": template.end_time,
"start_date": template.start_date,
"end_date": template.end_date,
"created_at": template.created_at.isoformat() if template.created_at else None,
"updated_at": template.updated_at.isoformat() if template.updated_at else None,
}
@router.put("/{template_id}")
async def update_template(
template_id: str,
data: dict,
db: Session = Depends(get_db),
):
"""Update an existing report template."""
template = db.query(ReportTemplate).filter_by(id=template_id).first()
if not template:
raise HTTPException(status_code=404, detail="Template not found")
# Update fields if provided
if "name" in data:
template.name = data["name"]
if "project_id" in data:
template.project_id = data["project_id"]
if "report_title" in data:
template.report_title = data["report_title"]
if "start_time" in data:
template.start_time = data["start_time"]
if "end_time" in data:
template.end_time = data["end_time"]
if "start_date" in data:
template.start_date = data["start_date"]
if "end_date" in data:
template.end_date = data["end_date"]
template.updated_at = datetime.utcnow()
db.commit()
db.refresh(template)
return {
"id": template.id,
"name": template.name,
"project_id": template.project_id,
"report_title": template.report_title,
"start_time": template.start_time,
"end_time": template.end_time,
"start_date": template.start_date,
"end_date": template.end_date,
"updated_at": template.updated_at.isoformat() if template.updated_at else None,
}
@router.delete("/{template_id}")
async def delete_template(
template_id: str,
db: Session = Depends(get_db),
):
"""Delete a report template."""
template = db.query(ReportTemplate).filter_by(id=template_id).first()
if not template:
raise HTTPException(status_code=404, detail="Template not found")
db.delete(template)
db.commit()
return JSONResponse({"status": "success", "message": "Template deleted"})

@@ -2,20 +2,32 @@ from fastapi import APIRouter, Depends
 from sqlalchemy.orm import Session
 from datetime import datetime, timedelta
 from typing import Dict, Any
+import asyncio
+import logging
+import random
-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot
+from backend.services.slm_status_sync import sync_slm_status_to_emitters
 router = APIRouter(prefix="/api", tags=["roster"])
+logger = logging.getLogger(__name__)
 @router.get("/status-snapshot")
-def get_status_snapshot(db: Session = Depends(get_db)):
+async def get_status_snapshot(db: Session = Depends(get_db)):
     """
     Calls emit_status_snapshot() to get current fleet status.
-    This will be replaced with real Series3 emitter logic later.
+    Syncs SLM status from SLMM before generating snapshot.
     """
+    # Sync SLM status from SLMM (with timeout to prevent blocking)
+    try:
+        await asyncio.wait_for(sync_slm_status_to_emitters(), timeout=2.0)
+    except asyncio.TimeoutError:
+        logger.warning("SLM status sync timed out, using cached data")
+    except Exception as e:
+        logger.warning(f"SLM status sync failed: {e}")
     return emit_status_snapshot()


@@ -0,0 +1,139 @@
"""
Roster Unit Rename Router
Provides endpoint for safely renaming unit IDs across all database tables.
"""
from fastapi import APIRouter, Depends, HTTPException, Form
from sqlalchemy.orm import Session
from datetime import datetime
import logging
from backend.database import get_db
from backend.models import RosterUnit, Emitter, UnitHistory
from backend.routers.roster_edit import record_history, sync_slm_to_slmm_cache
router = APIRouter(prefix="/api/roster", tags=["roster-rename"])
logger = logging.getLogger(__name__)
@router.post("/rename")
async def rename_unit(
old_id: str = Form(...),
new_id: str = Form(...),
db: Session = Depends(get_db)
):
"""
Rename a unit ID across all tables.
Updates the unit ID in roster, emitters, unit_history, and all foreign key references.
IMPORTANT: This operation updates the primary key, which affects all relationships.
"""
# Validate input
if not old_id or not new_id:
raise HTTPException(status_code=400, detail="Both old_id and new_id are required")
if old_id == new_id:
raise HTTPException(status_code=400, detail="New ID must be different from old ID")
# Check if old unit exists
old_unit = db.query(RosterUnit).filter(RosterUnit.id == old_id).first()
if not old_unit:
raise HTTPException(status_code=404, detail=f"Unit '{old_id}' not found")
# Check if new ID already exists
existing_unit = db.query(RosterUnit).filter(RosterUnit.id == new_id).first()
if existing_unit:
raise HTTPException(status_code=409, detail=f"Unit ID '{new_id}' already exists")
device_type = old_unit.device_type
try:
# Record history for the rename operation (using old_id since that's still valid)
record_history(
db=db,
unit_id=old_id,
change_type="id_change",
field_name="id",
old_value=old_id,
new_value=new_id,
source="manual",
notes=f"Unit renamed from '{old_id}' to '{new_id}'"
)
# Update roster table (primary)
old_unit.id = new_id
old_unit.last_updated = datetime.utcnow()
# Update emitters table
emitter = db.query(Emitter).filter(Emitter.id == old_id).first()
if emitter:
emitter.id = new_id
# Update unit_history table (all entries for this unit)
db.query(UnitHistory).filter(UnitHistory.unit_id == old_id).update(
{"unit_id": new_id},
synchronize_session=False
)
# Update deployed_with_modem_id references (units that reference this as modem)
db.query(RosterUnit).filter(RosterUnit.deployed_with_modem_id == old_id).update(
{"deployed_with_modem_id": new_id},
synchronize_session=False
)
# Update unit_assignments table (if exists)
try:
from backend.models import UnitAssignment
db.query(UnitAssignment).filter(UnitAssignment.unit_id == old_id).update(
{"unit_id": new_id},
synchronize_session=False
)
except Exception as e:
logger.warning(f"Could not update unit_assignments: {e}")
# Update monitoring_sessions table (if exists)
try:
from backend.models import MonitoringSession
db.query(MonitoringSession).filter(MonitoringSession.unit_id == old_id).update(
{"unit_id": new_id},
synchronize_session=False
)
except Exception as e:
logger.warning(f"Could not update monitoring_sessions: {e}")
# Commit all changes
db.commit()
# If sound level meter, sync updated config to SLMM cache
if device_type == "slm":
logger.info(f"Syncing renamed SLM {new_id} (was {old_id}) config to SLMM cache...")
result = await sync_slm_to_slmm_cache(
unit_id=new_id,
host=old_unit.slm_host,
tcp_port=old_unit.slm_tcp_port,
ftp_port=old_unit.slm_ftp_port,
deployed_with_modem_id=old_unit.deployed_with_modem_id,
db=db
)
if not result["success"]:
logger.warning(f"SLMM cache sync warning for renamed unit {new_id}: {result['message']}")
logger.info(f"Successfully renamed unit '{old_id}' to '{new_id}'")
return {
"success": True,
"message": f"Successfully renamed unit from '{old_id}' to '{new_id}'",
"old_id": old_id,
"new_id": new_id,
"device_type": device_type
}
except Exception as e:
db.rollback()
logger.error(f"Error renaming unit '{old_id}' to '{new_id}': {e}")
raise HTTPException(
status_code=500,
detail=f"Failed to rename unit: {str(e)}"
)
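A minimal client sketch for this rename endpoint; the base URL and unit IDs are placeholders, not taken from the diff:

import httpx

resp = httpx.post(
    "http://localhost:8000/api/roster/rename",
    data={"old_id": "UM-1001", "new_id": "UM-2001"},  # Form(...) fields, so form-encoded
)
resp.raise_for_status()
print(resp.json()["message"])  # "Successfully renamed unit from 'UM-1001' to 'UM-2001'"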


@@ -0,0 +1,408 @@
"""
Scheduler Router
Handles scheduled actions for automated recording control.
"""
from fastapi import APIRouter, Request, Depends, HTTPException, Query
from fastapi.responses import HTMLResponse, JSONResponse
from sqlalchemy.orm import Session
from sqlalchemy import and_, or_
from datetime import datetime, timedelta
from typing import Optional
import uuid
import json
from backend.database import get_db
from backend.models import (
Project,
ScheduledAction,
MonitoringLocation,
UnitAssignment,
RosterUnit,
)
from backend.services.scheduler import get_scheduler
from backend.templates_config import templates
router = APIRouter(prefix="/api/projects/{project_id}/scheduler", tags=["scheduler"])
# ============================================================================
# Scheduled Actions List
# ============================================================================
@router.get("/actions", response_class=HTMLResponse)
async def get_scheduled_actions(
project_id: str,
request: Request,
db: Session = Depends(get_db),
status: Optional[str] = Query(None),
start_date: Optional[str] = Query(None),
end_date: Optional[str] = Query(None),
):
"""
Get scheduled actions for a project.
Returns HTML partial with agenda/calendar view.
"""
query = db.query(ScheduledAction).filter_by(project_id=project_id)
# Filter by status
if status:
query = query.filter_by(execution_status=status)
else:
# By default, show pending actions plus completed/failed ones from the last 7 days
query = query.filter(
or_(
ScheduledAction.execution_status == "pending",
and_(
ScheduledAction.execution_status.in_(["completed", "failed"]),
ScheduledAction.scheduled_time >= datetime.utcnow() - timedelta(days=7),
),
)
)
# Filter by date range
if start_date:
query = query.filter(ScheduledAction.scheduled_time >= datetime.fromisoformat(start_date))
if end_date:
query = query.filter(ScheduledAction.scheduled_time <= datetime.fromisoformat(end_date))
actions = query.order_by(ScheduledAction.scheduled_time).all()
# Enrich with location and unit details
actions_data = []
for action in actions:
location = db.query(MonitoringLocation).filter_by(id=action.location_id).first()
unit = None
if action.unit_id:
unit = db.query(RosterUnit).filter_by(id=action.unit_id).first()
else:
# Get from assignment
assignment = db.query(UnitAssignment).filter(
and_(
UnitAssignment.location_id == action.location_id,
UnitAssignment.status == "active",
)
).first()
if assignment:
unit = db.query(RosterUnit).filter_by(id=assignment.unit_id).first()
actions_data.append({
"action": action,
"location": location,
"unit": unit,
})
return templates.TemplateResponse("partials/projects/scheduler_agenda.html", {
"request": request,
"project_id": project_id,
"actions": actions_data,
})
# ============================================================================
# Create Scheduled Action
# ============================================================================
@router.post("/actions/create")
async def create_scheduled_action(
project_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Create a new scheduled action.
"""
project = db.query(Project).filter_by(id=project_id).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
form_data = await request.form()
location_id = form_data.get("location_id")
location = db.query(MonitoringLocation).filter_by(
id=location_id,
project_id=project_id,
).first()
if not location:
raise HTTPException(status_code=404, detail="Location not found")
# Determine device type from location
device_type = "slm" if location.location_type == "sound" else "seismograph"
# Get unit_id (optional - can be determined from assignment at execution time)
unit_id = form_data.get("unit_id")
action = ScheduledAction(
id=str(uuid.uuid4()),
project_id=project_id,
location_id=location_id,
unit_id=unit_id,
action_type=form_data.get("action_type"),
device_type=device_type,
scheduled_time=datetime.fromisoformat(form_data.get("scheduled_time")),
execution_status="pending",
notes=form_data.get("notes"),
)
db.add(action)
db.commit()
db.refresh(action)
return JSONResponse({
"success": True,
"action_id": action.id,
"message": f"Scheduled action '{action.action_type}' created for {action.scheduled_time}",
})
# ============================================================================
# Schedule Recording Session
# ============================================================================
@router.post("/schedule-session")
async def schedule_recording_session(
project_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Schedule a complete recording session (start + stop).
Creates two scheduled actions: start and stop.
"""
project = db.query(Project).filter_by(id=project_id).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
form_data = await request.form()
location_id = form_data.get("location_id")
location = db.query(MonitoringLocation).filter_by(
id=location_id,
project_id=project_id,
).first()
if not location:
raise HTTPException(status_code=404, detail="Location not found")
device_type = "slm" if location.location_type == "sound" else "seismograph"
unit_id = form_data.get("unit_id")
start_time = datetime.fromisoformat(form_data.get("start_time"))
duration_minutes = int(form_data.get("duration_minutes", 60))
stop_time = start_time + timedelta(minutes=duration_minutes)
# Create START action
start_action = ScheduledAction(
id=str(uuid.uuid4()),
project_id=project_id,
location_id=location_id,
unit_id=unit_id,
action_type="start",
device_type=device_type,
scheduled_time=start_time,
execution_status="pending",
notes=form_data.get("notes"),
)
# Create STOP action
stop_action = ScheduledAction(
id=str(uuid.uuid4()),
project_id=project_id,
location_id=location_id,
unit_id=unit_id,
action_type="stop",
device_type=device_type,
scheduled_time=stop_time,
execution_status="pending",
notes=f"Auto-stop after {duration_minutes} minutes",
)
db.add(start_action)
db.add(stop_action)
db.commit()
return JSONResponse({
"success": True,
"start_action_id": start_action.id,
"stop_action_id": stop_action.id,
"message": f"Recording session scheduled from {start_time} to {stop_time}",
})
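To illustrate the payload this handler expects (field names are from the code above; the host, project, and location IDs are placeholders): a 120-minute session starting at 07:00 yields a paired stop action at 09:00.

import httpx

resp = httpx.post(
    "http://localhost:8000/api/projects/PROJ-1/scheduler/schedule-session",
    data={
        "location_id": "LOC-1",
        "start_time": "2026-04-01T07:00:00",  # parsed with datetime.fromisoformat
        "duration_minutes": "120",            # stop_time = start_time + 120 minutes
        "notes": "overnight run",
    },
)
print(resp.json())  # {"success": true, "start_action_id": "...", "stop_action_id": "..."}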
# ============================================================================
# Update/Cancel Scheduled Action
# ============================================================================
@router.put("/actions/{action_id}")
async def update_scheduled_action(
project_id: str,
action_id: str,
request: Request,
db: Session = Depends(get_db),
):
"""
Update a scheduled action (only if not yet executed).
"""
action = db.query(ScheduledAction).filter_by(
id=action_id,
project_id=project_id,
).first()
if not action:
raise HTTPException(status_code=404, detail="Action not found")
if action.execution_status != "pending":
raise HTTPException(
status_code=400,
detail="Cannot update action that has already been executed",
)
data = await request.json()
if "scheduled_time" in data:
action.scheduled_time = datetime.fromisoformat(data["scheduled_time"])
if "notes" in data:
action.notes = data["notes"]
db.commit()
return {"success": True, "message": "Action updated successfully"}
@router.post("/actions/{action_id}/cancel")
async def cancel_scheduled_action(
project_id: str,
action_id: str,
db: Session = Depends(get_db),
):
"""
Cancel a pending scheduled action.
"""
action = db.query(ScheduledAction).filter_by(
id=action_id,
project_id=project_id,
).first()
if not action:
raise HTTPException(status_code=404, detail="Action not found")
if action.execution_status != "pending":
raise HTTPException(
status_code=400,
detail="Can only cancel pending actions",
)
action.execution_status = "cancelled"
db.commit()
return {"success": True, "message": "Action cancelled successfully"}
@router.delete("/actions/{action_id}")
async def delete_scheduled_action(
project_id: str,
action_id: str,
db: Session = Depends(get_db),
):
"""
Delete a scheduled action (only if pending or cancelled).
"""
action = db.query(ScheduledAction).filter_by(
id=action_id,
project_id=project_id,
).first()
if not action:
raise HTTPException(status_code=404, detail="Action not found")
if action.execution_status not in ["pending", "cancelled"]:
raise HTTPException(
status_code=400,
detail="Cannot delete action that has been executed",
)
db.delete(action)
db.commit()
return {"success": True, "message": "Action deleted successfully"}
# ============================================================================
# Manual Execution
# ============================================================================
@router.post("/actions/{action_id}/execute")
async def execute_action_now(
project_id: str,
action_id: str,
db: Session = Depends(get_db),
):
"""
Manually trigger execution of a scheduled action (for testing/debugging).
"""
action = db.query(ScheduledAction).filter_by(
id=action_id,
project_id=project_id,
).first()
if not action:
raise HTTPException(status_code=404, detail="Action not found")
if action.execution_status != "pending":
raise HTTPException(
status_code=400,
detail="Action is not pending",
)
# Execute via scheduler service
scheduler = get_scheduler()
result = await scheduler.execute_action_by_id(action_id)
# Refresh from DB to get updated status
db.refresh(action)
return JSONResponse({
"success": result.get("success", False),
"result": result,
"action": {
"id": action.id,
"execution_status": action.execution_status,
"executed_at": action.executed_at.isoformat() if action.executed_at else None,
"error_message": action.error_message,
},
})
# ============================================================================
# Scheduler Status
# ============================================================================
@router.get("/status")
async def get_scheduler_status():
"""
Get scheduler service status.
"""
scheduler = get_scheduler()
return {
"running": scheduler.running,
"check_interval": scheduler.check_interval,
}
@router.post("/execute-pending")
async def trigger_pending_execution():
"""
Manually trigger execution of all pending actions (for testing).
"""
scheduler = get_scheduler()
results = await scheduler.execute_pending_actions()
return {
"success": True,
"executed_count": len(results),
"results": results,
}
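The scheduler service itself is not part of this diff; only its running and check_interval attributes and the execute_pending_actions() / execute_action_by_id() methods are visible here. A purely illustrative sketch of the polling loop those names imply:

import asyncio

class SchedulerSketch:
    """Hypothetical polling loop; not the project's implementation."""
    def __init__(self, check_interval: int = 30):
        self.check_interval = check_interval
        self.running = False

    async def execute_pending_actions(self) -> list:
        # Stand-in: fetch pending ScheduledActions whose scheduled_time has passed,
        # dispatch start/stop commands, and mark them completed or failed.
        return []

    async def run(self):
        self.running = True
        while self.running:
            await self.execute_pending_actions()
            await asyncio.sleep(self.check_interval)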


@@ -0,0 +1,228 @@
"""
Seismograph Dashboard API Router
Provides endpoints for the seismograph-specific dashboard
"""
from datetime import date, datetime, timedelta
from fastapi import APIRouter, Request, Depends, Query, Form, HTTPException
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from backend.database import get_db
from backend.models import RosterUnit, UnitHistory, UserPreferences
from backend.templates_config import templates
router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"])
@router.get("/stats", response_class=HTMLResponse)
async def get_seismo_stats(request: Request, db: Session = Depends(get_db)):
"""
Returns HTML partial with seismograph statistics summary
"""
# Get all seismograph units
seismos = db.query(RosterUnit).filter_by(
device_type="seismograph",
retired=False
).all()
total = len(seismos)
deployed = sum(1 for s in seismos if s.deployed)
benched = sum(1 for s in seismos if not s.deployed and not s.out_for_calibration)
out_for_calibration = sum(1 for s in seismos if s.out_for_calibration)
# Count modems assigned to deployed seismographs
with_modem = sum(1 for s in seismos if s.deployed and s.deployed_with_modem_id)
without_modem = deployed - with_modem
return templates.TemplateResponse(
"partials/seismo_stats.html",
{
"request": request,
"total": total,
"deployed": deployed,
"benched": benched,
"out_for_calibration": out_for_calibration,
"with_modem": with_modem,
"without_modem": without_modem
}
)
@router.get("/units", response_class=HTMLResponse)
async def get_seismo_units(
request: Request,
db: Session = Depends(get_db),
search: str = Query(None),
sort: str = Query("id"),
order: str = Query("asc"),
status: str = Query(None),
modem: str = Query(None)
):
"""
Returns HTML partial with filterable and sortable seismograph unit list
"""
query = db.query(RosterUnit).filter_by(
device_type="seismograph",
retired=False
)
# Apply search filter
if search:
query = query.filter(
(RosterUnit.id.ilike(f"%{search}%")) |
(RosterUnit.note.ilike(f"%{search}%")) |
(RosterUnit.address.ilike(f"%{search}%"))
)
# Apply status filter
if status == "deployed":
query = query.filter(RosterUnit.deployed == True)
elif status == "benched":
query = query.filter(RosterUnit.deployed == False, RosterUnit.out_for_calibration == False)
elif status == "out_for_calibration":
query = query.filter(RosterUnit.out_for_calibration == True)
# Apply modem filter
if modem == "with":
query = query.filter(RosterUnit.deployed_with_modem_id.isnot(None))
elif modem == "without":
query = query.filter(RosterUnit.deployed_with_modem_id.is_(None))
# Apply sorting
sort_column_map = {
"id": RosterUnit.id,
"status": RosterUnit.deployed,
"modem": RosterUnit.deployed_with_modem_id,
"location": RosterUnit.address,
"last_calibrated": RosterUnit.last_calibrated,
"notes": RosterUnit.note
}
sort_column = sort_column_map.get(sort, RosterUnit.id)
if order == "desc":
query = query.order_by(sort_column.desc())
else:
query = query.order_by(sort_column.asc())
seismos = query.all()
return templates.TemplateResponse(
"partials/seismo_unit_list.html",
{
"request": request,
"units": seismos,
"search": search or "",
"sort": sort,
"order": order,
"status": status or "",
"modem": modem or "",
"today": date.today()
}
)
def _get_calibration_interval(db: Session) -> int:
prefs = db.query(UserPreferences).first()
if prefs and prefs.calibration_interval_days:
return prefs.calibration_interval_days
return 365
def _row_context(request: Request, unit: RosterUnit) -> dict:
return {"request": request, "unit": unit, "today": date.today()}
@router.get("/unit/{unit_id}/view-row", response_class=HTMLResponse)
async def get_seismo_view_row(unit_id: str, request: Request, db: Session = Depends(get_db)):
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail="Unit not found")
return templates.TemplateResponse("partials/seismo_row_view.html", _row_context(request, unit))
@router.get("/unit/{unit_id}/edit-row", response_class=HTMLResponse)
async def get_seismo_edit_row(unit_id: str, request: Request, db: Session = Depends(get_db)):
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail="Unit not found")
return templates.TemplateResponse("partials/seismo_row_edit.html", _row_context(request, unit))
@router.post("/unit/{unit_id}/quick-update", response_class=HTMLResponse)
async def quick_update_seismo_unit(
unit_id: str,
request: Request,
db: Session = Depends(get_db),
status: str = Form(...),
last_calibrated: str = Form(""),
note: str = Form(""),
):
unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail="Unit not found")
# --- Status ---
old_deployed = unit.deployed
old_out_for_cal = unit.out_for_calibration
if status == "deployed":
unit.deployed = True
unit.out_for_calibration = False
elif status == "out_for_calibration":
unit.deployed = False
unit.out_for_calibration = True
else:
unit.deployed = False
unit.out_for_calibration = False
if unit.deployed != old_deployed or unit.out_for_calibration != old_out_for_cal:
old_status = "deployed" if old_deployed else ("out_for_calibration" if old_out_for_cal else "benched")
db.add(UnitHistory(
unit_id=unit_id,
change_type="deployed_change",
field_name="status",
old_value=old_status,
new_value=status,
source="manual",
))
# --- Last calibrated ---
old_cal = unit.last_calibrated
if last_calibrated:
try:
new_cal = datetime.strptime(last_calibrated, "%Y-%m-%d").date()
except ValueError:
raise HTTPException(status_code=400, detail="Invalid date format. Use YYYY-MM-DD")
unit.last_calibrated = new_cal
unit.next_calibration_due = new_cal + timedelta(days=_get_calibration_interval(db))
else:
unit.last_calibrated = None
unit.next_calibration_due = None
if unit.last_calibrated != old_cal:
db.add(UnitHistory(
unit_id=unit_id,
change_type="calibration_status_change",
field_name="last_calibrated",
old_value=old_cal.strftime("%Y-%m-%d") if old_cal else None,
new_value=last_calibrated or None,
source="manual",
))
# --- Note ---
old_note = unit.note
unit.note = note or None
if unit.note != old_note:
db.add(UnitHistory(
unit_id=unit_id,
change_type="note_change",
field_name="note",
old_value=old_note,
new_value=unit.note,
source="manual",
))
db.commit()
db.refresh(unit)
return templates.TemplateResponse("partials/seismo_row_view.html", _row_context(request, unit))


@@ -9,9 +9,9 @@ import io
 import shutil
 from pathlib import Path
-from app.seismo.database import get_db
-from app.seismo.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
-from app.seismo.services.database_backup import DatabaseBackupService
+from backend.database import get_db
+from backend.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences
+from backend.services.database_backup import DatabaseBackupService
 router = APIRouter(prefix="/api/settings", tags=["settings"])
@@ -477,3 +477,75 @@ async def upload_snapshot(file: UploadFile = File(...)):
     except Exception as e:
         raise HTTPException(status_code=500, detail=f"Upload failed: {str(e)}")
# ============================================================================
# SLMM SYNC ENDPOINTS
# ============================================================================
@router.post("/slmm/sync-all")
async def sync_all_slms(db: Session = Depends(get_db)):
"""
Manually trigger full sync of all SLM devices from Terra-View roster to SLMM.
This ensures SLMM database matches Terra-View roster (source of truth).
Also cleans up orphaned devices in SLMM that are not in Terra-View.
"""
from backend.services.slmm_sync import sync_all_slms_to_slmm, cleanup_orphaned_slmm_devices
try:
# Sync all SLMs
sync_results = await sync_all_slms_to_slmm(db)
# Clean up orphaned devices
cleanup_results = await cleanup_orphaned_slmm_devices(db)
return {
"status": "ok",
"sync": sync_results,
"cleanup": cleanup_results
}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Sync failed: {str(e)}")
@router.get("/slmm/status")
async def get_slmm_sync_status(db: Session = Depends(get_db)):
"""
Get status of SLMM synchronization.
Shows which devices are in Terra-View roster vs SLMM database.
"""
from backend.services.slmm_sync import get_slmm_devices
try:
# Get devices from both systems
roster_slms = db.query(RosterUnit).filter_by(device_type="slm").all()
slmm_devices = await get_slmm_devices()
if slmm_devices is None:
raise HTTPException(status_code=503, detail="SLMM service unavailable")
roster_unit_ids = {unit.id for unit in roster_slms}  # compare roster unit IDs against SLMM device IDs
slmm_unit_ids = set(slmm_devices)
# Find differences
in_roster_only = roster_unit_ids - slmm_unit_ids
in_slmm_only = slmm_unit_ids - roster_unit_ids
in_both = roster_unit_ids & slmm_unit_ids
return {
"status": "ok",
"terra_view_total": len(roster_unit_ids),
"slmm_total": len(slmm_unit_ids),
"synced": len(in_both),
"missing_from_slmm": list(in_roster_only),
"orphaned_in_slmm": list(in_slmm_only),
"in_sync": len(in_roster_only) == 0 and len(in_slmm_only) == 0
}
except HTTPException:
raise
except Exception as e:
raise HTTPException(status_code=500, detail=f"Status check failed: {str(e)}")


@@ -0,0 +1,363 @@
"""
SLM Dashboard Router
Provides API endpoints for the Sound Level Meters dashboard page.
"""
from fastapi import APIRouter, Request, Depends, Query
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from sqlalchemy import func
from datetime import datetime, timedelta
import asyncio
import httpx
import logging
import os
from backend.database import get_db
from backend.models import RosterUnit
from backend.routers.roster_edit import sync_slm_to_slmm_cache
from backend.templates_config import templates
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"])
# SLMM backend URL - configurable via environment variable
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
@router.get("/stats", response_class=HTMLResponse)
async def get_slm_stats(request: Request, db: Session = Depends(get_db)):
"""
Get summary statistics for SLM dashboard.
Returns HTML partial with stat cards.
"""
# Query all SLMs
all_slms = db.query(RosterUnit).filter_by(device_type="slm").all()
# Count deployed vs benched
deployed_count = sum(1 for slm in all_slms if slm.deployed and not slm.retired)
benched_count = sum(1 for slm in all_slms if not slm.deployed and not slm.retired)
retired_count = sum(1 for slm in all_slms if slm.retired)
# Count recently active (checked in last hour)
one_hour_ago = datetime.utcnow() - timedelta(hours=1)
active_count = sum(1 for slm in all_slms
if slm.slm_last_check and slm.slm_last_check > one_hour_ago)
return templates.TemplateResponse("partials/slm_stats.html", {
"request": request,
"total_count": len(all_slms),
"deployed_count": deployed_count,
"benched_count": benched_count,
"active_count": active_count,
"retired_count": retired_count
})
@router.get("/units", response_class=HTMLResponse)
async def get_slm_units(
request: Request,
db: Session = Depends(get_db),
search: str = Query(None),
project: str = Query(None),
include_measurement: bool = Query(False),
):
"""
Get list of SLM units for the sidebar.
Returns HTML partial with unit cards.
"""
query = db.query(RosterUnit).filter_by(device_type="slm")
# Filter by project if provided
if project:
query = query.filter(RosterUnit.project_id == project)
# Filter by search term if provided
if search:
search_term = f"%{search}%"
query = query.filter(
(RosterUnit.id.like(search_term)) |
(RosterUnit.slm_model.like(search_term)) |
(RosterUnit.address.like(search_term))
)
units = query.order_by(
RosterUnit.retired.asc(),
RosterUnit.deployed.desc(),
RosterUnit.id.asc()
).all()
one_hour_ago = datetime.utcnow() - timedelta(hours=1)
for unit in units:
unit.is_recent = bool(unit.slm_last_check and unit.slm_last_check > one_hour_ago)
if include_measurement:
async def fetch_measurement_state(client: httpx.AsyncClient, unit_id: str) -> str | None:
try:
response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state")
if response.status_code == 200:
return response.json().get("measurement_state")
except Exception:
return None
return None
deployed_units = [unit for unit in units if unit.deployed and not unit.retired]
if deployed_units:
async with httpx.AsyncClient(timeout=3.0) as client:
tasks = [fetch_measurement_state(client, unit.id) for unit in deployed_units]
results = await asyncio.gather(*tasks, return_exceptions=True)
for unit, state in zip(deployed_units, results):
if isinstance(state, Exception):
unit.measurement_state = None
else:
unit.measurement_state = state
return templates.TemplateResponse("partials/slm_device_list.html", {
"request": request,
"units": units
})
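The gather(..., return_exceptions=True) call above keeps one unreachable device from failing the whole sidebar. A self-contained sketch of the pattern:

import asyncio

async def fetch(unit_id: str) -> str:
    if unit_id == "bad":
        raise RuntimeError("device unreachable")
    return "Start"

async def main():
    results = await asyncio.gather(fetch("ok"), fetch("bad"), return_exceptions=True)
    for unit_id, result in zip(["ok", "bad"], results):
        state = None if isinstance(result, Exception) else result
        print(unit_id, state)  # ok Start / bad None

asyncio.run(main())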
@router.get("/live-view/{unit_id}", response_class=HTMLResponse)
async def get_live_view(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""
Get live view panel for a specific SLM unit.
Returns HTML partial with live metrics and chart.
"""
# Get unit from database
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()
if not unit:
return templates.TemplateResponse("partials/slm_live_view_error.html", {
"request": request,
"error": f"Unit {unit_id} not found"
})
# Get modem information if assigned
modem = None
modem_ip = None
if unit.deployed_with_modem_id:
modem = db.query(RosterUnit).filter_by(id=unit.deployed_with_modem_id, device_type="modem").first()
if modem:
modem_ip = modem.ip_address
else:
logger.warning(f"SLM {unit_id} is assigned to modem {unit.deployed_with_modem_id} but modem not found")
# Fallback to direct slm_host if no modem assigned (backward compatibility)
if not modem_ip and unit.slm_host:
modem_ip = unit.slm_host
logger.info(f"Using legacy slm_host for {unit_id}: {modem_ip}")
# Try to get current status from SLMM
current_status = None
measurement_state = None
is_measuring = False
try:
async with httpx.AsyncClient(timeout=10.0) as client:
# Get measurement state
state_response = await client.get(
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
)
if state_response.status_code == 200:
state_data = state_response.json()
measurement_state = state_data.get("measurement_state", "Unknown")
is_measuring = state_data.get("is_measuring", False)
# Get live status (measurement_start_time is already stored in SLMM database)
status_response = await client.get(
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/live"
)
if status_response.status_code == 200:
status_data = status_response.json()
current_status = status_data.get("data", {})
except Exception as e:
logger.error(f"Failed to get status for {unit_id}: {e}")
return templates.TemplateResponse("partials/slm_live_view.html", {
"request": request,
"unit": unit,
"modem": modem,
"modem_ip": modem_ip,
"current_status": current_status,
"measurement_state": measurement_state,
"is_measuring": is_measuring
})
@router.post("/control/{unit_id}/{action}")
async def control_slm(unit_id: str, action: str):
"""
Send control commands to SLM (start, stop, pause, resume, reset).
Proxies to SLMM backend.
"""
valid_actions = ["start", "stop", "pause", "resume", "reset"]
if action not in valid_actions:
return {"status": "error", "detail": f"Invalid action. Must be one of: {valid_actions}"}
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.post(
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/{action}"
)
if response.status_code == 200:
return response.json()
else:
return {
"status": "error",
"detail": f"SLMM returned status {response.status_code}"
}
except Exception as e:
logger.error(f"Failed to control {unit_id}: {e}")
return {
"status": "error",
"detail": str(e)
}
@router.get("/config/{unit_id}", response_class=HTMLResponse)
async def get_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""
Get configuration form for a specific SLM unit.
Returns HTML partial with configuration form.
"""
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()
if not unit:
return HTMLResponse(
content=f'<div class="text-red-500">Unit {unit_id} not found</div>',
status_code=404
)
return templates.TemplateResponse("partials/slm_config_form.html", {
"request": request,
"unit": unit
})
@router.post("/config/{unit_id}")
async def save_slm_config(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""
Save SLM configuration.
Updates unit parameters in the database.
"""
unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first()
if not unit:
return {"status": "error", "detail": f"Unit {unit_id} not found"}
try:
# Get form data
form_data = await request.form()
# Update SLM-specific fields
unit.slm_model = form_data.get("slm_model") or None
unit.slm_serial_number = form_data.get("slm_serial_number") or None
unit.slm_frequency_weighting = form_data.get("slm_frequency_weighting") or None
unit.slm_time_weighting = form_data.get("slm_time_weighting") or None
unit.slm_measurement_range = form_data.get("slm_measurement_range") or None
# Update network configuration
modem_id = form_data.get("deployed_with_modem_id")
unit.deployed_with_modem_id = modem_id if modem_id else None
# Always update TCP and FTP ports (used regardless of modem assignment)
unit.slm_tcp_port = int(form_data.get("slm_tcp_port")) if form_data.get("slm_tcp_port") else None
unit.slm_ftp_port = int(form_data.get("slm_ftp_port")) if form_data.get("slm_ftp_port") else None
# Only update direct IP if no modem is assigned
if not modem_id:
unit.slm_host = form_data.get("slm_host") or None
else:
# Clear legacy direct IP field when modem is assigned
unit.slm_host = None
db.commit()
logger.info(f"Updated configuration for SLM {unit_id}")
# Sync updated configuration to SLMM cache
logger.info(f"Syncing SLM {unit_id} config changes to SLMM cache...")
result = await sync_slm_to_slmm_cache(
unit_id=unit_id,
host=unit.slm_host, # Use the updated host from Terra-View
tcp_port=unit.slm_tcp_port,
ftp_port=unit.slm_ftp_port,
deployed_with_modem_id=unit.deployed_with_modem_id, # Resolve modem IP if assigned
db=db
)
if not result["success"]:
logger.warning(f"SLMM cache sync warning for {unit_id}: {result['message']}")
# Config still saved in Terra-View (source of truth)
return {"status": "success", "unit_id": unit_id}
except Exception as e:
db.rollback()
logger.error(f"Failed to save config for {unit_id}: {e}")
return {"status": "error", "detail": str(e)}
@router.get("/test-modem/{modem_id}")
async def test_modem_connection(modem_id: str, db: Session = Depends(get_db)):
"""
Test modem connectivity with a simple ping/health check.
Returns response time and connection status.
"""
import subprocess
import time
# Get modem from database
modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first()
if not modem:
return {"status": "error", "detail": f"Modem {modem_id} not found"}
if not modem.ip_address:
return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"}
try:
# Ping the modem (1 packet, 2 second timeout)
start_time = time.time()
result = subprocess.run(
["ping", "-c", "1", "-W", "2", modem.ip_address],
capture_output=True,
text=True,
timeout=3
)
response_time = int((time.time() - start_time) * 1000) # Convert to milliseconds
if result.returncode == 0:
return {
"status": "success",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"response_time": response_time,
"message": "Modem is responding to ping"
}
else:
return {
"status": "error",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"detail": "Modem not responding to ping"
}
except subprocess.TimeoutExpired:
return {
"status": "error",
"modem_id": modem_id,
"ip_address": modem.ip_address,
"detail": "Ping timeout (> 2 seconds)"
}
except Exception as e:
logger.error(f"Failed to ping modem {modem_id}: {e}")
return {
"status": "error",
"modem_id": modem_id,
"detail": str(e)
}
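Note that response_time above includes subprocess start-up overhead, so it overstates the true round trip. If the ICMP latency itself is wanted, ping's own output can be parsed; a sketch, assuming Linux iputils output:

import re

def parse_ping_rtt(ping_stdout: str) -> float | None:
    # iputils prints e.g. "64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=12.3 ms"
    match = re.search(r"time=([\d.]+) ms", ping_stdout)
    return float(match.group(1)) if match else None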

backend/routers/slm_ui.py (new file, 122 lines)

@@ -0,0 +1,122 @@
"""
Sound Level Meter UI Router
Provides endpoints for SLM dashboard cards, detail pages, and real-time data.
"""
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import HTMLResponse
from sqlalchemy.orm import Session
from datetime import datetime
import httpx
import logging
import os
from backend.database import get_db
from backend.models import RosterUnit
from backend.templates_config import templates
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/slm", tags=["slm-ui"])
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://172.19.0.1:8100")
@router.get("/{unit_id}", response_class=HTMLResponse)
async def slm_detail_page(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""Sound level meter detail page with controls."""
# Get roster unit
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit or unit.device_type != "slm":
raise HTTPException(status_code=404, detail="Sound level meter not found")
return templates.TemplateResponse("slm_detail.html", {
"request": request,
"unit": unit,
"unit_id": unit_id
})
@router.get("/api/{unit_id}/summary")
async def get_slm_summary(unit_id: str, db: Session = Depends(get_db)):
"""Get SLM summary data for dashboard card."""
# Get roster unit
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit or unit.device_type != "slm":
raise HTTPException(status_code=404, detail="Sound level meter not found")
# Try to get live status from SLMM
status_data = None
try:
async with httpx.AsyncClient(timeout=3.0) as client:
response = await client.get(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/status")
if response.status_code == 200:
status_data = response.json().get("data")
except Exception as e:
logger.warning(f"Failed to get SLM status for {unit_id}: {e}")
return {
"unit_id": unit_id,
"device_type": "slm",
"deployed": unit.deployed,
"model": unit.slm_model or "NL-43",
"location": unit.address or unit.location,
"coordinates": unit.coordinates,
"note": unit.note,
"status": status_data,
"last_check": unit.slm_last_check.isoformat() if unit.slm_last_check else None,
}
@router.get("/partials/{unit_id}/card", response_class=HTMLResponse)
async def slm_dashboard_card(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""Render SLM dashboard card partial."""
summary = await get_slm_summary(unit_id, db)
return templates.TemplateResponse("partials/slm_card.html", {
"request": request,
"slm": summary
})
@router.get("/partials/{unit_id}/controls", response_class=HTMLResponse)
async def slm_controls_partial(request: Request, unit_id: str, db: Session = Depends(get_db)):
"""Render SLM control panel partial."""
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit or unit.device_type != "slm":
raise HTTPException(status_code=404, detail="Sound level meter not found")
# Get current status from SLMM
measurement_state = None
battery_level = None
try:
async with httpx.AsyncClient(timeout=3.0) as client:
# Get measurement state
state_response = await client.get(
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/measurement-state"
)
if state_response.status_code == 200:
measurement_state = state_response.json().get("measurement_state")
# Get battery level
battery_response = await client.get(
f"{SLMM_BASE_URL}/api/nl43/{unit_id}/battery"
)
if battery_response.status_code == 200:
battery_level = battery_response.json().get("battery_level")
except Exception as e:
logger.warning(f"Failed to get SLM control data for {unit_id}: {e}")
return templates.TemplateResponse("partials/slm_controls.html", {
"request": request,
"unit_id": unit_id,
"unit": unit,
"measurement_state": measurement_state,
"battery_level": battery_level,
"is_measuring": measurement_state == "Start"
})

backend/routers/slmm.py (new file, 301 lines)

@@ -0,0 +1,301 @@
"""
SLMM (Sound Level Meter Manager) Proxy Router
Proxies requests from SFM to the standalone SLMM backend service.
SLMM runs on port 8100 and handles NL43/NL53 sound level meter communication.
"""
from fastapi import APIRouter, HTTPException, Request, Response, WebSocket, WebSocketDisconnect
from fastapi.responses import StreamingResponse
import httpx
import websockets
import asyncio
import logging
import os
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/slmm", tags=["slmm"])
# SLMM backend URL - configurable via environment variable
SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100")
# WebSocket URL derived from HTTP URL
SLMM_WS_BASE_URL = SLMM_BASE_URL.replace("http://", "ws://").replace("https://", "wss://")
@router.get("/health")
async def check_slmm_health():
"""
Check if the SLMM backend service is reachable and healthy.
"""
try:
async with httpx.AsyncClient(timeout=5.0) as client:
response = await client.get(f"{SLMM_BASE_URL}/health")
if response.status_code == 200:
data = response.json()
return {
"status": "ok",
"slmm_status": "connected",
"slmm_url": SLMM_BASE_URL,
"slmm_version": data.get("version", "unknown"),
"slmm_response": data
}
else:
return {
"status": "degraded",
"slmm_status": "error",
"slmm_url": SLMM_BASE_URL,
"detail": f"SLMM returned status {response.status_code}"
}
except httpx.ConnectError:
return {
"status": "error",
"slmm_status": "unreachable",
"slmm_url": SLMM_BASE_URL,
"detail": "Cannot connect to SLMM backend. Is it running?"
}
except Exception as e:
return {
"status": "error",
"slmm_status": "error",
"slmm_url": SLMM_BASE_URL,
"detail": str(e)
}
# WebSocket routes MUST come before the catch-all route
@router.websocket("/{unit_id}/stream")
async def proxy_websocket_stream(websocket: WebSocket, unit_id: str):
"""
Proxy WebSocket connections to SLMM's /stream endpoint.
This allows real-time streaming of measurement data from NL43 devices
through the SFM unified interface.
"""
await websocket.accept()
logger.info(f"WebSocket connection accepted for SLMM unit {unit_id}")
# Build target WebSocket URL
target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")
backend_ws = None
try:
# Connect to SLMM backend WebSocket
backend_ws = await websockets.connect(target_ws_url)
logger.info(f"Connected to SLMM backend WebSocket for {unit_id}")
# Create tasks for bidirectional communication
async def forward_to_backend():
"""Forward messages from client to SLMM backend"""
try:
while True:
data = await websocket.receive_text()
await backend_ws.send(data)
except WebSocketDisconnect:
logger.info(f"Client WebSocket disconnected for {unit_id}")
except Exception as e:
logger.error(f"Error forwarding to backend: {e}")
async def forward_to_client():
"""Forward messages from SLMM backend to client"""
try:
async for message in backend_ws:
await websocket.send_text(message)
except websockets.exceptions.ConnectionClosed:
logger.info(f"Backend WebSocket closed for {unit_id}")
except Exception as e:
logger.error(f"Error forwarding to client: {e}")
# Run both forwarding tasks concurrently
await asyncio.gather(
forward_to_backend(),
forward_to_client(),
return_exceptions=True
)
except websockets.exceptions.WebSocketException as e:
logger.error(f"WebSocket error connecting to SLMM backend: {e}")
try:
await websocket.send_json({
"error": "Failed to connect to SLMM backend",
"detail": str(e)
})
except Exception:
pass
except Exception as e:
logger.error(f"Unexpected error in WebSocket proxy for {unit_id}: {e}")
try:
await websocket.send_json({
"error": "Internal server error",
"detail": str(e)
})
except Exception:
pass
finally:
# Clean up connections
if backend_ws:
try:
await backend_ws.close()
except Exception:
pass
try:
await websocket.close()
except Exception:
pass
logger.info(f"WebSocket proxy closed for {unit_id}")
@router.websocket("/{unit_id}/live")
async def proxy_websocket_live(websocket: WebSocket, unit_id: str):
"""
Proxy WebSocket connections to SLMM's /live endpoint.
Alternative WebSocket endpoint that may be used by some frontend components.
"""
await websocket.accept()
logger.info(f"WebSocket connection accepted for SLMM unit {unit_id} (live endpoint)")
# Build target WebSocket URL; SLMM serves its WebSocket at /stream, so the /live route proxies there as well
target_ws_url = f"{SLMM_WS_BASE_URL}/api/nl43/{unit_id}/stream"
logger.info(f"Connecting to SLMM WebSocket: {target_ws_url}")
backend_ws = None
try:
# Connect to SLMM backend WebSocket
backend_ws = await websockets.connect(target_ws_url)
logger.info(f"Connected to SLMM backend WebSocket for {unit_id} (live endpoint)")
# Create tasks for bidirectional communication
async def forward_to_backend():
"""Forward messages from client to SLMM backend"""
try:
while True:
data = await websocket.receive_text()
await backend_ws.send(data)
except WebSocketDisconnect:
logger.info(f"Client WebSocket disconnected for {unit_id} (live)")
except Exception as e:
logger.error(f"Error forwarding to backend (live): {e}")
async def forward_to_client():
"""Forward messages from SLMM backend to client"""
try:
async for message in backend_ws:
await websocket.send_text(message)
except websockets.exceptions.ConnectionClosed:
logger.info(f"Backend WebSocket closed for {unit_id} (live)")
except Exception as e:
logger.error(f"Error forwarding to client (live): {e}")
# Run both forwarding tasks concurrently
await asyncio.gather(
forward_to_backend(),
forward_to_client(),
return_exceptions=True
)
except websockets.exceptions.WebSocketException as e:
logger.error(f"WebSocket error connecting to SLMM backend (live): {e}")
try:
await websocket.send_json({
"error": "Failed to connect to SLMM backend",
"detail": str(e)
})
except Exception:
pass
except Exception as e:
logger.error(f"Unexpected error in WebSocket proxy for {unit_id} (live): {e}")
try:
await websocket.send_json({
"error": "Internal server error",
"detail": str(e)
})
except Exception:
pass
finally:
# Clean up connections
if backend_ws:
try:
await backend_ws.close()
except Exception:
pass
try:
await websocket.close()
except Exception:
pass
logger.info(f"WebSocket proxy closed for {unit_id} (live)")
# HTTP catch-all route MUST come after specific routes (including WebSocket routes)
@router.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE", "PATCH"])
async def proxy_to_slmm(path: str, request: Request):
"""
Proxy all requests to the SLMM backend service.
This allows SFM to act as a unified frontend for all device types,
while SLMM remains a standalone backend service.
"""
# Build target URL
target_url = f"{SLMM_BASE_URL}/api/nl43/{path}"
# Get query parameters
query_params = dict(request.query_params)
# Get request body if present
body = None
if request.method in ["POST", "PUT", "PATCH"]:
try:
body = await request.body()
except Exception as e:
logger.error(f"Failed to read request body: {e}")
body = None
# Get headers (exclude host and other proxy-specific headers)
headers = dict(request.headers)
headers_to_exclude = ["host", "content-length", "transfer-encoding", "connection"]
proxy_headers = {k: v for k, v in headers.items() if k.lower() not in headers_to_exclude}
logger.info(f"Proxying {request.method} request to SLMM: {target_url}")
try:
async with httpx.AsyncClient(timeout=30.0) as client:
# Forward the request to SLMM
response = await client.request(
method=request.method,
url=target_url,
params=query_params,
headers=proxy_headers,
content=body
)
# Return the response from SLMM
return Response(
content=response.content,
status_code=response.status_code,
headers=dict(response.headers),
media_type=response.headers.get("content-type")
)
except httpx.ConnectError:
logger.error(f"Failed to connect to SLMM backend at {SLMM_BASE_URL}")
raise HTTPException(
status_code=503,
detail=f"SLMM backend service unavailable. Is SLMM running on {SLMM_BASE_URL}?"
)
except httpx.TimeoutException:
logger.error(f"Timeout connecting to SLMM backend at {SLMM_BASE_URL}")
raise HTTPException(
status_code=504,
detail="SLMM backend timeout"
)
except Exception as e:
logger.error(f"Error proxying to SLMM: {e}")
raise HTTPException(
status_code=500,
detail=f"Failed to proxy request to SLMM: {str(e)}"
)
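The catch-all rewrites paths: a request to SFM at /api/slmm/{path} is forwarded to SLMM at /api/nl43/{path}. A client sketch (host and unit ID are placeholders):

import httpx

# GET http://localhost:8000/api/slmm/NL43-001/battery
# is proxied to {SLMM_BASE_URL}/api/nl43/NL43-001/battery
resp = httpx.get("http://localhost:8000/api/slmm/NL43-001/battery")
print(resp.status_code, resp.json())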


@@ -3,8 +3,9 @@ from sqlalchemy.orm import Session
 from datetime import datetime
 from typing import Dict, Any
-from app.seismo.database import get_db
-from app.seismo.services.snapshot import emit_status_snapshot
+from backend.database import get_db
+from backend.services.snapshot import emit_status_snapshot
+from backend.models import RosterUnit
 router = APIRouter(prefix="/api", tags=["units"])
@@ -42,3 +43,32 @@ def get_unit_detail(unit_id: str, db: Session = Depends(get_db)):
         "note": unit_data.get("note", ""),
         "coordinates": coords
     }
@router.get("/units/{unit_id}")
def get_unit_by_id(unit_id: str, db: Session = Depends(get_db)):
"""
Get unit data directly from the roster (for settings/configuration).
"""
unit = db.query(RosterUnit).filter_by(id=unit_id).first()
if not unit:
raise HTTPException(status_code=404, detail=f"Unit {unit_id} not found")
return {
"id": unit.id,
"unit_type": unit.unit_type,
"device_type": unit.device_type,
"deployed": unit.deployed,
"retired": unit.retired,
"note": unit.note,
"location": unit.location,
"address": unit.address,
"coordinates": unit.coordinates,
"slm_host": unit.slm_host,
"slm_tcp_port": unit.slm_tcp_port,
"slm_ftp_port": unit.slm_ftp_port,
"slm_model": unit.slm_model,
"slm_serial_number": unit.slm_serial_number,
"deployed_with_modem_id": unit.deployed_with_modem_id
}

Some files were not shown because too many files have changed in this diff.