diff --git a/CHANGELOG.md b/CHANGELOG.md index 8612636..9d37d9d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,10 +1,48 @@ # Changelog -All notable changes to Seismo Fleet Manager will be documented in this file. +All notable changes to Terra-View will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [0.5.1] - 2026-01-27 + +### Added +- **Dashboard Schedule View**: Today's scheduled actions now display directly on the main dashboard + - New "Today's Actions" panel showing upcoming and past scheduled events + - Schedule list partial for project-specific schedule views + - API endpoint for fetching today's schedule data +- **New Branding Assets**: Complete logo rework for Terra-View + - New Terra-View logos for light and dark themes + - Retina-ready (@2x) logo variants + - Updated favicons (16px and 32px) + - Refreshed PWA icons (72px through 512px) + +### Changed +- **Dashboard Layout**: Reorganized to include schedule information panel +- **Base Template**: Updated to use new Terra-View logos with theme-aware switching + +## [0.5.0] - 2026-01-23 + +_Note: This version was not formally released; changes were included in v0.5.1._ + +## [0.4.4] - 2026-01-23 + +### Added +- **Recurring schedules**: New scheduler service, recurring schedule APIs, and schedule templates (calendar/interval/list). +- **Alerts UI + backend**: Alerting service plus dropdown/list templates for surfacing notifications. +- **Report templates + viewers**: CRUD API for report templates, report preview screen, and RND file viewer. +- **SLM tooling**: SLM settings modal and SLM project report generator workflow. + +### Changed +- **Project data management**: Unified files view, refreshed FTP browser, and new project header/templates for file/session/unit/assignment lists. +- **Device/SLM sync**: Standardized SLM device types and tightened SLMM sync paths. +- **Docs/scripts**: Cleanup pass and expanded device-type documentation. + +### Fixed +- **Scheduler actions**: Strict command definitions so actions run reliably. +- **Project view title**: Resolved JSON string rendering in project headers. + ## [0.4.3] - 2026-01-14 ### Added @@ -361,6 +399,9 @@ No database migration required for v0.4.0. All new features use existing databas - Photo management per unit - Automated status categorization (OK/Pending/Missing) +[0.5.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.5.0...v0.5.1 +[0.5.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.4...v0.5.0 +[0.4.4]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.3...v0.4.4 [0.4.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.2...v0.4.3 [0.4.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.1...v0.4.2 [0.4.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.4.0...v0.4.1 diff --git a/README.md b/README.md index fd9dbd5..5248f17 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Seismo Fleet Manager v0.4.3 +# Terra-View v0.5.1 Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard. 
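As a quick orientation, the API can be exercised with plain HTTP from any client. The sketch below assumes a local instance on the default uvicorn port, and the `/api/roster` path is illustrative rather than confirmed; see the API reference further down for the exact routes.

```python
# Sketch: list roster units from a locally running Terra-View instance.
# The /api/roster path and response shape are assumptions for illustration;
# substitute the roster endpoint documented in the API reference below.
import requests

BASE_URL = "http://localhost:8000"  # assumed local dev server

response = requests.get(f"{BASE_URL}/api/roster", timeout=10)
response.raise_for_status()

for unit in response.json():
    status = "deployed" if unit.get("deployed") else "benched"
    print(unit["id"], unit.get("device_type", "seismograph"), status)
```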
## Features @@ -308,7 +308,7 @@ print(response.json()) |-------|------|-------------| | id | string | Unit identifier (primary key) | | unit_type | string | Hardware model name (default: `series3`) | -| device_type | string | `seismograph` or `modem` discriminator | +| device_type | string | Device type: `"seismograph"`, `"modem"`, or `"slm"` (sound level meter) | | deployed | boolean | Whether the unit is in the field | | retired | boolean | Removes the unit from deployments but preserves history | | note | string | Notes about the unit | @@ -334,6 +334,39 @@ print(response.json()) | phone_number | string | Cellular number for the modem | | hardware_model | string | Modem hardware reference | +**Sound Level Meter (SLM) fields** + +| Field | Type | Description | +|-------|------|-------------| +| slm_host | string | Direct IP address for SLM (if not using modem) | +| slm_tcp_port | integer | TCP control port (default: 2255) | +| slm_ftp_port | integer | FTP file transfer port (default: 21) | +| slm_model | string | Device model (NL-43, NL-53) | +| slm_serial_number | string | Manufacturer serial number | +| slm_frequency_weighting | string | Frequency weighting setting (A, C, Z) | +| slm_time_weighting | string | Time weighting setting (F=Fast, S=Slow) | +| slm_measurement_range | string | Measurement range setting | +| slm_last_check | datetime | Last status check timestamp | +| deployed_with_modem_id | string | Modem pairing (shared with seismographs) | + +### Device Type Schema + +Terra-View supports three device types with the following standardized `device_type` values: + +- **`"seismograph"`** (default) - Seismic monitoring devices (Series 3, Series 4, Micromate) + - Uses: calibration dates, modem pairing + - Examples: BE1234, UM12345 (Series 3/4 units) + +- **`"modem"`** - Field modems and network equipment + - Uses: IP address, phone number, hardware model + - Examples: MDM001, MODEM-2025-01 + +- **`"slm"`** - Sound level meters (Rion NL-43/NL-53) + - Uses: TCP/FTP configuration, measurement settings, modem pairing + - Examples: SLM-43-01, NL43-001 + +**Important**: All `device_type` values must be lowercase. The legacy value `"sound_level_meter"` has been deprecated in favor of the shorter `"slm"`. Run `backend/migrate_standardize_device_types.py` to update existing databases. 
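Before or after running that migration, the current distribution of `device_type` values can be spot-checked directly against the SQLite database. A minimal sketch, using the `data/seismo_fleet.db` path that the migration scripts themselves use:

```python
# Sketch: show how many roster rows use each device_type value.
# Path matches DB_PATH in the backend migration scripts.
import sqlite3

conn = sqlite3.connect("data/seismo_fleet.db")
rows = conn.execute(
    "SELECT device_type, COUNT(*) FROM roster GROUP BY device_type ORDER BY COUNT(*) DESC"
).fetchall()
conn.close()

for device_type, count in rows:
    print(f"{device_type}: {count} unit(s)")
```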
+ ### Emitter Table (Device Check-ins) | Field | Type | Description | @@ -538,9 +571,13 @@ MIT ## Version -**Current: 0.4.3** — SLM roster/project view refresh, project insight panels, FTP browser folder downloads, and SLMM sync (2026-01-14) +**Current: 0.5.1** — Dashboard schedule view with today's actions panel, new Terra-View branding and logo rework (2026-01-27) -Previous: 0.4.2 — SLM configuration interface with TCP/FTP controls, modem diagnostics, and dashboard endpoints for Sound Level Meters (2026-01-05) +Previous: 0.4.4 — Recurring schedules, alerting UI, report templates + RND viewer, and SLM workflow polish (2026-01-23) + +0.4.3 — SLM roster/project view refresh, project insight panels, FTP browser folder downloads, and SLMM sync (2026-01-14) + +0.4.2 — SLM configuration interface with TCP/FTP controls, modem diagnostics, and dashboard endpoints for Sound Level Meters (2026-01-05) 0.4.1 — Sound Level Meter integration with full management UI for SLM units (2026-01-05) diff --git a/assets/terra-view-icon_large.png b/assets/terra-view-icon_large.png new file mode 100644 index 0000000..51881a9 Binary files /dev/null and b/assets/terra-view-icon_large.png differ diff --git a/backend/main.py b/backend/main.py index 9daa452..c09adbd 100644 --- a/backend/main.py +++ b/backend/main.py @@ -1,6 +1,6 @@ import os import logging -from fastapi import FastAPI, Request, Depends +from fastapi import FastAPI, Request, Depends, HTTPException from fastapi.middleware.cors import CORSMiddleware from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates @@ -18,7 +18,7 @@ logging.basicConfig( logger = logging.getLogger(__name__) from backend.database import engine, Base, get_db -from backend.routers import roster, units, photos, roster_edit, roster_rename, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard, projects, project_locations, scheduler +from backend.routers import roster, units, photos, roster_edit, roster_rename, dashboard, dashboard_tabs, activity, slmm, slm_ui, slm_dashboard, seismo_dashboard, projects, project_locations, scheduler, modem_dashboard from backend.services.snapshot import emit_status_snapshot from backend.models import IgnoredUnit @@ -29,7 +29,7 @@ Base.metadata.create_all(bind=engine) ENVIRONMENT = os.getenv("ENVIRONMENT", "production") # Initialize FastAPI app -VERSION = "0.4.3" +VERSION = "0.5.1" app = FastAPI( title="Seismo Fleet Manager", description="Backend API for managing seismograph fleet status", @@ -58,8 +58,8 @@ app.add_middleware( # Mount static files app.mount("/static", StaticFiles(directory="backend/static"), name="static") -# Setup Jinja2 templates -templates = Jinja2Templates(directory="templates") +# Use shared templates configuration with timezone filters +from backend.templates_config import templates # Add custom context processor to inject environment variable into all templates @app.middleware("http") @@ -92,6 +92,7 @@ app.include_router(slmm.router) app.include_router(slm_ui.router) app.include_router(slm_dashboard.router) app.include_router(seismo_dashboard.router) +app.include_router(modem_dashboard.router) from backend.routers import settings app.include_router(settings.router) @@ -101,8 +102,21 @@ app.include_router(projects.router) app.include_router(project_locations.router) app.include_router(scheduler.router) -# Start scheduler service on application startup +# Report templates router +from backend.routers import report_templates +app.include_router(report_templates.router) + +# 
Alerts router +from backend.routers import alerts +app.include_router(alerts.router) + +# Recurring schedules router +from backend.routers import recurring_schedules +app.include_router(recurring_schedules.router) + +# Start scheduler service and device status monitor on application startup from backend.services.scheduler import start_scheduler, stop_scheduler +from backend.services.device_status_monitor import start_device_status_monitor, stop_device_status_monitor @app.on_event("startup") async def startup_event(): @@ -111,9 +125,17 @@ async def startup_event(): await start_scheduler() logger.info("Scheduler service started") + logger.info("Starting device status monitor...") + await start_device_status_monitor() + logger.info("Device status monitor started") + @app.on_event("shutdown") def shutdown_event(): """Clean up services on app shutdown""" + logger.info("Stopping device status monitor...") + stop_device_status_monitor() + logger.info("Device status monitor stopped") + logger.info("Stopping scheduler service...") stop_scheduler() logger.info("Scheduler service stopped") @@ -195,6 +217,12 @@ async def seismographs_page(request: Request): return templates.TemplateResponse("seismographs.html", {"request": request}) +@app.get("/modems", response_class=HTMLResponse) +async def modems_page(request: Request): + """Field modems management dashboard""" + return templates.TemplateResponse("modems.html", {"request": request}) + + @app.get("/projects", response_class=HTMLResponse) async def projects_page(request: Request): """Projects management and overview""" diff --git a/backend/migrate_add_auto_increment_index.py b/backend/migrate_add_auto_increment_index.py new file mode 100644 index 0000000..f91a3e2 --- /dev/null +++ b/backend/migrate_add_auto_increment_index.py @@ -0,0 +1,67 @@ +""" +Migration: Add auto_increment_index column to recurring_schedules table + +This migration adds the auto_increment_index column that controls whether +the scheduler should automatically find an unused store index before starting +a new measurement. + +Run this script once to update existing databases: + python -m backend.migrate_add_auto_increment_index +""" + +import sqlite3 +import os + +DB_PATH = "data/seismo_fleet.db" + + +def migrate(): + """Add auto_increment_index column to recurring_schedules table.""" + if not os.path.exists(DB_PATH): + print(f"Database not found at {DB_PATH}") + return False + + conn = sqlite3.connect(DB_PATH) + cursor = conn.cursor() + + try: + # Check if recurring_schedules table exists + cursor.execute(""" + SELECT name FROM sqlite_master + WHERE type='table' AND name='recurring_schedules' + """) + if not cursor.fetchone(): + print("recurring_schedules table does not exist yet. 
Will be created on app startup.") + conn.close() + return True + + # Check if auto_increment_index column already exists + cursor.execute("PRAGMA table_info(recurring_schedules)") + columns = [row[1] for row in cursor.fetchall()] + + if "auto_increment_index" in columns: + print("auto_increment_index column already exists in recurring_schedules table.") + conn.close() + return True + + # Add the column + print("Adding auto_increment_index column to recurring_schedules table...") + cursor.execute(""" + ALTER TABLE recurring_schedules + ADD COLUMN auto_increment_index BOOLEAN DEFAULT 1 + """) + conn.commit() + print("Successfully added auto_increment_index column.") + + conn.close() + return True + + except Exception as e: + print(f"Migration failed: {e}") + conn.close() + return False + + +if __name__ == "__main__": + success = migrate() + exit(0 if success else 1) diff --git a/backend/migrate_add_deployment_type.py b/backend/migrate_add_deployment_type.py new file mode 100644 index 0000000..c18573e --- /dev/null +++ b/backend/migrate_add_deployment_type.py @@ -0,0 +1,84 @@ +""" +Migration script to add deployment_type and deployed_with_unit_id fields to roster table. + +deployment_type: tracks what type of device a modem is deployed with: +- "seismograph" - Modem is connected to a seismograph +- "slm" - Modem is connected to a sound level meter +- NULL/empty - Not assigned or unknown + +deployed_with_unit_id: stores the ID of the seismograph/SLM this modem is deployed with +(reverse relationship of deployed_with_modem_id) + +Run this script once to migrate an existing database. +""" + +import sqlite3 +import os + +# Database path +DB_PATH = "./data/seismo_fleet.db" + + +def migrate_database(): + """Add deployment_type and deployed_with_unit_id columns to roster table""" + + if not os.path.exists(DB_PATH): + print(f"Database not found at {DB_PATH}") + print("The database will be created automatically when you run the application.") + return + + print(f"Migrating database: {DB_PATH}") + + conn = sqlite3.connect(DB_PATH) + cursor = conn.cursor() + + # Check if roster table exists + cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='roster'") + table_exists = cursor.fetchone() + + if not table_exists: + print("Roster table does not exist yet - will be created when app runs") + conn.close() + return + + # Check existing columns + cursor.execute("PRAGMA table_info(roster)") + columns = [col[1] for col in cursor.fetchall()] + + try: + # Add deployment_type if not exists + if 'deployment_type' not in columns: + print("Adding deployment_type column to roster table...") + cursor.execute("ALTER TABLE roster ADD COLUMN deployment_type TEXT") + print(" Added deployment_type column") + + cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployment_type ON roster(deployment_type)") + print(" Created index on deployment_type") + else: + print("deployment_type column already exists") + + # Add deployed_with_unit_id if not exists + if 'deployed_with_unit_id' not in columns: + print("Adding deployed_with_unit_id column to roster table...") + cursor.execute("ALTER TABLE roster ADD COLUMN deployed_with_unit_id TEXT") + print(" Added deployed_with_unit_id column") + + cursor.execute("CREATE INDEX IF NOT EXISTS ix_roster_deployed_with_unit_id ON roster(deployed_with_unit_id)") + print(" Created index on deployed_with_unit_id") + else: + print("deployed_with_unit_id column already exists") + + conn.commit() + print("\nMigration completed successfully!") + + except sqlite3.Error as e: + 
print(f"\nError during migration: {e}") + conn.rollback() + raise + + finally: + conn.close() + + +if __name__ == "__main__": + migrate_database() diff --git a/backend/migrate_add_project_number.py b/backend/migrate_add_project_number.py new file mode 100644 index 0000000..656dc37 --- /dev/null +++ b/backend/migrate_add_project_number.py @@ -0,0 +1,80 @@ +""" +Migration script to add project_number field to projects table. + +This adds a new column for TMI internal project numbering: +- Format: xxxx-YY (e.g., "2567-23") +- xxxx = incremental project number +- YY = year project was started + +Combined with client_name and name (project/site name), this enables +smart searching across all project identifiers. + +Run this script once to migrate an existing database. +""" + +import sqlite3 +import os + +# Database path +DB_PATH = "./data/seismo_fleet.db" + + +def migrate_database(): + """Add project_number column to projects table""" + + if not os.path.exists(DB_PATH): + print(f"Database not found at {DB_PATH}") + print("The database will be created automatically when you run the application.") + return + + print(f"Migrating database: {DB_PATH}") + + conn = sqlite3.connect(DB_PATH) + cursor = conn.cursor() + + # Check if projects table exists + cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='projects'") + table_exists = cursor.fetchone() + + if not table_exists: + print("Projects table does not exist yet - will be created when app runs") + conn.close() + return + + # Check if project_number column already exists + cursor.execute("PRAGMA table_info(projects)") + columns = [col[1] for col in cursor.fetchall()] + + if 'project_number' in columns: + print("Migration already applied - project_number column exists") + conn.close() + return + + print("Adding project_number column to projects table...") + + try: + cursor.execute("ALTER TABLE projects ADD COLUMN project_number TEXT") + print(" Added project_number column") + + # Create index for faster searching + cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_project_number ON projects(project_number)") + print(" Created index on project_number") + + # Also add index on client_name if it doesn't exist + cursor.execute("CREATE INDEX IF NOT EXISTS ix_projects_client_name ON projects(client_name)") + print(" Created index on client_name") + + conn.commit() + print("\nMigration completed successfully!") + + except sqlite3.Error as e: + print(f"\nError during migration: {e}") + conn.rollback() + raise + + finally: + conn.close() + + +if __name__ == "__main__": + migrate_database() diff --git a/backend/migrate_add_report_templates.py b/backend/migrate_add_report_templates.py new file mode 100644 index 0000000..10df82f --- /dev/null +++ b/backend/migrate_add_report_templates.py @@ -0,0 +1,88 @@ +""" +Migration script to add report_templates table. + +This creates a new table for storing report generation configurations: +- Template name and project association +- Time filtering settings (start/end time) +- Date range filtering (optional) +- Report title defaults + +Run this script once to migrate an existing database. 
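Once the migration below has run, the seeded defaults (Nighttime, Daytime, Full Day) can be spot-checked with a few lines of SQLite. This is a sketch only, using the same database path as the other migration scripts:

```python
# Sketch: list the report templates seeded by this migration.
import sqlite3

conn = sqlite3.connect("./data/seismo_fleet.db")
for name, start_time, end_time in conn.execute(
    "SELECT name, start_time, end_time FROM report_templates ORDER BY name"
):
    # Full Day templates have no time filter, so start/end are NULL.
    print(f"{name}: {start_time or 'any'} -> {end_time or 'any'}")
conn.close()
```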
+""" + +import sqlite3 +import os + +# Database path +DB_PATH = "./data/seismo_fleet.db" + +def migrate_database(): + """Create report_templates table""" + + if not os.path.exists(DB_PATH): + print(f"Database not found at {DB_PATH}") + print("The database will be created automatically when you run the application.") + return + + print(f"Migrating database: {DB_PATH}") + + conn = sqlite3.connect(DB_PATH) + cursor = conn.cursor() + + # Check if report_templates table already exists + cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='report_templates'") + table_exists = cursor.fetchone() + + if table_exists: + print("Migration already applied - report_templates table exists") + conn.close() + return + + print("Creating report_templates table...") + + try: + cursor.execute(""" + CREATE TABLE report_templates ( + id TEXT PRIMARY KEY, + name TEXT NOT NULL, + project_id TEXT, + report_title TEXT DEFAULT 'Background Noise Study', + start_time TEXT, + end_time TEXT, + start_date TEXT, + end_date TEXT, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + ) + """) + print(" ✓ Created report_templates table") + + # Insert default templates + import uuid + + default_templates = [ + (str(uuid.uuid4()), "Nighttime (7PM-7AM)", None, "Background Noise Study", "19:00", "07:00", None, None), + (str(uuid.uuid4()), "Daytime (7AM-7PM)", None, "Background Noise Study", "07:00", "19:00", None, None), + (str(uuid.uuid4()), "Full Day (All Data)", None, "Background Noise Study", None, None, None, None), + ] + + cursor.executemany(""" + INSERT INTO report_templates (id, name, project_id, report_title, start_time, end_time, start_date, end_date) + VALUES (?, ?, ?, ?, ?, ?, ?, ?) + """, default_templates) + print(" ✓ Inserted default templates (Nighttime, Daytime, Full Day)") + + conn.commit() + print("\nMigration completed successfully!") + + except sqlite3.Error as e: + print(f"\nError during migration: {e}") + conn.rollback() + raise + + finally: + conn.close() + + +if __name__ == "__main__": + migrate_database() diff --git a/backend/migrate_add_slm_fields.py b/backend/migrate_add_slm_fields.py index 1c1b50e..fc7995d 100644 --- a/backend/migrate_add_slm_fields.py +++ b/backend/migrate_add_slm_fields.py @@ -71,7 +71,7 @@ def migrate(): print("\n○ No migration needed - all columns already exist.") print("\nSound level meter fields are now available in the roster table.") - print("You can now set device_type='sound_level_meter' for SLM devices.") + print("Note: Use device_type='slm' for Sound Level Meters. 
Legacy 'sound_level_meter' has been deprecated.") if __name__ == "__main__": diff --git a/backend/migrate_standardize_device_types.py b/backend/migrate_standardize_device_types.py new file mode 100644 index 0000000..45b85ac --- /dev/null +++ b/backend/migrate_standardize_device_types.py @@ -0,0 +1,106 @@ +""" +Database Migration: Standardize device_type values + +This migration ensures all device_type values follow the official schema: +- "seismograph" - Seismic monitoring devices +- "modem" - Field modems and network equipment +- "slm" - Sound level meters (NL-43/NL-53) + +Changes: +- Converts "sound_level_meter" → "slm" +- Safe to run multiple times (idempotent) +- No data loss + +Usage: + python backend/migrate_standardize_device_types.py +""" + +import sys +import os + +# Add parent directory to path so we can import backend modules +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from sqlalchemy import create_engine, text +from sqlalchemy.orm import sessionmaker + +# Database configuration +SQLALCHEMY_DATABASE_URL = "sqlite:///./data/seismo_fleet.db" +engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}) +SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine) + + +def migrate(): + """Standardize device_type values in the database""" + db = SessionLocal() + + try: + print("=" * 70) + print("Database Migration: Standardize device_type values") + print("=" * 70) + print() + + # Check for existing "sound_level_meter" values + result = db.execute( + text("SELECT COUNT(*) as count FROM roster WHERE device_type = 'sound_level_meter'") + ).fetchone() + + count_to_migrate = result[0] if result else 0 + + if count_to_migrate == 0: + print("✓ No records need migration - all device_type values are already standardized") + print() + print("Current device_type distribution:") + + # Show distribution + distribution = db.execute( + text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC") + ).fetchall() + + for row in distribution: + device_type, count = row + print(f" - {device_type}: {count} units") + + print() + print("Migration not needed.") + return + + print(f"Found {count_to_migrate} record(s) with device_type='sound_level_meter'") + print() + print("Converting 'sound_level_meter' → 'slm'...") + + # Perform the migration + db.execute( + text("UPDATE roster SET device_type = 'slm' WHERE device_type = 'sound_level_meter'") + ) + db.commit() + + print(f"✓ Successfully migrated {count_to_migrate} record(s)") + print() + + # Show final distribution + print("Updated device_type distribution:") + distribution = db.execute( + text("SELECT device_type, COUNT(*) as count FROM roster GROUP BY device_type ORDER BY count DESC") + ).fetchall() + + for row in distribution: + device_type, count = row + print(f" - {device_type}: {count} units") + + print() + print("=" * 70) + print("Migration completed successfully!") + print("=" * 70) + + except Exception as e: + db.rollback() + print(f"\n❌ Error during migration: {e}") + print("\nRolling back changes...") + raise + finally: + db.close() + + +if __name__ == "__main__": + migrate() diff --git a/backend/models.py b/backend/models.py index 723c1dc..bd22b0c 100644 --- a/backend/models.py +++ b/backend/models.py @@ -19,14 +19,17 @@ class RosterUnit(Base): Roster table: represents our *intended assignment* of a unit. This is editable from the GUI. 
- Supports multiple device types (seismograph, modem, sound_level_meter) with type-specific fields. + Supports multiple device types with type-specific fields: + - "seismograph" - Seismic monitoring devices (default) + - "modem" - Field modems and network equipment + - "slm" - Sound level meters (NL-43/NL-53) """ __tablename__ = "roster" # Core fields (all device types) id = Column(String, primary_key=True, index=True) unit_type = Column(String, default="series3") # Backward compatibility - device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "sound_level_meter" + device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "slm" deployed = Column(Boolean, default=True) retired = Column(Boolean, default=False) note = Column(String, nullable=True) @@ -47,6 +50,8 @@ class RosterUnit(Base): ip_address = Column(String, nullable=True) phone_number = Column(String, nullable=True) hardware_model = Column(String, nullable=True) + deployment_type = Column(String, nullable=True) # "seismograph" | "slm" - what type of device this modem is deployed with + deployed_with_unit_id = Column(String, nullable=True) # ID of seismograph/SLM this modem is deployed with # Sound Level Meter-specific fields (nullable for seismographs and modems) slm_host = Column(String, nullable=True) # Device IP or hostname @@ -134,17 +139,26 @@ class Project(Base): """ Projects: top-level organization for monitoring work. Type-aware to enable/disable features based on project_type_id. + + Project naming convention: + - project_number: TMI internal ID format xxxx-YY (e.g., "2567-23") + - client_name: Client/contractor name (e.g., "PJ Dick") + - name: Project/site name (e.g., "RKM Hall", "CMU Campus") + + Display format: "2567-23 - PJ Dick - RKM Hall" + Users can search by any of these fields. 
""" __tablename__ = "projects" id = Column(String, primary_key=True, index=True) # UUID - name = Column(String, nullable=False, unique=True) + project_number = Column(String, nullable=True, index=True) # TMI ID: xxxx-YY format (e.g., "2567-23") + name = Column(String, nullable=False, unique=True) # Project/site name (e.g., "RKM Hall") description = Column(Text, nullable=True) project_type_id = Column(String, nullable=False) # FK to ProjectType.id status = Column(String, default="active") # active, completed, archived # Project metadata - client_name = Column(String, nullable=True) + client_name = Column(String, nullable=True, index=True) # Client name (e.g., "PJ Dick") site_address = Column(String, nullable=True) site_coordinates = Column(String, nullable=True) # "lat,lon" start_date = Column(Date, nullable=True) @@ -197,7 +211,7 @@ class UnitAssignment(Base): notes = Column(Text, nullable=True) # Denormalized for efficient queries - device_type = Column(String, nullable=False) # sound_level_meter | seismograph + device_type = Column(String, nullable=False) # "slm" | "seismograph" project_id = Column(String, nullable=False, index=True) # FK to Project.id created_at = Column(DateTime, default=datetime.utcnow) @@ -216,7 +230,7 @@ class ScheduledAction(Base): unit_id = Column(String, nullable=True, index=True) # FK to RosterUnit.id (nullable if location-based) action_type = Column(String, nullable=False) # start, stop, download, calibrate - device_type = Column(String, nullable=False) # sound_level_meter | seismograph + device_type = Column(String, nullable=False) # "slm" | "seismograph" scheduled_time = Column(DateTime, nullable=False, index=True) executed_at = Column(DateTime, nullable=True) @@ -275,3 +289,116 @@ class DataFile(Base): file_metadata = Column(Text, nullable=True) # JSON created_at = Column(DateTime, default=datetime.utcnow) + + +class ReportTemplate(Base): + """ + Report templates: saved configurations for generating Excel reports. + Allows users to save time filter presets, titles, etc. for reuse. + """ + __tablename__ = "report_templates" + + id = Column(String, primary_key=True, index=True) # UUID + name = Column(String, nullable=False) # "Nighttime Report", "Full Day Report" + project_id = Column(String, nullable=True) # Optional: project-specific template + + # Template settings + report_title = Column(String, default="Background Noise Study") + start_time = Column(String, nullable=True) # "19:00" format + end_time = Column(String, nullable=True) # "07:00" format + start_date = Column(String, nullable=True) # "2025-01-15" format (optional) + end_date = Column(String, nullable=True) # "2025-01-20" format (optional) + + created_at = Column(DateTime, default=datetime.utcnow) + updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + +# ============================================================================ +# Sound Monitoring Scheduler +# ============================================================================ + +class RecurringSchedule(Base): + """ + Recurring schedule definitions for automated sound monitoring. 
+ + Supports two schedule types: + - "weekly_calendar": Select specific days with start/end times (e.g., Mon/Wed/Fri 7pm-7am) + - "simple_interval": For 24/7 monitoring with daily stop/download/restart cycles + """ + __tablename__ = "recurring_schedules" + + id = Column(String, primary_key=True, index=True) # UUID + project_id = Column(String, nullable=False, index=True) # FK to Project.id + location_id = Column(String, nullable=False, index=True) # FK to MonitoringLocation.id + unit_id = Column(String, nullable=True, index=True) # FK to RosterUnit.id (optional, can use assignment) + + name = Column(String, nullable=False) # "Weeknight Monitoring", "24/7 Continuous" + schedule_type = Column(String, nullable=False) # "weekly_calendar" | "simple_interval" + device_type = Column(String, nullable=False) # "slm" | "seismograph" + + # Weekly Calendar fields (schedule_type = "weekly_calendar") + # JSON format: { + # "monday": {"enabled": true, "start": "19:00", "end": "07:00"}, + # "tuesday": {"enabled": false}, + # ... + # } + weekly_pattern = Column(Text, nullable=True) + + # Simple Interval fields (schedule_type = "simple_interval") + interval_type = Column(String, nullable=True) # "daily" | "hourly" + cycle_time = Column(String, nullable=True) # "00:00" - time to run stop/download/restart + include_download = Column(Boolean, default=True) # Download data before restart + + # Automation options (applies to both schedule types) + auto_increment_index = Column(Boolean, default=True) # Auto-increment store/index number before start + # When True: prevents "overwrite data?" prompts by using a new index each time + + # Shared configuration + enabled = Column(Boolean, default=True) + timezone = Column(String, default="America/New_York") + + # Tracking + last_generated_at = Column(DateTime, nullable=True) # When actions were last generated + next_occurrence = Column(DateTime, nullable=True) # Computed next action time + + created_at = Column(DateTime, default=datetime.utcnow) + updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) + + +class Alert(Base): + """ + In-app alerts for device status changes and system events. + + Designed for future expansion to email/webhook notifications. 
+ Currently supports: + - device_offline: Device became unreachable + - device_online: Device came back online + - schedule_failed: Scheduled action failed to execute + - schedule_completed: Scheduled action completed successfully + """ + __tablename__ = "alerts" + + id = Column(String, primary_key=True, index=True) # UUID + + # Alert classification + alert_type = Column(String, nullable=False) # "device_offline" | "device_online" | "schedule_failed" | "schedule_completed" + severity = Column(String, default="warning") # "info" | "warning" | "critical" + + # Related entities (nullable - may not all apply) + project_id = Column(String, nullable=True, index=True) + location_id = Column(String, nullable=True, index=True) + unit_id = Column(String, nullable=True, index=True) + schedule_id = Column(String, nullable=True) # RecurringSchedule or ScheduledAction id + + # Alert content + title = Column(String, nullable=False) # "NRL-001 Device Offline" + message = Column(Text, nullable=True) # Detailed description + alert_metadata = Column(Text, nullable=True) # JSON: additional context data + + # Status tracking + status = Column(String, default="active") # "active" | "acknowledged" | "resolved" | "dismissed" + acknowledged_at = Column(DateTime, nullable=True) + resolved_at = Column(DateTime, nullable=True) + + created_at = Column(DateTime, default=datetime.utcnow) + expires_at = Column(DateTime, nullable=True) # Auto-dismiss after this time diff --git a/backend/routers/alerts.py b/backend/routers/alerts.py new file mode 100644 index 0000000..67e4a47 --- /dev/null +++ b/backend/routers/alerts.py @@ -0,0 +1,326 @@ +""" +Alerts Router + +API endpoints for managing in-app alerts. +""" + +from fastapi import APIRouter, Request, Depends, HTTPException, Query +from fastapi.responses import HTMLResponse, JSONResponse +from sqlalchemy.orm import Session +from typing import Optional +from datetime import datetime, timedelta + +from backend.database import get_db +from backend.models import Alert, RosterUnit +from backend.services.alert_service import get_alert_service +from backend.templates_config import templates + +router = APIRouter(prefix="/api/alerts", tags=["alerts"]) + + +# ============================================================================ +# Alert List and Count +# ============================================================================ + +@router.get("/") +async def list_alerts( + db: Session = Depends(get_db), + status: Optional[str] = Query(None, description="Filter by status: active, acknowledged, resolved, dismissed"), + project_id: Optional[str] = Query(None), + unit_id: Optional[str] = Query(None), + alert_type: Optional[str] = Query(None, description="Filter by type: device_offline, device_online, schedule_failed"), + limit: int = Query(50, le=100), + offset: int = Query(0, ge=0), +): + """ + List alerts with optional filters. 
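Stepping back to the `RecurringSchedule` model above: the `weekly_pattern` column stores its calendar as JSON in the shape documented in the model comments. A minimal sketch of building that payload for a Mon/Wed/Fri 7pm-7am schedule (nothing here is persisted; it only shows the expected shape):

```python
# Sketch: build a weekly_pattern value in the JSON shape documented on
# RecurringSchedule (enabled days carry start/end times, others do not).
import json

active_days = {"monday", "wednesday", "friday"}
weekly_pattern = {}
for day in ("monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"):
    if day in active_days:
        weekly_pattern[day] = {"enabled": True, "start": "19:00", "end": "07:00"}
    else:
        weekly_pattern[day] = {"enabled": False}

# Stored as text in RecurringSchedule.weekly_pattern
print(json.dumps(weekly_pattern, indent=2))
```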
+ """ + alert_service = get_alert_service(db) + + alerts = alert_service.get_all_alerts( + status=status, + project_id=project_id, + unit_id=unit_id, + alert_type=alert_type, + limit=limit, + offset=offset, + ) + + return { + "alerts": [ + { + "id": a.id, + "alert_type": a.alert_type, + "severity": a.severity, + "title": a.title, + "message": a.message, + "status": a.status, + "unit_id": a.unit_id, + "project_id": a.project_id, + "location_id": a.location_id, + "created_at": a.created_at.isoformat() if a.created_at else None, + "acknowledged_at": a.acknowledged_at.isoformat() if a.acknowledged_at else None, + "resolved_at": a.resolved_at.isoformat() if a.resolved_at else None, + } + for a in alerts + ], + "count": len(alerts), + "limit": limit, + "offset": offset, + } + + +@router.get("/active") +async def list_active_alerts( + db: Session = Depends(get_db), + project_id: Optional[str] = Query(None), + unit_id: Optional[str] = Query(None), + alert_type: Optional[str] = Query(None), + min_severity: Optional[str] = Query(None, description="Minimum severity: info, warning, critical"), + limit: int = Query(50, le=100), +): + """ + List only active alerts. + """ + alert_service = get_alert_service(db) + + alerts = alert_service.get_active_alerts( + project_id=project_id, + unit_id=unit_id, + alert_type=alert_type, + min_severity=min_severity, + limit=limit, + ) + + return { + "alerts": [ + { + "id": a.id, + "alert_type": a.alert_type, + "severity": a.severity, + "title": a.title, + "message": a.message, + "unit_id": a.unit_id, + "project_id": a.project_id, + "created_at": a.created_at.isoformat() if a.created_at else None, + } + for a in alerts + ], + "count": len(alerts), + } + + +@router.get("/active/count") +async def get_active_alert_count(db: Session = Depends(get_db)): + """ + Get count of active alerts (for navbar badge). + """ + alert_service = get_alert_service(db) + count = alert_service.get_active_alert_count() + return {"count": count} + + +# ============================================================================ +# Single Alert Operations +# ============================================================================ + +@router.get("/{alert_id}") +async def get_alert( + alert_id: str, + db: Session = Depends(get_db), +): + """ + Get a specific alert. + """ + alert = db.query(Alert).filter_by(id=alert_id).first() + if not alert: + raise HTTPException(status_code=404, detail="Alert not found") + + # Get related unit info + unit = None + if alert.unit_id: + unit = db.query(RosterUnit).filter_by(id=alert.unit_id).first() + + return { + "id": alert.id, + "alert_type": alert.alert_type, + "severity": alert.severity, + "title": alert.title, + "message": alert.message, + "metadata": alert.alert_metadata, + "status": alert.status, + "unit_id": alert.unit_id, + "unit_name": unit.id if unit else None, + "project_id": alert.project_id, + "location_id": alert.location_id, + "schedule_id": alert.schedule_id, + "created_at": alert.created_at.isoformat() if alert.created_at else None, + "acknowledged_at": alert.acknowledged_at.isoformat() if alert.acknowledged_at else None, + "resolved_at": alert.resolved_at.isoformat() if alert.resolved_at else None, + "expires_at": alert.expires_at.isoformat() if alert.expires_at else None, + } + + +@router.post("/{alert_id}/acknowledge") +async def acknowledge_alert( + alert_id: str, + db: Session = Depends(get_db), +): + """ + Mark alert as acknowledged. 
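These alert endpoints are straightforward to drive from a script. A sketch that polls the active count (the source of the navbar badge) and acknowledges the first active alert, assuming a local development server:

```python
# Sketch: poll the active-alert count and acknowledge the first active
# alert, using the /api/alerts routes defined above.
import requests

BASE_URL = "http://localhost:8000"  # assumed local dev server

count = requests.get(f"{BASE_URL}/api/alerts/active/count", timeout=10).json()["count"]
print(f"{count} active alert(s)")

if count:
    alerts = requests.get(f"{BASE_URL}/api/alerts/active", timeout=10).json()["alerts"]
    ack = requests.post(
        f"{BASE_URL}/api/alerts/{alerts[0]['id']}/acknowledge", timeout=10
    ).json()
    print("acknowledged:", ack["alert_id"], "->", ack["status"])
```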
+ """ + alert_service = get_alert_service(db) + alert = alert_service.acknowledge_alert(alert_id) + + if not alert: + raise HTTPException(status_code=404, detail="Alert not found") + + return { + "success": True, + "alert_id": alert.id, + "status": alert.status, + } + + +@router.post("/{alert_id}/dismiss") +async def dismiss_alert( + alert_id: str, + db: Session = Depends(get_db), +): + """ + Dismiss alert. + """ + alert_service = get_alert_service(db) + alert = alert_service.dismiss_alert(alert_id) + + if not alert: + raise HTTPException(status_code=404, detail="Alert not found") + + return { + "success": True, + "alert_id": alert.id, + "status": alert.status, + } + + +@router.post("/{alert_id}/resolve") +async def resolve_alert( + alert_id: str, + db: Session = Depends(get_db), +): + """ + Manually resolve an alert. + """ + alert_service = get_alert_service(db) + alert = alert_service.resolve_alert(alert_id) + + if not alert: + raise HTTPException(status_code=404, detail="Alert not found") + + return { + "success": True, + "alert_id": alert.id, + "status": alert.status, + } + + +# ============================================================================ +# HTML Partials for HTMX +# ============================================================================ + +@router.get("/partials/dropdown", response_class=HTMLResponse) +async def get_alert_dropdown( + request: Request, + db: Session = Depends(get_db), +): + """ + Return HTML partial for alert dropdown in navbar. + """ + alert_service = get_alert_service(db) + alerts = alert_service.get_active_alerts(limit=10) + + # Calculate relative time for each alert + now = datetime.utcnow() + alerts_data = [] + for alert in alerts: + delta = now - alert.created_at + if delta.days > 0: + time_ago = f"{delta.days}d ago" + elif delta.seconds >= 3600: + time_ago = f"{delta.seconds // 3600}h ago" + elif delta.seconds >= 60: + time_ago = f"{delta.seconds // 60}m ago" + else: + time_ago = "just now" + + alerts_data.append({ + "alert": alert, + "time_ago": time_ago, + }) + + return templates.TemplateResponse("partials/alerts/alert_dropdown.html", { + "request": request, + "alerts": alerts_data, + "total_count": alert_service.get_active_alert_count(), + }) + + +@router.get("/partials/list", response_class=HTMLResponse) +async def get_alert_list( + request: Request, + db: Session = Depends(get_db), + status: Optional[str] = Query(None), + limit: int = Query(20), +): + """ + Return HTML partial for alert list page. 
+ """ + alert_service = get_alert_service(db) + + if status: + alerts = alert_service.get_all_alerts(status=status, limit=limit) + else: + alerts = alert_service.get_all_alerts(limit=limit) + + # Calculate relative time for each alert + now = datetime.utcnow() + alerts_data = [] + for alert in alerts: + delta = now - alert.created_at + if delta.days > 0: + time_ago = f"{delta.days}d ago" + elif delta.seconds >= 3600: + time_ago = f"{delta.seconds // 3600}h ago" + elif delta.seconds >= 60: + time_ago = f"{delta.seconds // 60}m ago" + else: + time_ago = "just now" + + alerts_data.append({ + "alert": alert, + "time_ago": time_ago, + }) + + return templates.TemplateResponse("partials/alerts/alert_list.html", { + "request": request, + "alerts": alerts_data, + "status_filter": status, + }) + + +# ============================================================================ +# Cleanup +# ============================================================================ + +@router.post("/cleanup-expired") +async def cleanup_expired_alerts(db: Session = Depends(get_db)): + """ + Cleanup expired alerts (admin/maintenance endpoint). + """ + alert_service = get_alert_service(db) + count = alert_service.cleanup_expired_alerts() + + return { + "success": True, + "cleaned_up": count, + } diff --git a/backend/routers/dashboard.py b/backend/routers/dashboard.py index 525edec..c9e61bb 100644 --- a/backend/routers/dashboard.py +++ b/backend/routers/dashboard.py @@ -1,10 +1,14 @@ from fastapi import APIRouter, Request, Depends -from fastapi.templating import Jinja2Templates +from sqlalchemy.orm import Session +from datetime import datetime, timedelta +from backend.database import get_db +from backend.models import ScheduledAction, MonitoringLocation, Project from backend.services.snapshot import emit_status_snapshot +from backend.templates_config import templates +from backend.utils.timezone import utc_to_local, local_to_utc, get_user_timezone router = APIRouter() -templates = Jinja2Templates(directory="templates") @router.get("/dashboard/active") @@ -23,3 +27,71 @@ def dashboard_benched(request: Request): "partials/benched_table.html", {"request": request, "units": snapshot["benched"]} ) + + +@router.get("/dashboard/todays-actions") +def dashboard_todays_actions(request: Request, db: Session = Depends(get_db)): + """ + Get today's scheduled actions for the dashboard card. + Shows upcoming, completed, and failed actions for today. 
+ """ + import json + from zoneinfo import ZoneInfo + + # Get today's date range in local timezone + tz = ZoneInfo(get_user_timezone()) + now_local = datetime.now(tz) + today_start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0) + today_end_local = today_start_local + timedelta(days=1) + + # Convert to UTC for database query + today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + + # Query today's actions + actions = db.query(ScheduledAction).filter( + ScheduledAction.scheduled_time >= today_start_utc, + ScheduledAction.scheduled_time < today_end_utc, + ).order_by(ScheduledAction.scheduled_time.asc()).all() + + # Enrich with location/project info and parse results + enriched_actions = [] + for action in actions: + location = None + project = None + if action.location_id: + location = db.query(MonitoringLocation).filter_by(id=action.location_id).first() + if action.project_id: + project = db.query(Project).filter_by(id=action.project_id).first() + + # Parse module_response for result details + result_data = None + if action.module_response: + try: + result_data = json.loads(action.module_response) + except json.JSONDecodeError: + pass + + enriched_actions.append({ + "action": action, + "location": location, + "project": project, + "result": result_data, + }) + + # Count by status + pending_count = sum(1 for a in actions if a.execution_status == "pending") + completed_count = sum(1 for a in actions if a.execution_status == "completed") + failed_count = sum(1 for a in actions if a.execution_status == "failed") + + return templates.TemplateResponse( + "partials/dashboard/todays_actions.html", + { + "request": request, + "actions": enriched_actions, + "pending_count": pending_count, + "completed_count": completed_count, + "failed_count": failed_count, + "total_count": len(actions), + } + ) diff --git a/backend/routers/modem_dashboard.py b/backend/routers/modem_dashboard.py new file mode 100644 index 0000000..a4d13c5 --- /dev/null +++ b/backend/routers/modem_dashboard.py @@ -0,0 +1,286 @@ +""" +Modem Dashboard Router + +Provides API endpoints for the Field Modems management page. +""" + +from fastapi import APIRouter, Request, Depends, Query +from fastapi.responses import HTMLResponse +from sqlalchemy.orm import Session +from datetime import datetime +import subprocess +import time +import logging + +from backend.database import get_db +from backend.models import RosterUnit +from backend.templates_config import templates + +logger = logging.getLogger(__name__) + +router = APIRouter(prefix="/api/modem-dashboard", tags=["modem-dashboard"]) + + +@router.get("/stats", response_class=HTMLResponse) +async def get_modem_stats(request: Request, db: Session = Depends(get_db)): + """ + Get summary statistics for modem dashboard. + Returns HTML partial with stat cards. 
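Back in the today's-actions endpoint above, the day boundaries are computed in the user's local timezone and converted to naive UTC before querying `ScheduledAction`. That conversion can be exercised on its own; a sketch, using the America/New_York default that the models fall back to (`get_user_timezone()` may return a different zone):

```python
# Sketch: today's local day boundaries converted to naive UTC, mirroring
# dashboard_todays_actions above.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
now_local = datetime.now(tz)
today_start_local = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
today_end_local = today_start_local + timedelta(days=1)

today_start_utc = today_start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)
today_end_utc = today_end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None)

# Range used for the ScheduledAction.scheduled_time filter
print(today_start_utc, "->", today_end_utc)
```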
+ """ + # Query all modems + all_modems = db.query(RosterUnit).filter_by(device_type="modem").all() + + # Get IDs of modems that have devices paired to them + paired_modem_ids = set() + devices_with_modems = db.query(RosterUnit).filter( + RosterUnit.deployed_with_modem_id.isnot(None), + RosterUnit.retired == False + ).all() + for device in devices_with_modems: + if device.deployed_with_modem_id: + paired_modem_ids.add(device.deployed_with_modem_id) + + # Count categories + total_count = len(all_modems) + retired_count = sum(1 for m in all_modems if m.retired) + + # In use = deployed AND paired with a device + in_use_count = sum(1 for m in all_modems + if m.deployed and not m.retired and m.id in paired_modem_ids) + + # Spare = deployed but NOT paired (available for assignment) + spare_count = sum(1 for m in all_modems + if m.deployed and not m.retired and m.id not in paired_modem_ids) + + # Benched = not deployed and not retired + benched_count = sum(1 for m in all_modems if not m.deployed and not m.retired) + + return templates.TemplateResponse("partials/modem_stats.html", { + "request": request, + "total_count": total_count, + "in_use_count": in_use_count, + "spare_count": spare_count, + "benched_count": benched_count, + "retired_count": retired_count + }) + + +@router.get("/units", response_class=HTMLResponse) +async def get_modem_units( + request: Request, + db: Session = Depends(get_db), + search: str = Query(None), + filter_status: str = Query(None), # "in_use", "spare", "benched", "retired" +): + """ + Get list of modem units for the dashboard. + Returns HTML partial with modem cards. + """ + query = db.query(RosterUnit).filter_by(device_type="modem") + + # Filter by search term if provided + if search: + search_term = f"%{search}%" + query = query.filter( + (RosterUnit.id.ilike(search_term)) | + (RosterUnit.ip_address.ilike(search_term)) | + (RosterUnit.hardware_model.ilike(search_term)) | + (RosterUnit.phone_number.ilike(search_term)) | + (RosterUnit.location.ilike(search_term)) + ) + + modems = query.order_by( + RosterUnit.retired.asc(), + RosterUnit.deployed.desc(), + RosterUnit.id.asc() + ).all() + + # Get paired device info for each modem + paired_devices = {} + devices_with_modems = db.query(RosterUnit).filter( + RosterUnit.deployed_with_modem_id.isnot(None), + RosterUnit.retired == False + ).all() + for device in devices_with_modems: + if device.deployed_with_modem_id: + paired_devices[device.deployed_with_modem_id] = { + "id": device.id, + "device_type": device.device_type, + "deployed": device.deployed + } + + # Annotate modems with paired device info + modem_list = [] + for modem in modems: + paired = paired_devices.get(modem.id) + + # Determine status category + if modem.retired: + status = "retired" + elif not modem.deployed: + status = "benched" + elif paired: + status = "in_use" + else: + status = "spare" + + # Apply filter if specified + if filter_status and status != filter_status: + continue + + modem_list.append({ + "id": modem.id, + "ip_address": modem.ip_address, + "phone_number": modem.phone_number, + "hardware_model": modem.hardware_model, + "deployed": modem.deployed, + "retired": modem.retired, + "location": modem.location, + "project_id": modem.project_id, + "paired_device": paired, + "status": status + }) + + return templates.TemplateResponse("partials/modem_list.html", { + "request": request, + "modems": modem_list + }) + + +@router.get("/{modem_id}/paired-device") +async def get_paired_device(modem_id: str, db: Session = Depends(get_db)): + """ + Get the 
device (SLM/seismograph) that is paired with this modem. + Returns JSON with device info or null if not paired. + """ + # Check modem exists + modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first() + if not modem: + return {"status": "error", "detail": f"Modem {modem_id} not found"} + + # Find device paired with this modem + device = db.query(RosterUnit).filter( + RosterUnit.deployed_with_modem_id == modem_id, + RosterUnit.retired == False + ).first() + + if device: + return { + "paired": True, + "device": { + "id": device.id, + "device_type": device.device_type, + "deployed": device.deployed, + "project_id": device.project_id, + "location": device.location or device.address + } + } + + return {"paired": False, "device": None} + + +@router.get("/{modem_id}/paired-device-html", response_class=HTMLResponse) +async def get_paired_device_html(modem_id: str, request: Request, db: Session = Depends(get_db)): + """ + Get HTML partial showing the device paired with this modem. + Used by unit_detail.html for modems. + """ + # Check modem exists + modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first() + if not modem: + return HTMLResponse('
<p>Modem not found</p>
') + + # Find device paired with this modem + device = db.query(RosterUnit).filter( + RosterUnit.deployed_with_modem_id == modem_id, + RosterUnit.retired == False + ).first() + + return templates.TemplateResponse("partials/modem_paired_device.html", { + "request": request, + "modem_id": modem_id, + "device": device + }) + + +@router.get("/{modem_id}/ping") +async def ping_modem(modem_id: str, db: Session = Depends(get_db)): + """ + Test modem connectivity with a simple ping. + Returns response time and connection status. + """ + # Get modem from database + modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first() + + if not modem: + return {"status": "error", "detail": f"Modem {modem_id} not found"} + + if not modem.ip_address: + return {"status": "error", "detail": f"Modem {modem_id} has no IP address configured"} + + try: + # Ping the modem (1 packet, 2 second timeout) + start_time = time.time() + result = subprocess.run( + ["ping", "-c", "1", "-W", "2", modem.ip_address], + capture_output=True, + text=True, + timeout=3 + ) + response_time = int((time.time() - start_time) * 1000) # Convert to milliseconds + + if result.returncode == 0: + return { + "status": "success", + "modem_id": modem_id, + "ip_address": modem.ip_address, + "response_time_ms": response_time, + "message": "Modem is responding" + } + else: + return { + "status": "error", + "modem_id": modem_id, + "ip_address": modem.ip_address, + "detail": "Modem not responding to ping" + } + + except subprocess.TimeoutExpired: + return { + "status": "error", + "modem_id": modem_id, + "ip_address": modem.ip_address, + "detail": "Ping timeout" + } + except Exception as e: + logger.error(f"Failed to ping modem {modem_id}: {e}") + return { + "status": "error", + "modem_id": modem_id, + "detail": str(e) + } + + +@router.get("/{modem_id}/diagnostics") +async def get_modem_diagnostics(modem_id: str, db: Session = Depends(get_db)): + """ + Get modem diagnostics (signal strength, data usage, uptime). + + Currently returns placeholders. When ModemManager is available, + this endpoint will query it for real diagnostics. + """ + modem = db.query(RosterUnit).filter_by(id=modem_id, device_type="modem").first() + if not modem: + return {"status": "error", "detail": f"Modem {modem_id} not found"} + + # TODO: Query ModemManager backend when available + return { + "status": "unavailable", + "message": "ModemManager integration not yet available", + "modem_id": modem_id, + "signal_strength_dbm": None, + "data_usage_mb": None, + "uptime_seconds": None, + "carrier": None, + "connection_type": None # LTE, 5G, etc. + } diff --git a/backend/routers/project_locations.py b/backend/routers/project_locations.py index 801e21a..54d36b1 100644 --- a/backend/routers/project_locations.py +++ b/backend/routers/project_locations.py @@ -6,7 +6,6 @@ and unit assignments within projects. 
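The ping endpoint defined above shells out to the system `ping` (one packet, two-second wait) and reports the round-trip time in milliseconds. A sketch of driving it from a script, assuming a local server and using a placeholder modem ID from the README examples:

```python
# Sketch: check a modem's reachability through the modem-dashboard API.
# "MDM001" is a placeholder modem ID; substitute a real unit from the roster.
import requests

BASE_URL = "http://localhost:8000"  # assumed local dev server
modem_id = "MDM001"

result = requests.get(
    f"{BASE_URL}/api/modem-dashboard/{modem_id}/ping", timeout=10
).json()

if result["status"] == "success":
    print(f"{modem_id} ({result['ip_address']}) responded in {result['response_time_ms']} ms")
else:
    print(f"{modem_id}: {result.get('detail', 'unknown error')}")
```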
""" from fastapi import APIRouter, Request, Depends, HTTPException, Query -from fastapi.templating import Jinja2Templates from fastapi.responses import HTMLResponse, JSONResponse from sqlalchemy.orm import Session from sqlalchemy import and_, or_ @@ -24,9 +23,9 @@ from backend.models import ( RosterUnit, RecordingSession, ) +from backend.templates_config import templates router = APIRouter(prefix="/api/projects/{project_id}", tags=["project-locations"]) -templates = Jinja2Templates(directory="templates") # ============================================================================ @@ -90,6 +89,40 @@ async def get_project_locations( }) +@router.get("/locations-json") +async def get_project_locations_json( + project_id: str, + db: Session = Depends(get_db), + location_type: Optional[str] = Query(None), +): + """ + Get all monitoring locations for a project as JSON. + Used by the schedule modal to populate location dropdown. + """ + project = db.query(Project).filter_by(id=project_id).first() + if not project: + raise HTTPException(status_code=404, detail="Project not found") + + query = db.query(MonitoringLocation).filter_by(project_id=project_id) + + if location_type: + query = query.filter_by(location_type=location_type) + + locations = query.order_by(MonitoringLocation.name).all() + + return [ + { + "id": loc.id, + "name": loc.name, + "location_type": loc.location_type, + "description": loc.description, + "address": loc.address, + "coordinates": loc.coordinates, + } + for loc in locations + ] + + @router.post("/locations/create") async def create_location( project_id: str, @@ -273,7 +306,7 @@ async def assign_unit_to_location( raise HTTPException(status_code=404, detail="Unit not found") # Check device type matches location type - expected_device_type = "sound_level_meter" if location.location_type == "sound" else "seismograph" + expected_device_type = "slm" if location.location_type == "sound" else "seismograph" if unit.device_type != expected_device_type: raise HTTPException( status_code=400, @@ -375,7 +408,7 @@ async def get_available_units( Filters by device type matching the location type. 
""" # Determine required device type - required_device_type = "sound_level_meter" if location_type == "sound" else "seismograph" + required_device_type = "slm" if location_type == "sound" else "seismograph" # Get all units of the required type that are deployed and not retired all_units = db.query(RosterUnit).filter( @@ -397,7 +430,7 @@ async def get_available_units( "id": unit.id, "device_type": unit.device_type, "location": unit.address or unit.location, - "model": unit.slm_model if unit.device_type == "sound_level_meter" else unit.unit_type, + "model": unit.slm_model if unit.device_type == "slm" else unit.unit_type, } for unit in all_units if unit.id not in assigned_unit_ids diff --git a/backend/routers/projects.py b/backend/routers/projects.py index 0b89433..2fcf0f0 100644 --- a/backend/routers/projects.py +++ b/backend/routers/projects.py @@ -9,15 +9,18 @@ Provides API endpoints for the Projects system: """ from fastapi import APIRouter, Request, Depends, HTTPException, Query -from fastapi.templating import Jinja2Templates -from fastapi.responses import HTMLResponse, JSONResponse +from fastapi.responses import HTMLResponse, JSONResponse, StreamingResponse from sqlalchemy.orm import Session -from sqlalchemy import func, and_ +from sqlalchemy import func, and_, or_ from datetime import datetime, timedelta from typing import Optional +from collections import OrderedDict import uuid import json import logging +import io + +from backend.utils.timezone import utc_to_local, format_local_datetime from backend.database import get_db from backend.models import ( @@ -27,11 +30,12 @@ from backend.models import ( UnitAssignment, RecordingSession, ScheduledAction, + RecurringSchedule, RosterUnit, ) +from backend.templates_config import templates router = APIRouter(prefix="/api/projects", tags=["projects"]) -templates = Jinja2Templates(directory="templates") logger = logging.getLogger(__name__) @@ -143,6 +147,107 @@ async def get_projects_stats(request: Request, db: Session = Depends(get_db)): }) +# ============================================================================ +# Project Search (Smart Autocomplete) +# ============================================================================ + +def _build_project_display(project: Project) -> str: + """Build display string from project fields: 'xxxx-YY - Client - Name'""" + parts = [] + if project.project_number: + parts.append(project.project_number) + if project.client_name: + parts.append(project.client_name) + if project.name: + parts.append(project.name) + return " - ".join(parts) if parts else project.id + + +@router.get("/search", response_class=HTMLResponse) +async def search_projects( + request: Request, + q: str = Query("", description="Search term"), + db: Session = Depends(get_db), + limit: int = Query(10, le=50), +): + """ + Fuzzy search across project fields for autocomplete. + Searches: project_number, client_name, name (project/site name) + Returns HTML partial for HTMX dropdown. 
+ """ + if not q.strip(): + # Return recent active projects when no search term + projects = db.query(Project).filter( + Project.status != "archived" + ).order_by(Project.updated_at.desc()).limit(limit).all() + else: + search_term = f"%{q}%" + projects = db.query(Project).filter( + and_( + Project.status != "archived", + or_( + Project.project_number.ilike(search_term), + Project.client_name.ilike(search_term), + Project.name.ilike(search_term), + ) + ) + ).order_by(Project.updated_at.desc()).limit(limit).all() + + # Build display data for each project + projects_data = [{ + "id": p.id, + "project_number": p.project_number, + "client_name": p.client_name, + "name": p.name, + "display": _build_project_display(p), + "status": p.status, + } for p in projects] + + return templates.TemplateResponse("partials/project_search_results.html", { + "request": request, + "projects": projects_data, + "query": q, + "show_create": len(projects) == 0 and q.strip(), + }) + + +@router.get("/search-json") +async def search_projects_json( + q: str = Query("", description="Search term"), + db: Session = Depends(get_db), + limit: int = Query(10, le=50), +): + """ + Fuzzy search across project fields - JSON response. + For programmatic/API consumption. + """ + if not q.strip(): + projects = db.query(Project).filter( + Project.status != "archived" + ).order_by(Project.updated_at.desc()).limit(limit).all() + else: + search_term = f"%{q}%" + projects = db.query(Project).filter( + and_( + Project.status != "archived", + or_( + Project.project_number.ilike(search_term), + Project.client_name.ilike(search_term), + Project.name.ilike(search_term), + ) + ) + ).order_by(Project.updated_at.desc()).limit(limit).all() + + return [{ + "id": p.id, + "project_number": p.project_number, + "client_name": p.client_name, + "name": p.name, + "display": _build_project_display(p), + "status": p.status, + } for p in projects] + + # ============================================================================ # Project CRUD # ============================================================================ @@ -157,6 +262,7 @@ async def create_project(request: Request, db: Session = Depends(get_db)): project = Project( id=str(uuid.uuid4()), + project_number=form_data.get("project_number"), # TMI ID: xxxx-YY format name=form_data.get("name"), description=form_data.get("description"), project_type_id=form_data.get("project_type_id"), @@ -193,6 +299,7 @@ async def get_project(project_id: str, db: Session = Depends(get_db)): return { "id": project.id, + "project_number": project.project_number, "name": project.name, "description": project.description, "project_type_id": project.project_type_id, @@ -347,11 +454,15 @@ async def get_project_dashboard( # Project Types # ============================================================================ -@router.get("/{project_id}/header", response_class=JSONResponse) -async def get_project_header(project_id: str, db: Session = Depends(get_db)): +@router.get("/{project_id}/header", response_class=HTMLResponse) +async def get_project_header( + project_id: str, + request: Request, + db: Session = Depends(get_db) +): """ Get project header information for dynamic display. - Returns JSON with project name, status, and type. + Returns HTML partial with project name, status, and type. 
""" project = db.query(Project).filter_by(id=project_id).first() if not project: @@ -359,12 +470,10 @@ async def get_project_header(project_id: str, db: Session = Depends(get_db)): project_type = db.query(ProjectType).filter_by(id=project.project_type_id).first() - return JSONResponse({ - "id": project.id, - "name": project.name, - "status": project.status, - "project_type_id": project.project_type_id, - "project_type_name": project_type.name if project_type else None, + return templates.TemplateResponse("partials/projects/project_header.html", { + "request": request, + "project": project, + "project_type": project_type, }) @@ -457,24 +566,131 @@ async def get_project_schedules( if status: query = query.filter(ScheduledAction.execution_status == status) - schedules = query.order_by(ScheduledAction.scheduled_time.desc()).all() + # For pending actions, show soonest first (ascending) + # For completed/failed, show most recent first (descending) + if status == "pending": + schedules = query.order_by(ScheduledAction.scheduled_time.asc()).all() + else: + schedules = query.order_by(ScheduledAction.scheduled_time.desc()).all() - # Enrich with location details - schedules_data = [] + # Enrich with location details and group by date + schedules_by_date = OrderedDict() for schedule in schedules: location = None if schedule.location_id: location = db.query(MonitoringLocation).filter_by(id=schedule.location_id).first() - schedules_data.append({ + # Get local date for grouping + if schedule.scheduled_time: + local_dt = utc_to_local(schedule.scheduled_time) + date_key = local_dt.strftime("%Y-%m-%d") + date_display = local_dt.strftime("%A, %B %d, %Y") # "Wednesday, January 22, 2026" + else: + date_key = "unknown" + date_display = "Unknown Date" + + if date_key not in schedules_by_date: + schedules_by_date[date_key] = { + "date_display": date_display, + "date_key": date_key, + "actions": [], + } + + # Parse module_response for display + result_data = None + if schedule.module_response: + try: + result_data = json.loads(schedule.module_response) + except json.JSONDecodeError: + pass + + schedules_by_date[date_key]["actions"].append({ "schedule": schedule, "location": location, + "result": result_data, }) return templates.TemplateResponse("partials/projects/schedule_list.html", { "request": request, "project_id": project_id, - "schedules": schedules_data, + "schedules_by_date": schedules_by_date, + }) + + +@router.post("/{project_id}/schedules/{schedule_id}/execute") +async def execute_scheduled_action( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Manually execute a scheduled action now. 
+ """ + from backend.services.scheduler import get_scheduler + + action = db.query(ScheduledAction).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not action: + raise HTTPException(status_code=404, detail="Action not found") + + if action.execution_status != "pending": + raise HTTPException( + status_code=400, + detail=f"Action is not pending (status: {action.execution_status})", + ) + + # Execute via scheduler service + scheduler = get_scheduler() + result = await scheduler.execute_action_by_id(schedule_id) + + # Refresh from DB to get updated status + db.refresh(action) + + return JSONResponse({ + "success": result.get("success", False), + "message": f"Action executed: {action.action_type}", + "result": result, + "action": { + "id": action.id, + "execution_status": action.execution_status, + "executed_at": action.executed_at.isoformat() if action.executed_at else None, + "error_message": action.error_message, + }, + }) + + +@router.post("/{project_id}/schedules/{schedule_id}/cancel") +async def cancel_scheduled_action( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Cancel a pending scheduled action. + """ + action = db.query(ScheduledAction).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not action: + raise HTTPException(status_code=404, detail="Action not found") + + if action.execution_status != "pending": + raise HTTPException( + status_code=400, + detail=f"Can only cancel pending actions (status: {action.execution_status})", + ) + + action.execution_status = "cancelled" + db.commit() + + return JSONResponse({ + "success": True, + "message": "Action cancelled successfully", }) @@ -522,51 +738,6 @@ async def get_project_sessions( }) -@router.get("/{project_id}/files", response_class=HTMLResponse) -async def get_project_files( - project_id: str, - request: Request, - db: Session = Depends(get_db), - file_type: Optional[str] = Query(None), -): - """ - Get all data files from all sessions in this project. - Returns HTML partial with file list. - Optional file_type filter: audio, data, log, etc. 
- """ - from backend.models import DataFile - - # Join through RecordingSession to get project files - query = db.query(DataFile).join( - RecordingSession, - DataFile.session_id == RecordingSession.id - ).filter(RecordingSession.project_id == project_id) - - # Filter by file type if provided - if file_type: - query = query.filter(DataFile.file_type == file_type) - - files = query.order_by(DataFile.created_at.desc()).all() - - # Enrich with session details - files_data = [] - for file in files: - session = None - if file.session_id: - session = db.query(RecordingSession).filter_by(id=file.session_id).first() - - files_data.append({ - "file": file, - "session": session, - }) - - return templates.TemplateResponse("partials/projects/file_list.html", { - "request": request, - "project_id": project_id, - "files": files_data, - }) - - @router.get("/{project_id}/ftp-browser", response_class=HTMLResponse) async def get_ftp_browser( project_id: str, @@ -594,7 +765,7 @@ async def get_ftp_browser( location = db.query(MonitoringLocation).filter_by(id=assignment.location_id).first() # Only include SLM units - if unit and unit.device_type == "sound_level_meter": + if unit and unit.device_type == "slm": units_data.append({ "assignment": assignment, "unit": unit, @@ -649,10 +820,11 @@ async def ftp_download_to_server( project_id=project_id, location_id=location_id, unit_id=unit_id, + session_type="sound", # SLMs are sound monitoring devices status="completed", started_at=datetime.utcnow(), stopped_at=datetime.utcnow(), - notes="Auto-created for FTP download" + session_metadata='{"source": "ftp_download", "note": "Auto-created for FTP download"}' ) db.add(session) db.commit() @@ -680,12 +852,37 @@ async def ftp_download_to_server( # Determine file type from extension ext = os.path.splitext(filename)[1].lower() file_type_map = { + # Audio files '.wav': 'audio', '.mp3': 'audio', + '.flac': 'audio', + '.m4a': 'audio', + '.aac': 'audio', + # Sound level meter measurement files + '.rnd': 'measurement', + # Data files '.csv': 'data', '.txt': 'data', - '.log': 'log', '.json': 'data', + '.xml': 'data', + '.dat': 'data', + # Log files + '.log': 'log', + # Archives + '.zip': 'archive', + '.tar': 'archive', + '.gz': 'archive', + '.7z': 'archive', + '.rar': 'archive', + # Images + '.jpg': 'image', + '.jpeg': 'image', + '.png': 'image', + '.gif': 'image', + # Documents + '.pdf': 'document', + '.doc': 'document', + '.docx': 'document', } file_type = file_type_map.get(ext, 'data') @@ -751,12 +948,15 @@ async def ftp_download_folder_to_server( db: Session = Depends(get_db), ): """ - Download an entire folder from an SLM to the server via FTP as a ZIP file. - Creates a DataFile record and stores the ZIP in data/Projects/{project_id}/ + Download an entire folder from an SLM to the server via FTP. + Extracts all files from the ZIP and preserves folder structure. + Creates individual DataFile records for each file. 
""" import httpx import os import hashlib + import zipfile + import io from pathlib import Path from backend.models import DataFile @@ -785,16 +985,17 @@ async def ftp_download_folder_to_server( project_id=project_id, location_id=location_id, unit_id=unit_id, + session_type="sound", # SLMs are sound monitoring devices status="completed", started_at=datetime.utcnow(), stopped_at=datetime.utcnow(), - notes="Auto-created for FTP folder download" + session_metadata='{"source": "ftp_folder_download", "note": "Auto-created for FTP folder download"}' ) db.add(session) db.commit() db.refresh(session) - # Download folder from SLMM + # Download folder from SLMM (returns ZIP) SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100") try: @@ -812,49 +1013,93 @@ async def ftp_download_folder_to_server( # Extract folder name from remote_path folder_name = os.path.basename(remote_path.rstrip('/')) - filename = f"{folder_name}.zip" - # Create directory structure: data/Projects/{project_id}/{session_id}/ - project_dir = Path(f"data/Projects/{project_id}/{session.id}") - project_dir.mkdir(parents=True, exist_ok=True) + # Create base directory: data/Projects/{project_id}/{session_id}/{folder_name}/ + base_dir = Path(f"data/Projects/{project_id}/{session.id}/{folder_name}") + base_dir.mkdir(parents=True, exist_ok=True) - # Save ZIP file to disk - file_path = project_dir / filename - file_content = response.content + # Extract ZIP and save individual files + zip_content = response.content + created_files = [] + total_size = 0 - with open(file_path, 'wb') as f: - f.write(file_content) + # File type mapping for classification + file_type_map = { + # Audio files + '.wav': 'audio', '.mp3': 'audio', '.flac': 'audio', '.m4a': 'audio', '.aac': 'audio', + # Data files + '.csv': 'data', '.txt': 'data', '.json': 'data', '.xml': 'data', '.dat': 'data', + # Log files + '.log': 'log', + # Archives + '.zip': 'archive', '.tar': 'archive', '.gz': 'archive', '.7z': 'archive', '.rar': 'archive', + # Images + '.jpg': 'image', '.jpeg': 'image', '.png': 'image', '.gif': 'image', + # Documents + '.pdf': 'document', '.doc': 'document', '.docx': 'document', + } - # Calculate checksum - checksum = hashlib.sha256(file_content).hexdigest() + with zipfile.ZipFile(io.BytesIO(zip_content)) as zf: + for zip_info in zf.filelist: + # Skip directories + if zip_info.is_dir(): + continue - # Create DataFile record - data_file = DataFile( - id=str(uuid.uuid4()), - session_id=session.id, - file_path=str(file_path.relative_to("data")), # Store relative to data/ - file_type='archive', # ZIP archives - file_size_bytes=len(file_content), - downloaded_at=datetime.utcnow(), - checksum=checksum, - file_metadata=json.dumps({ - "source": "ftp_folder", - "remote_path": remote_path, - "unit_id": unit_id, - "location_id": location_id, - "folder_name": folder_name, - }) - ) + # Read file from ZIP + file_data = zf.read(zip_info.filename) + + # Determine file path (preserve structure within folder) + # zip_info.filename might be like "Auto_0001/measurement.wav" + file_path = base_dir / zip_info.filename + file_path.parent.mkdir(parents=True, exist_ok=True) + + # Write file to disk + with open(file_path, 'wb') as f: + f.write(file_data) + + # Calculate checksum + checksum = hashlib.sha256(file_data).hexdigest() + + # Determine file type + ext = os.path.splitext(zip_info.filename)[1].lower() + file_type = file_type_map.get(ext, 'data') + + # Create DataFile record + data_file = DataFile( + id=str(uuid.uuid4()), + session_id=session.id, + 
file_path=str(file_path.relative_to("data")), + file_type=file_type, + file_size_bytes=len(file_data), + downloaded_at=datetime.utcnow(), + checksum=checksum, + file_metadata=json.dumps({ + "source": "ftp_folder", + "remote_path": remote_path, + "unit_id": unit_id, + "location_id": location_id, + "folder_name": folder_name, + "relative_path": zip_info.filename, + }) + ) + + db.add(data_file) + created_files.append({ + "filename": zip_info.filename, + "size": len(file_data), + "type": file_type + }) + total_size += len(file_data) - db.add(data_file) db.commit() return { "success": True, - "message": f"Downloaded folder {folder_name} to server as ZIP", - "file_id": data_file.id, - "file_path": str(file_path), - "file_size": len(file_content), + "message": f"Downloaded folder {folder_name} with {len(created_files)} files", + "folder_name": folder_name, + "file_count": len(created_files), + "total_size": total_size, + "files": created_files, } except httpx.TimeoutException: @@ -862,6 +1107,11 @@ async def ftp_download_folder_to_server( status_code=504, detail="Timeout downloading folder from SLM (large folders may take a while)" ) + except zipfile.BadZipFile: + raise HTTPException( + status_code=500, + detail="Downloaded file is not a valid ZIP archive" + ) except Exception as e: logger.error(f"Error downloading folder to server: {e}") raise HTTPException( @@ -874,6 +1124,1488 @@ async def ftp_download_folder_to_server( # Project Types # ============================================================================ +@router.get("/{project_id}/files-unified", response_class=HTMLResponse) +async def get_unified_files( + project_id: str, + request: Request, + db: Session = Depends(get_db), +): + """ + Get unified view of all files in this project. + Groups files by recording session with full metadata. + Returns HTML partial with hierarchical file listing. 
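+
+    Example (editor's sketch of the per-file disk check performed below;
+    the relative path is illustrative):
+
+        from pathlib import Path
+
+        file_path = Path("data") / "Projects/PROJECT_ID/SESSION_ID/example.rnd"
+        exists_on_disk = file_path.exists()
+        actual_size = file_path.stat().st_size if exists_on_disk else None
+        print(exists_on_disk, actual_size)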
+ """ + from backend.models import DataFile + from pathlib import Path + import json + + # Get all sessions for this project + sessions = db.query(RecordingSession).filter_by( + project_id=project_id + ).order_by(RecordingSession.started_at.desc()).all() + + sessions_data = [] + for session in sessions: + # Get files for this session + files = db.query(DataFile).filter_by(session_id=session.id).all() + + # Skip sessions with no files + if not files: + continue + + # Get session context + unit = None + location = None + if session.unit_id: + unit = db.query(RosterUnit).filter_by(id=session.unit_id).first() + if session.location_id: + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() + + files_data = [] + for file in files: + # Check if file exists on disk + file_path = Path("data") / file.file_path + exists_on_disk = file_path.exists() + + # Get actual file size if exists + actual_size = file_path.stat().st_size if exists_on_disk else None + + # Parse metadata JSON + metadata = {} + try: + if file.file_metadata: + metadata = json.loads(file.file_metadata) + except Exception as e: + logger.warning(f"Failed to parse metadata for file {file.id}: {e}") + + files_data.append({ + "file": file, + "exists_on_disk": exists_on_disk, + "actual_size": actual_size, + "metadata": metadata, + }) + + sessions_data.append({ + "session": session, + "unit": unit, + "location": location, + "files": files_data, + }) + + return templates.TemplateResponse("partials/projects/unified_files.html", { + "request": request, + "project_id": project_id, + "sessions": sessions_data, + }) + + +@router.get("/{project_id}/files/{file_id}/download") +async def download_project_file( + project_id: str, + file_id: str, + db: Session = Depends(get_db), +): + """ + Download a data file from a project. + Returns the file for download. + """ + from backend.models import DataFile + from fastapi.responses import FileResponse + from pathlib import Path + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Build full file path + file_path = Path("data") / file_record.file_path + + if not file_path.exists(): + raise HTTPException(status_code=404, detail="File not found on disk") + + # Extract filename for download + filename = file_path.name + + return FileResponse( + path=str(file_path), + filename=filename, + media_type="application/octet-stream" + ) + + +@router.get("/{project_id}/sessions/{session_id}/download-all") +async def download_session_files( + project_id: str, + session_id: str, + db: Session = Depends(get_db), +): + """ + Download all files from a session as a single zip archive. 
+ """ + from backend.models import DataFile + from pathlib import Path + import zipfile + + # Verify session belongs to this project + session = db.query(RecordingSession).filter_by(id=session_id).first() + if not session: + raise HTTPException(status_code=404, detail="Session not found") + if session.project_id != project_id: + raise HTTPException(status_code=403, detail="Session does not belong to this project") + + # Get all files for this session + files = db.query(DataFile).filter_by(session_id=session_id).all() + if not files: + raise HTTPException(status_code=404, detail="No files found in this session") + + # Create zip in memory + zip_buffer = io.BytesIO() + + # Get session info for folder naming + session_date = session.started_at.strftime('%Y-%m-%d_%H%M') if session.started_at else 'unknown' + + # Get unit and location for naming + unit = db.query(RosterUnit).filter_by(id=session.unit_id).first() if session.unit_id else None + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() if session.location_id else None + + unit_name = unit.id if unit else "unknown_unit" + location_name = location.name.replace(" ", "_") if location else "" + + # Build folder name for zip contents + folder_name = f"{session_date}_{unit_name}" + if location_name: + folder_name += f"_{location_name}" + + with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zip_file: + for file_record in files: + file_path = Path("data") / file_record.file_path + if file_path.exists(): + # Add file to zip with folder structure + arcname = f"{folder_name}/{file_path.name}" + zip_file.write(file_path, arcname) + + zip_buffer.seek(0) + + # Generate filename for the zip + zip_filename = f"{folder_name}.zip" + + return StreamingResponse( + zip_buffer, + media_type="application/zip", + headers={"Content-Disposition": f"attachment; filename={zip_filename}"} + ) + + +@router.delete("/{project_id}/files/{file_id}") +async def delete_project_file( + project_id: str, + file_id: str, + db: Session = Depends(get_db), +): + """ + Delete a single data file from a project. + Removes both the database record and the file on disk. + """ + from backend.models import DataFile + from pathlib import Path + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Delete file from disk if it exists + file_path = Path("data") / file_record.file_path + if file_path.exists(): + file_path.unlink() + + # Delete database record + db.delete(file_record) + db.commit() + + return JSONResponse({"status": "success", "message": "File deleted"}) + + +@router.delete("/{project_id}/sessions/{session_id}") +async def delete_session( + project_id: str, + session_id: str, + db: Session = Depends(get_db), +): + """ + Delete an entire session and all its files. + Removes database records and files on disk. 
+ """ + from backend.models import DataFile + from pathlib import Path + + # Verify session belongs to this project + session = db.query(RecordingSession).filter_by(id=session_id).first() + if not session: + raise HTTPException(status_code=404, detail="Session not found") + if session.project_id != project_id: + raise HTTPException(status_code=403, detail="Session does not belong to this project") + + # Get all files for this session + files = db.query(DataFile).filter_by(session_id=session_id).all() + + # Delete files from disk + deleted_count = 0 + for file_record in files: + file_path = Path("data") / file_record.file_path + if file_path.exists(): + file_path.unlink() + deleted_count += 1 + # Delete database record + db.delete(file_record) + + # Delete the session record + db.delete(session) + db.commit() + + return JSONResponse({ + "status": "success", + "message": f"Session and {deleted_count} file(s) deleted" + }) + + +@router.get("/{project_id}/files/{file_id}/view-rnd", response_class=HTMLResponse) +async def view_rnd_file( + request: Request, + project_id: str, + file_id: str, + db: Session = Depends(get_db), +): + """ + View an RND (sound level meter measurement) file. + Returns a dedicated page with data table and charts. + """ + from backend.models import DataFile + from pathlib import Path + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Build full file path + file_path = Path("data") / file_record.file_path + + if not file_path.exists(): + raise HTTPException(status_code=404, detail="File not found on disk") + + # Get project info + project = db.query(Project).filter_by(id=project_id).first() + + # Get location info if available + location = None + if session.location_id: + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() + + # Get unit info if available + unit = None + if session.unit_id: + unit = db.query(RosterUnit).filter_by(id=session.unit_id).first() + + # Parse file metadata + metadata = {} + if file_record.file_metadata: + try: + metadata = json.loads(file_record.file_metadata) + except json.JSONDecodeError: + pass + + return templates.TemplateResponse("rnd_viewer.html", { + "request": request, + "project": project, + "project_id": project_id, + "file": file_record, + "file_id": file_id, + "session": session, + "location": location, + "unit": unit, + "metadata": metadata, + "filename": file_path.name, + }) + + +@router.get("/{project_id}/files/{file_id}/rnd-data") +async def get_rnd_data( + project_id: str, + file_id: str, + db: Session = Depends(get_db), +): + """ + Get parsed RND file data as JSON. + Returns the measurement data for charts and tables. 
+ """ + from backend.models import DataFile + from pathlib import Path + import csv + import io + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Build full file path + file_path = Path("data") / file_record.file_path + + if not file_path.exists(): + raise HTTPException(status_code=404, detail="File not found on disk") + + # Read and parse the RND file + try: + with open(file_path, 'r', encoding='utf-8', errors='replace') as f: + content = f.read() + + # Parse as CSV + reader = csv.DictReader(io.StringIO(content)) + rows = [] + headers = [] + + for row in reader: + if not headers: + headers = list(row.keys()) + # Clean up values - strip whitespace and handle special values + cleaned_row = {} + for key, value in row.items(): + if key: # Skip empty keys + cleaned_key = key.strip() + cleaned_value = value.strip() if value else '' + # Convert numeric values + if cleaned_value and cleaned_value not in ['-.-', '-', '']: + try: + cleaned_value = float(cleaned_value) + except ValueError: + pass + elif cleaned_value in ['-.-', '-']: + cleaned_value = None + cleaned_row[cleaned_key] = cleaned_value + rows.append(cleaned_row) + + # Detect file type (Leq vs Lp) based on columns + file_type = 'unknown' + if headers: + header_str = ','.join(headers).lower() + if 'leq' in header_str: + file_type = 'leq' # Time-averaged data + elif 'lp(main)' in header_str or 'lp (main)' in header_str: + file_type = 'lp' # Instantaneous data + + # Get summary statistics + summary = { + "total_rows": len(rows), + "file_type": file_type, + "headers": [h.strip() for h in headers if h.strip()], + } + + # Calculate min/max/avg for key metrics if available + metrics_to_summarize = ['Leq(Main)', 'Lmax(Main)', 'Lmin(Main)', 'Lpeak(Main)', 'Lp(Main)'] + for metric in metrics_to_summarize: + values = [row.get(metric) for row in rows if isinstance(row.get(metric), (int, float))] + if values: + summary[f"{metric}_min"] = min(values) + summary[f"{metric}_max"] = max(values) + summary[f"{metric}_avg"] = sum(values) / len(values) + + # Get time range + if rows: + first_time = rows[0].get('Start Time', '') + last_time = rows[-1].get('Start Time', '') + summary['time_start'] = first_time + summary['time_end'] = last_time + + return { + "success": True, + "summary": summary, + "headers": summary["headers"], + "data": rows, + } + + except Exception as e: + logger.error(f"Error parsing RND file: {e}") + raise HTTPException(status_code=500, detail=f"Error parsing file: {str(e)}") + + +@router.get("/{project_id}/files/{file_id}/generate-report") +async def generate_excel_report( + project_id: str, + file_id: str, + report_title: str = Query("Background Noise Study", description="Title for the report"), + location_name: str = Query("", description="Location name (e.g., 'NRL 1 - West Side')"), + project_name: str = Query("", description="Project name override"), + client_name: str = Query("", description="Client name for report header"), + start_time: str = Query("", description="Filter start time (HH:MM format, e.g., '19:00')"), + end_time: str = Query("", description="Filter end time (HH:MM format, e.g., '07:00')"), + start_date: str = Query("", 
description="Filter start date (YYYY-MM-DD format)"), + end_date: str = Query("", description="Filter end date (YYYY-MM-DD format)"), + db: Session = Depends(get_db), +): + """ + Generate an Excel report from an RND file. + + Creates a formatted Excel workbook with: + - Title and location headers + - Data table (Test #, Date, Time, LAmax, LA01, LA10, Comments) + - Line chart visualization + - Time period summary statistics + + Time filtering: + - start_time/end_time: Filter to time window (handles overnight like 19:00-07:00) + - start_date/end_date: Filter to date range + + Column mapping from RND to Report: + - Lmax(Main) -> LAmax (dBA) + - LN1(Main) -> LA01 (dBA) [L1 percentile] + - LN2(Main) -> LA10 (dBA) [L10 percentile] + """ + from backend.models import DataFile + from pathlib import Path + import csv + + try: + import openpyxl + from openpyxl.chart import LineChart, Reference + from openpyxl.chart.label import DataLabelList + from openpyxl.styles import Font, Alignment, Border, Side, PatternFill + from openpyxl.utils import get_column_letter + except ImportError: + raise HTTPException( + status_code=500, + detail="openpyxl is not installed. Run: pip install openpyxl" + ) + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Get related data for report context + project = db.query(Project).filter_by(id=project_id).first() + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() if session.location_id else None + + # Build full file path + file_path = Path("data") / file_record.file_path + if not file_path.exists(): + raise HTTPException(status_code=404, detail="File not found on disk") + + # Validate this is a Leq file (contains '_Leq_' in path) + # Lp files (instantaneous 100ms readings) don't have the LN percentile data needed for reports + if '_Leq_' not in file_record.file_path: + raise HTTPException( + status_code=400, + detail="Reports can only be generated from Leq files (15-minute averaged data). This appears to be an Lp (instantaneous) file." 
+ ) + + # Read and parse the Leq RND file + try: + with open(file_path, 'r', encoding='utf-8', errors='replace') as f: + content = f.read() + + reader = csv.DictReader(io.StringIO(content)) + rnd_rows = [] + for row in reader: + cleaned_row = {} + for key, value in row.items(): + if key: + cleaned_key = key.strip() + cleaned_value = value.strip() if value else '' + if cleaned_value and cleaned_value not in ['-.-', '-', '']: + try: + cleaned_value = float(cleaned_value) + except ValueError: + pass + elif cleaned_value in ['-.-', '-']: + cleaned_value = None + cleaned_row[cleaned_key] = cleaned_value + rnd_rows.append(cleaned_row) + + if not rnd_rows: + raise HTTPException(status_code=400, detail="No data found in RND file") + + except Exception as e: + logger.error(f"Error reading RND file: {e}") + raise HTTPException(status_code=500, detail=f"Error reading file: {str(e)}") + + # Apply time and date filtering + def filter_rows_by_time(rows, filter_start_time, filter_end_time, filter_start_date, filter_end_date): + """Filter rows by time window and date range.""" + if not filter_start_time and not filter_end_time and not filter_start_date and not filter_end_date: + return rows + + filtered = [] + + # Parse time filters + start_hour = start_minute = end_hour = end_minute = None + if filter_start_time: + try: + parts = filter_start_time.split(':') + start_hour = int(parts[0]) + start_minute = int(parts[1]) if len(parts) > 1 else 0 + except (ValueError, IndexError): + pass + + if filter_end_time: + try: + parts = filter_end_time.split(':') + end_hour = int(parts[0]) + end_minute = int(parts[1]) if len(parts) > 1 else 0 + except (ValueError, IndexError): + pass + + # Parse date filters + start_dt = end_dt = None + if filter_start_date: + try: + start_dt = datetime.strptime(filter_start_date, '%Y-%m-%d').date() + except ValueError: + pass + if filter_end_date: + try: + end_dt = datetime.strptime(filter_end_date, '%Y-%m-%d').date() + except ValueError: + pass + + for row in rows: + start_time_str = row.get('Start Time', '') + if not start_time_str: + continue + + try: + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + row_date = dt.date() + row_hour = dt.hour + row_minute = dt.minute + + # Date filtering + if start_dt and row_date < start_dt: + continue + if end_dt and row_date > end_dt: + continue + + # Time filtering (handle overnight ranges like 19:00-07:00) + if start_hour is not None and end_hour is not None: + row_time_minutes = row_hour * 60 + row_minute + start_time_minutes = start_hour * 60 + start_minute + end_time_minutes = end_hour * 60 + end_minute + + if start_time_minutes > end_time_minutes: + # Overnight range (e.g., 19:00-07:00) + if not (row_time_minutes >= start_time_minutes or row_time_minutes < end_time_minutes): + continue + else: + # Same day range (e.g., 07:00-19:00) + if not (start_time_minutes <= row_time_minutes < end_time_minutes): + continue + + filtered.append(row) + except ValueError: + # If we can't parse the time, include the row anyway + filtered.append(row) + + return filtered + + # Apply filters + original_count = len(rnd_rows) + rnd_rows = filter_rows_by_time(rnd_rows, start_time, end_time, start_date, end_date) + + if not rnd_rows: + time_filter_desc = "" + if start_time and end_time: + time_filter_desc = f" between {start_time} and {end_time}" + if start_date or end_date: + time_filter_desc += f" from {start_date or 'start'} to {end_date or 'end'}" + raise HTTPException( + status_code=400, + detail=f"No data found after applying 
filters{time_filter_desc}. Original file had {original_count} rows." + ) + + # Create Excel workbook + wb = openpyxl.Workbook() + ws = wb.active + ws.title = "Sound Level Data" + + # Define styles + title_font = Font(bold=True, size=14) + header_font = Font(bold=True, size=10) + thin_border = Border( + left=Side(style='thin'), + right=Side(style='thin'), + top=Side(style='thin'), + bottom=Side(style='thin') + ) + header_fill = PatternFill(start_color="DAEEF3", end_color="DAEEF3", fill_type="solid") + + # Row 1: Report title + final_project_name = project_name if project_name else (project.name if project else "") + final_title = report_title + if final_project_name: + final_title = f"{report_title} - {final_project_name}" + ws['A1'] = final_title + ws['A1'].font = title_font + ws.merge_cells('A1:G1') + + # Row 2: Client name (if provided) + if client_name: + ws['A2'] = f"Client: {client_name}" + ws['A2'].font = Font(italic=True, size=10) + + # Row 3: Location name + final_location = location_name + if not final_location and location: + final_location = location.name + if final_location: + ws['A3'] = final_location + ws['A3'].font = Font(bold=True, size=11) + + # Row 4: Time filter info (if applied) + if start_time and end_time: + filter_info = f"Time Filter: {start_time} - {end_time}" + if start_date or end_date: + filter_info += f" | Date Range: {start_date or 'start'} to {end_date or 'end'}" + filter_info += f" | {len(rnd_rows)} of {original_count} rows" + ws['A4'] = filter_info + ws['A4'].font = Font(italic=True, size=9, color="666666") + + # Row 7: Headers + headers = ['Test Increment #', 'Date', 'Time', 'LAmax (dBA)', 'LA01 (dBA)', 'LA10 (dBA)', 'Comments'] + for col, header in enumerate(headers, 1): + cell = ws.cell(row=7, column=col, value=header) + cell.font = header_font + cell.border = thin_border + cell.fill = header_fill + cell.alignment = Alignment(horizontal='center') + + # Set column widths + column_widths = [16, 12, 10, 12, 12, 12, 40] + for i, width in enumerate(column_widths, 1): + ws.column_dimensions[get_column_letter(i)].width = width + + # Data rows starting at row 8 + data_start_row = 8 + for idx, row in enumerate(rnd_rows, 1): + data_row = data_start_row + idx - 1 + + # Test Increment # + ws.cell(row=data_row, column=1, value=idx).border = thin_border + + # Parse the Start Time to get Date and Time + start_time_str = row.get('Start Time', '') + if start_time_str: + try: + # Format: "2025/12/26 20:23:38" + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + ws.cell(row=data_row, column=2, value=dt.date()) + ws.cell(row=data_row, column=3, value=dt.time()) + except ValueError: + ws.cell(row=data_row, column=2, value=start_time_str) + ws.cell(row=data_row, column=3, value='') + else: + ws.cell(row=data_row, column=2, value='') + ws.cell(row=data_row, column=3, value='') + + # LAmax - from Lmax(Main) + lmax = row.get('Lmax(Main)') + ws.cell(row=data_row, column=4, value=lmax if lmax else '').border = thin_border + + # LA01 - from LN1(Main) + ln1 = row.get('LN1(Main)') + ws.cell(row=data_row, column=5, value=ln1 if ln1 else '').border = thin_border + + # LA10 - from LN2(Main) + ln2 = row.get('LN2(Main)') + ws.cell(row=data_row, column=6, value=ln2 if ln2 else '').border = thin_border + + # Comments (empty for now, can be populated) + ws.cell(row=data_row, column=7, value='').border = thin_border + + # Apply borders to date/time cells + ws.cell(row=data_row, column=2).border = thin_border + ws.cell(row=data_row, column=3).border = thin_border + + data_end_row = 
data_start_row + len(rnd_rows) - 1 + + # Add Line Chart + chart = LineChart() + chart.title = f"{final_location or 'Sound Level Data'} - Background Noise Study" + chart.style = 10 + chart.y_axis.title = "Sound Level (dBA)" + chart.x_axis.title = "Test Increment" + chart.height = 12 + chart.width = 20 + + # Data references (LAmax, LA01, LA10 are columns D, E, F) + data_ref = Reference(ws, min_col=4, min_row=7, max_col=6, max_row=data_end_row) + categories = Reference(ws, min_col=1, min_row=data_start_row, max_row=data_end_row) + + chart.add_data(data_ref, titles_from_data=True) + chart.set_categories(categories) + + # Style the series + if len(chart.series) >= 3: + chart.series[0].graphicalProperties.line.solidFill = "FF0000" # LAmax - Red + chart.series[1].graphicalProperties.line.solidFill = "00B050" # LA01 - Green + chart.series[2].graphicalProperties.line.solidFill = "0070C0" # LA10 - Blue + + # Position chart to the right of data + ws.add_chart(chart, "I3") + + # Add summary statistics section below the data + summary_row = data_end_row + 3 + ws.cell(row=summary_row, column=1, value="Summary Statistics").font = Font(bold=True, size=12) + + # Calculate time-period statistics + time_periods = { + 'Evening (7PM-10PM)': [], + 'Nighttime (10PM-7AM)': [], + 'Morning (7AM-12PM)': [], + 'Daytime (12PM-7PM)': [] + } + + for row in rnd_rows: + start_time_str = row.get('Start Time', '') + if start_time_str: + try: + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + hour = dt.hour + + lmax = row.get('Lmax(Main)') + ln1 = row.get('LN1(Main)') + ln2 = row.get('LN2(Main)') + + if isinstance(lmax, (int, float)) and isinstance(ln1, (int, float)) and isinstance(ln2, (int, float)): + data_point = {'lmax': lmax, 'ln1': ln1, 'ln2': ln2} + + if 19 <= hour < 22: + time_periods['Evening (7PM-10PM)'].append(data_point) + elif hour >= 22 or hour < 7: + time_periods['Nighttime (10PM-7AM)'].append(data_point) + elif 7 <= hour < 12: + time_periods['Morning (7AM-12PM)'].append(data_point) + else: # 12-19 + time_periods['Daytime (12PM-7PM)'].append(data_point) + except ValueError: + continue + + # Summary table headers + summary_row += 2 + summary_headers = ['Time Period', 'Samples', 'LAmax Avg', 'LA01 Avg', 'LA10 Avg'] + for col, header in enumerate(summary_headers, 1): + cell = ws.cell(row=summary_row, column=col, value=header) + cell.font = header_font + cell.fill = header_fill + cell.border = thin_border + + # Summary data + summary_row += 1 + for period_name, samples in time_periods.items(): + ws.cell(row=summary_row, column=1, value=period_name).border = thin_border + ws.cell(row=summary_row, column=2, value=len(samples)).border = thin_border + + if samples: + avg_lmax = sum(s['lmax'] for s in samples) / len(samples) + avg_ln1 = sum(s['ln1'] for s in samples) / len(samples) + avg_ln2 = sum(s['ln2'] for s in samples) / len(samples) + ws.cell(row=summary_row, column=3, value=round(avg_lmax, 1)).border = thin_border + ws.cell(row=summary_row, column=4, value=round(avg_ln1, 1)).border = thin_border + ws.cell(row=summary_row, column=5, value=round(avg_ln2, 1)).border = thin_border + else: + ws.cell(row=summary_row, column=3, value='-').border = thin_border + ws.cell(row=summary_row, column=4, value='-').border = thin_border + ws.cell(row=summary_row, column=5, value='-').border = thin_border + + summary_row += 1 + + # Overall summary + summary_row += 1 + ws.cell(row=summary_row, column=1, value='Overall').font = Font(bold=True) + ws.cell(row=summary_row, column=1).border = thin_border + 
ws.cell(row=summary_row, column=2, value=len(rnd_rows)).border = thin_border + + all_lmax = [r.get('Lmax(Main)') for r in rnd_rows if isinstance(r.get('Lmax(Main)'), (int, float))] + all_ln1 = [r.get('LN1(Main)') for r in rnd_rows if isinstance(r.get('LN1(Main)'), (int, float))] + all_ln2 = [r.get('LN2(Main)') for r in rnd_rows if isinstance(r.get('LN2(Main)'), (int, float))] + + if all_lmax: + ws.cell(row=summary_row, column=3, value=round(sum(all_lmax) / len(all_lmax), 1)).border = thin_border + if all_ln1: + ws.cell(row=summary_row, column=4, value=round(sum(all_ln1) / len(all_ln1), 1)).border = thin_border + if all_ln2: + ws.cell(row=summary_row, column=5, value=round(sum(all_ln2) / len(all_ln2), 1)).border = thin_border + + # Save to buffer + output = io.BytesIO() + wb.save(output) + output.seek(0) + + # Generate filename + filename = file_record.file_path.split('/')[-1].replace('.rnd', '') + if location: + filename = f"{location.name}_{filename}" + filename = f"{filename}_report.xlsx" + # Clean filename + filename = "".join(c for c in filename if c.isalnum() or c in ('_', '-', '.')).rstrip() + + return StreamingResponse( + output, + media_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", + headers={"Content-Disposition": f'attachment; filename="{filename}"'} + ) + + +@router.get("/{project_id}/files/{file_id}/preview-report") +async def preview_report_data( + request: Request, + project_id: str, + file_id: str, + report_title: str = Query("Background Noise Study", description="Title for the report"), + location_name: str = Query("", description="Location name"), + project_name: str = Query("", description="Project name override"), + client_name: str = Query("", description="Client name"), + start_time: str = Query("", description="Filter start time (HH:MM format)"), + end_time: str = Query("", description="Filter end time (HH:MM format)"), + start_date: str = Query("", description="Filter start date (YYYY-MM-DD format)"), + end_date: str = Query("", description="Filter end date (YYYY-MM-DD format)"), + db: Session = Depends(get_db), +): + """ + Preview report data for editing in jspreadsheet. + Returns an HTML page with the spreadsheet editor. + """ + from backend.models import DataFile, ReportTemplate + from pathlib import Path + import csv + + # Get the file record + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + # Verify file belongs to this project + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + # Get related data for report context + project = db.query(Project).filter_by(id=project_id).first() + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() if session.location_id else None + + # Build full file path + file_path = Path("data") / file_record.file_path + if not file_path.exists(): + raise HTTPException(status_code=404, detail="File not found on disk") + + # Validate this is a Leq file + if '_Leq_' not in file_record.file_path: + raise HTTPException( + status_code=400, + detail="Reports can only be generated from Leq files (15-minute averaged data)." 
+ ) + + # Read and parse the Leq RND file + try: + with open(file_path, 'r', encoding='utf-8', errors='replace') as f: + content = f.read() + + reader = csv.DictReader(io.StringIO(content)) + rnd_rows = [] + for row in reader: + cleaned_row = {} + for key, value in row.items(): + if key: + cleaned_key = key.strip() + cleaned_value = value.strip() if value else '' + if cleaned_value and cleaned_value not in ['-.-', '-', '']: + try: + cleaned_value = float(cleaned_value) + except ValueError: + pass + elif cleaned_value in ['-.-', '-']: + cleaned_value = None + cleaned_row[cleaned_key] = cleaned_value + rnd_rows.append(cleaned_row) + + if not rnd_rows: + raise HTTPException(status_code=400, detail="No data found in RND file") + + except Exception as e: + logger.error(f"Error reading RND file: {e}") + raise HTTPException(status_code=500, detail=f"Error reading file: {str(e)}") + + # Apply time and date filtering (same logic as generate-report) + def filter_rows(rows, filter_start_time, filter_end_time, filter_start_date, filter_end_date): + if not filter_start_time and not filter_end_time and not filter_start_date and not filter_end_date: + return rows + + filtered = [] + start_hour = start_minute = end_hour = end_minute = None + + if filter_start_time: + try: + parts = filter_start_time.split(':') + start_hour = int(parts[0]) + start_minute = int(parts[1]) if len(parts) > 1 else 0 + except (ValueError, IndexError): + pass + + if filter_end_time: + try: + parts = filter_end_time.split(':') + end_hour = int(parts[0]) + end_minute = int(parts[1]) if len(parts) > 1 else 0 + except (ValueError, IndexError): + pass + + start_dt = end_dt = None + if filter_start_date: + try: + start_dt = datetime.strptime(filter_start_date, '%Y-%m-%d').date() + except ValueError: + pass + if filter_end_date: + try: + end_dt = datetime.strptime(filter_end_date, '%Y-%m-%d').date() + except ValueError: + pass + + for row in rows: + start_time_str = row.get('Start Time', '') + if not start_time_str: + continue + + try: + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + row_date = dt.date() + row_hour = dt.hour + row_minute = dt.minute + + if start_dt and row_date < start_dt: + continue + if end_dt and row_date > end_dt: + continue + + if start_hour is not None and end_hour is not None: + row_time_minutes = row_hour * 60 + row_minute + start_time_minutes = start_hour * 60 + start_minute + end_time_minutes = end_hour * 60 + end_minute + + if start_time_minutes > end_time_minutes: + if not (row_time_minutes >= start_time_minutes or row_time_minutes < end_time_minutes): + continue + else: + if not (start_time_minutes <= row_time_minutes < end_time_minutes): + continue + + filtered.append(row) + except ValueError: + filtered.append(row) + + return filtered + + original_count = len(rnd_rows) + rnd_rows = filter_rows(rnd_rows, start_time, end_time, start_date, end_date) + + # Convert to spreadsheet data format (array of arrays) + spreadsheet_data = [] + for idx, row in enumerate(rnd_rows, 1): + start_time_str = row.get('Start Time', '') + date_str = '' + time_str = '' + if start_time_str: + try: + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + date_str = dt.strftime('%Y-%m-%d') + time_str = dt.strftime('%H:%M:%S') + except ValueError: + date_str = start_time_str + time_str = '' + + lmax = row.get('Lmax(Main)', '') + ln1 = row.get('LN1(Main)', '') + ln2 = row.get('LN2(Main)', '') + + spreadsheet_data.append([ + idx, # Test # + date_str, + time_str, + lmax if lmax else '', + ln1 if ln1 else '', + ln2 
if ln2 else '', + '' # Comments + ]) + + # Prepare context data + final_project_name = project_name if project_name else (project.name if project else "") + final_location = location_name if location_name else (location.name if location else "") + + # Get templates for the dropdown + templates = db.query(ReportTemplate).all() + + return templates.TemplateResponse("report_preview.html", { + "request": request, + "project_id": project_id, + "file_id": file_id, + "project": project, + "location": location, + "file": file_record, + "spreadsheet_data": spreadsheet_data, + "report_title": report_title, + "project_name": final_project_name, + "client_name": client_name, + "location_name": final_location, + "start_time": start_time, + "end_time": end_time, + "start_date": start_date, + "end_date": end_date, + "original_count": original_count, + "filtered_count": len(rnd_rows), + "templates": templates, + }) + + +@router.post("/{project_id}/files/{file_id}/generate-from-preview") +async def generate_report_from_preview( + project_id: str, + file_id: str, + data: dict, + db: Session = Depends(get_db), +): + """ + Generate an Excel report from edited spreadsheet data. + Accepts the edited data from jspreadsheet and creates the final Excel file. + """ + from backend.models import DataFile + from pathlib import Path + + try: + import openpyxl + from openpyxl.chart import LineChart, Reference + from openpyxl.styles import Font, Alignment, Border, Side, PatternFill + from openpyxl.utils import get_column_letter + except ImportError: + raise HTTPException(status_code=500, detail="openpyxl is not installed") + + # Get the file record for filename generation + file_record = db.query(DataFile).filter_by(id=file_id).first() + if not file_record: + raise HTTPException(status_code=404, detail="File not found") + + session = db.query(RecordingSession).filter_by(id=file_record.session_id).first() + if not session or session.project_id != project_id: + raise HTTPException(status_code=403, detail="File does not belong to this project") + + project = db.query(Project).filter_by(id=project_id).first() + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() if session.location_id else None + + # Extract data from request + spreadsheet_data = data.get('data', []) + report_title = data.get('report_title', 'Background Noise Study') + project_name = data.get('project_name', project.name if project else '') + client_name = data.get('client_name', '') + location_name = data.get('location_name', location.name if location else '') + time_filter = data.get('time_filter', '') + + if not spreadsheet_data: + raise HTTPException(status_code=400, detail="No data provided") + + # Create Excel workbook + wb = openpyxl.Workbook() + ws = wb.active + ws.title = "Sound Level Data" + + # Styles + title_font = Font(bold=True, size=14) + header_font = Font(bold=True, size=10) + thin_border = Border( + left=Side(style='thin'), + right=Side(style='thin'), + top=Side(style='thin'), + bottom=Side(style='thin') + ) + header_fill = PatternFill(start_color="DAEEF3", end_color="DAEEF3", fill_type="solid") + + # Row 1: Title + final_title = f"{report_title} - {project_name}" if project_name else report_title + ws['A1'] = final_title + ws['A1'].font = title_font + ws.merge_cells('A1:G1') + + # Row 2: Client + if client_name: + ws['A2'] = f"Client: {client_name}" + ws['A2'].font = Font(italic=True, size=10) + + # Row 3: Location + if location_name: + ws['A3'] = location_name + ws['A3'].font = Font(bold=True, size=11) + + # 
Row 4: Time filter info + if time_filter: + ws['A4'] = time_filter + ws['A4'].font = Font(italic=True, size=9, color="666666") + + # Row 7: Headers + headers = ['Test Increment #', 'Date', 'Time', 'LAmax (dBA)', 'LA01 (dBA)', 'LA10 (dBA)', 'Comments'] + for col, header in enumerate(headers, 1): + cell = ws.cell(row=7, column=col, value=header) + cell.font = header_font + cell.border = thin_border + cell.fill = header_fill + cell.alignment = Alignment(horizontal='center') + + # Column widths + column_widths = [16, 12, 10, 12, 12, 12, 40] + for i, width in enumerate(column_widths, 1): + ws.column_dimensions[get_column_letter(i)].width = width + + # Data rows + data_start_row = 8 + for idx, row_data in enumerate(spreadsheet_data): + data_row = data_start_row + idx + for col, value in enumerate(row_data, 1): + cell = ws.cell(row=data_row, column=col, value=value if value != '' else None) + cell.border = thin_border + + data_end_row = data_start_row + len(spreadsheet_data) - 1 + + # Add chart if we have data + if len(spreadsheet_data) > 0: + chart = LineChart() + chart.title = f"{location_name or 'Sound Level Data'} - Background Noise Study" + chart.style = 10 + chart.y_axis.title = "Sound Level (dBA)" + chart.x_axis.title = "Test Increment" + chart.height = 12 + chart.width = 20 + + data_ref = Reference(ws, min_col=4, min_row=7, max_col=6, max_row=data_end_row) + categories = Reference(ws, min_col=1, min_row=data_start_row, max_row=data_end_row) + + chart.add_data(data_ref, titles_from_data=True) + chart.set_categories(categories) + + if len(chart.series) >= 3: + chart.series[0].graphicalProperties.line.solidFill = "FF0000" + chart.series[1].graphicalProperties.line.solidFill = "00B050" + chart.series[2].graphicalProperties.line.solidFill = "0070C0" + + ws.add_chart(chart, "I3") + + # Save to buffer + output = io.BytesIO() + wb.save(output) + output.seek(0) + + # Generate filename + filename = file_record.file_path.split('/')[-1].replace('.rnd', '') + if location: + filename = f"{location.name}_{filename}" + filename = f"{filename}_report.xlsx" + filename = "".join(c for c in filename if c.isalnum() or c in ('_', '-', '.')).rstrip() + + return StreamingResponse( + output, + media_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", + headers={"Content-Disposition": f'attachment; filename="{filename}"'} + ) + + +@router.get("/{project_id}/generate-combined-report") +async def generate_combined_excel_report( + project_id: str, + report_title: str = Query("Background Noise Study", description="Title for the report"), + db: Session = Depends(get_db), +): + """ + Generate a combined Excel report from all RND files in a project. + + Creates a multi-sheet Excel workbook with: + - One sheet per location/RND file + - Data tables with LAmax, LA01, LA10 + - Line charts for each location + - Summary sheet combining all locations + + Column mapping from RND to Report: + - Lmax(Main) -> LAmax (dBA) + - LN1(Main) -> LA01 (dBA) [L1 percentile] + - LN2(Main) -> LA10 (dBA) [L10 percentile] + """ + from backend.models import DataFile + from pathlib import Path + import csv + + try: + import openpyxl + from openpyxl.chart import LineChart, Reference + from openpyxl.styles import Font, Alignment, Border, Side, PatternFill + from openpyxl.utils import get_column_letter + except ImportError: + raise HTTPException( + status_code=500, + detail="openpyxl is not installed. 
Run: pip install openpyxl" + ) + + # Get project + project = db.query(Project).filter_by(id=project_id).first() + if not project: + raise HTTPException(status_code=404, detail="Project not found") + + # Get all sessions with measurement files + sessions = db.query(RecordingSession).filter_by(project_id=project_id).all() + + # Collect all Leq RND files grouped by location + # Only include files with '_Leq_' in the path (15-minute averaged data) + # Exclude Lp files (instantaneous 100ms readings) + location_files = {} + for session in sessions: + files = db.query(DataFile).filter_by(session_id=session.id).all() + for file in files: + # Only include Leq files for reports (contain '_Leq_' in path) + is_leq_file = file.file_path and '_Leq_' in file.file_path and file.file_path.endswith('.rnd') + if is_leq_file: + location = db.query(MonitoringLocation).filter_by(id=session.location_id).first() if session.location_id else None + location_name = location.name if location else f"Session {session.id[:8]}" + + if location_name not in location_files: + location_files[location_name] = [] + location_files[location_name].append({ + 'file': file, + 'session': session, + 'location': location + }) + + if not location_files: + raise HTTPException(status_code=404, detail="No Leq measurement files found in project. Reports require Leq data (files with '_Leq_' in the name).") + + # Define styles + title_font = Font(bold=True, size=14) + header_font = Font(bold=True, size=10) + thin_border = Border( + left=Side(style='thin'), + right=Side(style='thin'), + top=Side(style='thin'), + bottom=Side(style='thin') + ) + header_fill = PatternFill(start_color="DAEEF3", end_color="DAEEF3", fill_type="solid") + + # Create Excel workbook + wb = openpyxl.Workbook() + + # Remove default sheet + wb.remove(wb.active) + + # Track all data for summary + all_location_summaries = [] + + # Create a sheet for each location + for location_name, file_list in location_files.items(): + # Sanitize sheet name (max 31 chars, no special chars) + safe_sheet_name = "".join(c for c in location_name if c.isalnum() or c in (' ', '-', '_'))[:31] + ws = wb.create_sheet(title=safe_sheet_name) + + # Row 1: Report title + final_title = f"{report_title} - {project.name}" + ws['A1'] = final_title + ws['A1'].font = title_font + ws.merge_cells('A1:G1') + + # Row 3: Location name + ws['A3'] = location_name + ws['A3'].font = Font(bold=True, size=11) + + # Row 7: Headers + headers = ['Test Increment #', 'Date', 'Time', 'LAmax (dBA)', 'LA01 (dBA)', 'LA10 (dBA)', 'Comments'] + for col, header in enumerate(headers, 1): + cell = ws.cell(row=7, column=col, value=header) + cell.font = header_font + cell.border = thin_border + cell.fill = header_fill + cell.alignment = Alignment(horizontal='center') + + # Set column widths + column_widths = [16, 12, 10, 12, 12, 12, 40] + for i, width in enumerate(column_widths, 1): + ws.column_dimensions[get_column_letter(i)].width = width + + # Combine data from all files for this location + all_rnd_rows = [] + for file_info in file_list: + file = file_info['file'] + file_path = Path("data") / file.file_path + + if not file_path.exists(): + continue + + try: + with open(file_path, 'r', encoding='utf-8', errors='replace') as f: + content = f.read() + + reader = csv.DictReader(io.StringIO(content)) + for row in reader: + cleaned_row = {} + for key, value in row.items(): + if key: + cleaned_key = key.strip() + cleaned_value = value.strip() if value else '' + if cleaned_value and cleaned_value not in ['-.-', '-', '']: + try: + 
cleaned_value = float(cleaned_value) + except ValueError: + pass + elif cleaned_value in ['-.-', '-']: + cleaned_value = None + cleaned_row[cleaned_key] = cleaned_value + all_rnd_rows.append(cleaned_row) + except Exception as e: + logger.warning(f"Error reading file {file.file_path}: {e}") + continue + + if not all_rnd_rows: + continue + + # Sort by start time + all_rnd_rows.sort(key=lambda r: r.get('Start Time', '')) + + # Data rows starting at row 8 + data_start_row = 8 + for idx, row in enumerate(all_rnd_rows, 1): + data_row = data_start_row + idx - 1 + + ws.cell(row=data_row, column=1, value=idx).border = thin_border + + start_time_str = row.get('Start Time', '') + if start_time_str: + try: + dt = datetime.strptime(start_time_str, '%Y/%m/%d %H:%M:%S') + ws.cell(row=data_row, column=2, value=dt.date()) + ws.cell(row=data_row, column=3, value=dt.time()) + except ValueError: + ws.cell(row=data_row, column=2, value=start_time_str) + ws.cell(row=data_row, column=3, value='') + else: + ws.cell(row=data_row, column=2, value='') + ws.cell(row=data_row, column=3, value='') + + lmax = row.get('Lmax(Main)') + ws.cell(row=data_row, column=4, value=lmax if lmax else '').border = thin_border + + ln1 = row.get('LN1(Main)') + ws.cell(row=data_row, column=5, value=ln1 if ln1 else '').border = thin_border + + ln2 = row.get('LN2(Main)') + ws.cell(row=data_row, column=6, value=ln2 if ln2 else '').border = thin_border + + ws.cell(row=data_row, column=7, value='').border = thin_border + ws.cell(row=data_row, column=2).border = thin_border + ws.cell(row=data_row, column=3).border = thin_border + + data_end_row = data_start_row + len(all_rnd_rows) - 1 + + # Add Line Chart + chart = LineChart() + chart.title = f"{location_name}" + chart.style = 10 + chart.y_axis.title = "Sound Level (dBA)" + chart.x_axis.title = "Test Increment" + chart.height = 12 + chart.width = 20 + + data_ref = Reference(ws, min_col=4, min_row=7, max_col=6, max_row=data_end_row) + categories = Reference(ws, min_col=1, min_row=data_start_row, max_row=data_end_row) + + chart.add_data(data_ref, titles_from_data=True) + chart.set_categories(categories) + + if len(chart.series) >= 3: + chart.series[0].graphicalProperties.line.solidFill = "FF0000" + chart.series[1].graphicalProperties.line.solidFill = "00B050" + chart.series[2].graphicalProperties.line.solidFill = "0070C0" + + ws.add_chart(chart, "I3") + + # Calculate summary for this location + all_lmax = [r.get('Lmax(Main)') for r in all_rnd_rows if isinstance(r.get('Lmax(Main)'), (int, float))] + all_ln1 = [r.get('LN1(Main)') for r in all_rnd_rows if isinstance(r.get('LN1(Main)'), (int, float))] + all_ln2 = [r.get('LN2(Main)') for r in all_rnd_rows if isinstance(r.get('LN2(Main)'), (int, float))] + + all_location_summaries.append({ + 'location': location_name, + 'samples': len(all_rnd_rows), + 'lmax_avg': round(sum(all_lmax) / len(all_lmax), 1) if all_lmax else None, + 'ln1_avg': round(sum(all_ln1) / len(all_ln1), 1) if all_ln1 else None, + 'ln2_avg': round(sum(all_ln2) / len(all_ln2), 1) if all_ln2 else None, + }) + + # Create Summary sheet at the beginning + summary_ws = wb.create_sheet(title="Summary", index=0) + + summary_ws['A1'] = f"{report_title} - {project.name} - Summary" + summary_ws['A1'].font = title_font + summary_ws.merge_cells('A1:E1') + + summary_headers = ['Location', 'Samples', 'LAmax Avg', 'LA01 Avg', 'LA10 Avg'] + for col, header in enumerate(summary_headers, 1): + cell = summary_ws.cell(row=3, column=col, value=header) + cell.font = header_font + cell.fill = header_fill + 
cell.border = thin_border + + for i, width in enumerate([30, 10, 12, 12, 12], 1): + summary_ws.column_dimensions[get_column_letter(i)].width = width + + for idx, loc_summary in enumerate(all_location_summaries, 4): + summary_ws.cell(row=idx, column=1, value=loc_summary['location']).border = thin_border + summary_ws.cell(row=idx, column=2, value=loc_summary['samples']).border = thin_border + summary_ws.cell(row=idx, column=3, value=loc_summary['lmax_avg'] or '-').border = thin_border + summary_ws.cell(row=idx, column=4, value=loc_summary['ln1_avg'] or '-').border = thin_border + summary_ws.cell(row=idx, column=5, value=loc_summary['ln2_avg'] or '-').border = thin_border + + # Save to buffer + output = io.BytesIO() + wb.save(output) + output.seek(0) + + # Generate filename + project_name_clean = "".join(c for c in project.name if c.isalnum() or c in ('_', '-', ' ')).strip() + filename = f"{project_name_clean}_combined_report.xlsx" + filename = filename.replace(' ', '_') + + return StreamingResponse( + output, + media_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", + headers={"Content-Disposition": f'attachment; filename="{filename}"'} + ) + + @router.get("/types/list", response_class=HTMLResponse) async def get_project_types(request: Request, db: Session = Depends(get_db)): """ diff --git a/backend/routers/recurring_schedules.py b/backend/routers/recurring_schedules.py new file mode 100644 index 0000000..b784c5d --- /dev/null +++ b/backend/routers/recurring_schedules.py @@ -0,0 +1,465 @@ +""" +Recurring Schedules Router + +API endpoints for managing recurring monitoring schedules. +""" + +from fastapi import APIRouter, Request, Depends, HTTPException, Query +from fastapi.responses import HTMLResponse, JSONResponse +from sqlalchemy.orm import Session +from typing import Optional +from datetime import datetime +import json + +from backend.database import get_db +from backend.models import RecurringSchedule, MonitoringLocation, Project, RosterUnit +from backend.services.recurring_schedule_service import get_recurring_schedule_service +from backend.templates_config import templates + +router = APIRouter(prefix="/api/projects/{project_id}/recurring-schedules", tags=["recurring-schedules"]) + + +# ============================================================================ +# List and Get +# ============================================================================ + +@router.get("/") +async def list_recurring_schedules( + project_id: str, + db: Session = Depends(get_db), + enabled_only: bool = Query(False), +): + """ + List all recurring schedules for a project. 
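    Illustrative usage (a sketch only — the base URL/port and the project id below are
    placeholders, not taken from this change; httpx is used here because it is already a
    dependency elsewhere in this codebase):

        import httpx

        # List only the enabled schedules for one project.
        resp = httpx.get(
            "http://localhost:8000/api/projects/<project-id>/recurring-schedules/",
            params={"enabled_only": True},
        )
        payload = resp.json()
        print(payload["count"], "enabled schedule(s)")
        for s in payload["schedules"]:
            print(s["name"], s["schedule_type"], s["next_occurrence"])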
+ """ + project = db.query(Project).filter_by(id=project_id).first() + if not project: + raise HTTPException(status_code=404, detail="Project not found") + + query = db.query(RecurringSchedule).filter_by(project_id=project_id) + if enabled_only: + query = query.filter_by(enabled=True) + + schedules = query.order_by(RecurringSchedule.created_at.desc()).all() + + return { + "schedules": [ + { + "id": s.id, + "name": s.name, + "schedule_type": s.schedule_type, + "device_type": s.device_type, + "location_id": s.location_id, + "unit_id": s.unit_id, + "enabled": s.enabled, + "weekly_pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None, + "interval_type": s.interval_type, + "cycle_time": s.cycle_time, + "include_download": s.include_download, + "timezone": s.timezone, + "next_occurrence": s.next_occurrence.isoformat() if s.next_occurrence else None, + "last_generated_at": s.last_generated_at.isoformat() if s.last_generated_at else None, + "created_at": s.created_at.isoformat() if s.created_at else None, + } + for s in schedules + ], + "count": len(schedules), + } + + +@router.get("/{schedule_id}") +async def get_recurring_schedule( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Get a specific recurring schedule. + """ + schedule = db.query(RecurringSchedule).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + # Get related location and unit info + location = db.query(MonitoringLocation).filter_by(id=schedule.location_id).first() + unit = None + if schedule.unit_id: + unit = db.query(RosterUnit).filter_by(id=schedule.unit_id).first() + + return { + "id": schedule.id, + "name": schedule.name, + "schedule_type": schedule.schedule_type, + "device_type": schedule.device_type, + "location_id": schedule.location_id, + "location_name": location.name if location else None, + "unit_id": schedule.unit_id, + "unit_name": unit.id if unit else None, + "enabled": schedule.enabled, + "weekly_pattern": json.loads(schedule.weekly_pattern) if schedule.weekly_pattern else None, + "interval_type": schedule.interval_type, + "cycle_time": schedule.cycle_time, + "include_download": schedule.include_download, + "timezone": schedule.timezone, + "next_occurrence": schedule.next_occurrence.isoformat() if schedule.next_occurrence else None, + "last_generated_at": schedule.last_generated_at.isoformat() if schedule.last_generated_at else None, + "created_at": schedule.created_at.isoformat() if schedule.created_at else None, + "updated_at": schedule.updated_at.isoformat() if schedule.updated_at else None, + } + + +# ============================================================================ +# Create +# ============================================================================ + +@router.post("/") +async def create_recurring_schedule( + project_id: str, + request: Request, + db: Session = Depends(get_db), +): + """ + Create recurring schedules for one or more locations. + + Body for weekly_calendar (supports multiple locations): + { + "name": "Weeknight Monitoring", + "schedule_type": "weekly_calendar", + "location_ids": ["uuid1", "uuid2"], // Array of location IDs + "weekly_pattern": { + "monday": {"enabled": true, "start": "19:00", "end": "07:00"}, + "tuesday": {"enabled": false}, + ... 
+ }, + "include_download": true, + "auto_increment_index": true, + "timezone": "America/New_York" + } + + Body for simple_interval (supports multiple locations): + { + "name": "24/7 Continuous", + "schedule_type": "simple_interval", + "location_ids": ["uuid1", "uuid2"], // Array of location IDs + "interval_type": "daily", + "cycle_time": "00:00", + "include_download": true, + "auto_increment_index": true, + "timezone": "America/New_York" + } + + Legacy single location support (backwards compatible): + { + "name": "...", + "location_id": "uuid", // Single location ID + ... + } + """ + project = db.query(Project).filter_by(id=project_id).first() + if not project: + raise HTTPException(status_code=404, detail="Project not found") + + data = await request.json() + + # Support both location_ids (array) and location_id (single) for backwards compatibility + location_ids = data.get("location_ids", []) + if not location_ids and data.get("location_id"): + location_ids = [data.get("location_id")] + + if not location_ids: + raise HTTPException(status_code=400, detail="At least one location is required") + + # Validate all locations exist + locations = db.query(MonitoringLocation).filter( + MonitoringLocation.id.in_(location_ids), + MonitoringLocation.project_id == project_id, + ).all() + + if len(locations) != len(location_ids): + raise HTTPException(status_code=404, detail="One or more locations not found") + + service = get_recurring_schedule_service(db) + created_schedules = [] + base_name = data.get("name", "Unnamed Schedule") + + # Create a schedule for each location + for location in locations: + # Determine device type from location + device_type = "slm" if location.location_type == "sound" else "seismograph" + + # Append location name if multiple locations + schedule_name = f"{base_name} - {location.name}" if len(locations) > 1 else base_name + + schedule = service.create_schedule( + project_id=project_id, + location_id=location.id, + name=schedule_name, + schedule_type=data.get("schedule_type", "weekly_calendar"), + device_type=device_type, + unit_id=data.get("unit_id"), + weekly_pattern=data.get("weekly_pattern"), + interval_type=data.get("interval_type"), + cycle_time=data.get("cycle_time"), + include_download=data.get("include_download", True), + auto_increment_index=data.get("auto_increment_index", True), + timezone=data.get("timezone", "America/New_York"), + ) + + # Generate actions immediately so they appear right away + generated_actions = service.generate_actions_for_schedule(schedule, horizon_days=7) + + created_schedules.append({ + "schedule_id": schedule.id, + "location_id": location.id, + "location_name": location.name, + "actions_generated": len(generated_actions), + }) + + total_actions = sum(s.get("actions_generated", 0) for s in created_schedules) + + return JSONResponse({ + "success": True, + "schedules": created_schedules, + "count": len(created_schedules), + "actions_generated": total_actions, + "message": f"Created {len(created_schedules)} recurring schedule(s) with {total_actions} upcoming actions", + }) + + +# ============================================================================ +# Update +# ============================================================================ + +@router.put("/{schedule_id}") +async def update_recurring_schedule( + project_id: str, + schedule_id: str, + request: Request, + db: Session = Depends(get_db), +): + """ + Update a recurring schedule. 
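    Illustrative update request (a sketch only — base URL and ids are placeholders). Only the
    whitelisted fields (name, weekly_pattern, interval_type, cycle_time, include_download,
    auto_increment_index, timezone, unit_id) are applied; anything else in the body is ignored:

        import httpx

        resp = httpx.put(
            "http://localhost:8000/api/projects/<project-id>/recurring-schedules/<schedule-id>",
            json={
                "name": "Weeknight Monitoring (revised)",
                "cycle_time": "19:30",
                "include_download": False,
                "timezone": "America/New_York",
            },
        )
        print(resp.json()["message"])  # "Schedule updated successfully"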
+ """ + schedule = db.query(RecurringSchedule).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + data = await request.json() + service = get_recurring_schedule_service(db) + + # Build update kwargs + update_kwargs = {} + for field in ["name", "weekly_pattern", "interval_type", "cycle_time", + "include_download", "auto_increment_index", "timezone", "unit_id"]: + if field in data: + update_kwargs[field] = data[field] + + updated = service.update_schedule(schedule_id, **update_kwargs) + + return { + "success": True, + "schedule_id": updated.id, + "message": "Schedule updated successfully", + } + + +# ============================================================================ +# Delete +# ============================================================================ + +@router.delete("/{schedule_id}") +async def delete_recurring_schedule( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Delete a recurring schedule. + """ + service = get_recurring_schedule_service(db) + deleted = service.delete_schedule(schedule_id) + + if not deleted: + raise HTTPException(status_code=404, detail="Schedule not found") + + return { + "success": True, + "message": "Schedule deleted successfully", + } + + +# ============================================================================ +# Enable/Disable +# ============================================================================ + +@router.post("/{schedule_id}/enable") +async def enable_schedule( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Enable a disabled schedule. + """ + service = get_recurring_schedule_service(db) + schedule = service.enable_schedule(schedule_id) + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + return { + "success": True, + "schedule_id": schedule.id, + "enabled": schedule.enabled, + "message": "Schedule enabled", + } + + +@router.post("/{schedule_id}/disable") +async def disable_schedule( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), +): + """ + Disable a schedule. + """ + service = get_recurring_schedule_service(db) + schedule = service.disable_schedule(schedule_id) + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + return { + "success": True, + "schedule_id": schedule.id, + "enabled": schedule.enabled, + "message": "Schedule disabled", + } + + +# ============================================================================ +# Preview Generated Actions +# ============================================================================ + +@router.post("/{schedule_id}/generate-preview") +async def preview_generated_actions( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), + days: int = Query(7, ge=1, le=30), +): + """ + Preview what actions would be generated without saving them. 
+ """ + schedule = db.query(RecurringSchedule).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + service = get_recurring_schedule_service(db) + actions = service.generate_actions_for_schedule( + schedule, + horizon_days=days, + preview_only=True, + ) + + return { + "schedule_id": schedule_id, + "schedule_name": schedule.name, + "preview_days": days, + "actions": [ + { + "action_type": a.action_type, + "scheduled_time": a.scheduled_time.isoformat(), + "notes": a.notes, + } + for a in actions + ], + "action_count": len(actions), + } + + +# ============================================================================ +# Manual Generation Trigger +# ============================================================================ + +@router.post("/{schedule_id}/generate") +async def generate_actions_now( + project_id: str, + schedule_id: str, + db: Session = Depends(get_db), + days: int = Query(7, ge=1, le=30), +): + """ + Manually trigger action generation for a schedule. + """ + schedule = db.query(RecurringSchedule).filter_by( + id=schedule_id, + project_id=project_id, + ).first() + + if not schedule: + raise HTTPException(status_code=404, detail="Schedule not found") + + if not schedule.enabled: + raise HTTPException(status_code=400, detail="Schedule is disabled") + + service = get_recurring_schedule_service(db) + actions = service.generate_actions_for_schedule( + schedule, + horizon_days=days, + preview_only=False, + ) + + return { + "success": True, + "schedule_id": schedule_id, + "generated_count": len(actions), + "message": f"Generated {len(actions)} scheduled actions", + } + + +# ============================================================================ +# HTML Partials +# ============================================================================ + +@router.get("/partials/list", response_class=HTMLResponse) +async def get_schedule_list_partial( + project_id: str, + request: Request, + db: Session = Depends(get_db), +): + """ + Return HTML partial for schedule list. + """ + schedules = db.query(RecurringSchedule).filter_by( + project_id=project_id + ).order_by(RecurringSchedule.created_at.desc()).all() + + # Enrich with location info + schedule_data = [] + for s in schedules: + location = db.query(MonitoringLocation).filter_by(id=s.location_id).first() + schedule_data.append({ + "schedule": s, + "location": location, + "pattern": json.loads(s.weekly_pattern) if s.weekly_pattern else None, + }) + + return templates.TemplateResponse("partials/projects/recurring_schedule_list.html", { + "request": request, + "project_id": project_id, + "schedules": schedule_data, + }) diff --git a/backend/routers/report_templates.py b/backend/routers/report_templates.py new file mode 100644 index 0000000..d7103ff --- /dev/null +++ b/backend/routers/report_templates.py @@ -0,0 +1,187 @@ +""" +Report Templates Router + +CRUD operations for report template management. +Templates store time filter presets and report configuration for reuse. 
+""" + +from fastapi import APIRouter, Depends, HTTPException +from fastapi.responses import JSONResponse +from sqlalchemy.orm import Session +from datetime import datetime +from typing import Optional +import uuid + +from backend.database import get_db +from backend.models import ReportTemplate + +router = APIRouter(prefix="/api/report-templates", tags=["report-templates"]) + + +@router.get("") +async def list_templates( + project_id: Optional[str] = None, + db: Session = Depends(get_db), +): + """ + List all report templates. + Optionally filter by project_id (includes global templates with project_id=None). + """ + query = db.query(ReportTemplate) + + if project_id: + # Include global templates (project_id=None) AND project-specific templates + query = query.filter( + (ReportTemplate.project_id == None) | (ReportTemplate.project_id == project_id) + ) + + templates = query.order_by(ReportTemplate.name).all() + + return [ + { + "id": t.id, + "name": t.name, + "project_id": t.project_id, + "report_title": t.report_title, + "start_time": t.start_time, + "end_time": t.end_time, + "start_date": t.start_date, + "end_date": t.end_date, + "created_at": t.created_at.isoformat() if t.created_at else None, + "updated_at": t.updated_at.isoformat() if t.updated_at else None, + } + for t in templates + ] + + +@router.post("") +async def create_template( + data: dict, + db: Session = Depends(get_db), +): + """ + Create a new report template. + + Request body: + - name: Template name (required) + - project_id: Optional project ID for project-specific template + - report_title: Default report title + - start_time: Start time filter (HH:MM format) + - end_time: End time filter (HH:MM format) + - start_date: Start date filter (YYYY-MM-DD format) + - end_date: End date filter (YYYY-MM-DD format) + """ + name = data.get("name") + if not name: + raise HTTPException(status_code=400, detail="Template name is required") + + template = ReportTemplate( + id=str(uuid.uuid4()), + name=name, + project_id=data.get("project_id"), + report_title=data.get("report_title", "Background Noise Study"), + start_time=data.get("start_time"), + end_time=data.get("end_time"), + start_date=data.get("start_date"), + end_date=data.get("end_date"), + ) + + db.add(template) + db.commit() + db.refresh(template) + + return { + "id": template.id, + "name": template.name, + "project_id": template.project_id, + "report_title": template.report_title, + "start_time": template.start_time, + "end_time": template.end_time, + "start_date": template.start_date, + "end_date": template.end_date, + "created_at": template.created_at.isoformat() if template.created_at else None, + } + + +@router.get("/{template_id}") +async def get_template( + template_id: str, + db: Session = Depends(get_db), +): + """Get a specific report template by ID.""" + template = db.query(ReportTemplate).filter_by(id=template_id).first() + if not template: + raise HTTPException(status_code=404, detail="Template not found") + + return { + "id": template.id, + "name": template.name, + "project_id": template.project_id, + "report_title": template.report_title, + "start_time": template.start_time, + "end_time": template.end_time, + "start_date": template.start_date, + "end_date": template.end_date, + "created_at": template.created_at.isoformat() if template.created_at else None, + "updated_at": template.updated_at.isoformat() if template.updated_at else None, + } + + +@router.put("/{template_id}") +async def update_template( + template_id: str, + data: dict, + db: Session = 
Depends(get_db), +): + """Update an existing report template.""" + template = db.query(ReportTemplate).filter_by(id=template_id).first() + if not template: + raise HTTPException(status_code=404, detail="Template not found") + + # Update fields if provided + if "name" in data: + template.name = data["name"] + if "project_id" in data: + template.project_id = data["project_id"] + if "report_title" in data: + template.report_title = data["report_title"] + if "start_time" in data: + template.start_time = data["start_time"] + if "end_time" in data: + template.end_time = data["end_time"] + if "start_date" in data: + template.start_date = data["start_date"] + if "end_date" in data: + template.end_date = data["end_date"] + + template.updated_at = datetime.utcnow() + db.commit() + db.refresh(template) + + return { + "id": template.id, + "name": template.name, + "project_id": template.project_id, + "report_title": template.report_title, + "start_time": template.start_time, + "end_time": template.end_time, + "start_date": template.start_date, + "end_date": template.end_date, + "updated_at": template.updated_at.isoformat() if template.updated_at else None, + } + + +@router.delete("/{template_id}") +async def delete_template( + template_id: str, + db: Session = Depends(get_db), +): + """Delete a report template.""" + template = db.query(ReportTemplate).filter_by(id=template_id).first() + if not template: + raise HTTPException(status_code=404, detail="Template not found") + + db.delete(template) + db.commit() + + return JSONResponse({"status": "success", "message": "Template deleted"}) diff --git a/backend/routers/roster_edit.py b/backend/routers/roster_edit.py index dd0c192..f8e6f7f 100644 --- a/backend/routers/roster_edit.py +++ b/backend/routers/roster_edit.py @@ -1,4 +1,4 @@ -from fastapi import APIRouter, Depends, HTTPException, Form, UploadFile, File, Request +from fastapi import APIRouter, Depends, HTTPException, Form, UploadFile, File, Request, Query from fastapi.exceptions import RequestValidationError from sqlalchemy.orm import Session from datetime import datetime, date @@ -150,6 +150,8 @@ async def add_roster_unit( ip_address: str = Form(None), phone_number: str = Form(None), hardware_model: str = Form(None), + deployment_type: str = Form(None), # "seismograph" | "slm" - what device type modem is deployed with + deployed_with_unit_id: str = Form(None), # ID of seismograph/SLM this modem is deployed with # Sound Level Meter-specific fields slm_host: str = Form(None), slm_tcp_port: str = Form(None), @@ -209,6 +211,7 @@ async def add_roster_unit( ip_address=ip_address if ip_address else None, phone_number=phone_number if phone_number else None, hardware_model=hardware_model if hardware_model else None, + deployment_type=deployment_type if deployment_type else None, # Sound Level Meter-specific fields slm_host=slm_host if slm_host else None, slm_tcp_port=slm_tcp_port_int, @@ -219,11 +222,28 @@ async def add_roster_unit( slm_time_weighting=slm_time_weighting if slm_time_weighting else None, slm_measurement_range=slm_measurement_range if slm_measurement_range else None, ) + + # Auto-fill location data from modem if pairing and fields are empty + if deployed_with_modem_id: + modem = db.query(RosterUnit).filter( + RosterUnit.id == deployed_with_modem_id, + RosterUnit.device_type == "modem" + ).first() + if modem: + if not unit.location and modem.location: + unit.location = modem.location + if not unit.address and modem.address: + unit.address = modem.address + if not unit.coordinates and 
modem.coordinates: + unit.coordinates = modem.coordinates + if not unit.project_id and modem.project_id: + unit.project_id = modem.project_id + db.add(unit) db.commit() # If sound level meter, sync config to SLMM cache - if device_type == "sound_level_meter": + if device_type == "slm": logger.info(f"Syncing SLM {id} config to SLMM cache...") result = await sync_slm_to_slmm_cache( unit_id=id, @@ -259,6 +279,145 @@ def get_modems_list(db: Session = Depends(get_db)): ] +@router.get("/search/modems") +def search_modems( + request: Request, + q: str = Query("", description="Search term"), + deployed_only: bool = Query(False, description="Only show deployed modems"), + exclude_retired: bool = Query(True, description="Exclude retired modems"), + limit: int = Query(10, le=50), + db: Session = Depends(get_db) +): + """ + Search modems by ID, IP address, or note. Returns HTML partial for HTMX dropdown. + + Used by modem picker component to find modems to link with seismographs/SLMs. + """ + from fastapi.responses import HTMLResponse + from fastapi.templating import Jinja2Templates + + templates = Jinja2Templates(directory="templates") + + query = db.query(RosterUnit).filter(RosterUnit.device_type == "modem") + + if deployed_only: + query = query.filter(RosterUnit.deployed == True) + + if exclude_retired: + query = query.filter(RosterUnit.retired == False) + + # Search by ID, IP address, or note + if q and q.strip(): + search_term = f"%{q.strip()}%" + query = query.filter( + (RosterUnit.id.ilike(search_term)) | + (RosterUnit.ip_address.ilike(search_term)) | + (RosterUnit.note.ilike(search_term)) + ) + + modems = query.order_by(RosterUnit.id).limit(limit).all() + + # Build results + results = [] + for modem in modems: + # Build display text: ID - IP - Note (if available) + display_parts = [modem.id] + if modem.ip_address: + display_parts.append(modem.ip_address) + if modem.note: + display_parts.append(modem.note) + display = " - ".join(display_parts) + + results.append({ + "id": modem.id, + "ip_address": modem.ip_address or "", + "phone_number": modem.phone_number or "", + "note": modem.note or "", + "deployed": modem.deployed, + "display": display + }) + + # Determine if we should show "no results" message + show_empty = len(results) == 0 and q and q.strip() + + return templates.TemplateResponse( + "partials/modem_search_results.html", + { + "request": request, + "modems": results, + "query": q, + "show_empty": show_empty + } + ) + + +@router.get("/search/units") +def search_units( + request: Request, + q: str = Query("", description="Search term"), + device_type: str = Query(None, description="Filter by device type: seismograph, modem, slm"), + deployed_only: bool = Query(False, description="Only show deployed units"), + exclude_retired: bool = Query(True, description="Exclude retired units"), + limit: int = Query(10, le=50), + db: Session = Depends(get_db) +): + """ + Search roster units by ID or note. Returns HTML partial for HTMX dropdown. + + Used by unit picker component to find seismographs/SLMs to link with modems. 
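    Illustrative call (a sketch only — the "/api/roster" prefix and base URL are assumptions,
    since the router prefix is defined outside this hunk):

        import httpx

        # Find deployed, non-retired SLMs whose id or note matches "NL43".
        resp = httpx.get(
            "http://localhost:8000/api/roster/search/units",  # prefix is an assumption
            params={"q": "NL43", "device_type": "slm", "deployed_only": True},
        )
        # The body is an HTML fragment rendered from partials/unit_search_results.html,
        # intended to be swapped into the picker dropdown by HTMX.
        print(resp.text)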
+ """ + from fastapi.responses import HTMLResponse + from fastapi.templating import Jinja2Templates + + templates = Jinja2Templates(directory="templates") + + query = db.query(RosterUnit) + + # Apply filters + if device_type: + query = query.filter(RosterUnit.device_type == device_type) + + if deployed_only: + query = query.filter(RosterUnit.deployed == True) + + if exclude_retired: + query = query.filter(RosterUnit.retired == False) + + # Search by ID or note + if q and q.strip(): + search_term = f"%{q.strip()}%" + query = query.filter( + (RosterUnit.id.ilike(search_term)) | + (RosterUnit.note.ilike(search_term)) + ) + + units = query.order_by(RosterUnit.id).limit(limit).all() + + # Build results + results = [] + for unit in units: + results.append({ + "id": unit.id, + "device_type": unit.device_type or "seismograph", + "note": unit.note or "", + "deployed": unit.deployed, + "display": f"{unit.id}" + (f" - {unit.note}" if unit.note else "") + }) + + # Determine if we should show "no results" message + show_empty = len(results) == 0 and q and q.strip() + + return templates.TemplateResponse( + "partials/unit_search_results.html", + { + "request": request, + "units": results, + "query": q, + "show_empty": show_empty + } + ) + + @router.get("/{unit_id}") def get_roster_unit(unit_id: str, db: Session = Depends(get_db)): """Get a single roster unit by ID""" @@ -283,6 +442,8 @@ def get_roster_unit(unit_id: str, db: Session = Depends(get_db)): "ip_address": unit.ip_address or "", "phone_number": unit.phone_number or "", "hardware_model": unit.hardware_model or "", + "deployment_type": unit.deployment_type or "", + "deployed_with_unit_id": unit.deployed_with_unit_id or "", "slm_host": unit.slm_host or "", "slm_tcp_port": unit.slm_tcp_port or "", "slm_ftp_port": unit.slm_ftp_port or "", @@ -314,6 +475,8 @@ def edit_roster_unit( ip_address: str = Form(None), phone_number: str = Form(None), hardware_model: str = Form(None), + deployment_type: str = Form(None), + deployed_with_unit_id: str = Form(None), # Sound Level Meter-specific fields slm_host: str = Form(None), slm_tcp_port: str = Form(None), @@ -323,6 +486,14 @@ def edit_roster_unit( slm_frequency_weighting: str = Form(None), slm_time_weighting: str = Form(None), slm_measurement_range: str = Form(None), + # Cascade options - sync fields to paired device + cascade_to_unit_id: str = Form(None), + cascade_deployed: str = Form(None), + cascade_retired: str = Form(None), + cascade_project: str = Form(None), + cascade_location: str = Form(None), + cascade_coordinates: str = Form(None), + cascade_note: str = Form(None), db: Session = Depends(get_db) ): unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first() @@ -374,10 +545,29 @@ def edit_roster_unit( unit.next_calibration_due = next_cal_date unit.deployed_with_modem_id = deployed_with_modem_id if deployed_with_modem_id else None + # Auto-fill location data from modem if pairing and fields are empty + if deployed_with_modem_id: + modem = db.query(RosterUnit).filter( + RosterUnit.id == deployed_with_modem_id, + RosterUnit.device_type == "modem" + ).first() + if modem: + # Only fill if the device field is empty + if not unit.location and modem.location: + unit.location = modem.location + if not unit.address and modem.address: + unit.address = modem.address + if not unit.coordinates and modem.coordinates: + unit.coordinates = modem.coordinates + if not unit.project_id and modem.project_id: + unit.project_id = modem.project_id + # Modem-specific fields unit.ip_address = ip_address if ip_address 
else None unit.phone_number = phone_number if phone_number else None unit.hardware_model = hardware_model if hardware_model else None + unit.deployment_type = deployment_type if deployment_type else None + unit.deployed_with_unit_id = deployed_with_unit_id if deployed_with_unit_id else None # Sound Level Meter-specific fields unit.slm_host = slm_host if slm_host else None @@ -403,8 +593,79 @@ def edit_roster_unit( old_status_text = "retired" if old_retired else "active" record_history(db, unit_id, "retired_change", "retired", old_status_text, status_text, "manual") + # Handle cascade to paired device + cascaded_unit_id = None + if cascade_to_unit_id and cascade_to_unit_id.strip(): + paired_unit = db.query(RosterUnit).filter(RosterUnit.id == cascade_to_unit_id).first() + if paired_unit: + cascaded_unit_id = paired_unit.id + + # Cascade deployed status + if cascade_deployed in ['true', 'True', '1', 'yes']: + old_paired_deployed = paired_unit.deployed + paired_unit.deployed = deployed_bool + paired_unit.last_updated = datetime.utcnow() + if old_paired_deployed != deployed_bool: + status_text = "deployed" if deployed_bool else "benched" + old_status_text = "deployed" if old_paired_deployed else "benched" + record_history(db, paired_unit.id, "deployed_change", "deployed", + old_status_text, status_text, f"cascade from {unit_id}") + + # Cascade retired status + if cascade_retired in ['true', 'True', '1', 'yes']: + old_paired_retired = paired_unit.retired + paired_unit.retired = retired_bool + paired_unit.last_updated = datetime.utcnow() + if old_paired_retired != retired_bool: + status_text = "retired" if retired_bool else "active" + old_status_text = "retired" if old_paired_retired else "active" + record_history(db, paired_unit.id, "retired_change", "retired", + old_status_text, status_text, f"cascade from {unit_id}") + + # Cascade project + if cascade_project in ['true', 'True', '1', 'yes']: + old_paired_project = paired_unit.project_id + paired_unit.project_id = project_id + paired_unit.last_updated = datetime.utcnow() + if old_paired_project != project_id: + record_history(db, paired_unit.id, "project_change", "project_id", + old_paired_project or "", project_id or "", f"cascade from {unit_id}") + + # Cascade address/location + if cascade_location in ['true', 'True', '1', 'yes']: + old_paired_address = paired_unit.address + old_paired_location = paired_unit.location + paired_unit.address = address + paired_unit.location = location + paired_unit.last_updated = datetime.utcnow() + if old_paired_address != address: + record_history(db, paired_unit.id, "address_change", "address", + old_paired_address or "", address or "", f"cascade from {unit_id}") + + # Cascade coordinates + if cascade_coordinates in ['true', 'True', '1', 'yes']: + old_paired_coords = paired_unit.coordinates + paired_unit.coordinates = coordinates + paired_unit.last_updated = datetime.utcnow() + if old_paired_coords != coordinates: + record_history(db, paired_unit.id, "coordinates_change", "coordinates", + old_paired_coords or "", coordinates or "", f"cascade from {unit_id}") + + # Cascade note + if cascade_note in ['true', 'True', '1', 'yes']: + old_paired_note = paired_unit.note + paired_unit.note = note + paired_unit.last_updated = datetime.utcnow() + if old_paired_note != note: + record_history(db, paired_unit.id, "note_change", "note", + old_paired_note or "", note or "", f"cascade from {unit_id}") + db.commit() - return {"message": "Unit updated", "id": unit_id, "device_type": device_type} + + response = {"message": 
"Unit updated", "id": unit_id, "device_type": device_type} + if cascaded_unit_id: + response["cascaded_to"] = cascaded_unit_id + return response @router.post("/set-deployed/{unit_id}") @@ -458,16 +719,20 @@ def set_retired(unit_id: str, retired: bool = Form(...), db: Session = Depends(g @router.delete("/{unit_id}") -def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)): +async def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)): """ Permanently delete a unit from the database. Checks roster, emitters, and ignored_units tables and deletes from any table where the unit exists. + + For SLM devices, also removes from SLMM to stop background polling. """ deleted = False + was_slm = False # Try to delete from roster table roster_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first() if roster_unit: + was_slm = roster_unit.device_type == "slm" db.delete(roster_unit) deleted = True @@ -488,6 +753,19 @@ def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)): raise HTTPException(status_code=404, detail="Unit not found") db.commit() + + # If it was an SLM, also delete from SLMM + if was_slm: + try: + async with httpx.AsyncClient(timeout=5.0) as client: + response = await client.delete(f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config") + if response.status_code in [200, 404]: + logger.info(f"Deleted SLM {unit_id} from SLMM") + else: + logger.warning(f"Failed to delete SLM {unit_id} from SLMM: {response.status_code}") + except Exception as e: + logger.error(f"Error deleting SLM {unit_id} from SLMM: {e}") + return {"message": "Unit deleted", "id": unit_id} @@ -514,6 +792,37 @@ def set_note(unit_id: str, note: str = Form(""), db: Session = Depends(get_db)): return {"message": "Updated", "id": unit_id, "note": note} +def _parse_bool(value: str) -> bool: + """Parse boolean from CSV string value.""" + return value.lower() in ('true', '1', 'yes') if value else False + + +def _parse_int(value: str) -> int | None: + """Parse integer from CSV string value, return None if empty or invalid.""" + if not value or not value.strip(): + return None + try: + return int(value.strip()) + except ValueError: + return None + + +def _parse_date(value: str) -> date | None: + """Parse date from CSV string value (YYYY-MM-DD format).""" + if not value or not value.strip(): + return None + try: + return datetime.strptime(value.strip(), '%Y-%m-%d').date() + except ValueError: + return None + + +def _get_csv_value(row: dict, key: str, default=None): + """Get value from CSV row, return default if empty.""" + value = row.get(key, '').strip() if row.get(key) else '' + return value if value else default + + @router.post("/import-csv") async def import_csv( file: UploadFile = File(...), @@ -524,13 +833,40 @@ async def import_csv( Import roster units from CSV file. 
Expected CSV columns (unit_id is required, others are optional): - - unit_id: Unique identifier for the unit - - unit_type: Type of unit (default: "series3") - - deployed: Boolean for deployment status (default: False) - - retired: Boolean for retirement status (default: False) + + Common fields (all device types): + - unit_id: Unique identifier for the unit (REQUIRED) + - device_type: "seismograph", "modem", or "slm" (default: "seismograph") + - unit_type: Sub-type (e.g., "series3", "series4" for seismographs) + - deployed: Boolean (true/false/yes/no/1/0) + - retired: Boolean - note: Notes about the unit - project_id: Project identifier - location: Location description + - address: Street address + - coordinates: GPS coordinates (lat;lon or lat,lon) + + Seismograph-specific: + - last_calibrated: Date (YYYY-MM-DD) + - next_calibration_due: Date (YYYY-MM-DD) + - deployed_with_modem_id: ID of paired modem + + Modem-specific: + - ip_address: Device IP address + - phone_number: SIM card phone number + - hardware_model: Hardware model (e.g., IBR900, RV55) + + SLM-specific: + - slm_host: Device IP or hostname + - slm_tcp_port: TCP control port (default 2255) + - slm_ftp_port: FTP port (default 21) + - slm_model: Device model (NL-43, NL-53) + - slm_serial_number: Serial number + - slm_frequency_weighting: A, C, or Z + - slm_time_weighting: F (Fast), S (Slow), I (Impulse) + - slm_measurement_range: e.g., "30-130 dB" + + Lines starting with # are treated as comments and skipped. Args: file: CSV file upload @@ -543,6 +879,46 @@ async def import_csv( # Read file content contents = await file.read() csv_text = contents.decode('utf-8') + + # Filter out comment lines (starting with #) + lines = csv_text.split('\n') + filtered_lines = [line for line in lines if not line.strip().startswith('#')] + csv_text = '\n'.join(filtered_lines) + + # First pass: validate for duplicates and empty unit_ids + csv_reader = csv.DictReader(io.StringIO(csv_text)) + seen_unit_ids = {} # unit_id -> list of row numbers + empty_unit_id_rows = [] + + for row_num, row in enumerate(csv_reader, start=2): + unit_id = row.get('unit_id', '').strip() + if not unit_id: + empty_unit_id_rows.append(row_num) + else: + if unit_id not in seen_unit_ids: + seen_unit_ids[unit_id] = [] + seen_unit_ids[unit_id].append(row_num) + + # Check for validation errors + validation_errors = [] + + # Report empty unit_ids + if empty_unit_id_rows: + validation_errors.append(f"Empty unit_id on row(s): {', '.join(map(str, empty_unit_id_rows))}") + + # Report duplicates + duplicates = {uid: rows for uid, rows in seen_unit_ids.items() if len(rows) > 1} + if duplicates: + for uid, rows in duplicates.items(): + validation_errors.append(f"Duplicate unit_id '{uid}' on rows: {', '.join(map(str, rows))}") + + if validation_errors: + raise HTTPException( + status_code=400, + detail="CSV validation failed:\n" + "\n".join(validation_errors) + ) + + # Second pass: actually import the data csv_reader = csv.DictReader(io.StringIO(csv_text)) results = { @@ -563,6 +939,9 @@ async def import_csv( }) continue + # Determine device type + device_type = _get_csv_value(row, 'device_type', 'seismograph') + # Check if unit exists existing_unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first() @@ -571,31 +950,90 @@ async def import_csv( results["skipped"].append(unit_id) continue - # Update existing unit - existing_unit.unit_type = row.get('unit_type', existing_unit.unit_type or 'series3') - existing_unit.deployed = row.get('deployed', '').lower() in ('true', '1', 
'yes') if row.get('deployed') else existing_unit.deployed - existing_unit.retired = row.get('retired', '').lower() in ('true', '1', 'yes') if row.get('retired') else existing_unit.retired - existing_unit.note = row.get('note', existing_unit.note or '') - existing_unit.project_id = row.get('project_id', existing_unit.project_id) - existing_unit.location = row.get('location', existing_unit.location) - existing_unit.address = row.get('address', existing_unit.address) - existing_unit.coordinates = row.get('coordinates', existing_unit.coordinates) + # Update existing unit - common fields + existing_unit.device_type = device_type + existing_unit.unit_type = _get_csv_value(row, 'unit_type', existing_unit.unit_type or 'series3') + existing_unit.deployed = _parse_bool(row.get('deployed', '')) if row.get('deployed') else existing_unit.deployed + existing_unit.retired = _parse_bool(row.get('retired', '')) if row.get('retired') else existing_unit.retired + existing_unit.note = _get_csv_value(row, 'note', existing_unit.note) + existing_unit.project_id = _get_csv_value(row, 'project_id', existing_unit.project_id) + existing_unit.location = _get_csv_value(row, 'location', existing_unit.location) + existing_unit.address = _get_csv_value(row, 'address', existing_unit.address) + existing_unit.coordinates = _get_csv_value(row, 'coordinates', existing_unit.coordinates) existing_unit.last_updated = datetime.utcnow() + # Seismograph-specific fields + if row.get('last_calibrated'): + existing_unit.last_calibrated = _parse_date(row.get('last_calibrated')) + if row.get('next_calibration_due'): + existing_unit.next_calibration_due = _parse_date(row.get('next_calibration_due')) + if row.get('deployed_with_modem_id'): + existing_unit.deployed_with_modem_id = _get_csv_value(row, 'deployed_with_modem_id') + + # Modem-specific fields + if row.get('ip_address'): + existing_unit.ip_address = _get_csv_value(row, 'ip_address') + if row.get('phone_number'): + existing_unit.phone_number = _get_csv_value(row, 'phone_number') + if row.get('hardware_model'): + existing_unit.hardware_model = _get_csv_value(row, 'hardware_model') + if row.get('deployment_type'): + existing_unit.deployment_type = _get_csv_value(row, 'deployment_type') + if row.get('deployed_with_unit_id'): + existing_unit.deployed_with_unit_id = _get_csv_value(row, 'deployed_with_unit_id') + + # SLM-specific fields + if row.get('slm_host'): + existing_unit.slm_host = _get_csv_value(row, 'slm_host') + if row.get('slm_tcp_port'): + existing_unit.slm_tcp_port = _parse_int(row.get('slm_tcp_port')) + if row.get('slm_ftp_port'): + existing_unit.slm_ftp_port = _parse_int(row.get('slm_ftp_port')) + if row.get('slm_model'): + existing_unit.slm_model = _get_csv_value(row, 'slm_model') + if row.get('slm_serial_number'): + existing_unit.slm_serial_number = _get_csv_value(row, 'slm_serial_number') + if row.get('slm_frequency_weighting'): + existing_unit.slm_frequency_weighting = _get_csv_value(row, 'slm_frequency_weighting') + if row.get('slm_time_weighting'): + existing_unit.slm_time_weighting = _get_csv_value(row, 'slm_time_weighting') + if row.get('slm_measurement_range'): + existing_unit.slm_measurement_range = _get_csv_value(row, 'slm_measurement_range') + results["updated"].append(unit_id) else: - # Create new unit + # Create new unit with all fields new_unit = RosterUnit( id=unit_id, - unit_type=row.get('unit_type', 'series3'), - deployed=row.get('deployed', '').lower() in ('true', '1', 'yes'), - retired=row.get('retired', '').lower() in ('true', '1', 'yes'), - 
note=row.get('note', ''), - project_id=row.get('project_id'), - location=row.get('location'), - address=row.get('address'), - coordinates=row.get('coordinates'), - last_updated=datetime.utcnow() + device_type=device_type, + unit_type=_get_csv_value(row, 'unit_type', 'series3'), + deployed=_parse_bool(row.get('deployed', '')), + retired=_parse_bool(row.get('retired', '')), + note=_get_csv_value(row, 'note', ''), + project_id=_get_csv_value(row, 'project_id'), + location=_get_csv_value(row, 'location'), + address=_get_csv_value(row, 'address'), + coordinates=_get_csv_value(row, 'coordinates'), + last_updated=datetime.utcnow(), + # Seismograph fields + last_calibrated=_parse_date(row.get('last_calibrated', '')), + next_calibration_due=_parse_date(row.get('next_calibration_due', '')), + deployed_with_modem_id=_get_csv_value(row, 'deployed_with_modem_id'), + # Modem fields + ip_address=_get_csv_value(row, 'ip_address'), + phone_number=_get_csv_value(row, 'phone_number'), + hardware_model=_get_csv_value(row, 'hardware_model'), + deployment_type=_get_csv_value(row, 'deployment_type'), + deployed_with_unit_id=_get_csv_value(row, 'deployed_with_unit_id'), + # SLM fields + slm_host=_get_csv_value(row, 'slm_host'), + slm_tcp_port=_parse_int(row.get('slm_tcp_port', '')), + slm_ftp_port=_parse_int(row.get('slm_ftp_port', '')), + slm_model=_get_csv_value(row, 'slm_model'), + slm_serial_number=_get_csv_value(row, 'slm_serial_number'), + slm_frequency_weighting=_get_csv_value(row, 'slm_frequency_weighting'), + slm_time_weighting=_get_csv_value(row, 'slm_time_weighting'), + slm_measurement_range=_get_csv_value(row, 'slm_measurement_range'), ) db.add(new_unit) results["added"].append(unit_id) diff --git a/backend/routers/roster_rename.py b/backend/routers/roster_rename.py index bf9a14a..c99082d 100644 --- a/backend/routers/roster_rename.py +++ b/backend/routers/roster_rename.py @@ -106,7 +106,7 @@ async def rename_unit( db.commit() # If sound level meter, sync updated config to SLMM cache - if device_type == "sound_level_meter": + if device_type == "slm": logger.info(f"Syncing renamed SLM {new_id} (was {old_id}) config to SLMM cache...") result = await sync_slm_to_slmm_cache( unit_id=new_id, diff --git a/backend/routers/scheduler.py b/backend/routers/scheduler.py index caf64cf..3c65e1c 100644 --- a/backend/routers/scheduler.py +++ b/backend/routers/scheduler.py @@ -5,7 +5,6 @@ Handles scheduled actions for automated recording control. 
""" from fastapi import APIRouter, Request, Depends, HTTPException, Query -from fastapi.templating import Jinja2Templates from fastapi.responses import HTMLResponse, JSONResponse from sqlalchemy.orm import Session from sqlalchemy import and_, or_ @@ -23,9 +22,9 @@ from backend.models import ( RosterUnit, ) from backend.services.scheduler import get_scheduler +from backend.templates_config import templates router = APIRouter(prefix="/api/projects/{project_id}/scheduler", tags=["scheduler"]) -templates = Jinja2Templates(directory="templates") # ============================================================================ @@ -131,7 +130,7 @@ async def create_scheduled_action( raise HTTPException(status_code=404, detail="Location not found") # Determine device type from location - device_type = "sound_level_meter" if location.location_type == "sound" else "seismograph" + device_type = "slm" if location.location_type == "sound" else "seismograph" # Get unit_id (optional - can be determined from assignment at execution time) unit_id = form_data.get("unit_id") @@ -188,7 +187,7 @@ async def schedule_recording_session( if not location: raise HTTPException(status_code=404, detail="Location not found") - device_type = "sound_level_meter" if location.location_type == "sound" else "seismograph" + device_type = "slm" if location.location_type == "sound" else "seismograph" unit_id = form_data.get("unit_id") start_time = datetime.fromisoformat(form_data.get("start_time")) diff --git a/backend/routers/seismo_dashboard.py b/backend/routers/seismo_dashboard.py index e54a814..6f99d6d 100644 --- a/backend/routers/seismo_dashboard.py +++ b/backend/routers/seismo_dashboard.py @@ -5,13 +5,12 @@ Provides endpoints for the seismograph-specific dashboard from fastapi import APIRouter, Request, Depends, Query from fastapi.responses import HTMLResponse -from fastapi.templating import Jinja2Templates from sqlalchemy.orm import Session from backend.database import get_db from backend.models import RosterUnit +from backend.templates_config import templates router = APIRouter(prefix="/api/seismo-dashboard", tags=["seismo-dashboard"]) -templates = Jinja2Templates(directory="templates") @router.get("/stats", response_class=HTMLResponse) diff --git a/backend/routers/settings.py b/backend/routers/settings.py index bb14357..e32f4d6 100644 --- a/backend/routers/settings.py +++ b/backend/routers/settings.py @@ -477,3 +477,75 @@ async def upload_snapshot(file: UploadFile = File(...)): except Exception as e: raise HTTPException(status_code=500, detail=f"Upload failed: {str(e)}") + + +# ============================================================================ +# SLMM SYNC ENDPOINTS +# ============================================================================ + +@router.post("/slmm/sync-all") +async def sync_all_slms(db: Session = Depends(get_db)): + """ + Manually trigger full sync of all SLM devices from Terra-View roster to SLMM. + + This ensures SLMM database matches Terra-View roster (source of truth). + Also cleans up orphaned devices in SLMM that are not in Terra-View. 
+ """ + from backend.services.slmm_sync import sync_all_slms_to_slmm, cleanup_orphaned_slmm_devices + + try: + # Sync all SLMs + sync_results = await sync_all_slms_to_slmm(db) + + # Clean up orphaned devices + cleanup_results = await cleanup_orphaned_slmm_devices(db) + + return { + "status": "ok", + "sync": sync_results, + "cleanup": cleanup_results + } + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Sync failed: {str(e)}") + + +@router.get("/slmm/status") +async def get_slmm_sync_status(db: Session = Depends(get_db)): + """ + Get status of SLMM synchronization. + + Shows which devices are in Terra-View roster vs SLMM database. + """ + from backend.services.slmm_sync import get_slmm_devices + + try: + # Get devices from both systems + roster_slms = db.query(RosterUnit).filter_by(device_type="slm").all() + slmm_devices = await get_slmm_devices() + + if slmm_devices is None: + raise HTTPException(status_code=503, detail="SLMM service unavailable") + + roster_unit_ids = {unit.unit_type for unit in roster_slms} + slmm_unit_ids = set(slmm_devices) + + # Find differences + in_roster_only = roster_unit_ids - slmm_unit_ids + in_slmm_only = slmm_unit_ids - roster_unit_ids + in_both = roster_unit_ids & slmm_unit_ids + + return { + "status": "ok", + "terra_view_total": len(roster_unit_ids), + "slmm_total": len(slmm_unit_ids), + "synced": len(in_both), + "missing_from_slmm": list(in_roster_only), + "orphaned_in_slmm": list(in_slmm_only), + "in_sync": len(in_roster_only) == 0 and len(in_slmm_only) == 0 + } + + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=500, detail=f"Status check failed: {str(e)}") diff --git a/backend/routers/slm_dashboard.py b/backend/routers/slm_dashboard.py index 9b20456..be70cc2 100644 --- a/backend/routers/slm_dashboard.py +++ b/backend/routers/slm_dashboard.py @@ -5,7 +5,6 @@ Provides API endpoints for the Sound Level Meters dashboard page. """ from fastapi import APIRouter, Request, Depends, Query -from fastapi.templating import Jinja2Templates from fastapi.responses import HTMLResponse from sqlalchemy.orm import Session from sqlalchemy import func @@ -18,11 +17,11 @@ import os from backend.database import get_db from backend.models import RosterUnit from backend.routers.roster_edit import sync_slm_to_slmm_cache +from backend.templates_config import templates logger = logging.getLogger(__name__) router = APIRouter(prefix="/api/slm-dashboard", tags=["slm-dashboard"]) -templates = Jinja2Templates(directory="templates") # SLMM backend URL - configurable via environment variable SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100") @@ -35,7 +34,7 @@ async def get_slm_stats(request: Request, db: Session = Depends(get_db)): Returns HTML partial with stat cards. """ # Query all SLMs - all_slms = db.query(RosterUnit).filter_by(device_type="sound_level_meter").all() + all_slms = db.query(RosterUnit).filter_by(device_type="slm").all() # Count deployed vs benched deployed_count = sum(1 for slm in all_slms if slm.deployed and not slm.retired) @@ -69,7 +68,7 @@ async def get_slm_units( Get list of SLM units for the sidebar. Returns HTML partial with unit cards. """ - query = db.query(RosterUnit).filter_by(device_type="sound_level_meter") + query = db.query(RosterUnit).filter_by(device_type="slm") # Filter by project if provided if project: @@ -129,7 +128,7 @@ async def get_live_view(request: Request, unit_id: str, db: Session = Depends(ge Returns HTML partial with live metrics and chart. 
""" # Get unit from database - unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first() + unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first() if not unit: return templates.TemplateResponse("partials/slm_live_view_error.html", { @@ -242,7 +241,7 @@ async def get_slm_config(request: Request, unit_id: str, db: Session = Depends(g Get configuration form for a specific SLM unit. Returns HTML partial with configuration form. """ - unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first() + unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first() if not unit: return HTMLResponse( @@ -262,7 +261,7 @@ async def save_slm_config(request: Request, unit_id: str, db: Session = Depends( Save SLM configuration. Updates unit parameters in the database. """ - unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="sound_level_meter").first() + unit = db.query(RosterUnit).filter_by(id=unit_id, device_type="slm").first() if not unit: return {"status": "error", "detail": f"Unit {unit_id} not found"} diff --git a/backend/routers/slm_ui.py b/backend/routers/slm_ui.py index d0945f6..b003771 100644 --- a/backend/routers/slm_ui.py +++ b/backend/routers/slm_ui.py @@ -6,7 +6,6 @@ Provides endpoints for SLM dashboard cards, detail pages, and real-time data. from fastapi import APIRouter, Depends, HTTPException, Request from fastapi.responses import HTMLResponse -from fastapi.templating import Jinja2Templates from sqlalchemy.orm import Session from datetime import datetime import httpx @@ -15,11 +14,11 @@ import os from backend.database import get_db from backend.models import RosterUnit +from backend.templates_config import templates logger = logging.getLogger(__name__) router = APIRouter(prefix="/slm", tags=["slm-ui"]) -templates = Jinja2Templates(directory="templates") SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://172.19.0.1:8100") @@ -30,7 +29,7 @@ async def slm_detail_page(request: Request, unit_id: str, db: Session = Depends( # Get roster unit unit = db.query(RosterUnit).filter_by(id=unit_id).first() - if not unit or unit.device_type != "sound_level_meter": + if not unit or unit.device_type != "slm": raise HTTPException(status_code=404, detail="Sound level meter not found") return templates.TemplateResponse("slm_detail.html", { @@ -46,7 +45,7 @@ async def get_slm_summary(unit_id: str, db: Session = Depends(get_db)): # Get roster unit unit = db.query(RosterUnit).filter_by(id=unit_id).first() - if not unit or unit.device_type != "sound_level_meter": + if not unit or unit.device_type != "slm": raise HTTPException(status_code=404, detail="Sound level meter not found") # Try to get live status from SLMM @@ -61,7 +60,7 @@ async def get_slm_summary(unit_id: str, db: Session = Depends(get_db)): return { "unit_id": unit_id, - "device_type": "sound_level_meter", + "device_type": "slm", "deployed": unit.deployed, "model": unit.slm_model or "NL-43", "location": unit.address or unit.location, @@ -89,7 +88,7 @@ async def slm_controls_partial(request: Request, unit_id: str, db: Session = Dep """Render SLM control panel partial.""" unit = db.query(RosterUnit).filter_by(id=unit_id).first() - if not unit or unit.device_type != "sound_level_meter": + if not unit or unit.device_type != "slm": raise HTTPException(status_code=404, detail="Sound level meter not found") # Get current status from SLMM diff --git a/backend/services/alert_service.py b/backend/services/alert_service.py new file mode 100644 index 
0000000..f10ffd1 --- /dev/null +++ b/backend/services/alert_service.py @@ -0,0 +1,462 @@ +""" +Alert Service + +Manages in-app alerts for device status changes and system events. +Provides foundation for future notification channels (email, webhook). +""" + +import json +import uuid +import logging +from datetime import datetime, timedelta +from typing import Optional, List, Dict, Any + +from sqlalchemy.orm import Session +from sqlalchemy import and_, or_ + +from backend.models import Alert, RosterUnit + +logger = logging.getLogger(__name__) + + +class AlertService: + """ + Service for managing alerts. + + Handles alert lifecycle: + - Create alerts from various triggers + - Query active alerts + - Acknowledge/resolve/dismiss alerts + - (Future) Dispatch to notification channels + """ + + def __init__(self, db: Session): + self.db = db + + def create_alert( + self, + alert_type: str, + title: str, + message: str = None, + severity: str = "warning", + unit_id: str = None, + project_id: str = None, + location_id: str = None, + schedule_id: str = None, + metadata: dict = None, + expires_hours: int = 24, + ) -> Alert: + """ + Create a new alert. + + Args: + alert_type: Type of alert (device_offline, device_online, schedule_failed) + title: Short alert title + message: Detailed description + severity: info, warning, or critical + unit_id: Related unit ID (optional) + project_id: Related project ID (optional) + location_id: Related location ID (optional) + schedule_id: Related schedule ID (optional) + metadata: Additional JSON data + expires_hours: Hours until auto-expiry (default 24) + + Returns: + Created Alert instance + """ + alert = Alert( + id=str(uuid.uuid4()), + alert_type=alert_type, + title=title, + message=message, + severity=severity, + unit_id=unit_id, + project_id=project_id, + location_id=location_id, + schedule_id=schedule_id, + alert_metadata=json.dumps(metadata) if metadata else None, + status="active", + expires_at=datetime.utcnow() + timedelta(hours=expires_hours), + ) + + self.db.add(alert) + self.db.commit() + self.db.refresh(alert) + + logger.info(f"Created alert: {alert.title} ({alert.alert_type})") + return alert + + def create_device_offline_alert( + self, + unit_id: str, + consecutive_failures: int = 0, + last_error: str = None, + ) -> Optional[Alert]: + """ + Create alert when device becomes unreachable. + + Only creates if no active offline alert exists for this device. + + Args: + unit_id: The unit that went offline + consecutive_failures: Number of consecutive poll failures + last_error: Last error message from polling + + Returns: + Created Alert or None if alert already exists + """ + # Check if active offline alert already exists + existing = self.db.query(Alert).filter( + and_( + Alert.unit_id == unit_id, + Alert.alert_type == "device_offline", + Alert.status == "active", + ) + ).first() + + if existing: + logger.debug(f"Offline alert already exists for {unit_id}") + return None + + # Get unit info for title + unit = self.db.query(RosterUnit).filter_by(id=unit_id).first() + unit_name = unit.id if unit else unit_id + + # Determine severity based on failure count + severity = "critical" if consecutive_failures >= 5 else "warning" + + return self.create_alert( + alert_type="device_offline", + title=f"{unit_name} is offline", + message=f"Device has been unreachable after {consecutive_failures} failed connection attempts." 
+ + (f" Last error: {last_error}" if last_error else ""), + severity=severity, + unit_id=unit_id, + metadata={ + "consecutive_failures": consecutive_failures, + "last_error": last_error, + }, + expires_hours=48, # Offline alerts stay longer + ) + + def resolve_device_offline_alert(self, unit_id: str) -> Optional[Alert]: + """ + Auto-resolve offline alert when device comes back online. + + Also creates an "device_online" info alert to notify user. + + Args: + unit_id: The unit that came back online + + Returns: + The resolved Alert or None if no alert existed + """ + # Find active offline alert + alert = self.db.query(Alert).filter( + and_( + Alert.unit_id == unit_id, + Alert.alert_type == "device_offline", + Alert.status == "active", + ) + ).first() + + if not alert: + return None + + # Resolve the offline alert + alert.status = "resolved" + alert.resolved_at = datetime.utcnow() + self.db.commit() + + logger.info(f"Resolved offline alert for {unit_id}") + + # Create online notification + unit = self.db.query(RosterUnit).filter_by(id=unit_id).first() + unit_name = unit.id if unit else unit_id + + self.create_alert( + alert_type="device_online", + title=f"{unit_name} is back online", + message="Device connection has been restored.", + severity="info", + unit_id=unit_id, + expires_hours=6, # Info alerts expire quickly + ) + + return alert + + def create_schedule_failed_alert( + self, + schedule_id: str, + action_type: str, + unit_id: str = None, + error_message: str = None, + project_id: str = None, + location_id: str = None, + ) -> Alert: + """ + Create alert when a scheduled action fails. + + Args: + schedule_id: The ScheduledAction or RecurringSchedule ID + action_type: start, stop, download + unit_id: Related unit + error_message: Error from execution + project_id: Related project + location_id: Related location + + Returns: + Created Alert + """ + return self.create_alert( + alert_type="schedule_failed", + title=f"Scheduled {action_type} failed", + message=error_message or f"The scheduled {action_type} action did not complete successfully.", + severity="warning", + unit_id=unit_id, + project_id=project_id, + location_id=location_id, + schedule_id=schedule_id, + metadata={"action_type": action_type}, + expires_hours=24, + ) + + def create_schedule_completed_alert( + self, + schedule_id: str, + action_type: str, + unit_id: str = None, + project_id: str = None, + location_id: str = None, + metadata: dict = None, + ) -> Alert: + """ + Create alert when a scheduled action completes successfully. 
+ + Args: + schedule_id: The ScheduledAction ID + action_type: start, stop, download + unit_id: Related unit + project_id: Related project + location_id: Related location + metadata: Additional info (e.g., downloaded folder, index numbers) + + Returns: + Created Alert + """ + # Build descriptive message based on action type and metadata + if action_type == "stop" and metadata: + download_folder = metadata.get("downloaded_folder") + download_success = metadata.get("download_success", False) + if download_success and download_folder: + message = f"Measurement stopped and data downloaded ({download_folder})" + elif download_success is False and metadata.get("download_attempted"): + message = "Measurement stopped but download failed" + else: + message = "Measurement stopped successfully" + elif action_type == "start" and metadata: + new_index = metadata.get("new_index") + if new_index is not None: + message = f"Measurement started (index {new_index:04d})" + else: + message = "Measurement started successfully" + else: + message = f"Scheduled {action_type} completed successfully" + + return self.create_alert( + alert_type="schedule_completed", + title=f"Scheduled {action_type} completed", + message=message, + severity="info", + unit_id=unit_id, + project_id=project_id, + location_id=location_id, + schedule_id=schedule_id, + metadata={"action_type": action_type, **(metadata or {})}, + expires_hours=12, # Info alerts expire quickly + ) + + def get_active_alerts( + self, + project_id: str = None, + unit_id: str = None, + alert_type: str = None, + min_severity: str = None, + limit: int = 50, + ) -> List[Alert]: + """ + Query active alerts with optional filters. + + Args: + project_id: Filter by project + unit_id: Filter by unit + alert_type: Filter by alert type + min_severity: Minimum severity (info, warning, critical) + limit: Maximum results + + Returns: + List of matching alerts + """ + query = self.db.query(Alert).filter(Alert.status == "active") + + if project_id: + query = query.filter(Alert.project_id == project_id) + + if unit_id: + query = query.filter(Alert.unit_id == unit_id) + + if alert_type: + query = query.filter(Alert.alert_type == alert_type) + + if min_severity: + # Map severity to numeric for comparison + severity_levels = {"info": 1, "warning": 2, "critical": 3} + min_level = severity_levels.get(min_severity, 1) + + if min_level == 2: + query = query.filter(Alert.severity.in_(["warning", "critical"])) + elif min_level == 3: + query = query.filter(Alert.severity == "critical") + + return query.order_by(Alert.created_at.desc()).limit(limit).all() + + def get_all_alerts( + self, + status: str = None, + project_id: str = None, + unit_id: str = None, + alert_type: str = None, + limit: int = 50, + offset: int = 0, + ) -> List[Alert]: + """ + Query all alerts with optional filters (includes non-active). 
+ + Args: + status: Filter by status (active, acknowledged, resolved, dismissed) + project_id: Filter by project + unit_id: Filter by unit + alert_type: Filter by alert type + limit: Maximum results + offset: Pagination offset + + Returns: + List of matching alerts + """ + query = self.db.query(Alert) + + if status: + query = query.filter(Alert.status == status) + + if project_id: + query = query.filter(Alert.project_id == project_id) + + if unit_id: + query = query.filter(Alert.unit_id == unit_id) + + if alert_type: + query = query.filter(Alert.alert_type == alert_type) + + return ( + query.order_by(Alert.created_at.desc()) + .offset(offset) + .limit(limit) + .all() + ) + + def get_active_alert_count(self) -> int: + """Get count of active alerts for badge display.""" + return self.db.query(Alert).filter(Alert.status == "active").count() + + def acknowledge_alert(self, alert_id: str) -> Optional[Alert]: + """ + Mark alert as acknowledged. + + Args: + alert_id: Alert to acknowledge + + Returns: + Updated Alert or None if not found + """ + alert = self.db.query(Alert).filter_by(id=alert_id).first() + if not alert: + return None + + alert.status = "acknowledged" + alert.acknowledged_at = datetime.utcnow() + self.db.commit() + + logger.info(f"Acknowledged alert: {alert.title}") + return alert + + def dismiss_alert(self, alert_id: str) -> Optional[Alert]: + """ + Dismiss alert (user chose to ignore). + + Args: + alert_id: Alert to dismiss + + Returns: + Updated Alert or None if not found + """ + alert = self.db.query(Alert).filter_by(id=alert_id).first() + if not alert: + return None + + alert.status = "dismissed" + self.db.commit() + + logger.info(f"Dismissed alert: {alert.title}") + return alert + + def resolve_alert(self, alert_id: str) -> Optional[Alert]: + """ + Manually resolve an alert. + + Args: + alert_id: Alert to resolve + + Returns: + Updated Alert or None if not found + """ + alert = self.db.query(Alert).filter_by(id=alert_id).first() + if not alert: + return None + + alert.status = "resolved" + alert.resolved_at = datetime.utcnow() + self.db.commit() + + logger.info(f"Resolved alert: {alert.title}") + return alert + + def cleanup_expired_alerts(self) -> int: + """ + Remove alerts past their expiration time. 
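A minimal sketch of consuming the query helpers above, for example from a notification dropdown or an admin script; the filtering choices are illustrative:

```python
from backend.database import SessionLocal
from backend.services.alert_service import AlertService

db = SessionLocal()
try:
    service = AlertService(db)
    # Active warnings and criticals, newest first
    for alert in service.get_active_alerts(min_severity="warning", limit=20):
        print(alert.severity, alert.title)
        if alert.alert_type == "device_offline":
            service.acknowledge_alert(alert.id)  # mark as seen without resolving
finally:
    db.close()
```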
+ + Returns: + Number of alerts cleaned up + """ + now = datetime.utcnow() + expired = self.db.query(Alert).filter( + and_( + Alert.expires_at.isnot(None), + Alert.expires_at < now, + Alert.status == "active", + ) + ).all() + + count = len(expired) + for alert in expired: + alert.status = "dismissed" + + if count > 0: + self.db.commit() + logger.info(f"Cleaned up {count} expired alerts") + + return count + + +def get_alert_service(db: Session) -> AlertService: + """Get an AlertService instance with the given database session.""" + return AlertService(db) diff --git a/backend/services/device_controller.py b/backend/services/device_controller.py index a9aa80d..82ae6fb 100644 --- a/backend/services/device_controller.py +++ b/backend/services/device_controller.py @@ -31,7 +31,7 @@ class DeviceController: Usage: controller = DeviceController() - await controller.start_recording("nl43-001", "sound_level_meter", config={}) + await controller.start_recording("nl43-001", "slm", config={}) await controller.stop_recording("seismo-042", "seismograph") """ @@ -53,7 +53,7 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" config: Device-specific recording configuration Returns: @@ -63,7 +63,7 @@ class DeviceController: UnsupportedDeviceTypeError: Device type not supported DeviceControllerError: Operation failed """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.start_recording(unit_id, config) except SLMMClientError as e: @@ -81,7 +81,7 @@ class DeviceController: else: raise UnsupportedDeviceTypeError( f"Device type '{device_type}' is not supported. " - f"Supported types: sound_level_meter, seismograph" + f"Supported types: slm, seismograph" ) async def stop_recording( @@ -94,12 +94,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: Response dict from device module """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.stop_recording(unit_id) except SLMMClientError as e: @@ -126,12 +126,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: Response dict from device module """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.pause_recording(unit_id) except SLMMClientError as e: @@ -157,12 +157,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: Response dict from device module """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.resume_recording(unit_id) except SLMMClientError as e: @@ -192,12 +192,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: Status dict from device module """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.get_unit_status(unit_id) except SLMMClientError as e: @@ -224,12 +224,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: Live data dict from device module """ - if device_type == 
"sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.get_live_data(unit_id) except SLMMClientError as e: @@ -261,14 +261,14 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" destination_path: Local path to save files files: List of filenames, or None for all Returns: Download result with file list """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.download_files( unit_id, @@ -304,13 +304,13 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" config: Configuration parameters Returns: Updated config from device module """ - if device_type == "sound_level_meter": + if device_type == "slm": try: return await self.slmm_client.update_unit_config( unit_id, @@ -333,6 +333,157 @@ class DeviceController: else: raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}") + # ======================================================================== + # Store/Index Management + # ======================================================================== + + async def increment_index( + self, + unit_id: str, + device_type: str, + ) -> Dict[str, Any]: + """ + Increment the store/index number on a device. + + For SLMs, this increments the store name to prevent "overwrite data?" prompts. + Should be called before starting a new measurement if auto_increment_index is enabled. + + Args: + unit_id: Unit identifier + device_type: "slm" | "seismograph" + + Returns: + Response dict with old_index and new_index + """ + if device_type == "slm": + try: + return await self.slmm_client.increment_index(unit_id) + except SLMMClientError as e: + raise DeviceControllerError(f"SLMM error: {str(e)}") + + elif device_type == "seismograph": + # Seismographs may not have the same concept of store index + return { + "status": "not_applicable", + "message": "Index increment not applicable for seismographs", + "unit_id": unit_id, + } + + else: + raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}") + + async def get_index_number( + self, + unit_id: str, + device_type: str, + ) -> Dict[str, Any]: + """ + Get current store/index number from device. + + Args: + unit_id: Unit identifier + device_type: "slm" | "seismograph" + + Returns: + Response dict with current index_number + """ + if device_type == "slm": + try: + return await self.slmm_client.get_index_number(unit_id) + except SLMMClientError as e: + raise DeviceControllerError(f"SLMM error: {str(e)}") + + elif device_type == "seismograph": + return { + "status": "not_applicable", + "message": "Index number not applicable for seismographs", + "unit_id": unit_id, + } + + else: + raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}") + + # ======================================================================== + # Cycle Commands (for scheduled automation) + # ======================================================================== + + async def start_cycle( + self, + unit_id: str, + device_type: str, + sync_clock: bool = True, + ) -> Dict[str, Any]: + """ + Execute complete start cycle for scheduled automation. + + This handles the full pre-recording workflow: + 1. Sync device clock to server time + 2. Find next safe index (with overwrite protection) + 3. 
Start measurement + + Args: + unit_id: Unit identifier + device_type: "slm" | "seismograph" + sync_clock: Whether to sync device clock to server time + + Returns: + Response dict from device module + """ + if device_type == "slm": + try: + return await self.slmm_client.start_cycle(unit_id, sync_clock) + except SLMMClientError as e: + raise DeviceControllerError(f"SLMM error: {str(e)}") + + elif device_type == "seismograph": + return { + "status": "not_implemented", + "message": "Seismograph start cycle not yet implemented", + "unit_id": unit_id, + } + + else: + raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}") + + async def stop_cycle( + self, + unit_id: str, + device_type: str, + download: bool = True, + ) -> Dict[str, Any]: + """ + Execute complete stop cycle for scheduled automation. + + This handles the full post-recording workflow: + 1. Stop measurement + 2. Enable FTP + 3. Download measurement folder + 4. Verify download + + Args: + unit_id: Unit identifier + device_type: "slm" | "seismograph" + download: Whether to download measurement data + + Returns: + Response dict from device module + """ + if device_type == "slm": + try: + return await self.slmm_client.stop_cycle(unit_id, download) + except SLMMClientError as e: + raise DeviceControllerError(f"SLMM error: {str(e)}") + + elif device_type == "seismograph": + return { + "status": "not_implemented", + "message": "Seismograph stop cycle not yet implemented", + "unit_id": unit_id, + } + + else: + raise UnsupportedDeviceTypeError(f"Unsupported device type: {device_type}") + # ======================================================================== # Health Check # ======================================================================== @@ -347,12 +498,12 @@ class DeviceController: Args: unit_id: Unit identifier - device_type: "sound_level_meter" | "seismograph" + device_type: "slm" | "seismograph" Returns: True if device is reachable, False otherwise """ - if device_type == "sound_level_meter": + if device_type == "slm": try: status = await self.slmm_client.get_unit_status(unit_id) return status.get("last_seen") is not None diff --git a/backend/services/device_status_monitor.py b/backend/services/device_status_monitor.py new file mode 100644 index 0000000..7cf2772 --- /dev/null +++ b/backend/services/device_status_monitor.py @@ -0,0 +1,184 @@ +""" +Device Status Monitor + +Background task that monitors device reachability via SLMM polling status +and triggers alerts when devices go offline or come back online. + +This service bridges SLMM's device polling with Terra-View's alert system. +""" + +import asyncio +import logging +from datetime import datetime +from typing import Optional, Dict + +from backend.database import SessionLocal +from backend.services.slmm_client import get_slmm_client, SLMMClientError +from backend.services.alert_service import get_alert_service + +logger = logging.getLogger(__name__) + + +class DeviceStatusMonitor: + """ + Monitors device reachability via SLMM's polling status endpoint. + + Detects state transitions (online→offline, offline→online) and + triggers AlertService to create/resolve alerts. + + Usage: + monitor = DeviceStatusMonitor() + await monitor.start() # Start background monitoring + monitor.stop() # Stop monitoring + """ + + def __init__(self, check_interval: int = 60): + """ + Initialize the monitor. 
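A hedged sketch of exercising the new cycle commands directly from a one-off script; the `DeviceController()` construction follows the usage note in this module's docstring, and the unit ID is a placeholder:

```python
import asyncio
from backend.services.device_controller import DeviceController, DeviceControllerError

async def run_overnight_cycle(unit_id: str = "NL43-001") -> None:
    controller = DeviceController()
    try:
        # clock sync + safe index selection + start
        started = await controller.start_cycle(unit_id, "slm", sync_clock=True)
        print("start cycle:", started)
        # ... measurement window elapses ...
        # stop + FTP enable + download + verify
        stopped = await controller.stop_cycle(unit_id, "slm", download=True)
        print("stop cycle:", stopped)
    except DeviceControllerError as exc:
        print(f"cycle failed for {unit_id}: {exc}")

# asyncio.run(run_overnight_cycle())
```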
+ + Args: + check_interval: Seconds between status checks (default: 60) + """ + self.check_interval = check_interval + self.running = False + self.task: Optional[asyncio.Task] = None + self.slmm_client = get_slmm_client() + + # Track previous device states to detect transitions + self._device_states: Dict[str, bool] = {} + + async def start(self): + """Start the monitoring background task.""" + if self.running: + logger.warning("DeviceStatusMonitor is already running") + return + + self.running = True + self.task = asyncio.create_task(self._monitor_loop()) + logger.info(f"DeviceStatusMonitor started (checking every {self.check_interval}s)") + + def stop(self): + """Stop the monitoring background task.""" + self.running = False + if self.task: + self.task.cancel() + logger.info("DeviceStatusMonitor stopped") + + async def _monitor_loop(self): + """Main monitoring loop.""" + while self.running: + try: + await self._check_all_devices() + except Exception as e: + logger.error(f"Error in device status monitor: {e}", exc_info=True) + + # Sleep in small intervals for graceful shutdown + for _ in range(self.check_interval): + if not self.running: + break + await asyncio.sleep(1) + + logger.info("DeviceStatusMonitor loop exited") + + async def _check_all_devices(self): + """ + Fetch polling status from SLMM and detect state transitions. + + Uses GET /api/slmm/_polling/status (proxied to SLMM) + """ + try: + # Get status from SLMM + status_response = await self.slmm_client.get_polling_status() + devices = status_response.get("devices", []) + + if not devices: + logger.debug("No devices in polling status response") + return + + db = SessionLocal() + try: + alert_service = get_alert_service(db) + + for device in devices: + unit_id = device.get("unit_id") + if not unit_id: + continue + + is_reachable = device.get("is_reachable", True) + previous_reachable = self._device_states.get(unit_id) + + # Skip if this is the first check (no previous state) + if previous_reachable is None: + self._device_states[unit_id] = is_reachable + logger.debug(f"Initial state for {unit_id}: reachable={is_reachable}") + continue + + # Detect offline transition (was online, now offline) + if previous_reachable and not is_reachable: + logger.warning(f"Device {unit_id} went OFFLINE") + alert_service.create_device_offline_alert( + unit_id=unit_id, + consecutive_failures=device.get("consecutive_failures", 0), + last_error=device.get("last_error"), + ) + + # Detect online transition (was offline, now online) + elif not previous_reachable and is_reachable: + logger.info(f"Device {unit_id} came back ONLINE") + alert_service.resolve_device_offline_alert(unit_id) + + # Update tracked state + self._device_states[unit_id] = is_reachable + + # Cleanup expired alerts while we're here + alert_service.cleanup_expired_alerts() + + finally: + db.close() + + except SLMMClientError as e: + logger.warning(f"Could not reach SLMM for status check: {e}") + except Exception as e: + logger.error(f"Error checking device status: {e}", exc_info=True) + + def get_tracked_devices(self) -> Dict[str, bool]: + """ + Get the current tracked device states. 
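The transition detection in `_check_all_devices` boils down to comparing the previous reachability snapshot with the latest poll. A standalone sketch of that comparison:

```python
from typing import Dict, List, Tuple

def detect_transitions(
    previous: Dict[str, bool],
    latest: Dict[str, bool],
) -> List[Tuple[str, str]]:
    """Return (unit_id, "offline" | "online") for every device that changed state."""
    events: List[Tuple[str, str]] = []
    for unit_id, reachable in latest.items():
        before = previous.get(unit_id)
        if before is None:          # first observation: record, don't alert
            continue
        if before and not reachable:
            events.append((unit_id, "offline"))
        elif not before and reachable:
            events.append((unit_id, "online"))
    return events

# detect_transitions({"NL43-001": True}, {"NL43-001": False}) -> [("NL43-001", "offline")]
```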
+ + Returns: + Dict mapping unit_id to is_reachable status + """ + return dict(self._device_states) + + def clear_tracked_devices(self): + """Clear all tracked device states (useful for testing).""" + self._device_states.clear() + + +# Singleton instance +_monitor_instance: Optional[DeviceStatusMonitor] = None + + +def get_device_status_monitor() -> DeviceStatusMonitor: + """ + Get the device status monitor singleton instance. + + Returns: + DeviceStatusMonitor instance + """ + global _monitor_instance + if _monitor_instance is None: + _monitor_instance = DeviceStatusMonitor() + return _monitor_instance + + +async def start_device_status_monitor(): + """Start the global device status monitor.""" + monitor = get_device_status_monitor() + await monitor.start() + + +def stop_device_status_monitor(): + """Stop the global device status monitor.""" + monitor = get_device_status_monitor() + monitor.stop() diff --git a/backend/services/recurring_schedule_service.py b/backend/services/recurring_schedule_service.py new file mode 100644 index 0000000..d4a8d83 --- /dev/null +++ b/backend/services/recurring_schedule_service.py @@ -0,0 +1,559 @@ +""" +Recurring Schedule Service + +Manages recurring schedule definitions and generates ScheduledAction +instances based on patterns (weekly calendar, simple interval). +""" + +import json +import uuid +import logging +from datetime import datetime, timedelta, date, time +from typing import Optional, List, Dict, Any, Tuple +from zoneinfo import ZoneInfo + +from sqlalchemy.orm import Session +from sqlalchemy import and_ + +from backend.models import RecurringSchedule, ScheduledAction, MonitoringLocation, UnitAssignment + +logger = logging.getLogger(__name__) + +# Day name mapping +DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"] + + +class RecurringScheduleService: + """ + Service for managing recurring schedules and generating ScheduledActions. + + Supports two schedule types: + - weekly_calendar: Specific days with start/end times + - simple_interval: Daily stop/download/restart cycles for 24/7 monitoring + """ + + def __init__(self, db: Session): + self.db = db + + def create_schedule( + self, + project_id: str, + location_id: str, + name: str, + schedule_type: str, + device_type: str = "slm", + unit_id: str = None, + weekly_pattern: dict = None, + interval_type: str = None, + cycle_time: str = None, + include_download: bool = True, + auto_increment_index: bool = True, + timezone: str = "America/New_York", + ) -> RecurringSchedule: + """ + Create a new recurring schedule. 
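A hedged example of using the service to define a weeknight schedule and preview the actions it would generate; the project and location IDs are placeholders, and the `weekly_pattern` shape follows the pattern documented in `_generate_weekly_calendar_actions` further down:

```python
from backend.database import SessionLocal
from backend.services.recurring_schedule_service import get_recurring_schedule_service

db = SessionLocal()
try:
    service = get_recurring_schedule_service(db)
    schedule = service.create_schedule(
        project_id="proj-123",    # placeholder
        location_id="loc-456",    # placeholder
        name="Weeknight sound monitoring",
        schedule_type="weekly_calendar",
        device_type="slm",
        weekly_pattern={
            "monday": {"enabled": True, "start": "19:00", "end": "07:00"},
            "tuesday": {"enabled": True, "start": "19:00", "end": "07:00"},
            "saturday": {"enabled": False},
        },
        timezone="America/New_York",
    )
    # Preview the actions for the coming week without saving them
    preview = service.generate_actions_for_schedule(schedule, horizon_days=7, preview_only=True)
    print(f"{len(preview)} actions would be generated")
finally:
    db.close()
```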
+ + Args: + project_id: Project ID + location_id: Monitoring location ID + name: Schedule name + schedule_type: "weekly_calendar" or "simple_interval" + device_type: "slm" or "seismograph" + unit_id: Specific unit (optional, can use assignment) + weekly_pattern: Dict of day patterns for weekly_calendar + interval_type: "daily" or "hourly" for simple_interval + cycle_time: Time string "HH:MM" for cycle + include_download: Whether to download data on cycle + auto_increment_index: Whether to auto-increment store index before start + timezone: Timezone for schedule times + + Returns: + Created RecurringSchedule + """ + schedule = RecurringSchedule( + id=str(uuid.uuid4()), + project_id=project_id, + location_id=location_id, + unit_id=unit_id, + name=name, + schedule_type=schedule_type, + device_type=device_type, + weekly_pattern=json.dumps(weekly_pattern) if weekly_pattern else None, + interval_type=interval_type, + cycle_time=cycle_time, + include_download=include_download, + auto_increment_index=auto_increment_index, + enabled=True, + timezone=timezone, + ) + + # Calculate next occurrence + schedule.next_occurrence = self._calculate_next_occurrence(schedule) + + self.db.add(schedule) + self.db.commit() + self.db.refresh(schedule) + + logger.info(f"Created recurring schedule: {name} ({schedule_type})") + return schedule + + def update_schedule( + self, + schedule_id: str, + **kwargs, + ) -> Optional[RecurringSchedule]: + """ + Update a recurring schedule. + + Args: + schedule_id: Schedule to update + **kwargs: Fields to update + + Returns: + Updated schedule or None + """ + schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first() + if not schedule: + return None + + for key, value in kwargs.items(): + if hasattr(schedule, key): + if key == "weekly_pattern" and isinstance(value, dict): + value = json.dumps(value) + setattr(schedule, key, value) + + # Recalculate next occurrence + schedule.next_occurrence = self._calculate_next_occurrence(schedule) + + self.db.commit() + self.db.refresh(schedule) + + logger.info(f"Updated recurring schedule: {schedule.name}") + return schedule + + def delete_schedule(self, schedule_id: str) -> bool: + """ + Delete a recurring schedule and its pending generated actions. 
+ + Args: + schedule_id: Schedule to delete + + Returns: + True if deleted, False if not found + """ + schedule = self.db.query(RecurringSchedule).filter_by(id=schedule_id).first() + if not schedule: + return False + + # Delete pending generated actions for this schedule + # The schedule_id is stored in the notes field as JSON + pending_actions = self.db.query(ScheduledAction).filter( + and_( + ScheduledAction.execution_status == "pending", + ScheduledAction.notes.like(f'%"schedule_id": "{schedule_id}"%'), + ) + ).all() + + deleted_count = len(pending_actions) + for action in pending_actions: + self.db.delete(action) + + self.db.delete(schedule) + self.db.commit() + + logger.info(f"Deleted recurring schedule: {schedule.name} (and {deleted_count} pending actions)") + return True + + def enable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]: + """Enable a disabled schedule.""" + return self.update_schedule(schedule_id, enabled=True) + + def disable_schedule(self, schedule_id: str) -> Optional[RecurringSchedule]: + """Disable a schedule.""" + return self.update_schedule(schedule_id, enabled=False) + + def generate_actions_for_schedule( + self, + schedule: RecurringSchedule, + horizon_days: int = 7, + preview_only: bool = False, + ) -> List[ScheduledAction]: + """ + Generate ScheduledAction entries for the next N days based on pattern. + + Args: + schedule: The recurring schedule + horizon_days: Days ahead to generate + preview_only: If True, don't save to DB (for preview) + + Returns: + List of generated ScheduledAction instances + """ + if not schedule.enabled: + return [] + + if schedule.schedule_type == "weekly_calendar": + actions = self._generate_weekly_calendar_actions(schedule, horizon_days) + elif schedule.schedule_type == "simple_interval": + actions = self._generate_interval_actions(schedule, horizon_days) + else: + logger.warning(f"Unknown schedule type: {schedule.schedule_type}") + return [] + + if not preview_only and actions: + for action in actions: + self.db.add(action) + + schedule.last_generated_at = datetime.utcnow() + schedule.next_occurrence = self._calculate_next_occurrence(schedule) + + self.db.commit() + logger.info(f"Generated {len(actions)} actions for schedule: {schedule.name}") + + return actions + + def _generate_weekly_calendar_actions( + self, + schedule: RecurringSchedule, + horizon_days: int, + ) -> List[ScheduledAction]: + """ + Generate actions from weekly calendar pattern. + + Pattern format: + { + "monday": {"enabled": true, "start": "19:00", "end": "07:00"}, + "tuesday": {"enabled": false}, + ... 
+ } + """ + if not schedule.weekly_pattern: + return [] + + try: + pattern = json.loads(schedule.weekly_pattern) + except json.JSONDecodeError: + logger.error(f"Invalid weekly_pattern JSON for schedule {schedule.id}") + return [] + + actions = [] + tz = ZoneInfo(schedule.timezone) + now_utc = datetime.utcnow() + now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz) + + # Get unit_id (from schedule or assignment) + unit_id = self._resolve_unit_id(schedule) + + for day_offset in range(horizon_days): + check_date = now_local.date() + timedelta(days=day_offset) + day_name = DAY_NAMES[check_date.weekday()] + day_config = pattern.get(day_name, {}) + + if not day_config.get("enabled", False): + continue + + start_time_str = day_config.get("start") + end_time_str = day_config.get("end") + + if not start_time_str or not end_time_str: + continue + + # Parse times + start_time = self._parse_time(start_time_str) + end_time = self._parse_time(end_time_str) + + if not start_time or not end_time: + continue + + # Create start datetime in local timezone + start_local = datetime.combine(check_date, start_time, tzinfo=tz) + start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + + # Handle overnight schedules (end time is next day) + if end_time <= start_time: + end_date = check_date + timedelta(days=1) + else: + end_date = check_date + + end_local = datetime.combine(end_date, end_time, tzinfo=tz) + end_utc = end_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + + # Skip if start time has already passed + if start_utc <= now_utc: + continue + + # Check if action already exists + if self._action_exists(schedule.project_id, schedule.location_id, "start", start_utc): + continue + + # Build notes with automation metadata + start_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + "auto_increment_index": schedule.auto_increment_index, + }) + + # Create START action + start_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="start", + device_type=schedule.device_type, + scheduled_time=start_utc, + execution_status="pending", + notes=start_notes, + ) + actions.append(start_action) + + # Create STOP action + stop_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + }) + stop_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="stop", + device_type=schedule.device_type, + scheduled_time=end_utc, + execution_status="pending", + notes=stop_notes, + ) + actions.append(stop_action) + + # Create DOWNLOAD action if enabled (1 minute after stop) + if schedule.include_download: + download_time = end_utc + timedelta(minutes=1) + download_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + "schedule_type": "weekly_calendar", + }) + download_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="download", + device_type=schedule.device_type, + scheduled_time=download_time, + execution_status="pending", + notes=download_notes, + ) + actions.append(download_action) + + return actions + + def _generate_interval_actions( + self, + schedule: RecurringSchedule, + horizon_days: int, + ) -> List[ScheduledAction]: + """ + Generate actions from simple interval pattern. 
+ + For daily cycles: stop, download (optional), start at cycle_time each day. + """ + if not schedule.cycle_time: + return [] + + cycle_time = self._parse_time(schedule.cycle_time) + if not cycle_time: + return [] + + actions = [] + tz = ZoneInfo(schedule.timezone) + now_utc = datetime.utcnow() + now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz) + + # Get unit_id + unit_id = self._resolve_unit_id(schedule) + + for day_offset in range(horizon_days): + check_date = now_local.date() + timedelta(days=day_offset) + + # Create cycle datetime in local timezone + cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz) + cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + + # Skip if time has passed + if cycle_utc <= now_utc: + continue + + # Check if action already exists + if self._action_exists(schedule.project_id, schedule.location_id, "stop", cycle_utc): + continue + + # Build notes with metadata + stop_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + "cycle_type": "daily", + }) + + # Create STOP action + stop_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="stop", + device_type=schedule.device_type, + scheduled_time=cycle_utc, + execution_status="pending", + notes=stop_notes, + ) + actions.append(stop_action) + + # Create DOWNLOAD action if enabled (1 minute after stop) + if schedule.include_download: + download_time = cycle_utc + timedelta(minutes=1) + download_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + "cycle_type": "daily", + }) + download_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="download", + device_type=schedule.device_type, + scheduled_time=download_time, + execution_status="pending", + notes=download_notes, + ) + actions.append(download_action) + + # Create START action (2 minutes after stop, or 1 minute after download) + start_offset = 2 if schedule.include_download else 1 + start_time = cycle_utc + timedelta(minutes=start_offset) + start_notes = json.dumps({ + "schedule_name": schedule.name, + "schedule_id": schedule.id, + "cycle_type": "daily", + "auto_increment_index": schedule.auto_increment_index, + }) + start_action = ScheduledAction( + id=str(uuid.uuid4()), + project_id=schedule.project_id, + location_id=schedule.location_id, + unit_id=unit_id, + action_type="start", + device_type=schedule.device_type, + scheduled_time=start_time, + execution_status="pending", + notes=start_notes, + ) + actions.append(start_action) + + return actions + + def _calculate_next_occurrence(self, schedule: RecurringSchedule) -> Optional[datetime]: + """Calculate when the next action should occur.""" + if not schedule.enabled: + return None + + tz = ZoneInfo(schedule.timezone) + now_utc = datetime.utcnow() + now_local = now_utc.replace(tzinfo=ZoneInfo("UTC")).astimezone(tz) + + if schedule.schedule_type == "weekly_calendar" and schedule.weekly_pattern: + try: + pattern = json.loads(schedule.weekly_pattern) + except: + return None + + # Find next enabled day + for day_offset in range(8): # Check up to a week ahead + check_date = now_local.date() + timedelta(days=day_offset) + day_name = DAY_NAMES[check_date.weekday()] + day_config = pattern.get(day_name, {}) + + if day_config.get("enabled") and day_config.get("start"): + start_time = 
self._parse_time(day_config["start"]) + if start_time: + start_local = datetime.combine(check_date, start_time, tzinfo=tz) + start_utc = start_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + if start_utc > now_utc: + return start_utc + + elif schedule.schedule_type == "simple_interval" and schedule.cycle_time: + cycle_time = self._parse_time(schedule.cycle_time) + if cycle_time: + # Find next cycle time + for day_offset in range(2): + check_date = now_local.date() + timedelta(days=day_offset) + cycle_local = datetime.combine(check_date, cycle_time, tzinfo=tz) + cycle_utc = cycle_local.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + if cycle_utc > now_utc: + return cycle_utc + + return None + + def _resolve_unit_id(self, schedule: RecurringSchedule) -> Optional[str]: + """Get unit_id from schedule or active assignment.""" + if schedule.unit_id: + return schedule.unit_id + + # Try to get from active assignment + assignment = self.db.query(UnitAssignment).filter( + and_( + UnitAssignment.location_id == schedule.location_id, + UnitAssignment.status == "active", + ) + ).first() + + return assignment.unit_id if assignment else None + + def _action_exists( + self, + project_id: str, + location_id: str, + action_type: str, + scheduled_time: datetime, + ) -> bool: + """Check if an action already exists for this time slot.""" + # Allow 5-minute window for duplicate detection + time_window_start = scheduled_time - timedelta(minutes=5) + time_window_end = scheduled_time + timedelta(minutes=5) + + exists = self.db.query(ScheduledAction).filter( + and_( + ScheduledAction.project_id == project_id, + ScheduledAction.location_id == location_id, + ScheduledAction.action_type == action_type, + ScheduledAction.scheduled_time >= time_window_start, + ScheduledAction.scheduled_time <= time_window_end, + ScheduledAction.execution_status == "pending", + ) + ).first() + + return exists is not None + + @staticmethod + def _parse_time(time_str: str) -> Optional[time]: + """Parse time string "HH:MM" to time object.""" + try: + parts = time_str.split(":") + return time(int(parts[0]), int(parts[1])) + except (ValueError, IndexError): + return None + + def get_schedules_for_project(self, project_id: str) -> List[RecurringSchedule]: + """Get all recurring schedules for a project.""" + return self.db.query(RecurringSchedule).filter_by(project_id=project_id).all() + + def get_enabled_schedules(self) -> List[RecurringSchedule]: + """Get all enabled recurring schedules.""" + return self.db.query(RecurringSchedule).filter_by(enabled=True).all() + + +def get_recurring_schedule_service(db: Session) -> RecurringScheduleService: + """Get a RecurringScheduleService instance.""" + return RecurringScheduleService(db) diff --git a/backend/services/scheduler.py b/backend/services/scheduler.py index 678f8ec..866ec64 100644 --- a/backend/services/scheduler.py +++ b/backend/services/scheduler.py @@ -4,22 +4,30 @@ Scheduler Service Executes scheduled actions for Projects system. Monitors pending scheduled actions and executes them by calling device modules (SLMM/SFM). +Extended to support recurring schedules: +- Generates ScheduledActions from RecurringSchedule patterns +- Cleans up old completed/failed actions + This service runs as a background task in FastAPI, checking for pending actions every minute and executing them when their scheduled time arrives. 
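The `_action_exists` guard above treats any pending action of the same type within five minutes of the requested slot as a duplicate. A trivial standalone illustration of that window:

```python
from datetime import datetime, timedelta

def same_slot(a: datetime, b: datetime, window_minutes: int = 5) -> bool:
    """True when two candidate scheduled times fall in the same duplicate-detection window."""
    return abs(a - b) <= timedelta(minutes=window_minutes)

# same_slot(datetime(2026, 1, 27, 19, 0), datetime(2026, 1, 27, 19, 3))  -> True
# same_slot(datetime(2026, 1, 27, 19, 0), datetime(2026, 1, 27, 19, 10)) -> False
```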
""" import asyncio import json +import logging from datetime import datetime, timedelta from typing import Optional, List, Dict, Any from sqlalchemy.orm import Session from sqlalchemy import and_ from backend.database import SessionLocal -from backend.models import ScheduledAction, RecordingSession, MonitoringLocation, Project +from backend.models import ScheduledAction, RecordingSession, MonitoringLocation, Project, RecurringSchedule from backend.services.device_controller import get_device_controller, DeviceControllerError +from backend.services.alert_service import get_alert_service import uuid +logger = logging.getLogger(__name__) + class SchedulerService: """ @@ -62,11 +70,26 @@ class SchedulerService: async def _run_loop(self): """Main scheduler loop.""" + # Track when we last generated recurring actions (do this once per hour) + last_generation_check = datetime.utcnow() - timedelta(hours=1) + while self.running: try: + # Execute pending actions await self.execute_pending_actions() + + # Generate actions from recurring schedules (every hour) + now = datetime.utcnow() + if (now - last_generation_check).total_seconds() >= 3600: + await self.generate_recurring_actions() + last_generation_check = now + + # Cleanup old actions (also every hour, during generation cycle) + if (now - last_generation_check).total_seconds() < 60: + await self.cleanup_old_actions() + except Exception as e: - print(f"Scheduler error: {e}") + logger.error(f"Scheduler error: {e}", exc_info=True) # Continue running even if there's an error await asyncio.sleep(self.check_interval) @@ -175,6 +198,21 @@ class SchedulerService: print(f"✓ Action {action.id} completed successfully") + # Create success alert + try: + alert_service = get_alert_service(db) + alert_metadata = response.get("cycle_response", {}) if isinstance(response, dict) else {} + alert_service.create_schedule_completed_alert( + schedule_id=action.id, + action_type=action.action_type, + unit_id=unit_id, + project_id=action.project_id, + location_id=action.location_id, + metadata=alert_metadata, + ) + except Exception as alert_err: + logger.warning(f"Failed to create success alert: {alert_err}") + except Exception as e: # Mark action as failed action.execution_status = "failed" @@ -185,6 +223,20 @@ class SchedulerService: print(f"✗ Action {action.id} failed: {e}") + # Create failure alert + try: + alert_service = get_alert_service(db) + alert_service.create_schedule_failed_alert( + schedule_id=action.id, + action_type=action.action_type, + unit_id=unit_id if 'unit_id' in dir() else action.unit_id, + error_message=str(e), + project_id=action.project_id, + location_id=action.location_id, + ) + except Exception as alert_err: + logger.warning(f"Failed to create failure alert: {alert_err}") + return result async def _execute_start( @@ -193,12 +245,19 @@ class SchedulerService: unit_id: str, db: Session, ) -> Dict[str, Any]: - """Execute a 'start' action.""" - # Start recording via device controller - response = await self.device_controller.start_recording( + """Execute a 'start' action using the start_cycle command. + + start_cycle handles: + 1. Sync device clock to server time + 2. Find next safe index (with overwrite protection) + 3. 
Start measurement + """ + # Execute the full start cycle via device controller + # SLMM handles clock sync, index increment, and start + cycle_response = await self.device_controller.start_cycle( unit_id, action.device_type, - config={}, # TODO: Load config from action.notes or metadata + sync_clock=True, ) # Create recording session @@ -207,17 +266,20 @@ class SchedulerService: project_id=action.project_id, location_id=action.location_id, unit_id=unit_id, - session_type="sound" if action.device_type == "sound_level_meter" else "vibration", + session_type="sound" if action.device_type == "slm" else "vibration", started_at=datetime.utcnow(), status="recording", - session_metadata=json.dumps({"scheduled_action_id": action.id}), + session_metadata=json.dumps({ + "scheduled_action_id": action.id, + "cycle_response": cycle_response, + }), ) db.add(session) return { "status": "started", "session_id": session.id, - "device_response": response, + "cycle_response": cycle_response, } async def _execute_stop( @@ -226,11 +288,29 @@ class SchedulerService: unit_id: str, db: Session, ) -> Dict[str, Any]: - """Execute a 'stop' action.""" - # Stop recording via device controller - response = await self.device_controller.stop_recording( + """Execute a 'stop' action using the stop_cycle command. + + stop_cycle handles: + 1. Stop measurement + 2. Enable FTP + 3. Download measurement folder + 4. Verify download + """ + # Parse notes for download preference + include_download = True + try: + if action.notes: + notes_data = json.loads(action.notes) + include_download = notes_data.get("include_download", True) + except json.JSONDecodeError: + pass # Notes is plain text, not JSON + + # Execute the full stop cycle via device controller + # SLMM handles stop, FTP enable, and download + cycle_response = await self.device_controller.stop_cycle( unit_id, action.device_type, + download=include_download, ) # Find and update the active recording session @@ -248,11 +328,20 @@ class SchedulerService: active_session.duration_seconds = int( (active_session.stopped_at - active_session.started_at).total_seconds() ) + # Store download info in session metadata + if cycle_response.get("download_success"): + try: + metadata = json.loads(active_session.session_metadata or "{}") + metadata["downloaded_folder"] = cycle_response.get("downloaded_folder") + metadata["local_path"] = cycle_response.get("local_path") + active_session.session_metadata = json.dumps(metadata) + except json.JSONDecodeError: + pass return { "status": "stopped", "session_id": active_session.id if active_session else None, - "device_response": response, + "cycle_response": cycle_response, } async def _execute_download( @@ -272,7 +361,7 @@ class SchedulerService: # Build destination path # Example: data/Projects/{project-id}/sound/{location-name}/session-{timestamp}/ session_timestamp = datetime.utcnow().strftime("%Y-%m-%d-%H%M") - location_type_dir = "sound" if action.device_type == "sound_level_meter" else "vibration" + location_type_dir = "sound" if action.device_type == "slm" else "vibration" destination_path = ( f"data/Projects/{project.id}/{location_type_dir}/" @@ -295,6 +384,90 @@ class SchedulerService: "device_response": response, } + # ======================================================================== + # Recurring Schedule Generation + # ======================================================================== + + async def generate_recurring_actions(self) -> int: + """ + Generate ScheduledActions from all enabled recurring schedules. 
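A minimal sketch of the download destination convention described in `_execute_download` above (`data/Projects/{project-id}/{sound|vibration}/{location-name}/session-{timestamp}/`); the location name argument is illustrative:

```python
from datetime import datetime, timezone

def build_destination_path(project_id: str, location_name: str, device_type: str) -> str:
    """Build the per-session download folder following the scheduler's convention."""
    kind = "sound" if device_type == "slm" else "vibration"
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
    return f"data/Projects/{project_id}/{kind}/{location_name}/session-{stamp}/"

# build_destination_path("proj-123", "north-fence", "slm")
# -> "data/Projects/proj-123/sound/north-fence/session-2026-01-27-1830/"
```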
+ + Runs once per hour to generate actions for the next 7 days. + + Returns: + Total number of actions generated + """ + db = SessionLocal() + total_generated = 0 + + try: + from backend.services.recurring_schedule_service import get_recurring_schedule_service + + service = get_recurring_schedule_service(db) + schedules = service.get_enabled_schedules() + + if not schedules: + logger.debug("No enabled recurring schedules found") + return 0 + + logger.info(f"Generating actions for {len(schedules)} recurring schedule(s)") + + for schedule in schedules: + try: + actions = service.generate_actions_for_schedule(schedule, horizon_days=7) + total_generated += len(actions) + except Exception as e: + logger.error(f"Error generating actions for schedule {schedule.id}: {e}") + + if total_generated > 0: + logger.info(f"Generated {total_generated} scheduled actions from recurring schedules") + + except Exception as e: + logger.error(f"Error in generate_recurring_actions: {e}", exc_info=True) + finally: + db.close() + + return total_generated + + async def cleanup_old_actions(self, retention_days: int = 30) -> int: + """ + Remove old completed/failed actions to prevent database bloat. + + Args: + retention_days: Keep actions newer than this many days + + Returns: + Number of actions cleaned up + """ + db = SessionLocal() + cleaned = 0 + + try: + cutoff = datetime.utcnow() - timedelta(days=retention_days) + + old_actions = db.query(ScheduledAction).filter( + and_( + ScheduledAction.execution_status.in_(["completed", "failed", "cancelled"]), + ScheduledAction.executed_at < cutoff, + ) + ).all() + + cleaned = len(old_actions) + for action in old_actions: + db.delete(action) + + if cleaned > 0: + db.commit() + logger.info(f"Cleaned up {cleaned} old scheduled actions (>{retention_days} days)") + + except Exception as e: + logger.error(f"Error cleaning up old actions: {e}") + db.rollback() + finally: + db.close() + + return cleaned + # ======================================================================== # Manual Execution (for testing/debugging) # ======================================================================== diff --git a/backend/services/slmm_client.py b/backend/services/slmm_client.py index f04badf..a242c12 100644 --- a/backend/services/slmm_client.py +++ b/backend/services/slmm_client.py @@ -9,13 +9,14 @@ that handles TCP/FTP communication with Rion NL-43/NL-53 devices. """ import httpx +import os from typing import Optional, Dict, Any, List from datetime import datetime import json -# SLMM backend base URLs -SLMM_BASE_URL = "http://localhost:8100" +# SLMM backend base URLs - use environment variable if set (for Docker) +SLMM_BASE_URL = os.environ.get("SLMM_BASE_URL", "http://localhost:8100") SLMM_API_BASE = f"{SLMM_BASE_URL}/api/nl43" @@ -276,6 +277,124 @@ class SLMMClient: """ return await self._request("POST", f"/{unit_id}/reset") + # ======================================================================== + # Store/Index Management + # ======================================================================== + + async def get_index_number(self, unit_id: str) -> Dict[str, Any]: + """ + Get current store/index number from device. + + Args: + unit_id: Unit identifier + + Returns: + Dict with current index_number (store name) + """ + return await self._request("GET", f"/{unit_id}/index-number") + + async def set_index_number( + self, + unit_id: str, + index_number: int, + ) -> Dict[str, Any]: + """ + Set store/index number on device. 
+ + Args: + unit_id: Unit identifier + index_number: New index number to set + + Returns: + Confirmation response + """ + return await self._request( + "PUT", + f"/{unit_id}/index-number", + data={"index_number": index_number}, + ) + + async def check_overwrite_status(self, unit_id: str) -> Dict[str, Any]: + """ + Check if data exists at the current store index. + + Args: + unit_id: Unit identifier + + Returns: + Dict with: + - overwrite_status: "None" (safe) or "Exist" (would overwrite) + - will_overwrite: bool + - safe_to_store: bool + """ + return await self._request("GET", f"/{unit_id}/overwrite-check") + + async def increment_index(self, unit_id: str, max_attempts: int = 100) -> Dict[str, Any]: + """ + Find and set the next available (unused) store/index number. + + Checks the current index - if it would overwrite existing data, + increments until finding an unused index number. + + Args: + unit_id: Unit identifier + max_attempts: Maximum number of indices to try before giving up + + Returns: + Dict with old_index, new_index, and attempts_made + """ + # Get current index + current = await self.get_index_number(unit_id) + old_index = current.get("index_number", 0) + + # Check if current index is safe + overwrite_check = await self.check_overwrite_status(unit_id) + if overwrite_check.get("safe_to_store", False): + # Current index is safe, no need to increment + return { + "success": True, + "old_index": old_index, + "new_index": old_index, + "unit_id": unit_id, + "already_safe": True, + "attempts_made": 0, + } + + # Need to find an unused index + attempts = 0 + test_index = old_index + 1 + + while attempts < max_attempts: + # Set the new index + await self.set_index_number(unit_id, test_index) + + # Check if this index is safe + overwrite_check = await self.check_overwrite_status(unit_id) + attempts += 1 + + if overwrite_check.get("safe_to_store", False): + return { + "success": True, + "old_index": old_index, + "new_index": test_index, + "unit_id": unit_id, + "already_safe": False, + "attempts_made": attempts, + } + + # Try next index (wrap around at 9999) + test_index = (test_index + 1) % 10000 + + # Avoid infinite loops if we've wrapped around + if test_index == old_index: + break + + # Could not find a safe index + raise SLMMDeviceError( + f"Could not find unused store index for {unit_id} after {attempts} attempts. " + f"Consider downloading and clearing data from the device." + ) + # ======================================================================== # Device Settings # ======================================================================== @@ -387,6 +506,135 @@ class SLMMClient: } return await self._request("POST", f"/{unit_id}/ftp/download", data=data) + # ======================================================================== + # Cycle Commands (for scheduled automation) + # ======================================================================== + + async def start_cycle( + self, + unit_id: str, + sync_clock: bool = True, + ) -> Dict[str, Any]: + """ + Execute complete start cycle on device via SLMM. + + This handles the full pre-recording workflow: + 1. Sync device clock to server time + 2. Find next safe index (with overwrite protection) + 3. Start measurement + + Args: + unit_id: Unit identifier + sync_clock: Whether to sync device clock to server time + + Returns: + Dict with clock_synced, old_index, new_index, started, etc. 
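The overwrite-protection search in `increment_index` above can be summarized without the device round-trips. A standalone sketch, with already-used indices supplied as a set:

```python
from typing import Optional, Set

def next_safe_index(current: int, used: Set[int], max_attempts: int = 100) -> Optional[int]:
    """Return the current index if unused, otherwise the next free index (wrapping at 9999)."""
    if current not in used:
        return current
    candidate = (current + 1) % 10000
    for _ in range(max_attempts):
        if candidate not in used:
            return candidate
        candidate = (candidate + 1) % 10000
        if candidate == current:   # wrapped all the way around
            break
    return None  # no free index found within the attempt budget

# next_safe_index(7, {7, 8}) -> 9
```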
+ """ + return await self._request( + "POST", + f"/{unit_id}/start-cycle", + data={"sync_clock": sync_clock}, + ) + + async def stop_cycle( + self, + unit_id: str, + download: bool = True, + download_path: Optional[str] = None, + ) -> Dict[str, Any]: + """ + Execute complete stop cycle on device via SLMM. + + This handles the full post-recording workflow: + 1. Stop measurement + 2. Enable FTP + 3. Download measurement folder (if download=True) + 4. Verify download + + Args: + unit_id: Unit identifier + download: Whether to download measurement data + download_path: Custom path for downloaded ZIP (optional) + + Returns: + Dict with stopped, ftp_enabled, download_success, local_path, etc. + """ + data = {"download": download} + if download_path: + data["download_path"] = download_path + return await self._request( + "POST", + f"/{unit_id}/stop-cycle", + data=data, + ) + + # ======================================================================== + # Polling Status (for device monitoring/alerts) + # ======================================================================== + + async def get_polling_status(self) -> Dict[str, Any]: + """ + Get global polling status from SLMM. + + Returns device reachability information for all polled devices. + Used by DeviceStatusMonitor to detect offline/online transitions. + + Returns: + Dict with devices list containing: + - unit_id + - is_reachable + - consecutive_failures + - last_poll_attempt + - last_success + - last_error + """ + try: + async with httpx.AsyncClient(timeout=self.timeout) as client: + response = await client.get(f"{self.base_url}/api/nl43/_polling/status") + response.raise_for_status() + return response.json() + except httpx.ConnectError: + raise SLMMConnectionError("Cannot connect to SLMM for polling status") + except Exception as e: + raise SLMMClientError(f"Failed to get polling status: {str(e)}") + + async def get_device_polling_config(self, unit_id: str) -> Dict[str, Any]: + """ + Get polling configuration for a specific device. + + Args: + unit_id: Unit identifier + + Returns: + Dict with poll_enabled and poll_interval_seconds + """ + return await self._request("GET", f"/{unit_id}/polling/config") + + async def update_device_polling_config( + self, + unit_id: str, + poll_enabled: Optional[bool] = None, + poll_interval_seconds: Optional[int] = None, + ) -> Dict[str, Any]: + """ + Update polling configuration for a device. + + Args: + unit_id: Unit identifier + poll_enabled: Enable/disable polling + poll_interval_seconds: Polling interval (10-3600) + + Returns: + Updated config + """ + config = {} + if poll_enabled is not None: + config["poll_enabled"] = poll_enabled + if poll_interval_seconds is not None: + config["poll_interval_seconds"] = poll_interval_seconds + + return await self._request("PUT", f"/{unit_id}/polling/config", data=config) + # ======================================================================== # Health Check # ======================================================================== diff --git a/backend/services/slmm_sync.py b/backend/services/slmm_sync.py new file mode 100644 index 0000000..78667f0 --- /dev/null +++ b/backend/services/slmm_sync.py @@ -0,0 +1,227 @@ +""" +SLMM Synchronization Service + +This service ensures Terra-View roster is the single source of truth for SLM device configuration. +When SLM devices are added, edited, or deleted in Terra-View, changes are automatically synced to SLMM. 
+""" + +import logging +import httpx +import os +from typing import Optional +from sqlalchemy.orm import Session + +from backend.models import RosterUnit + +logger = logging.getLogger(__name__) + +SLMM_BASE_URL = os.getenv("SLMM_BASE_URL", "http://localhost:8100") + + +async def sync_slm_to_slmm(unit: RosterUnit) -> bool: + """ + Sync a single SLM device from Terra-View roster to SLMM. + + Args: + unit: RosterUnit with device_type="slm" + + Returns: + True if sync successful, False otherwise + """ + if unit.device_type != "slm": + logger.warning(f"Attempted to sync non-SLM unit {unit.id} to SLMM") + return False + + if not unit.slm_host: + logger.warning(f"SLM {unit.id} has no host configured, skipping SLMM sync") + return False + + try: + async with httpx.AsyncClient(timeout=5.0) as client: + response = await client.put( + f"{SLMM_BASE_URL}/api/nl43/{unit.id}/config", + json={ + "host": unit.slm_host, + "tcp_port": unit.slm_tcp_port or 2255, + "tcp_enabled": True, + "ftp_enabled": True, + "ftp_username": "USER", # Default NL43 credentials + "ftp_password": "0000", + "poll_enabled": not unit.retired, # Disable polling for retired units + "poll_interval_seconds": 60, # Default interval + } + ) + + if response.status_code in [200, 201]: + logger.info(f"✓ Synced SLM {unit.id} to SLMM at {unit.slm_host}:{unit.slm_tcp_port or 2255}") + return True + else: + logger.error(f"Failed to sync SLM {unit.id} to SLMM: {response.status_code} {response.text}") + return False + + except httpx.TimeoutException: + logger.error(f"Timeout syncing SLM {unit.id} to SLMM") + return False + except Exception as e: + logger.error(f"Error syncing SLM {unit.id} to SLMM: {e}") + return False + + +async def delete_slm_from_slmm(unit_id: str) -> bool: + """ + Delete a device from SLMM database. + + Args: + unit_id: The unit ID to delete + + Returns: + True if deletion successful or device doesn't exist, False on error + """ + try: + async with httpx.AsyncClient(timeout=5.0) as client: + response = await client.delete( + f"{SLMM_BASE_URL}/api/nl43/{unit_id}/config" + ) + + if response.status_code == 200: + logger.info(f"✓ Deleted SLM {unit_id} from SLMM") + return True + elif response.status_code == 404: + logger.info(f"SLM {unit_id} not found in SLMM (already deleted)") + return True + else: + logger.error(f"Failed to delete SLM {unit_id} from SLMM: {response.status_code} {response.text}") + return False + + except httpx.TimeoutException: + logger.error(f"Timeout deleting SLM {unit_id} from SLMM") + return False + except Exception as e: + logger.error(f"Error deleting SLM {unit_id} from SLMM: {e}") + return False + + +async def sync_all_slms_to_slmm(db: Session) -> dict: + """ + Sync all SLM devices from Terra-View roster to SLMM. + + This ensures SLMM database matches Terra-View roster as the source of truth. + Should be called on Terra-View startup and optionally via admin endpoint. 
+ + Args: + db: Database session + + Returns: + Dictionary with sync results + """ + logger.info("Starting full SLM sync to SLMM...") + + # Get all SLM units from roster + slm_units = db.query(RosterUnit).filter_by(device_type="slm").all() + + results = { + "total": len(slm_units), + "synced": 0, + "skipped": 0, + "failed": 0 + } + + for unit in slm_units: + # Skip units without host configured + if not unit.slm_host: + results["skipped"] += 1 + logger.debug(f"Skipped {unit.unit_type} - no host configured") + continue + + # Sync to SLMM + success = await sync_slm_to_slmm(unit) + if success: + results["synced"] += 1 + else: + results["failed"] += 1 + + logger.info( + f"SLM sync complete: {results['synced']} synced, " + f"{results['skipped']} skipped, {results['failed']} failed" + ) + + return results + + +async def get_slmm_devices() -> Optional[list]: + """ + Get list of all devices currently in SLMM database. + + Returns: + List of device unit_ids, or None on error + """ + try: + async with httpx.AsyncClient(timeout=5.0) as client: + response = await client.get(f"{SLMM_BASE_URL}/api/nl43/_polling/status") + + if response.status_code == 200: + data = response.json() + return [device["unit_id"] for device in data["data"]["devices"]] + else: + logger.error(f"Failed to get SLMM devices: {response.status_code}") + return None + + except Exception as e: + logger.error(f"Error getting SLMM devices: {e}") + return None + + +async def cleanup_orphaned_slmm_devices(db: Session) -> dict: + """ + Remove devices from SLMM that are not in Terra-View roster. + + This cleans up orphaned test devices or devices that were manually added to SLMM. + + Args: + db: Database session + + Returns: + Dictionary with cleanup results + """ + logger.info("Checking for orphaned devices in SLMM...") + + # Get all device IDs from SLMM + slmm_devices = await get_slmm_devices() + if slmm_devices is None: + return {"error": "Failed to get SLMM device list"} + + # Get all SLM unit IDs from Terra-View roster + roster_units = db.query(RosterUnit.id).filter_by(device_type="slm").all() + roster_unit_ids = {unit.id for unit in roster_units} + + # Find orphaned devices (in SLMM but not in roster) + orphaned = [uid for uid in slmm_devices if uid not in roster_unit_ids] + + results = { + "total_in_slmm": len(slmm_devices), + "total_in_roster": len(roster_unit_ids), + "orphaned": len(orphaned), + "deleted": 0, + "failed": 0, + "orphaned_devices": orphaned + } + + if not orphaned: + logger.info("No orphaned devices found in SLMM") + return results + + logger.info(f"Found {len(orphaned)} orphaned devices in SLMM: {orphaned}") + + # Delete orphaned devices + for unit_id in orphaned: + success = await delete_slm_from_slmm(unit_id) + if success: + results["deleted"] += 1 + else: + results["failed"] += 1 + + logger.info( + f"Cleanup complete: {results['deleted']} deleted, {results['failed']} failed" + ) + + return results diff --git a/backend/static/icons/favicon-16.png b/backend/static/icons/favicon-16.png new file mode 100644 index 0000000..bb9c326 Binary files /dev/null and b/backend/static/icons/favicon-16.png differ diff --git a/backend/static/icons/favicon-32.png b/backend/static/icons/favicon-32.png new file mode 100644 index 0000000..7c0f5ad Binary files /dev/null and b/backend/static/icons/favicon-32.png differ diff --git a/backend/static/icons/icon-128.png b/backend/static/icons/icon-128.png index 83af799..21eb7cd 100644 Binary files a/backend/static/icons/icon-128.png and b/backend/static/icons/icon-128.png differ diff --git 
a/backend/static/icons/icon-144.png b/backend/static/icons/icon-144.png index d8d90b5..a454963 100644 Binary files a/backend/static/icons/icon-144.png and b/backend/static/icons/icon-144.png differ diff --git a/backend/static/icons/icon-152.png b/backend/static/icons/icon-152.png index 9ef75af..4d505ac 100644 Binary files a/backend/static/icons/icon-152.png and b/backend/static/icons/icon-152.png differ diff --git a/backend/static/icons/icon-192.png b/backend/static/icons/icon-192.png index 3290b47..9e6ac30 100644 Binary files a/backend/static/icons/icon-192.png and b/backend/static/icons/icon-192.png differ diff --git a/backend/static/icons/icon-384.png b/backend/static/icons/icon-384.png index 2cf0aef..2b5d857 100644 Binary files a/backend/static/icons/icon-384.png and b/backend/static/icons/icon-384.png differ diff --git a/backend/static/icons/icon-512.png b/backend/static/icons/icon-512.png index b2c82dd..1e4b4cd 100644 Binary files a/backend/static/icons/icon-512.png and b/backend/static/icons/icon-512.png differ diff --git a/backend/static/icons/icon-72.png b/backend/static/icons/icon-72.png index d0d0359..1a5a1c1 100644 Binary files a/backend/static/icons/icon-72.png and b/backend/static/icons/icon-72.png differ diff --git a/backend/static/icons/icon-96.png b/backend/static/icons/icon-96.png index cbcff51..a779b31 100644 Binary files a/backend/static/icons/icon-96.png and b/backend/static/icons/icon-96.png differ diff --git a/backend/static/terra-view-logo-dark.png b/backend/static/terra-view-logo-dark.png new file mode 100644 index 0000000..200ff5e Binary files /dev/null and b/backend/static/terra-view-logo-dark.png differ diff --git a/backend/static/terra-view-logo-dark@2x.png b/backend/static/terra-view-logo-dark@2x.png new file mode 100644 index 0000000..96dec0a Binary files /dev/null and b/backend/static/terra-view-logo-dark@2x.png differ diff --git a/backend/static/terra-view-logo-light.png b/backend/static/terra-view-logo-light.png new file mode 100644 index 0000000..5000328 Binary files /dev/null and b/backend/static/terra-view-logo-light.png differ diff --git a/backend/static/terra-view-logo-light@2x.png b/backend/static/terra-view-logo-light@2x.png new file mode 100644 index 0000000..51c0123 Binary files /dev/null and b/backend/static/terra-view-logo-light@2x.png differ diff --git a/backend/templates_config.py b/backend/templates_config.py new file mode 100644 index 0000000..c0e4212 --- /dev/null +++ b/backend/templates_config.py @@ -0,0 +1,39 @@ +""" +Shared Jinja2 templates configuration. + +All routers should import `templates` from this module to get consistent +filter and global function registration. 
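+
+Example (illustrative sketch; the template name and route context are assumptions)::
+
+    from backend.templates_config import templates
+
+    return templates.TemplateResponse("dashboard.html", {"request": request})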
+""" + +from fastapi.templating import Jinja2Templates + +# Import timezone utilities +from backend.utils.timezone import ( + format_local_datetime, format_local_time, + get_user_timezone, get_timezone_abbreviation +) + + +def jinja_local_datetime(dt, fmt="%Y-%m-%d %H:%M"): + """Jinja filter to convert UTC datetime to local timezone.""" + return format_local_datetime(dt, fmt) + + +def jinja_local_time(dt): + """Jinja filter to format time in local timezone.""" + return format_local_time(dt) + + +def jinja_timezone_abbr(): + """Jinja global to get current timezone abbreviation.""" + return get_timezone_abbreviation() + + +# Create templates instance +templates = Jinja2Templates(directory="templates") + +# Register Jinja filters and globals +templates.env.filters["local_datetime"] = jinja_local_datetime +templates.env.filters["local_time"] = jinja_local_time +templates.env.globals["timezone_abbr"] = jinja_timezone_abbr +templates.env.globals["get_user_timezone"] = get_user_timezone diff --git a/backend/utils/__init__.py b/backend/utils/__init__.py new file mode 100644 index 0000000..dd7ee44 --- /dev/null +++ b/backend/utils/__init__.py @@ -0,0 +1 @@ +# Utils package diff --git a/backend/utils/timezone.py b/backend/utils/timezone.py new file mode 100644 index 0000000..6a426cf --- /dev/null +++ b/backend/utils/timezone.py @@ -0,0 +1,173 @@ +""" +Timezone utilities for Terra-View. + +Provides consistent timezone handling throughout the application. +All database times are stored in UTC; this module converts for display. +""" + +from datetime import datetime +from zoneinfo import ZoneInfo +from typing import Optional + +from backend.database import SessionLocal +from backend.models import UserPreferences + + +# Default timezone if none set +DEFAULT_TIMEZONE = "America/New_York" + + +def get_user_timezone() -> str: + """ + Get the user's configured timezone from preferences. + + Returns: + Timezone string (e.g., "America/New_York") + """ + db = SessionLocal() + try: + prefs = db.query(UserPreferences).filter_by(id=1).first() + if prefs and prefs.timezone: + return prefs.timezone + return DEFAULT_TIMEZONE + finally: + db.close() + + +def get_timezone_info(tz_name: str = None) -> ZoneInfo: + """ + Get ZoneInfo object for the specified or user's timezone. + + Args: + tz_name: Timezone name, or None to use user preference + + Returns: + ZoneInfo object + """ + if tz_name is None: + tz_name = get_user_timezone() + try: + return ZoneInfo(tz_name) + except Exception: + return ZoneInfo(DEFAULT_TIMEZONE) + + +def utc_to_local(dt: datetime, tz_name: str = None) -> datetime: + """ + Convert a UTC datetime to local timezone. + + Args: + dt: Datetime in UTC (naive or aware) + tz_name: Target timezone, or None to use user preference + + Returns: + Datetime in local timezone + """ + if dt is None: + return None + + tz = get_timezone_info(tz_name) + + # Assume naive datetime is UTC + if dt.tzinfo is None: + dt = dt.replace(tzinfo=ZoneInfo("UTC")) + + return dt.astimezone(tz) + + +def local_to_utc(dt: datetime, tz_name: str = None) -> datetime: + """ + Convert a local datetime to UTC. 
+ + Args: + dt: Datetime in local timezone (naive or aware) + tz_name: Source timezone, or None to use user preference + + Returns: + Datetime in UTC (naive, for database storage) + """ + if dt is None: + return None + + tz = get_timezone_info(tz_name) + + # Assume naive datetime is in local timezone + if dt.tzinfo is None: + dt = dt.replace(tzinfo=tz) + + # Convert to UTC and strip tzinfo for database storage + return dt.astimezone(ZoneInfo("UTC")).replace(tzinfo=None) + + +def format_local_datetime(dt: datetime, fmt: str = "%Y-%m-%d %H:%M", tz_name: str = None) -> str: + """ + Format a UTC datetime as local time string. + + Args: + dt: Datetime in UTC + fmt: strftime format string + tz_name: Target timezone, or None to use user preference + + Returns: + Formatted datetime string in local time + """ + if dt is None: + return "N/A" + + local_dt = utc_to_local(dt, tz_name) + return local_dt.strftime(fmt) + + +def format_local_time(dt: datetime, tz_name: str = None) -> str: + """ + Format a UTC datetime as local time (HH:MM format). + + Args: + dt: Datetime in UTC + tz_name: Target timezone + + Returns: + Time string in HH:MM format + """ + return format_local_datetime(dt, "%H:%M", tz_name) + + +def format_local_date(dt: datetime, tz_name: str = None) -> str: + """ + Format a UTC datetime as local date (YYYY-MM-DD format). + + Args: + dt: Datetime in UTC + tz_name: Target timezone + + Returns: + Date string + """ + return format_local_datetime(dt, "%Y-%m-%d", tz_name) + + +def get_timezone_abbreviation(tz_name: str = None) -> str: + """ + Get the abbreviation for a timezone (e.g., EST, EDT, PST). + + Args: + tz_name: Timezone name, or None to use user preference + + Returns: + Timezone abbreviation + """ + tz = get_timezone_info(tz_name) + now = datetime.now(tz) + return now.strftime("%Z") + + +# Common US timezone choices for settings dropdown +TIMEZONE_CHOICES = [ + ("America/New_York", "Eastern Time (ET)"), + ("America/Chicago", "Central Time (CT)"), + ("America/Denver", "Mountain Time (MT)"), + ("America/Los_Angeles", "Pacific Time (PT)"), + ("America/Anchorage", "Alaska Time (AKT)"), + ("Pacific/Honolulu", "Hawaii Time (HT)"), + ("UTC", "UTC"), +] diff --git a/docker-compose.yml b/docker-compose.yml index 876487b..1de4897 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,7 +1,7 @@ services: # --- TERRA-VIEW PRODUCTION --- - terra-view-prod: + terra-view: build: . container_name: terra-view ports: diff --git a/docs/DEVICE_TYPE_DASHBOARDS.md b/docs/DEVICE_TYPE_DASHBOARDS.md index e6c8913..b39878a 100644 --- a/docs/DEVICE_TYPE_DASHBOARDS.md +++ b/docs/DEVICE_TYPE_DASHBOARDS.md @@ -125,7 +125,7 @@ seismos = db.query(RosterUnit).filter_by( ### Sound Level Meters Query ```python slms = db.query(RosterUnit).filter_by( - device_type="sound_level_meter", + device_type="slm", retired=False ).all() ``` diff --git a/docs/DEVICE_TYPE_SCHEMA.md b/docs/DEVICE_TYPE_SCHEMA.md new file mode 100644 index 0000000..4624e5b --- /dev/null +++ b/docs/DEVICE_TYPE_SCHEMA.md @@ -0,0 +1,288 @@ +# Device Type Schema - Terra-View + +## Overview + +Terra-View uses a single roster table to manage three different device types. The `device_type` field is the primary discriminator that determines which fields are relevant for each unit. + +## Official device_type Values + +As of **Terra-View v0.4.3**, the following device_type values are standardized: + +### 1. 
`"seismograph"` (Default) +**Purpose**: Seismic monitoring devices + +**Applicable Fields**: +- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates +- Specific: last_calibrated, next_calibration_due, deployed_with_modem_id + +**Examples**: +- `BE1234` - Series 3 seismograph +- `UM12345` - Series 4 Micromate unit +- `SEISMO-001` - Custom seismograph + +**Unit Type Values**: +- `series3` - Series 3 devices (default) +- `series4` - Series 4 devices +- `micromate` - Micromate devices + +--- + +### 2. `"modem"` +**Purpose**: Field modems and network equipment + +**Applicable Fields**: +- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates +- Specific: ip_address, phone_number, hardware_model + +**Examples**: +- `MDM001` - Field modem +- `MODEM-2025-01` - Network modem +- `RAVEN-XTV-01` - Specific modem model + +**Unit Type Values**: +- `modem` - Generic modem +- `raven-xtv` - Raven XTV model +- Custom values for specific hardware + +--- + +### 3. `"slm"` ⭐ +**Purpose**: Sound level meters (Rion NL-43/NL-53) + +**Applicable Fields**: +- Common: id, unit_type, deployed, retired, note, project_id, location, address, coordinates +- Specific: slm_host, slm_tcp_port, slm_ftp_port, slm_model, slm_serial_number, slm_frequency_weighting, slm_time_weighting, slm_measurement_range, slm_last_check, deployed_with_modem_id + +**Examples**: +- `SLM-43-01` - NL-43 sound level meter +- `NL43-001` - NL-43 unit +- `NL53-002` - NL-53 unit + +**Unit Type Values**: +- `nl43` - Rion NL-43 model +- `nl53` - Rion NL-53 model + +--- + +## Migration from Legacy Values + +### Deprecated Values + +The following device_type values have been **deprecated** and should be migrated: + +- ❌ `"sound_level_meter"` → ✅ `"slm"` + +### How to Migrate + +Run the standardization migration script to update existing databases: + +```bash +cd /home/serversdown/tmi/terra-view +python3 backend/migrate_standardize_device_types.py +``` + +This script: +- Converts all `"sound_level_meter"` values to `"slm"` +- Is idempotent (safe to run multiple times) +- Shows before/after distribution of device types +- No data loss + +--- + +## Database Schema + +### RosterUnit Model (`backend/models.py`) + +```python +class RosterUnit(Base): + """ + Supports multiple device types: + - "seismograph" - Seismic monitoring devices (default) + - "modem" - Field modems and network equipment + - "slm" - Sound level meters (NL-43/NL-53) + """ + __tablename__ = "roster" + + # Core fields (all device types) + id = Column(String, primary_key=True) + unit_type = Column(String, default="series3") + device_type = Column(String, default="seismograph") # "seismograph" | "modem" | "slm" + deployed = Column(Boolean, default=True) + retired = Column(Boolean, default=False) + # ... 
other common fields + + # Seismograph-specific + last_calibrated = Column(Date, nullable=True) + next_calibration_due = Column(Date, nullable=True) + + # Modem-specific + ip_address = Column(String, nullable=True) + phone_number = Column(String, nullable=True) + hardware_model = Column(String, nullable=True) + + # SLM-specific + slm_host = Column(String, nullable=True) + slm_tcp_port = Column(Integer, nullable=True) + slm_ftp_port = Column(Integer, nullable=True) + slm_model = Column(String, nullable=True) + slm_serial_number = Column(String, nullable=True) + slm_frequency_weighting = Column(String, nullable=True) + slm_time_weighting = Column(String, nullable=True) + slm_measurement_range = Column(String, nullable=True) + slm_last_check = Column(DateTime, nullable=True) + + # Shared fields (seismograph + SLM) + deployed_with_modem_id = Column(String, nullable=True) # FK to modem +``` + +--- + +## API Usage + +### Adding a New Unit + +**Seismograph**: +```bash +curl -X POST http://localhost:8001/api/roster/add \ + -F "id=BE1234" \ + -F "device_type=seismograph" \ + -F "unit_type=series3" \ + -F "deployed=true" +``` + +**Modem**: +```bash +curl -X POST http://localhost:8001/api/roster/add \ + -F "id=MDM001" \ + -F "device_type=modem" \ + -F "ip_address=192.0.2.10" \ + -F "phone_number=+1-555-0100" +``` + +**Sound Level Meter**: +```bash +curl -X POST http://localhost:8001/api/roster/add \ + -F "id=SLM-43-01" \ + -F "device_type=slm" \ + -F "slm_host=63.45.161.30" \ + -F "slm_tcp_port=2255" \ + -F "slm_model=NL-43" +``` + +### CSV Import Format + +```csv +unit_id,unit_type,device_type,deployed,slm_host,slm_tcp_port,slm_model +SLM-43-01,nl43,slm,true,63.45.161.30,2255,NL-43 +SLM-43-02,nl43,slm,true,63.45.161.31,2255,NL-43 +BE1234,series3,seismograph,true,,, +MDM001,modem,modem,true,,, +``` + +--- + +## Frontend Behavior + +### Device Type Selection + +**Templates**: `unit_detail.html`, `roster.html` + +```html + +``` + +### Conditional Field Display + +JavaScript functions check `device_type` to show/hide relevant fields: + +```javascript +function toggleDetailFields() { + const deviceType = document.getElementById('device_type').value; + + if (deviceType === 'seismograph') { + // Show calibration fields + } else if (deviceType === 'modem') { + // Show network fields + } else if (deviceType === 'slm') { + // Show SLM configuration fields + } +} +``` + +--- + +## Code Conventions + +### Always Use Lowercase + +✅ **Correct**: +```python +if unit.device_type == "slm": + # Handle sound level meter +``` + +❌ **Incorrect**: +```python +if unit.device_type == "SLM": # Wrong - case sensitive +if unit.device_type == "sound_level_meter": # Deprecated +``` + +### Query Patterns + +**Filter by device type**: +```python +# Get all SLMs +slms = db.query(RosterUnit).filter_by(device_type="slm").all() + +# Get deployed seismographs +seismos = db.query(RosterUnit).filter_by( + device_type="seismograph", + deployed=True +).all() + +# Get all modems +modems = db.query(RosterUnit).filter_by(device_type="modem").all() +``` + +--- + +## Testing + +### Verify Device Type Distribution + +```bash +# Quick check +sqlite3 data/seismo_fleet.db "SELECT device_type, COUNT(*) FROM roster GROUP BY device_type;" + +# Detailed view +sqlite3 data/seismo_fleet.db "SELECT id, device_type, unit_type, deployed FROM roster ORDER BY device_type, id;" +``` + +### Check for Legacy Values + +```bash +# Should return 0 rows after migration +sqlite3 data/seismo_fleet.db "SELECT id FROM roster WHERE device_type = 'sound_level_meter';" 
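+
+# Illustrative extra check (not in the original doc): list any device_type values
+# outside the standardized set
+sqlite3 data/seismo_fleet.db "SELECT device_type, COUNT(*) FROM roster WHERE device_type NOT IN ('seismograph', 'modem', 'slm') GROUP BY device_type;"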
+``` + +--- + +## Version History + +- **v0.4.3** (2026-01-16) - Standardized device_type values, deprecated `"sound_level_meter"` → `"slm"` +- **v0.4.0** (2026-01-05) - Added SLM support with `"sound_level_meter"` value +- **v0.2.0** (2025-12-03) - Added modem device type +- **v0.1.0** (2024-11-20) - Initial release with seismograph-only support + +--- + +## Related Documentation + +- [README.md](../README.md) - Main project documentation with data model +- [DEVICE_TYPE_SLM_SUPPORT.md](DEVICE_TYPE_SLM_SUPPORT.md) - Legacy SLM implementation notes +- [SOUND_LEVEL_METERS_DASHBOARD.md](SOUND_LEVEL_METERS_DASHBOARD.md) - SLM dashboard features +- [SLM_CONFIGURATION.md](SLM_CONFIGURATION.md) - SLM device configuration guide diff --git a/docs/DEVICE_TYPE_SLM_SUPPORT.md b/docs/DEVICE_TYPE_SLM_SUPPORT.md index 1c0fd2d..ae452c6 100644 --- a/docs/DEVICE_TYPE_SLM_SUPPORT.md +++ b/docs/DEVICE_TYPE_SLM_SUPPORT.md @@ -1,5 +1,7 @@ # Sound Level Meter Device Type Support +**⚠️ IMPORTANT**: This documentation uses the legacy `sound_level_meter` device type value. As of v0.4.3, the standardized value is `"slm"`. Run `backend/migrate_standardize_device_types.py` to update your database. + ## Overview Added full support for "Sound Level Meter" as a device type in the roster management system. Users can now create, edit, and manage SLM units through the Fleet Roster interface. @@ -95,7 +97,7 @@ All SLM fields are updated when editing existing unit. The database schema already included SLM fields (no changes needed): - All fields are nullable to support multiple device types -- Fields are only relevant when `device_type = "sound_level_meter"` +- Fields are only relevant when `device_type = "slm"` ## Usage @@ -125,7 +127,7 @@ The form automatically shows/hides relevant fields based on device type: ## Integration with SLMM Dashboard -Units with `device_type = "sound_level_meter"` will: +Units with `device_type = "slm"` will: - Appear in the Sound Level Meters dashboard (`/sound-level-meters`) - Be available for live monitoring and control - Use the configured `slm_host` and `slm_tcp_port` for device communication diff --git a/docs/MODEM_INTEGRATION.md b/docs/MODEM_INTEGRATION.md index b0e5586..b27194e 100644 --- a/docs/MODEM_INTEGRATION.md +++ b/docs/MODEM_INTEGRATION.md @@ -300,7 +300,7 @@ slm.deployed_with_modem_id = "modem-001" ```json { "id": "nl43-001", - "device_type": "sound_level_meter", + "device_type": "slm", "deployed_with_modem_id": "modem-001", "slm_tcp_port": 2255, "slm_model": "NL-43", diff --git a/docs/SOUND_LEVEL_METERS_DASHBOARD.md b/docs/SOUND_LEVEL_METERS_DASHBOARD.md index 9b00f62..215b882 100644 --- a/docs/SOUND_LEVEL_METERS_DASHBOARD.md +++ b/docs/SOUND_LEVEL_METERS_DASHBOARD.md @@ -135,7 +135,7 @@ The dashboard communicates with the SLMM backend service running on port 8100: SLM-specific fields in the RosterUnit model: ```python -device_type = "sound_level_meter" # Distinguishes SLMs from seismographs +device_type = "slm" # Distinguishes SLMs from seismographs slm_host = String # Device IP or hostname slm_tcp_port = Integer # TCP control port (default 2255) slm_model = String # NL-43, NL-53, etc. 
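The `backend/migrate_standardize_device_types.py` script referenced in the schema document is not part of this diff. As a minimal, hedged sketch of the normalization it describes, assuming only the `roster` table and `device_type` column shown above, the core update could look like this:

```python
import sqlite3

# Legacy value -> standardized value (only one mapping exists as of v0.4.3+)
LEGACY_TO_STANDARD = {"sound_level_meter": "slm"}


def standardize_device_types(db_path: str = "data/seismo_fleet.db") -> int:
    """Rewrite legacy device_type values in place; safe to run repeatedly."""
    conn = sqlite3.connect(db_path)
    try:
        updated = 0
        for legacy, standard in LEGACY_TO_STANDARD.items():
            cur = conn.execute(
                "UPDATE roster SET device_type = ? WHERE device_type = ?",
                (standard, legacy),
            )
            updated += cur.rowcount
        conn.commit()
        return updated
    finally:
        conn.close()
```

Because the UPDATE only matches rows still carrying the legacy value, a second run updates zero rows, which is consistent with the idempotency note above.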
diff --git a/PROJECTS_SYSTEM_IMPLEMENTATION.md b/docs/archive/PROJECTS_SYSTEM_IMPLEMENTATION.md similarity index 100% rename from PROJECTS_SYSTEM_IMPLEMENTATION.md rename to docs/archive/PROJECTS_SYSTEM_IMPLEMENTATION.md diff --git a/docs/archive/README.md b/docs/archive/README.md new file mode 100644 index 0000000..f19eb39 --- /dev/null +++ b/docs/archive/README.md @@ -0,0 +1,17 @@ +# Terra-View Documentation Archive + +This directory contains old documentation files that are no longer actively maintained but preserved for historical reference. + +## Archived Documents + +### PROJECTS_SYSTEM_IMPLEMENTATION.md +Early implementation notes for the projects system. Superseded by current documentation in main docs directory. + +### .aider.chat.history.md +AI assistant chat history from development sessions. Contains context and decision-making process. + +## Note + +These documents may contain outdated information. For current documentation, see: +- [Main README](../../README.md) +- [Active Documentation](../) diff --git a/requirements.txt b/requirements.txt index 9c7ba93..542f015 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,3 +7,4 @@ jinja2==3.1.2 aiofiles==23.2.1 Pillow==10.1.0 httpx==0.25.2 +openpyxl==3.1.2 diff --git a/sample_roster.csv b/sample_roster.csv index c54c894..ed0bf71 100644 --- a/sample_roster.csv +++ b/sample_roster.csv @@ -1,6 +1,23 @@ -unit_id,unit_type,deployed,retired,note,project_id,location -BE1234,series3,true,false,Primary unit at main site,PROJ-001,San Francisco CA -BE5678,series3,true,false,Backup sensor,PROJ-001,Los Angeles CA -BE9012,series3,false,false,In maintenance,PROJ-002,Workshop -BE3456,series3,true,false,,PROJ-003,New York NY -BE7890,series3,false,true,Decommissioned 2024,,Storage +unit_id,device_type,unit_type,deployed,retired,note,project_id,location,address,coordinates,last_calibrated,next_calibration_due,deployed_with_modem_id,ip_address,phone_number,hardware_model,slm_host,slm_tcp_port,slm_ftp_port,slm_model,slm_serial_number,slm_frequency_weighting,slm_time_weighting,slm_measurement_range +# ============================================ +# SEISMOGRAPHS (device_type=seismograph) +# ============================================ +BE1234,seismograph,series3,true,false,Primary unit at main site,PROJ-001,San Francisco CA,123 Market St,37.7749;-122.4194,2025-06-15,2026-06-15,MDM001,,,,,,,,,,, +BE5678,seismograph,series3,true,false,Backup sensor,PROJ-001,Los Angeles CA,456 Sunset Blvd,34.0522;-118.2437,2025-03-01,2026-03-01,MDM002,,,,,,,,,,, +BE9012,seismograph,series4,false,false,In maintenance - needs calibration,PROJ-002,Workshop,789 Industrial Way,,,,,,,,,,,,,, +BE3456,seismograph,series3,true,false,,PROJ-003,New York NY,101 Broadway,40.7128;-74.0060,2025-01-10,2026-01-10,,,,,,,,,,, +BE7890,seismograph,series3,false,true,Decommissioned 2024,,Storage,Warehouse B,,,,,,,,,,,,,,, +# ============================================ +# MODEMS (device_type=modem) +# ============================================ +MDM001,modem,,true,false,Cradlepoint at SF site,PROJ-001,San Francisco CA,123 Market St,37.7749;-122.4194,,,,,192.168.1.100,+1-555-0101,IBR900,,,,,,, +MDM002,modem,,true,false,Sierra Wireless at LA site,PROJ-001,Los Angeles CA,456 Sunset Blvd,34.0522;-118.2437,,,,,10.0.0.50,+1-555-0102,RV55,,,,,,, +MDM003,modem,,false,false,Spare modem in storage,,,Storage,Warehouse A,,,,,,+1-555-0103,IBR600,,,,,,, +MDM004,modem,,true,false,NYC backup modem,PROJ-003,New York NY,101 Broadway,40.7128;-74.0060,,,,,172.16.0.25,+1-555-0104,IBR1700,,,,,,, +# 
============================================ +# SOUND LEVEL METERS (device_type=slm) +# ============================================ +SLM001,slm,,true,false,NL-43 at construction site A,PROJ-004,Downtown Site,500 Main St,40.7589;-73.9851,,,,,,,,192.168.10.101,2255,21,NL-43,12345678,A,F,30-130 dB +SLM002,slm,,true,false,NL-43 at construction site B,PROJ-004,Midtown Site,600 Park Ave,40.7614;-73.9776,,,MDM004,,,,,192.168.10.102,2255,21,NL-43,12345679,A,S,30-130 dB +SLM003,slm,,false,false,NL-53 spare unit,,,Storage,Warehouse A,,,,,,,,,,,NL-53,98765432,C,F,25-138 dB +SLM004,slm,,true,false,NL-43 nighttime monitoring,PROJ-005,Residential Area,200 Quiet Lane,40.7484;-73.9857,,,,,,,,10.0.5.50,2255,21,NL-43,11112222,A,S,30-130 dB diff --git a/scripts/README.md b/scripts/README.md index 34ca342..cc8c256 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -1,120 +1,20 @@ -# Helper Scripts +# Terra-View Utility Scripts -This directory contains helper scripts for database management and testing. +This directory contains utility scripts for database operations, testing, and maintenance. -## Database Migration Scripts +## Scripts -### migrate_dev_db.py -Migrates the DEV database schema to add SLM-specific columns to the `roster` table. +### create_test_db.py +Generate a realistic test database with sample data. -**Usage:** -```bash -cd /home/serversdown/sfm/seismo-fleet-manager -python3 scripts/migrate_dev_db.py -``` +Usage: python scripts/create_test_db.py -**What it does:** -- Adds 8 SLM-specific columns to the DEV database (data-dev/seismo_fleet.db) -- Columns: slm_host, slm_tcp_port, slm_model, slm_serial_number, slm_frequency_weighting, slm_time_weighting, slm_measurement_range, slm_last_check -- Safe to run multiple times (skips existing columns) +### rename_unit.py +Rename a unit ID across all tables. -### update_dev_db_schema.py -Inspects and displays the DEV database schema. +Usage: python scripts/rename_unit.py -**Usage:** -```bash -python3 scripts/update_dev_db_schema.py -``` +### sync_slms_to_slmm.py +Manually sync all SLM devices from Terra-View to SLMM. -**What it does:** -- Shows all tables in the DEV database -- Lists all columns in the roster table -- Useful for verifying schema after migrations - -## Test Data Scripts - -### add_test_slms.py -Adds test Sound Level Meter units to the DEV database. - -**Usage:** -```bash -python3 scripts/add_test_slms.py -``` - -**What it creates:** -- nl43-001: NL-43 SLM at Construction Site A -- nl43-002: NL-43 SLM at Construction Site B -- nl53-001: NL-53 SLM at Residential Area -- nl43-003: NL-43 SLM (not deployed, spare unit) - -### add_test_modems.py -Adds test modem units to the DEV database and assigns them to SLMs. - -**Usage:** -```bash -python3 scripts/add_test_modems.py -``` - -**What it creates:** -- modem-001, modem-002, modem-003: Deployed modems (Raven XTV and Sierra Wireless) -- modem-004: Spare modem (not deployed) - -**Modem assignments:** -- nl43-001 → modem-001 -- nl43-002 → modem-002 -- nl53-001 → modem-003 - -## Cleanup Scripts - -### remove_test_data_from_prod.py -**⚠️ PRODUCTION DATABASE CLEANUP** - -Removes test data from the production database (data/seismo_fleet.db). - -**Status:** Already executed successfully. Production database is clean. - -**What it removed:** -- All test SLM units (nl43-001, nl43-002, nl53-001, nl43-003) -- All test modem units (modem-001, modem-002, modem-003, modem-004) - -## Database Cloning - -### clone_db_to_dev.py -Clones the production database to create/update the DEV database. 
- -**Usage:** -```bash -python3 scripts/clone_db_to_dev.py -``` - -**What it does:** -- Copies data/seismo_fleet.db → data-dev/seismo_fleet.db -- Useful for syncing DEV database with production schema/data - -## Setup Sequence - -To set up a fresh DEV database with test data: - -```bash -cd /home/serversdown/sfm/seismo-fleet-manager - -# 1. Fix permissions (if needed) -sudo chown -R serversdown:serversdown data-dev/ - -# 2. Migrate schema -python3 scripts/migrate_dev_db.py - -# 3. Add test data -python3 scripts/add_test_slms.py -python3 scripts/add_test_modems.py - -# 4. Verify -sqlite3 data-dev/seismo_fleet.db "SELECT id, device_type FROM roster WHERE device_type IN ('sound_level_meter', 'modem');" -``` - -## Important Notes - -- **DEV Database**: `data-dev/seismo_fleet.db` - Used for development and testing -- **Production Database**: `data/seismo_fleet.db` - Used by the running application -- All test scripts are configured to use the DEV database only -- Never run test data scripts against production +Usage: python scripts/sync_slms_to_slmm.py diff --git a/create_test_db.py b/scripts/create_test_db.py similarity index 100% rename from create_test_db.py rename to scripts/create_test_db.py diff --git a/rename_unit.py b/scripts/rename_unit.py similarity index 100% rename from rename_unit.py rename to scripts/rename_unit.py diff --git a/sync_slms_to_slmm.py b/scripts/sync_slms_to_slmm.py similarity index 97% rename from sync_slms_to_slmm.py rename to scripts/sync_slms_to_slmm.py index 9fe3451..c8039ee 100755 --- a/sync_slms_to_slmm.py +++ b/scripts/sync_slms_to_slmm.py @@ -25,7 +25,7 @@ async def sync_all_slms(): try: # Get all SLM devices from Terra-View (source of truth) slm_devices = db.query(RosterUnit).filter_by( - device_type="sound_level_meter" + device_type="slm" ).all() logger.info(f"Found {len(slm_devices)} SLM devices in Terra-View roster") diff --git a/templates/base.html b/templates/base.html index e5ecd59..1ddabe5 100644 --- a/templates/base.html +++ b/templates/base.html @@ -20,6 +20,9 @@ + + + @@ -68,7 +71,7 @@ {% block extra_head %}{% endblock %} - +
@@ -85,10 +88,10 @@