From 6db958ffa62661fde7bf6932299d407c75bba677 Mon Sep 17 00:00:00 2001 From: serversdwn Date: Mon, 15 Dec 2025 18:27:00 +0000 Subject: [PATCH 1/7] map overlap bug fixed --- CHANGELOG.md | 23 +++ README.md | 13 +- backend/main.py | 2 +- backend/static/mobile.css | 48 ++++++ backend/static/mobile.js | 4 + templates/base.html | 24 +-- templates/dashboard.html | 234 +++++++++++++++++++------- templates/partials/active_table.html | 6 +- templates/partials/benched_table.html | 2 +- templates/partials/roster_table.html | 28 +-- 10 files changed, 276 insertions(+), 108 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index d4abb42..1eb567f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,28 @@ All notable changes to Seismo Fleet Manager will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [0.3.3] - 2025-12-12 + +### Changed +- **Mobile Navigation**: Moved hamburger menu button from floating top-right to bottom navigation bar + - Bottom nav now shows: Menu (hamburger), Dashboard, Roster, Settings + - Removed "Add Unit" from bottom nav (still accessible via sidebar menu) + - Hamburger no longer floats over content on mobile +- **Status Dot Visibility**: Increased status dot size from 12px to 16px (w-3/h-3 → w-4/h-4) in dashboard fleet overview for better at-a-glance visibility + - Affects both Active and Benched tabs in dashboard + - Makes status colors (green/yellow/red) easier to spot during quick scroll + +### Fixed +- **Location Navigation**: Moved tap-to-navigate functionality from roster card view to unit detail modal only + - Roster cards now show simple location text with pin emoji + - Navigation links (opening Maps app) only appear in the modal when tapping a unit + - Reduces visual clutter and accidental navigation triggers + +### Technical Details +- Bottom navigation remains at 4 buttons, first button now triggers sidebar menu +- Removed standalone hamburger button element and associated CSS +- Modal already had navigation links, no changes needed there + ## [0.3.2] - 2025-12-12 ### Added @@ -209,6 +231,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Photo management per unit - Automated status categorization (OK/Pending/Missing) +[0.3.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.2...v0.3.3 [0.3.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.1...v0.3.2 [0.3.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.0...v0.3.1 [0.3.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.2.1...v0.3.0 diff --git a/README.md b/README.md index 32ed33d..e34142d 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Seismo Fleet Manager v0.3.2 +# Seismo Fleet Manager v0.3.3 Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard. 
## Features @@ -437,6 +437,11 @@ docker compose down -v ## Release Highlights +### v0.3.3 — 2025-12-12 +- **Improved Mobile Navigation**: Hamburger menu moved to bottom nav bar (no more floating button covering content) +- **Better Status Visibility**: Larger status dots (16px) in dashboard fleet overview for easier at-a-glance status checks +- **Cleaner Roster Cards**: Location navigation links moved to detail modal only, reducing clutter in card view + ### v0.3.2 — 2025-12-12 - **Progressive Web App (PWA)**: Complete mobile optimization with offline support, installable as standalone app - **Mobile-First UI**: Hamburger menu, bottom navigation bar, card-based roster view optimized for touch @@ -494,9 +499,11 @@ MIT ## Version -**Current: 0.3.2** — Progressive Web App with mobile optimization (2025-12-12) +**Current: 0.3.3** — Mobile navigation improvements and better status visibility (2025-12-12) -Previous: 0.3.1 — Dashboard alerts and status fixes (2025-12-12) +Previous: 0.3.2 — Progressive Web App with mobile optimization (2025-12-12) + +0.3.1 — Dashboard alerts and status fixes (2025-12-12) 0.3.0 — Series 4 support, settings redesign, user preferences (2025-12-09) diff --git a/backend/main.py b/backend/main.py index fab90a7..963eca1 100644 --- a/backend/main.py +++ b/backend/main.py @@ -20,7 +20,7 @@ Base.metadata.create_all(bind=engine) ENVIRONMENT = os.getenv("ENVIRONMENT", "production") # Initialize FastAPI app -VERSION = "0.3.2" +VERSION = "0.3.3" app = FastAPI( title="Seismo Fleet Manager", description="Backend API for managing seismograph fleet status", diff --git a/backend/static/mobile.css b/backend/static/mobile.css index c8db026..fce8491 100644 --- a/backend/static/mobile.css +++ b/backend/static/mobile.css @@ -455,6 +455,54 @@ } } +/* ===== MAP OVERLAP FIX ===== */ +/* Prevent map and controls from overlapping UI elements on mobile */ +@media (max-width: 767px) { + /* Constrain leaflet container to prevent overflow */ + .leaflet-container { + max-width: 100%; + overflow: hidden; + } + + /* Override Leaflet's default high z-index values */ + /* Bottom nav is z-20, sidebar is z-40, so map must be below */ + .leaflet-pane, + .leaflet-tile-pane, + .leaflet-overlay-pane, + .leaflet-shadow-pane, + .leaflet-marker-pane, + .leaflet-tooltip-pane, + .leaflet-popup-pane { + z-index: 1 !important; + } + + /* Map controls should also be below navigation elements */ + .leaflet-control-container, + .leaflet-top, + .leaflet-bottom, + .leaflet-left, + .leaflet-right { + z-index: 1 !important; + } + + .leaflet-control { + z-index: 1 !important; + } + + /* When sidebar is open, hide all Leaflet controls (zoom, attribution, etc) */ + body.menu-open .leaflet-control-container { + opacity: 0; + pointer-events: none; + transition: opacity 0.3s ease-in-out; + } + + /* Ensure map tiles are non-interactive when sidebar is open */ + body.menu-open #fleet-map, + body.menu-open #unit-map { + pointer-events: none; + } +} + /* ===== PENDING SYNC BADGE ===== */ .pending-sync-badge { display: inline-flex; diff --git a/backend/static/mobile.js b/backend/static/mobile.js index 9651181..74b2f44 100644 --- a/backend/static/mobile.js +++ b/backend/static/mobile.js @@ -20,12 +20,14 @@ function toggleMenu() { backdrop.classList.remove('show'); hamburgerBtn?.classList.remove('menu-open'); document.body.style.overflow = ''; + document.body.classList.remove('menu-open'); } else { // Open menu sidebar.classList.add('open'); backdrop.classList.add('show'); hamburgerBtn?.classList.add('menu-open'); 
document.body.style.overflow = 'hidden'; + document.body.classList.add('menu-open'); } } } @@ -41,6 +43,7 @@ function closeMenuFromBackdrop() { backdrop.classList.remove('show'); hamburgerBtn?.classList.remove('menu-open'); document.body.style.overflow = ''; + document.body.classList.remove('menu-open'); } } @@ -56,6 +59,7 @@ function handleResize() { backdrop.classList.remove('show'); hamburgerBtn?.classList.remove('menu-open'); document.body.style.overflow = ''; + document.body.classList.remove('menu-open'); } } } diff --git a/templates/base.html b/templates/base.html index 4672c30..a741902 100644 --- a/templates/base.html +++ b/templates/base.html @@ -69,14 +69,6 @@ {% block extra_head %}{% endblock %} - -
@@ -172,6 +164,12 @@
+        <!-- Photos -->
+        <div>
+            <h3>Photos</h3>
+            <input type="file" id="photoCameraUpload" accept="image/*" capture="environment" class="hidden" onchange="uploadPhoto(this.files[0])">
+            <input type="file" id="photoLibraryUpload" accept="image/*" class="hidden" onchange="uploadPhoto(this.files[0])">
+            <div id="photoGallery">
+                <div>Loading photos...</div>
+            </div>
+            <div id="uploadStatus" class="hidden"></div>
+        </div>
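The gallery markup above is populated by the `loadPhotos()` script added in the next hunk, and uploads go through the same endpoint the template wires up. A minimal sketch with Python's `requests` (base URL and unit ID are placeholders; the endpoint path and `photo` form field come from the `uploadPhoto()` function below):

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder for your deployment
unit_id = "BE12345"                 # hypothetical unit ID

# POST the image as multipart form data; the server extracts EXIF GPS metadata.
with open("site_photo.jpg", "rb") as f:
    resp = requests.post(f"{BASE_URL}/api/unit/{unit_id}/upload-photo", files={"photo": f})
resp.raise_for_status()

result = resp.json()
if result.get("coordinates_updated"):
    print("Unit coordinates updated from photo GPS data")
```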
@@ -632,7 +664,107 @@ function parseLocation(location) { return null; } +// Load and display photos +async function loadPhotos() { + try { + const response = await fetch(`/api/unit/${unitId}/photos`); + if (!response.ok) { + throw new Error('Failed to load photos'); + } + + const data = await response.json(); + const gallery = document.getElementById('photoGallery'); + + if (data.photos && data.photos.length > 0) { + gallery.innerHTML = ''; + data.photo_urls.forEach((url, index) => { + const photoDiv = document.createElement('div'); + photoDiv.className = 'relative group'; + photoDiv.innerHTML = ` + Unit photo ${index + 1} + ${index === 0 ? 'Primary' : ''} + `; + gallery.appendChild(photoDiv); + }); + } else { + gallery.innerHTML = '
<div>No photos yet. Add a photo to get started.</div>
'; + } + } catch (error) { + console.error('Error loading photos:', error); + document.getElementById('photoGallery').innerHTML = '
<div>Failed to load photos</div>
'; + } +} + +// Upload photo with EXIF metadata extraction +async function uploadPhoto(file) { + if (!file) return; + + const statusDiv = document.getElementById('uploadStatus'); + statusDiv.className = 'mt-4 p-4 rounded-lg bg-blue-100 dark:bg-blue-900 text-blue-800 dark:text-blue-200'; + statusDiv.textContent = 'Uploading photo and extracting metadata...'; + statusDiv.classList.remove('hidden'); + + const formData = new FormData(); + formData.append('photo', file); + + try { + const response = await fetch(`/api/unit/${unitId}/upload-photo`, { + method: 'POST', + body: formData + }); + + if (!response.ok) { + throw new Error('Upload failed'); + } + + const result = await response.json(); + + // Show success message with metadata info + let message = 'Photo uploaded successfully!'; + if (result.metadata && result.metadata.coordinates) { + message += ` GPS location detected: ${result.metadata.coordinates}`; + if (result.coordinates_updated) { + message += ' (Unit coordinates updated automatically)'; + } + } else { + message += ' No GPS data found in photo.'; + } + + statusDiv.className = 'mt-4 p-4 rounded-lg bg-green-100 dark:bg-green-900 text-green-800 dark:text-green-200'; + statusDiv.textContent = message; + + // Reload photos and unit data + await loadPhotos(); + if (result.coordinates_updated) { + await loadUnitData(); + } + + // Hide status after 5 seconds + setTimeout(() => { + statusDiv.classList.add('hidden'); + }, 5000); + + // Reset both file inputs + document.getElementById('photoCameraUpload').value = ''; + document.getElementById('photoLibraryUpload').value = ''; + + } catch (error) { + console.error('Error uploading photo:', error); + statusDiv.className = 'mt-4 p-4 rounded-lg bg-red-100 dark:bg-red-900 text-red-800 dark:text-red-200'; + statusDiv.textContent = `Error uploading photo: ${error.message}`; + + // Hide error after 5 seconds + setTimeout(() => { + statusDiv.classList.add('hidden'); + }, 5000); + } +} + // Load data when page loads -loadUnitData(); +loadUnitData().then(() => { + loadPhotos(); +}); {% endblock %} From d97999e26f1f2ce7ca270a8653e0e319cf07c2e8 Mon Sep 17 00:00:00 2001 From: serversdwn Date: Tue, 16 Dec 2025 04:38:06 +0000 Subject: [PATCH 3/7] unit history added --- backend/migrate_add_unit_history.py | 78 +++++++++++++++++ backend/models.py | 18 ++++ backend/routers/roster_edit.py | 127 ++++++++++++++++++++++++++- templates/unit_detail.html | 129 ++++++++++++++++++++++++++++ 4 files changed, 351 insertions(+), 1 deletion(-) create mode 100644 backend/migrate_add_unit_history.py diff --git a/backend/migrate_add_unit_history.py b/backend/migrate_add_unit_history.py new file mode 100644 index 0000000..15cdaad --- /dev/null +++ b/backend/migrate_add_unit_history.py @@ -0,0 +1,78 @@ +""" +Migration script to add unit history timeline support. + +This creates the unit_history table to track all changes to units: +- Note changes (archived old notes, new notes) +- Deployment status changes (deployed/benched) +- Retired status changes +- Other field changes + +Run this script once to migrate an existing database. 
+""" + +import sqlite3 +import os + +# Database path +DB_PATH = "./data/seismo_fleet.db" + +def migrate_database(): + """Create the unit_history table""" + + if not os.path.exists(DB_PATH): + print(f"Database not found at {DB_PATH}") + print("The database will be created automatically when you run the application.") + return + + print(f"Migrating database: {DB_PATH}") + + conn = sqlite3.connect(DB_PATH) + cursor = conn.cursor() + + # Check if unit_history table already exists + cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='unit_history'") + if cursor.fetchone(): + print("Migration already applied - unit_history table exists") + conn.close() + return + + print("Creating unit_history table...") + + try: + cursor.execute(""" + CREATE TABLE unit_history ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + unit_id TEXT NOT NULL, + change_type TEXT NOT NULL, + field_name TEXT, + old_value TEXT, + new_value TEXT, + changed_at TIMESTAMP NOT NULL, + source TEXT DEFAULT 'manual', + notes TEXT + ) + """) + print(" ✓ Created unit_history table") + + # Create indexes for better query performance + cursor.execute("CREATE INDEX idx_unit_history_unit_id ON unit_history(unit_id)") + print(" ✓ Created index on unit_id") + + cursor.execute("CREATE INDEX idx_unit_history_changed_at ON unit_history(changed_at)") + print(" ✓ Created index on changed_at") + + conn.commit() + print("\nMigration completed successfully!") + print("Units will now track their complete history of changes.") + + except sqlite3.Error as e: + print(f"\nError during migration: {e}") + conn.rollback() + raise + + finally: + conn.close() + + +if __name__ == "__main__": + migrate_database() diff --git a/backend/models.py b/backend/models.py index 4b36061..bf3f32d 100644 --- a/backend/models.py +++ b/backend/models.py @@ -59,6 +59,24 @@ class IgnoredUnit(Base): ignored_at = Column(DateTime, default=datetime.utcnow) +class UnitHistory(Base): + """ + Unit history: complete timeline of changes to each unit. + Tracks note changes, status changes, deployment/benched events, and more. + """ + __tablename__ = "unit_history" + + id = Column(Integer, primary_key=True, autoincrement=True) + unit_id = Column(String, nullable=False, index=True) # FK to RosterUnit.id + change_type = Column(String, nullable=False) # note_change, deployed_change, retired_change, etc. + field_name = Column(String, nullable=True) # Which field changed + old_value = Column(Text, nullable=True) # Previous value + new_value = Column(Text, nullable=True) # New value + changed_at = Column(DateTime, default=datetime.utcnow, nullable=False, index=True) + source = Column(String, default="manual") # manual, csv_import, telemetry, offline_sync + notes = Column(Text, nullable=True) # Optional reason/context for the change + + class UserPreferences(Base): """ User preferences: persistent storage for application settings. 
diff --git a/backend/routers/roster_edit.py b/backend/routers/roster_edit.py index e1ab7ca..a495885 100644 --- a/backend/routers/roster_edit.py +++ b/backend/routers/roster_edit.py @@ -5,11 +5,28 @@ import csv import io from backend.database import get_db -from backend.models import RosterUnit, IgnoredUnit, Emitter +from backend.models import RosterUnit, IgnoredUnit, Emitter, UnitHistory router = APIRouter(prefix="/api/roster", tags=["roster-edit"]) +def record_history(db: Session, unit_id: str, change_type: str, field_name: str = None, + old_value: str = None, new_value: str = None, source: str = "manual", notes: str = None): + """Helper function to record a change in unit history""" + history_entry = UnitHistory( + unit_id=unit_id, + change_type=change_type, + field_name=field_name, + old_value=old_value, + new_value=new_value, + changed_at=datetime.utcnow(), + source=source, + notes=notes + ) + db.add(history_entry) + # Note: caller is responsible for db.commit() + + def get_or_create_roster_unit(db: Session, unit_id: str): unit = db.query(RosterUnit).filter(RosterUnit.id == unit_id).first() if not unit: @@ -154,6 +171,11 @@ def edit_roster_unit( except ValueError: raise HTTPException(status_code=400, detail="Invalid next_calibration_due date format. Use YYYY-MM-DD") + # Track changes for history + old_note = unit.note + old_deployed = unit.deployed + old_retired = unit.retired + # Update all fields unit.device_type = device_type unit.unit_type = unit_type @@ -176,6 +198,20 @@ def edit_roster_unit( unit.phone_number = phone_number if phone_number else None unit.hardware_model = hardware_model if hardware_model else None + # Record history entries for changed fields + if old_note != note: + record_history(db, unit_id, "note_change", "note", old_note, note, "manual") + + if old_deployed != deployed: + status_text = "deployed" if deployed else "benched" + old_status_text = "deployed" if old_deployed else "benched" + record_history(db, unit_id, "deployed_change", "deployed", old_status_text, status_text, "manual") + + if old_retired != retired: + status_text = "retired" if retired else "active" + old_status_text = "retired" if old_retired else "active" + record_history(db, unit_id, "retired_change", "retired", old_status_text, status_text, "manual") + db.commit() return {"message": "Unit updated", "id": unit_id, "device_type": device_type} @@ -183,8 +219,24 @@ def edit_roster_unit( @router.post("/set-deployed/{unit_id}") def set_deployed(unit_id: str, deployed: bool = Form(...), db: Session = Depends(get_db)): unit = get_or_create_roster_unit(db, unit_id) + old_deployed = unit.deployed unit.deployed = deployed unit.last_updated = datetime.utcnow() + + # Record history entry for deployed status change + if old_deployed != deployed: + status_text = "deployed" if deployed else "benched" + old_status_text = "deployed" if old_deployed else "benched" + record_history( + db=db, + unit_id=unit_id, + change_type="deployed_change", + field_name="deployed", + old_value=old_status_text, + new_value=status_text, + source="manual" + ) + db.commit() return {"message": "Updated", "id": unit_id, "deployed": deployed} @@ -192,8 +244,24 @@ def set_deployed(unit_id: str, deployed: bool = Form(...), db: Session = Depends @router.post("/set-retired/{unit_id}") def set_retired(unit_id: str, retired: bool = Form(...), db: Session = Depends(get_db)): unit = get_or_create_roster_unit(db, unit_id) + old_retired = unit.retired unit.retired = retired unit.last_updated = datetime.utcnow() + + # Record history entry for 
retired status change + if old_retired != retired: + status_text = "retired" if retired else "active" + old_status_text = "retired" if old_retired else "active" + record_history( + db=db, + unit_id=unit_id, + change_type="retired_change", + field_name="retired", + old_value=old_status_text, + new_value=status_text, + source="manual" + ) + db.commit() return {"message": "Updated", "id": unit_id, "retired": retired} @@ -235,8 +303,22 @@ def delete_roster_unit(unit_id: str, db: Session = Depends(get_db)): @router.post("/set-note/{unit_id}") def set_note(unit_id: str, note: str = Form(""), db: Session = Depends(get_db)): unit = get_or_create_roster_unit(db, unit_id) + old_note = unit.note unit.note = note unit.last_updated = datetime.utcnow() + + # Record history entry for note change + if old_note != note: + record_history( + db=db, + unit_id=unit_id, + change_type="note_change", + field_name="note", + old_value=old_note, + new_value=note, + source="manual" + ) + db.commit() return {"message": "Updated", "id": unit_id, "note": note} @@ -402,3 +484,46 @@ def list_ignored_units(db: Session = Depends(get_db)): for unit in ignored_units ] } + + +@router.get("/history/{unit_id}") +def get_unit_history(unit_id: str, db: Session = Depends(get_db)): + """ + Get complete history timeline for a unit. + Returns all historical changes ordered by most recent first. + """ + history_entries = db.query(UnitHistory).filter( + UnitHistory.unit_id == unit_id + ).order_by(UnitHistory.changed_at.desc()).all() + + return { + "unit_id": unit_id, + "history": [ + { + "id": entry.id, + "change_type": entry.change_type, + "field_name": entry.field_name, + "old_value": entry.old_value, + "new_value": entry.new_value, + "changed_at": entry.changed_at.isoformat(), + "source": entry.source, + "notes": entry.notes + } + for entry in history_entries + ] + } + + +@router.delete("/history/{history_id}") +def delete_history_entry(history_id: int, db: Session = Depends(get_db)): + """ + Delete a specific history entry by ID. + Allows manual cleanup of old history entries. + """ + history_entry = db.query(UnitHistory).filter(UnitHistory.id == history_id).first() + if not history_entry: + raise HTTPException(status_code=404, detail="History entry not found") + + db.delete(history_entry) + db.commit() + return {"message": "History entry deleted", "id": history_id} diff --git a/templates/unit_detail.html b/templates/unit_detail.html index 55ebe6a..ebf4706 100644 --- a/templates/unit_detail.html +++ b/templates/unit_detail.html @@ -178,6 +178,14 @@
    --
+
+    <div>
+        <h3>Timeline</h3>
+        <div id="historyTimeline">
+            <div>Loading history...</div>
+        </div>
+    </div>
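The timeline container above is filled by `loadUnitHistory()` in the next hunk, which fetches from the `GET /api/roster/history/{unit_id}` endpoint added earlier in this patch. A minimal sketch of reading the same feed from a script (base URL and unit ID are placeholders):

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder for your deployment
unit_id = "BE12345"                 # hypothetical unit ID

resp = requests.get(f"{BASE_URL}/api/roster/history/{unit_id}")
resp.raise_for_status()
for entry in resp.json()["history"]:
    print(entry["changed_at"], entry["change_type"], entry["old_value"], "->", entry["new_value"])
```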
@@ -762,9 +770,130 @@ async function uploadPhoto(file) { } } +// Load and display unit history timeline +async function loadUnitHistory() { + try { + const response = await fetch(`/api/roster/history/${unitId}`); + if (!response.ok) { + throw new Error('Failed to load history'); + } + + const data = await response.json(); + const timeline = document.getElementById('historyTimeline'); + + if (data.history && data.history.length > 0) { + timeline.innerHTML = ''; + data.history.forEach(entry => { + const timelineEntry = createTimelineEntry(entry); + timeline.appendChild(timelineEntry); + }); + } else { + timeline.innerHTML = '
<div>No history yet. Changes will appear here.</div>
'; + } + } catch (error) { + console.error('Error loading history:', error); + document.getElementById('historyTimeline').innerHTML = '
<div>Failed to load history</div>
'; + } +} + +// Create a timeline entry element +function createTimelineEntry(entry) { + const div = document.createElement('div'); + div.className = 'flex gap-3 p-3 rounded-lg bg-gray-50 dark:bg-slate-700/50'; + + // Icon based on change type + const icons = { + 'note_change': ` + + `, + 'deployed_change': ` + + `, + 'retired_change': ` + + ` + }; + + const icon = icons[entry.change_type] || ` + + `; + + // Format change description + let description = ''; + if (entry.change_type === 'note_change') { + description = `Note changed`; + if (entry.old_value) { + description += `
<br>From: "${entry.old_value}"`; + } + if (entry.new_value) { + description += `
<br>To: "${entry.new_value}"`; + } + } else if (entry.change_type === 'deployed_change') { + description = `Status changed to ${entry.new_value}`; + } else if (entry.change_type === 'retired_change') { + description = `Marked as ${entry.new_value}`; + } else { + description = `${entry.field_name} changed`; + if (entry.old_value && entry.new_value) { + description += `
<br>${entry.old_value} → ${entry.new_value}`; + } + } + + // Format timestamp + const timestamp = new Date(entry.changed_at).toLocaleString(); + + div.innerHTML = `
+        ${icon}
+        <div>
+            <div>${description}</div>
+            <div>
+                ${timestamp} ${entry.source !== 'manual' ? `<span>${entry.source}</span>` : ''}
+            </div>
+        </div>
+        <button onclick="deleteHistoryEntry(${entry.id})" title="Delete history entry">&times;</button>
+ `; + + return div; +} + +// Delete a history entry +async function deleteHistoryEntry(historyId) { + if (!confirm('Are you sure you want to delete this history entry?')) { + return; + } + + try { + const response = await fetch(`/api/roster/history/${historyId}`, { + method: 'DELETE' + }); + + if (response.ok) { + // Reload history + await loadUnitHistory(); + } else { + const result = await response.json(); + alert(`Error: ${result.detail || 'Unknown error'}`); + } + } catch (error) { + alert(`Error: ${error.message}`); + } +} + // Load data when page loads loadUnitData().then(() => { loadPhotos(); + loadUnitHistory(); }); {% endblock %} From 27f8719e3303db18aa8454cba9cd384d43bc9e9e Mon Sep 17 00:00:00 2001 From: serversdwn Date: Tue, 16 Dec 2025 20:02:04 +0000 Subject: [PATCH 4/7] db management system added --- backend/main.py | 3 +- backend/routers/activity.py | 146 ++++++++ backend/routers/settings.py | 146 +++++++- backend/services/backup_scheduler.py | 145 ++++++++ backend/services/database_backup.py | 192 +++++++++++ docs/DATABASE_MANAGEMENT.md | 477 +++++++++++++++++++++++++++ scripts/clone_db_to_dev.py | 149 +++++++++ templates/dashboard.html | 112 ++++++- templates/settings.html | 351 ++++++++++++++++++++ 9 files changed, 1705 insertions(+), 16 deletions(-) create mode 100644 backend/routers/activity.py create mode 100644 backend/services/backup_scheduler.py create mode 100644 backend/services/database_backup.py create mode 100644 docs/DATABASE_MANAGEMENT.md create mode 100755 scripts/clone_db_to_dev.py diff --git a/backend/main.py b/backend/main.py index 963eca1..1c95686 100644 --- a/backend/main.py +++ b/backend/main.py @@ -9,7 +9,7 @@ from typing import List, Dict from pydantic import BaseModel from backend.database import engine, Base, get_db -from backend.routers import roster, units, photos, roster_edit, dashboard, dashboard_tabs +from backend.routers import roster, units, photos, roster_edit, dashboard, dashboard_tabs, activity from backend.services.snapshot import emit_status_snapshot from backend.models import IgnoredUnit @@ -67,6 +67,7 @@ app.include_router(photos.router) app.include_router(roster_edit.router) app.include_router(dashboard.router) app.include_router(dashboard_tabs.router) +app.include_router(activity.router) from backend.routers import settings app.include_router(settings.router) diff --git a/backend/routers/activity.py b/backend/routers/activity.py new file mode 100644 index 0000000..b881a8e --- /dev/null +++ b/backend/routers/activity.py @@ -0,0 +1,146 @@ +from fastapi import APIRouter, Depends +from sqlalchemy.orm import Session +from sqlalchemy import desc +from pathlib import Path +from datetime import datetime, timedelta, timezone +from typing import List, Dict, Any +from backend.database import get_db +from backend.models import UnitHistory, Emitter, RosterUnit + +router = APIRouter(prefix="/api", tags=["activity"]) + +PHOTOS_BASE_DIR = Path("data/photos") + + +@router.get("/recent-activity") +def get_recent_activity(limit: int = 20, db: Session = Depends(get_db)): + """ + Get recent activity feed combining unit history changes and photo uploads. + Returns a unified timeline of events sorted by timestamp (newest first). 
+ """ + activities = [] + + # Get recent history entries + history_entries = db.query(UnitHistory)\ + .order_by(desc(UnitHistory.changed_at))\ + .limit(limit * 2)\ + .all() # Get more than needed to mix with photos + + for entry in history_entries: + activity = { + "type": "history", + "timestamp": entry.changed_at.isoformat(), + "timestamp_unix": entry.changed_at.timestamp(), + "unit_id": entry.unit_id, + "change_type": entry.change_type, + "field_name": entry.field_name, + "old_value": entry.old_value, + "new_value": entry.new_value, + "source": entry.source, + "notes": entry.notes + } + activities.append(activity) + + # Get recent photos + if PHOTOS_BASE_DIR.exists(): + image_extensions = {".jpg", ".jpeg", ".png", ".gif", ".webp"} + photo_activities = [] + + for unit_dir in PHOTOS_BASE_DIR.iterdir(): + if not unit_dir.is_dir(): + continue + + unit_id = unit_dir.name + + for file_path in unit_dir.iterdir(): + if file_path.is_file() and file_path.suffix.lower() in image_extensions: + modified_time = file_path.stat().st_mtime + photo_activities.append({ + "type": "photo", + "timestamp": datetime.fromtimestamp(modified_time).isoformat(), + "timestamp_unix": modified_time, + "unit_id": unit_id, + "filename": file_path.name, + "photo_url": f"/api/unit/{unit_id}/photo/{file_path.name}" + }) + + activities.extend(photo_activities) + + # Sort all activities by timestamp (newest first) + activities.sort(key=lambda x: x["timestamp_unix"], reverse=True) + + # Limit to requested number + activities = activities[:limit] + + return { + "activities": activities, + "total": len(activities) + } + + +@router.get("/recent-callins") +def get_recent_callins(hours: int = 6, limit: int = None, db: Session = Depends(get_db)): + """ + Get recent unit call-ins (units that have reported recently). + Returns units sorted by most recent last_seen timestamp. 
+ + Args: + hours: Look back this many hours (default: 6) + limit: Maximum number of results (default: None = all) + """ + # Calculate the time threshold + time_threshold = datetime.now(timezone.utc) - timedelta(hours=hours) + + # Query emitters with recent activity, joined with roster info + recent_emitters = db.query(Emitter)\ + .filter(Emitter.last_seen >= time_threshold)\ + .order_by(desc(Emitter.last_seen))\ + .all() + + # Get roster info for all units + roster_dict = {r.id: r for r in db.query(RosterUnit).all()} + + call_ins = [] + for emitter in recent_emitters: + roster_unit = roster_dict.get(emitter.id) + + # Calculate time since last seen + last_seen_utc = emitter.last_seen.replace(tzinfo=timezone.utc) if emitter.last_seen.tzinfo is None else emitter.last_seen + time_diff = datetime.now(timezone.utc) - last_seen_utc + + # Format time ago + if time_diff.total_seconds() < 60: + time_ago = "just now" + elif time_diff.total_seconds() < 3600: + minutes = int(time_diff.total_seconds() / 60) + time_ago = f"{minutes}m ago" + else: + hours_ago = time_diff.total_seconds() / 3600 + if hours_ago < 24: + time_ago = f"{int(hours_ago)}h {int((hours_ago % 1) * 60)}m ago" + else: + days = int(hours_ago / 24) + time_ago = f"{days}d ago" + + call_in = { + "unit_id": emitter.id, + "last_seen": emitter.last_seen.isoformat(), + "time_ago": time_ago, + "status": emitter.status, + "device_type": roster_unit.device_type if roster_unit else "seismograph", + "deployed": roster_unit.deployed if roster_unit else False, + "note": roster_unit.note if roster_unit and roster_unit.note else "", + "location": roster_unit.address if roster_unit and roster_unit.address else (roster_unit.location if roster_unit else "") + } + call_ins.append(call_in) + + # Apply limit if specified + if limit: + call_ins = call_ins[:limit] + + return { + "call_ins": call_ins, + "total": len(call_ins), + "hours": hours, + "time_threshold": time_threshold.isoformat() + } diff --git a/backend/routers/settings.py b/backend/routers/settings.py index 1af0547..4cd0fb0 100644 --- a/backend/routers/settings.py +++ b/backend/routers/settings.py @@ -1,14 +1,17 @@ from fastapi import APIRouter, Depends, HTTPException, UploadFile, File -from fastapi.responses import StreamingResponse +from fastapi.responses import StreamingResponse, FileResponse from sqlalchemy.orm import Session from datetime import datetime, date from pydantic import BaseModel from typing import Optional import csv import io +import shutil +from pathlib import Path from backend.database import get_db from backend.models import RosterUnit, Emitter, IgnoredUnit, UserPreferences +from backend.services.database_backup import DatabaseBackupService router = APIRouter(prefix="/api/settings", tags=["settings"]) @@ -325,3 +328,144 @@ def update_preferences( "status_pending_threshold_hours": prefs.status_pending_threshold_hours, "updated_at": prefs.updated_at.isoformat() if prefs.updated_at else None } + + +# Database Management Endpoints + +backup_service = DatabaseBackupService() + + +@router.get("/database/stats") +def get_database_stats(): + """Get current database statistics""" + try: + stats = backup_service.get_database_stats() + return stats + except Exception as e: + raise HTTPException(status_code=500, detail=f"Failed to get database stats: {str(e)}") + + +@router.post("/database/snapshot") +def create_database_snapshot(description: Optional[str] = None): + """Create a full database snapshot""" + try: + snapshot = backup_service.create_snapshot(description=description) + return 
{ + "message": "Snapshot created successfully", + "snapshot": snapshot + } + except Exception as e: + raise HTTPException(status_code=500, detail=f"Snapshot creation failed: {str(e)}") + + +@router.get("/database/snapshots") +def list_database_snapshots(): + """List all available database snapshots""" + try: + snapshots = backup_service.list_snapshots() + return { + "snapshots": snapshots, + "count": len(snapshots) + } + except Exception as e: + raise HTTPException(status_code=500, detail=f"Failed to list snapshots: {str(e)}") + + +@router.get("/database/snapshot/{filename}") +def download_snapshot(filename: str): + """Download a specific snapshot file""" + try: + snapshot_path = backup_service.download_snapshot(filename) + return FileResponse( + path=str(snapshot_path), + filename=filename, + media_type="application/x-sqlite3" + ) + except FileNotFoundError: + raise HTTPException(status_code=404, detail=f"Snapshot {filename} not found") + except Exception as e: + raise HTTPException(status_code=500, detail=f"Download failed: {str(e)}") + + +@router.delete("/database/snapshot/{filename}") +def delete_database_snapshot(filename: str): + """Delete a specific snapshot""" + try: + backup_service.delete_snapshot(filename) + return { + "message": f"Snapshot {filename} deleted successfully", + "filename": filename + } + except FileNotFoundError: + raise HTTPException(status_code=404, detail=f"Snapshot {filename} not found") + except Exception as e: + raise HTTPException(status_code=500, detail=f"Delete failed: {str(e)}") + + +class RestoreRequest(BaseModel): + """Schema for restore request""" + filename: str + create_backup: bool = True + + +@router.post("/database/restore") +def restore_database(request: RestoreRequest, db: Session = Depends(get_db)): + """Restore database from a snapshot""" + try: + # Close the database connection before restoring + db.close() + + result = backup_service.restore_snapshot( + filename=request.filename, + create_backup_before_restore=request.create_backup + ) + + return result + except FileNotFoundError: + raise HTTPException(status_code=404, detail=f"Snapshot {request.filename} not found") + except Exception as e: + raise HTTPException(status_code=500, detail=f"Restore failed: {str(e)}") + + +@router.post("/database/upload-snapshot") +async def upload_snapshot(file: UploadFile = File(...)): + """Upload a snapshot file to the backups directory""" + if not file.filename.endswith('.db'): + raise HTTPException(status_code=400, detail="File must be a .db file") + + try: + # Save uploaded file to backups directory + backups_dir = Path("./data/backups") + backups_dir.mkdir(parents=True, exist_ok=True) + + timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S") + uploaded_filename = f"snapshot_uploaded_{timestamp}.db" + file_path = backups_dir / uploaded_filename + + # Save file + with open(file_path, "wb") as buffer: + shutil.copyfileobj(file.file, buffer) + + # Create metadata + metadata = { + "filename": uploaded_filename, + "created_at": timestamp, + "created_at_iso": datetime.utcnow().isoformat(), + "description": f"Uploaded: {file.filename}", + "size_bytes": file_path.stat().st_size, + "size_mb": round(file_path.stat().st_size / (1024 * 1024), 2), + "type": "uploaded" + } + + metadata_path = backups_dir / f"{uploaded_filename}.meta.json" + import json + with open(metadata_path, 'w') as f: + json.dump(metadata, f, indent=2) + + return { + "message": "Snapshot uploaded successfully", + "snapshot": metadata + } + + except Exception as e: + raise 
HTTPException(status_code=500, detail=f"Upload failed: {str(e)}") diff --git a/backend/services/backup_scheduler.py b/backend/services/backup_scheduler.py new file mode 100644 index 0000000..15168cc --- /dev/null +++ b/backend/services/backup_scheduler.py @@ -0,0 +1,145 @@ +""" +Automatic Database Backup Scheduler +Handles scheduled automatic backups of the database +""" + +import schedule +import time +import threading +from datetime import datetime +from typing import Optional +import logging + +from backend.services.database_backup import DatabaseBackupService + +logger = logging.getLogger(__name__) + + +class BackupScheduler: + """Manages automatic database backups on a schedule""" + + def __init__(self, db_path: str = "./data/seismo_fleet.db", backups_dir: str = "./data/backups"): + self.backup_service = DatabaseBackupService(db_path=db_path, backups_dir=backups_dir) + self.scheduler_thread: Optional[threading.Thread] = None + self.is_running = False + + # Default settings + self.backup_interval_hours = 24 # Daily backups + self.keep_count = 10 # Keep last 10 backups + self.enabled = False + + def configure(self, interval_hours: int = 24, keep_count: int = 10, enabled: bool = True): + """ + Configure backup scheduler settings + + Args: + interval_hours: Hours between automatic backups + keep_count: Number of backups to retain + enabled: Whether automatic backups are enabled + """ + self.backup_interval_hours = interval_hours + self.keep_count = keep_count + self.enabled = enabled + + logger.info(f"Backup scheduler configured: interval={interval_hours}h, keep={keep_count}, enabled={enabled}") + + def create_automatic_backup(self): + """Create an automatic backup and cleanup old ones""" + if not self.enabled: + logger.info("Automatic backups are disabled, skipping") + return + + try: + timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M UTC") + description = f"Automatic backup - {timestamp}" + + logger.info("Creating automatic backup...") + snapshot = self.backup_service.create_snapshot(description=description) + + logger.info(f"Automatic backup created: {snapshot['filename']} ({snapshot['size_mb']} MB)") + + # Cleanup old backups + cleanup_result = self.backup_service.cleanup_old_snapshots(keep_count=self.keep_count) + if cleanup_result['deleted'] > 0: + logger.info(f"Cleaned up {cleanup_result['deleted']} old snapshots") + + return snapshot + + except Exception as e: + logger.error(f"Automatic backup failed: {str(e)}") + return None + + def start(self): + """Start the backup scheduler in a background thread""" + if self.is_running: + logger.warning("Backup scheduler is already running") + return + + if not self.enabled: + logger.info("Backup scheduler is disabled, not starting") + return + + logger.info(f"Starting backup scheduler (every {self.backup_interval_hours} hours)") + + # Clear any existing scheduled jobs + schedule.clear() + + # Schedule the backup job + schedule.every(self.backup_interval_hours).hours.do(self.create_automatic_backup) + + # Also run immediately on startup + self.create_automatic_backup() + + # Start the scheduler thread + self.is_running = True + self.scheduler_thread = threading.Thread(target=self._run_scheduler, daemon=True) + self.scheduler_thread.start() + + logger.info("Backup scheduler started successfully") + + def _run_scheduler(self): + """Internal method to run the scheduler loop""" + while self.is_running: + schedule.run_pending() + time.sleep(60) # Check every minute + + def stop(self): + """Stop the backup scheduler""" + if not 
self.is_running: + logger.warning("Backup scheduler is not running") + return + + logger.info("Stopping backup scheduler...") + self.is_running = False + schedule.clear() + + if self.scheduler_thread: + self.scheduler_thread.join(timeout=5) + + logger.info("Backup scheduler stopped") + + def get_status(self) -> dict: + """Get current scheduler status""" + next_run = None + if self.is_running and schedule.jobs: + next_run = schedule.jobs[0].next_run.isoformat() if schedule.jobs[0].next_run else None + + return { + "enabled": self.enabled, + "running": self.is_running, + "interval_hours": self.backup_interval_hours, + "keep_count": self.keep_count, + "next_run": next_run + } + + +# Global scheduler instance +_scheduler_instance: Optional[BackupScheduler] = None + + +def get_backup_scheduler() -> BackupScheduler: + """Get or create the global backup scheduler instance""" + global _scheduler_instance + if _scheduler_instance is None: + _scheduler_instance = BackupScheduler() + return _scheduler_instance diff --git a/backend/services/database_backup.py b/backend/services/database_backup.py new file mode 100644 index 0000000..2858fd2 --- /dev/null +++ b/backend/services/database_backup.py @@ -0,0 +1,192 @@ +""" +Database Backup and Restore Service +Handles full database snapshots, restoration, and remote synchronization +""" + +import os +import shutil +import sqlite3 +from datetime import datetime +from pathlib import Path +from typing import List, Dict, Optional +import json + + +class DatabaseBackupService: + """Manages database backup operations""" + + def __init__(self, db_path: str = "./data/seismo_fleet.db", backups_dir: str = "./data/backups"): + self.db_path = Path(db_path) + self.backups_dir = Path(backups_dir) + self.backups_dir.mkdir(parents=True, exist_ok=True) + + def create_snapshot(self, description: Optional[str] = None) -> Dict: + """ + Create a full database snapshot using SQLite backup API + Returns snapshot metadata + """ + if not self.db_path.exists(): + raise FileNotFoundError(f"Database not found at {self.db_path}") + + # Generate snapshot filename with timestamp + timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S") + snapshot_name = f"snapshot_{timestamp}.db" + snapshot_path = self.backups_dir / snapshot_name + + # Get database size before backup + db_size = self.db_path.stat().st_size + + try: + # Use SQLite backup API for safe backup (handles concurrent access) + source_conn = sqlite3.connect(str(self.db_path)) + dest_conn = sqlite3.connect(str(snapshot_path)) + + # Perform the backup + with dest_conn: + source_conn.backup(dest_conn) + + source_conn.close() + dest_conn.close() + + # Create metadata + metadata = { + "filename": snapshot_name, + "created_at": timestamp, + "created_at_iso": datetime.utcnow().isoformat(), + "description": description or "Manual snapshot", + "size_bytes": snapshot_path.stat().st_size, + "size_mb": round(snapshot_path.stat().st_size / (1024 * 1024), 2), + "original_db_size_bytes": db_size, + "type": "manual" + } + + # Save metadata as JSON sidecar file + metadata_path = self.backups_dir / f"{snapshot_name}.meta.json" + with open(metadata_path, 'w') as f: + json.dump(metadata, f, indent=2) + + return metadata + + except Exception as e: + # Clean up partial snapshot if it exists + if snapshot_path.exists(): + snapshot_path.unlink() + raise Exception(f"Snapshot creation failed: {str(e)}") + + def list_snapshots(self) -> List[Dict]: + """ + List all available snapshots with metadata + Returns list sorted by creation date (newest first) + """ 
+ snapshots = [] + + for db_file in sorted(self.backups_dir.glob("snapshot_*.db"), reverse=True): + metadata_file = self.backups_dir / f"{db_file.name}.meta.json" + + if metadata_file.exists(): + with open(metadata_file, 'r') as f: + metadata = json.load(f) + else: + # Fallback for legacy snapshots without metadata + stat_info = db_file.stat() + metadata = { + "filename": db_file.name, + "created_at": datetime.fromtimestamp(stat_info.st_mtime).strftime("%Y%m%d_%H%M%S"), + "created_at_iso": datetime.fromtimestamp(stat_info.st_mtime).isoformat(), + "description": "Legacy snapshot", + "size_bytes": stat_info.st_size, + "size_mb": round(stat_info.st_size / (1024 * 1024), 2), + "type": "manual" + } + + snapshots.append(metadata) + + return snapshots + + def delete_snapshot(self, filename: str) -> bool: + """Delete a snapshot and its metadata""" + snapshot_path = self.backups_dir / filename + metadata_path = self.backups_dir / f"{filename}.meta.json" + + if not snapshot_path.exists(): + raise FileNotFoundError(f"Snapshot {filename} not found") + + snapshot_path.unlink() + if metadata_path.exists(): + metadata_path.unlink() + + return True + + def restore_snapshot(self, filename: str, create_backup_before_restore: bool = True) -> Dict: + """ + Restore database from a snapshot + Creates a safety backup before restoring if requested + """ + snapshot_path = self.backups_dir / filename + + if not snapshot_path.exists(): + raise FileNotFoundError(f"Snapshot {filename} not found") + + if not self.db_path.exists(): + raise FileNotFoundError(f"Database not found at {self.db_path}") + + backup_info = None + + # Create safety backup before restore + if create_backup_before_restore: + backup_info = self.create_snapshot(description="Auto-backup before restore") + + try: + # Replace database file + shutil.copy2(str(snapshot_path), str(self.db_path)) + + return { + "message": "Database restored successfully", + "restored_from": filename, + "restored_at": datetime.utcnow().isoformat(), + "backup_created": backup_info["filename"] if backup_info else None + } + + except Exception as e: + raise Exception(f"Restore failed: {str(e)}") + + def get_database_stats(self) -> Dict: + """Get statistics about the current database""" + if not self.db_path.exists(): + return {"error": "Database not found"} + + conn = sqlite3.connect(str(self.db_path)) + cursor = conn.cursor() + + # Get table counts + cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'") + tables = cursor.fetchall() + + table_stats = {} + total_rows = 0 + + for (table_name,) in tables: + cursor.execute(f"SELECT COUNT(*) FROM {table_name}") + count = cursor.fetchone()[0] + table_stats[table_name] = count + total_rows += count + + conn.close() + + db_size = self.db_path.stat().st_size + + return { + "database_path": str(self.db_path), + "size_bytes": db_size, + "size_mb": round(db_size / (1024 * 1024), 2), + "total_rows": total_rows, + "tables": table_stats, + "last_modified": datetime.fromtimestamp(self.db_path.stat().st_mtime).isoformat() + } + + def download_snapshot(self, filename: str) -> Path: + """Get the file path for downloading a snapshot""" + snapshot_path = self.backups_dir / filename + if not snapshot_path.exists(): + raise FileNotFoundError(f"Snapshot {filename} not found") + return snapshot_path diff --git a/docs/DATABASE_MANAGEMENT.md b/docs/DATABASE_MANAGEMENT.md new file mode 100644 index 0000000..73e3246 --- /dev/null +++ b/docs/DATABASE_MANAGEMENT.md @@ -0,0 +1,477 @@ +# Database Management Guide + 
+This guide covers the comprehensive database management features available in the Seismo Fleet Manager, including manual snapshots, restoration, remote cloning, and automatic backups. + +## Table of Contents + +1. [Manual Database Snapshots](#manual-database-snapshots) +2. [Restore from Snapshot](#restore-from-snapshot) +3. [Download and Upload Snapshots](#download-and-upload-snapshots) +4. [Clone Database to Dev Server](#clone-database-to-dev-server) +5. [Automatic Backup Service](#automatic-backup-service) +6. [API Reference](#api-reference) + +--- + +## Manual Database Snapshots + +### Creating a Snapshot via UI + +1. Navigate to **Settings** → **Danger Zone** tab +2. Scroll to the **Database Management** section +3. Click **"Create Snapshot"** +4. Optionally enter a description +5. The snapshot will be created and appear in the "Available Snapshots" list + +### Creating a Snapshot via API + +```bash +curl -X POST http://localhost:8000/api/settings/database/snapshot \ + -H "Content-Type: application/json" \ + -d '{"description": "Pre-deployment backup"}' +``` + +### What Happens + +- A full copy of the SQLite database is created using the SQLite backup API +- The snapshot is stored in `./data/backups/` directory +- A metadata JSON file is created alongside the snapshot +- No downtime or interruption to the running application + +### Snapshot Files + +Snapshots are stored as: +- **Database file**: `snapshot_YYYYMMDD_HHMMSS.db` +- **Metadata file**: `snapshot_YYYYMMDD_HHMMSS.db.meta.json` + +Example: +``` +data/backups/ +├── snapshot_20250101_143022.db +├── snapshot_20250101_143022.db.meta.json +├── snapshot_20250102_080000.db +└── snapshot_20250102_080000.db.meta.json +``` + +--- + +## Restore from Snapshot + +### Restoring via UI + +1. Navigate to **Settings** → **Danger Zone** tab +2. In the **Available Snapshots** section, find the snapshot you want to restore +3. Click the **restore icon** (circular arrow) next to the snapshot +4. Confirm the restoration warning +5. A safety backup of the current database is automatically created +6. The database is replaced with the snapshot +7. The page reloads automatically + +### Restoring via API + +```bash +curl -X POST http://localhost:8000/api/settings/database/restore \ + -H "Content-Type: application/json" \ + -d '{ + "filename": "snapshot_20250101_143022.db", + "create_backup": true + }' +``` + +### Important Notes + +- **Always creates a safety backup** before restoring (unless explicitly disabled) +- **Application reload required** - Users should refresh their browsers +- **Atomic operation** - The entire database is replaced at once +- **Cannot be undone** - But you'll have the safety backup + +--- + +## Download and Upload Snapshots + +### Download a Snapshot + +**Via UI**: Click the download icon next to any snapshot in the list + +**Via Browser**: +``` +http://localhost:8000/api/settings/database/snapshot/snapshot_20250101_143022.db +``` + +**Via Command Line**: +```bash +curl -o backup.db http://localhost:8000/api/settings/database/snapshot/snapshot_20250101_143022.db +``` + +### Upload a Snapshot + +**Via UI**: +1. Navigate to **Settings** → **Danger Zone** tab +2. Find the **Upload Snapshot** section +3. Click **"Choose File"** and select a `.db` file +4. 
Click **"Upload Snapshot"** + +**Via Command Line**: +```bash +curl -X POST http://localhost:8000/api/settings/database/upload-snapshot \ + -F "file=@/path/to/your/backup.db" +``` + +--- + +## Clone Database to Dev Server + +The clone tool allows you to copy the production database to a remote development server over the network. + +### Prerequisites + +- Remote dev server must have the same Seismo Fleet Manager installation +- Network connectivity between production and dev servers +- Python 3 and `requests` library installed + +### Basic Usage + +```bash +# Clone current database to dev server +python3 scripts/clone_db_to_dev.py --url https://dev.example.com + +# Clone using existing snapshot +python3 scripts/clone_db_to_dev.py \ + --url https://dev.example.com \ + --snapshot snapshot_20250101_143022.db + +# Clone with authentication token +python3 scripts/clone_db_to_dev.py \ + --url https://dev.example.com \ + --token YOUR_AUTH_TOKEN +``` + +### What Happens + +1. Creates a snapshot of the production database (or uses existing one) +2. Uploads the snapshot to the remote dev server +3. Automatically restores the snapshot on the dev server +4. Creates a safety backup on the dev server before restoring + +### Remote Server Setup + +The remote dev server needs no special setup - it just needs to be running the same Seismo Fleet Manager application with the database management endpoints enabled. + +### Use Cases + +- **Testing**: Test changes against production data in a dev environment +- **Debugging**: Investigate production issues with real data safely +- **Training**: Provide realistic data for user training +- **Development**: Build new features with realistic data + +--- + +## Automatic Backup Service + +The automatic backup service runs scheduled backups in the background and manages backup retention. + +### Configuration + +The backup scheduler can be configured programmatically or via environment variables. 
+ +**Programmatic Configuration**: + +```python +from backend.services.backup_scheduler import get_backup_scheduler + +scheduler = get_backup_scheduler() +scheduler.configure( + interval_hours=24, # Backup every 24 hours + keep_count=10, # Keep last 10 backups + enabled=True # Enable automatic backups +) +scheduler.start() +``` + +**Environment Variables** (add to your `.env` or deployment config): + +```bash +AUTO_BACKUP_ENABLED=true +AUTO_BACKUP_INTERVAL_HOURS=24 +AUTO_BACKUP_KEEP_COUNT=10 +``` + +### Integration with Application Startup + +Add to `backend/main.py`: + +```python +from backend.services.backup_scheduler import get_backup_scheduler + +@app.on_event("startup") +async def startup_event(): + # Start automatic backup scheduler + scheduler = get_backup_scheduler() + scheduler.configure( + interval_hours=24, # Daily backups + keep_count=10, # Keep 10 most recent + enabled=True + ) + scheduler.start() + +@app.on_event("shutdown") +async def shutdown_event(): + # Stop backup scheduler gracefully + scheduler = get_backup_scheduler() + scheduler.stop() +``` + +### Manual Control + +```python +from backend.services.backup_scheduler import get_backup_scheduler + +scheduler = get_backup_scheduler() + +# Get current status +status = scheduler.get_status() +print(status) +# {'enabled': True, 'running': True, 'interval_hours': 24, 'keep_count': 10, 'next_run': '2025-01-02T14:00:00'} + +# Create backup immediately +scheduler.create_automatic_backup() + +# Stop scheduler +scheduler.stop() + +# Start scheduler +scheduler.start() +``` + +### Backup Retention + +The scheduler automatically deletes old backups based on the `keep_count` setting. For example, if `keep_count=10`, only the 10 most recent backups are kept, and older ones are automatically deleted. + +--- + +## API Reference + +### Database Statistics + +```http +GET /api/settings/database/stats +``` + +Returns database size, row counts, and last modified time. + +**Response**: +```json +{ + "database_path": "./data/seismo_fleet.db", + "size_bytes": 1048576, + "size_mb": 1.0, + "total_rows": 1250, + "tables": { + "roster": 450, + "emitters": 600, + "ignored_units": 50, + "unit_history": 150 + }, + "last_modified": "2025-01-01T14:30:22" +} +``` + +### Create Snapshot + +```http +POST /api/settings/database/snapshot +Content-Type: application/json + +{ + "description": "Optional description" +} +``` + +**Response**: +```json +{ + "message": "Snapshot created successfully", + "snapshot": { + "filename": "snapshot_20250101_143022.db", + "created_at": "20250101_143022", + "created_at_iso": "2025-01-01T14:30:22", + "description": "Optional description", + "size_bytes": 1048576, + "size_mb": 1.0, + "type": "manual" + } +} +``` + +### List Snapshots + +```http +GET /api/settings/database/snapshots +``` + +**Response**: +```json +{ + "snapshots": [ + { + "filename": "snapshot_20250101_143022.db", + "created_at": "20250101_143022", + "created_at_iso": "2025-01-01T14:30:22", + "description": "Manual backup", + "size_mb": 1.0, + "type": "manual" + } + ], + "count": 1 +} +``` + +### Download Snapshot + +```http +GET /api/settings/database/snapshot/{filename} +``` + +Returns the snapshot file as a download. 
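+
+A minimal Python sketch of the same download, for scripted use (base URL and filename are placeholders):
+
+```python
+import requests
+
+filename = "snapshot_20250101_143022.db"  # placeholder snapshot name
+resp = requests.get(
+    f"http://localhost:8000/api/settings/database/snapshot/{filename}",
+    stream=True,
+    timeout=60,
+)
+resp.raise_for_status()
+with open(filename, "wb") as f:
+    # Stream to disk so large snapshots are not held fully in memory
+    for chunk in resp.iter_content(chunk_size=65536):
+        f.write(chunk)
+```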
+ +### Delete Snapshot + +```http +DELETE /api/settings/database/snapshot/{filename} +``` + +### Restore Database + +```http +POST /api/settings/database/restore +Content-Type: application/json + +{ + "filename": "snapshot_20250101_143022.db", + "create_backup": true +} +``` + +**Response**: +```json +{ + "message": "Database restored successfully", + "restored_from": "snapshot_20250101_143022.db", + "restored_at": "2025-01-01T15:00:00", + "backup_created": "snapshot_20250101_150000.db" +} +``` + +### Upload Snapshot + +```http +POST /api/settings/database/upload-snapshot +Content-Type: multipart/form-data + +file: +``` + +--- + +## Best Practices + +### 1. Regular Backups + +- **Enable automatic backups** with a 24-hour interval +- **Keep at least 7-10 backups** for historical coverage +- **Create manual snapshots** before major changes + +### 2. Before Major Operations + +Always create a snapshot before: +- Software upgrades +- Bulk data imports +- Database schema changes +- Testing destructive operations + +### 3. Testing Restores + +Periodically test your restore process: +1. Download a snapshot +2. Test restoration on a dev environment +3. Verify data integrity + +### 4. Off-Site Backups + +For production systems: +- **Download snapshots** to external storage regularly +- Use the clone tool to **sync to remote servers** +- Store backups in **multiple geographic locations** + +### 5. Snapshot Management + +- Delete old snapshots when no longer needed +- Use descriptive names/descriptions for manual snapshots +- Keep pre-deployment snapshots separate + +--- + +## Troubleshooting + +### Snapshot Creation Fails + +**Problem**: "Database is locked" error + +**Solution**: The database is being written to. Wait a moment and try again. The SQLite backup API handles most locking automatically. + +### Restore Doesn't Complete + +**Problem**: Restore appears to hang + +**Solution**: +- Check server logs for errors +- Ensure sufficient disk space +- Verify the snapshot file isn't corrupted + +### Upload Fails on Dev Server + +**Problem**: "Permission denied" or "File too large" + +**Solutions**: +- Check file upload size limits in your web server config (nginx/apache) +- Verify write permissions on `./data/backups/` directory +- Ensure sufficient disk space + +### Automatic Backups Not Running + +**Problem**: No automatic backups being created + +**Solutions**: +1. Check if scheduler is enabled: `scheduler.get_status()` +2. Check application logs for scheduler errors +3. Ensure `schedule` library is installed: `pip install schedule` +4. Verify scheduler was started in application startup + +--- + +## Security Considerations + +1. **Access Control**: Restrict access to the Settings → Danger Zone to administrators only +2. **Backup Storage**: Store backups in a secure location with proper permissions +3. **Remote Cloning**: Use authentication tokens when cloning to remote servers +4. **Data Sensitivity**: Remember that snapshots contain all database data - treat them with the same security as the live database + +--- + +## File Locations + +- **Database**: `./data/seismo_fleet.db` +- **Backups Directory**: `./data/backups/` +- **Clone Script**: `./scripts/clone_db_to_dev.py` +- **Backup Service**: `./backend/services/database_backup.py` +- **Scheduler Service**: `./backend/services/backup_scheduler.py` + +--- + +## Support + +For issues or questions: +1. Check application logs in `./logs/` +2. Review this documentation +3. Test with a small database first +4. 

diff --git a/scripts/clone_db_to_dev.py b/scripts/clone_db_to_dev.py
new file mode 100755
index 0000000..e9394d8
--- /dev/null
+++ b/scripts/clone_db_to_dev.py
@@ -0,0 +1,150 @@
+#!/usr/bin/env python3
+"""
+Clone Production Database to Dev Server
+Helper script to clone the production database to a remote development server
+"""
+
+import argparse
+import requests
+from pathlib import Path
+import sys
+from typing import Optional
+
+# Add parent directory to path for imports
+sys.path.insert(0, str(Path(__file__).parent.parent))
+
+from backend.services.database_backup import DatabaseBackupService
+
+
+def clone_to_dev(remote_url: str, snapshot_filename: Optional[str] = None, auth_token: Optional[str] = None):
+    """Clone database to remote dev server"""
+
+    backup_service = DatabaseBackupService()
+
+    print(f"🔄 Cloning database to {remote_url}...")
+
+    try:
+        # Use the specified snapshot if given; otherwise create a fresh one
+        if snapshot_filename:
+            print(f"📦 Using existing snapshot: {snapshot_filename}")
+            snapshot_path = backup_service.backups_dir / snapshot_filename
+            if not snapshot_path.exists():
+                print(f"❌ Error: Snapshot {snapshot_filename} not found")
+                return False
+        else:
+            print("📸 Creating new snapshot...")
+            snapshot_info = backup_service.create_snapshot(description="Clone to dev server")
+            snapshot_filename = snapshot_info["filename"]
+            snapshot_path = backup_service.backups_dir / snapshot_filename
+            print(f"✅ Snapshot created: {snapshot_filename} ({snapshot_info['size_mb']} MB)")
+
+        # Upload to remote server
+        print(f"📤 Uploading to {remote_url}...")
+
+        headers = {}
+        if auth_token:
+            headers["Authorization"] = f"Bearer {auth_token}"
+
+        with open(snapshot_path, 'rb') as f:
+            files = {'file': (snapshot_filename, f, 'application/x-sqlite3')}
+
+            response = requests.post(
+                f"{remote_url.rstrip('/')}/api/settings/database/upload-snapshot",
+                files=files,
+                headers=headers,
+                timeout=300
+            )
+
+        response.raise_for_status()
+        result = response.json()
+
+        print("✅ Upload successful!")
+        print(f"   Remote filename: {result['snapshot']['filename']}")
+        print(f"   Size: {result['snapshot']['size_mb']} MB")
+
+        # Now restore on remote server
+        print("🔄 Restoring on remote server...")
+        restore_response = requests.post(
+            f"{remote_url.rstrip('/')}/api/settings/database/restore",
+            json={
+                "filename": result['snapshot']['filename'],
+                "create_backup": True
+            },
+            headers=headers,
+            timeout=60
+        )
+
+        restore_response.raise_for_status()
+        restore_result = restore_response.json()
+
+        print("✅ Database cloned successfully!")
+        print(f"   Restored from: {restore_result['restored_from']}")
+        print(f"   Remote backup created: {restore_result.get('backup_created', 'N/A')}")
+
+        return True
+
+    except requests.exceptions.RequestException as e:
+        print(f"❌ Network error: {e}")
+        return False
+    except Exception as e:
+        print(f"❌ Error: {e}")
+        return False
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Clone production database to development server",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+  # Clone current database to dev server
+  python clone_db_to_dev.py --url https://dev.example.com
+
+  # Clone using existing snapshot
+  python clone_db_to_dev.py --url https://dev.example.com --snapshot snapshot_20250101_120000.db
+
+  # Clone with authentication
+  python clone_db_to_dev.py --url https://dev.example.com --token YOUR_TOKEN
+        """
+    )
+
+    parser.add_argument(
+        '--url',
+        required=True,
+        help='Remote dev server URL (e.g., https://dev.example.com)'
+    )
+
+    parser.add_argument(
+        '--snapshot',
+        help='Use existing snapshot instead of creating new one'
+    )
+
+    parser.add_argument(
+        '--token',
+        help='Authentication token for remote server'
+    )
+
+    args = parser.parse_args()
+
+    print("=" * 60)
+    print("  Database Cloning Tool - Production to Dev")
+    print("=" * 60)
+    print()
+
+    success = clone_to_dev(
+        remote_url=args.url,
+        snapshot_filename=args.snapshot,
+        auth_token=args.token
+    )
+
+    print()
+    if success:
+        print("🎉 Cloning completed successfully!")
+        sys.exit(0)
+    else:
+        print("💥 Cloning failed")
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()

diff --git a/templates/dashboard.html b/templates/dashboard.html
index 5bb86ec..6fdb27d 100644
--- a/templates/dashboard.html
+++ b/templates/dashboard.html
@@ -116,28 +116,28 @@
- -
-
-

Recent Photos

+ +
+
+

Recent Call-Ins

+ d="M12 8v4l3 3m6-3a9 9 0 11-18 0 9 9 0 0118 0z"> - +
-
- - - - -

No recent photos

+
+
+

Loading recent call-ins...

+
+
@@ -295,7 +295,7 @@ function toggleCard(cardName) { // Restore card states from localStorage on page load function restoreCardStates() { const cardStates = JSON.parse(localStorage.getItem('dashboardCardStates') || '{}'); - const cardNames = ['fleet-summary', 'recent-alerts', 'recent-photos', 'fleet-map', 'fleet-status']; + const cardNames = ['fleet-summary', 'recent-alerts', 'recent-callins', 'fleet-map', 'fleet-status']; cardNames.forEach(cardName => { const content = document.getElementById(`${cardName}-content`); @@ -531,6 +531,90 @@ async function loadRecentPhotos() { // Load recent photos on page load and refresh every 30 seconds loadRecentPhotos(); setInterval(loadRecentPhotos, 30000); + +// Load and display recent call-ins +let showingAllCallins = false; +const DEFAULT_CALLINS_DISPLAY = 5; + +async function loadRecentCallins() { + try { + const response = await fetch('/api/recent-callins?hours=6'); + if (!response.ok) { + throw new Error('Failed to load recent call-ins'); + } + + const data = await response.json(); + const callinsList = document.getElementById('recent-callins-list'); + const showAllButton = document.getElementById('show-all-callins'); + + if (data.call_ins && data.call_ins.length > 0) { + // Determine how many to show + const displayCount = showingAllCallins ? data.call_ins.length : Math.min(DEFAULT_CALLINS_DISPLAY, data.call_ins.length); + const callinsToDisplay = data.call_ins.slice(0, displayCount); + + // Build HTML for call-ins list + let html = ''; + callinsToDisplay.forEach(callin => { + // Status color + const statusColor = callin.status === 'OK' ? 'green' : callin.status === 'Pending' ? 'yellow' : 'red'; + const statusClass = callin.status === 'OK' ? 'bg-green-500' : callin.status === 'Pending' ? 'bg-yellow-500' : 'bg-red-500'; + + // Build location/note line + let subtitle = ''; + if (callin.location) { + subtitle = callin.location; + } else if (callin.note) { + subtitle = callin.note; + } + + html += ` +
+
+ +
+ + ${callin.unit_id} + + ${subtitle ? `

${subtitle}

` : ''} +
+
+ ${callin.time_ago} +
`; + }); + + callinsList.innerHTML = html; + + // Show/hide the "Show all" button + if (data.call_ins.length > DEFAULT_CALLINS_DISPLAY) { + showAllButton.classList.remove('hidden'); + showAllButton.textContent = showingAllCallins + ? `Show fewer (${DEFAULT_CALLINS_DISPLAY})` + : `Show all (${data.call_ins.length})`; + } else { + showAllButton.classList.add('hidden'); + } + } else { + callinsList.innerHTML = '

No units have called in within the past 6 hours

'; + showAllButton.classList.add('hidden'); + } + } catch (error) { + console.error('Error loading recent call-ins:', error); + document.getElementById('recent-callins-list').innerHTML = '

Failed to load recent call-ins

'; + } +} + +// Toggle show all/show fewer +document.addEventListener('DOMContentLoaded', function() { + const showAllButton = document.getElementById('show-all-callins'); + showAllButton.addEventListener('click', function() { + showingAllCallins = !showingAllCallins; + loadRecentCallins(); + }); +}); + +// Load recent call-ins on page load and refresh every 30 seconds +loadRecentCallins(); +setInterval(loadRecentCallins, 30000); {% endblock %} diff --git a/templates/settings.html b/templates/settings.html index 8602236..407f6c3 100644 --- a/templates/settings.html +++ b/templates/settings.html @@ -401,6 +401,99 @@
+ + +
+

Database Management

+

Create snapshots, restore backups, and manage database files

+
+ + +
+

Database Statistics

+
+
+
+ + +
+ + +
+
+
+

Create Database Snapshot

+

+ Create a full backup of the current database state +

+
+ +
+
+ + +
+
+

Available Snapshots

+ +
+ +
+
+
+ + + + +
+ + +
+

Upload Snapshot

+

+ Upload a database snapshot file from another server +

+
+ + +
+ +
@@ -1004,5 +1097,263 @@ async function confirmClearIgnored() { alert('❌ Error: ' + error.message); } } + +// ========== DATABASE MANAGEMENT ========== + +async function loadDatabaseStats() { + const loading = document.getElementById('dbStatsLoading'); + const content = document.getElementById('dbStatsContent'); + + try { + loading.classList.remove('hidden'); + content.classList.add('hidden'); + + const response = await fetch('/api/settings/database/stats'); + const stats = await response.json(); + + // Update stats display + document.getElementById('dbSize').textContent = stats.size_mb + ' MB'; + document.getElementById('dbRows').textContent = stats.total_rows.toLocaleString(); + + const lastMod = new Date(stats.last_modified); + document.getElementById('dbModified').textContent = lastMod.toLocaleDateString(); + + // Load snapshot count + const snapshotsResp = await fetch('/api/settings/database/snapshots'); + const snapshotsData = await snapshotsResp.json(); + document.getElementById('dbSnapshotCount').textContent = snapshotsData.count; + + loading.classList.add('hidden'); + content.classList.remove('hidden'); + } catch (error) { + loading.classList.add('hidden'); + alert('Error loading database stats: ' + error.message); + } +} + +async function createSnapshot() { + const description = prompt('Enter a description for this snapshot (optional):'); + + try { + const response = await fetch('/api/settings/database/snapshot', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ description: description || null }) + }); + + const result = await response.json(); + + if (response.ok) { + alert(`✅ Snapshot created successfully!\n\nFilename: ${result.snapshot.filename}\nSize: ${result.snapshot.size_mb} MB`); + loadSnapshots(); + loadDatabaseStats(); + } else { + alert('❌ Error: ' + (result.detail || 'Unknown error')); + } + } catch (error) { + alert('❌ Error: ' + error.message); + } +} + +async function loadSnapshots() { + const loading = document.getElementById('snapshotsLoading'); + const list = document.getElementById('snapshotsList'); + const empty = document.getElementById('snapshotsEmpty'); + + try { + loading.classList.remove('hidden'); + list.classList.add('hidden'); + empty.classList.add('hidden'); + + const response = await fetch('/api/settings/database/snapshots'); + const data = await response.json(); + + if (data.snapshots.length === 0) { + loading.classList.add('hidden'); + empty.classList.remove('hidden'); + return; + } + + list.innerHTML = data.snapshots.map(snapshot => createSnapshotCard(snapshot)).join(''); + + loading.classList.add('hidden'); + list.classList.remove('hidden'); + } catch (error) { + loading.classList.add('hidden'); + alert('Error loading snapshots: ' + error.message); + } +} + +function createSnapshotCard(snapshot) { + const createdDate = new Date(snapshot.created_at_iso); + const dateStr = createdDate.toLocaleString(); + + return ` +
+
+
+
+

${snapshot.filename}

+ + ${snapshot.type} + +
+

${snapshot.description}

+
+ 📅 ${dateStr} + 💾 ${snapshot.size_mb} MB +
+
+
+ + + +
+
+
+ `; +} + +function downloadSnapshot(filename) { + window.location.href = `/api/settings/database/snapshot/${filename}`; +} + +async function restoreSnapshot(filename) { + const confirmMsg = `⚠️ RESTORE DATABASE WARNING ⚠️ + +This will REPLACE the current database with snapshot: +${filename} + +A backup of the current database will be created automatically before restoring. + +THIS ACTION WILL RESTART THE APPLICATION! + +Continue?`; + + if (!confirm(confirmMsg)) { + return; + } + + try { + const response = await fetch('/api/settings/database/restore', { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + filename: filename, + create_backup: true + }) + }); + + const result = await response.json(); + + if (response.ok) { + alert(`✅ Database restored successfully!\n\nRestored from: ${result.restored_from}\nBackup created: ${result.backup_created}\n\nThe page will now reload.`); + location.reload(); + } else { + alert('❌ Error: ' + (result.detail || 'Unknown error')); + } + } catch (error) { + alert('❌ Error: ' + error.message); + } +} + +async function deleteSnapshot(filename) { + if (!confirm(`Delete snapshot ${filename}?\n\nThis cannot be undone.`)) { + return; + } + + try { + const response = await fetch(`/api/settings/database/snapshot/${filename}`, { + method: 'DELETE' + }); + + const result = await response.json(); + + if (response.ok) { + alert(`✅ Snapshot deleted: ${filename}`); + loadSnapshots(); + loadDatabaseStats(); + } else { + alert('❌ Error: ' + (result.detail || 'Unknown error')); + } + } catch (error) { + alert('❌ Error: ' + error.message); + } +} + +// Upload snapshot form handler +document.getElementById('uploadSnapshotForm').addEventListener('submit', async function(e) { + e.preventDefault(); + + const fileInput = document.getElementById('snapshotFileInput'); + const resultDiv = document.getElementById('uploadResult'); + + if (!fileInput.files[0]) { + alert('Please select a file'); + return; + } + + const formData = new FormData(); + formData.append('file', fileInput.files[0]); + + try { + const response = await fetch('/api/settings/database/upload-snapshot', { + method: 'POST', + body: formData + }); + + const result = await response.json(); + + if (response.ok) { + resultDiv.className = 'mt-3 p-3 rounded-lg bg-green-100 dark:bg-green-900 text-green-800 dark:text-green-200'; + resultDiv.innerHTML = `✅ Uploaded: ${result.snapshot.filename} (${result.snapshot.size_mb} MB)`; + resultDiv.classList.remove('hidden'); + + fileInput.value = ''; + loadSnapshots(); + loadDatabaseStats(); + + setTimeout(() => { + resultDiv.classList.add('hidden'); + }, 5000); + } else { + resultDiv.className = 'mt-3 p-3 rounded-lg bg-red-100 dark:bg-red-900 text-red-800 dark:text-red-200'; + resultDiv.innerHTML = `❌ Error: ${result.detail || 'Unknown error'}`; + resultDiv.classList.remove('hidden'); + } + } catch (error) { + resultDiv.className = 'mt-3 p-3 rounded-lg bg-red-100 dark:bg-red-900 text-red-800 dark:text-red-200'; + resultDiv.innerHTML = `❌ Error: ${error.message}`; + resultDiv.classList.remove('hidden'); + } +}); + +// Load database stats and snapshots when danger zone tab is shown +const originalShowTab = showTab; +showTab = function(tabName) { + originalShowTab(tabName); + if (tabName === 'danger') { + loadDatabaseStats(); + loadSnapshots(); + } +}; {% endblock %} From 7c89d203d71a5d26920195036ef6fd07fe70228a Mon Sep 17 00:00:00 2001 From: serversdwn Date: Tue, 16 Dec 2025 20:05:49 +0000 Subject: [PATCH 5/7] renamed datamanagement to roster 
management

---
 templates/settings.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/templates/settings.html b/templates/settings.html
index 407f6c3..ee7f250 100644
--- a/templates/settings.html
+++ b/templates/settings.html
@@ -20,7 +20,7 @@
-                Data Management
+                Roster Management
+
+
+
+
+
+
+
+
+
@@ -1346,11 +1363,11 @@ document.getElementById('uploadSnapshotForm').addEventListener('submit', async f
     }
 });
 
-// Load database stats and snapshots when danger zone tab is shown
+// Load database stats and snapshots when database tab is shown
 const originalShowTab = showTab;
 showTab = function(tabName) {
     originalShowTab(tabName);
-    if (tabName === 'danger') {
+    if (tabName === 'database') {
         loadDatabaseStats();
         loadSnapshots();
     }

From 2d22d0d3290087b862e9d16403dd28014dfa5830 Mon Sep 17 00:00:00 2001
From: serversdwn
Date: Tue, 16 Dec 2025 20:39:56 +0000
Subject: [PATCH 7/7] docs updated to v0.4.0

---
 CHANGELOG.md        | 63 +++++++++++++++++++++++++++++++++++++++++++++
 README.md           | 47 ++++++++++++++++++++++++++++-----
 backend/main.py     |  2 +-
 templates/base.html |  4 +--
 4 files changed, 107 insertions(+), 9 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1eb567f..99ff90c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,68 @@ All notable changes to Seismo Fleet Manager will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.4.0] - 2025-12-16
+
+### Added
+- **Database Management System**: Comprehensive backup and restore capabilities
+  - **Manual Snapshots**: Create on-demand backups of the entire database with optional descriptions
+  - **Restore from Snapshot**: Restore database from any snapshot with automatic safety backup
+  - **Upload/Download Snapshots**: Transfer database snapshots to/from the server
+  - **Database Tab**: New dedicated tab in Settings for all database management operations
+  - **Database Statistics**: View database size, row counts by table, and last modified time
+  - **Snapshot Metadata**: Each snapshot includes creation time, description, size, and type (manual/automatic)
+  - **Safety Backups**: Automatic backup created before any restore operation
+- **Remote Database Cloning**: Dev tools for cloning production database to remote development servers
+  - **Clone Script**: `scripts/clone_db_to_dev.py` for copying database over WAN
+  - **Network Upload**: Upload snapshots via HTTP to remote servers
+  - **Auto-restore**: Automatically restore uploaded database on target server
+  - **Authentication Support**: Optional token-based authentication for secure transfers
+- **Automatic Backup Scheduler**: Background service for automated database backups
+  - **Configurable Intervals**: Set backup frequency (default: 24 hours)
+  - **Retention Management**: Automatically delete old backups (configurable keep count)
+  - **Manual Trigger**: Force immediate backup via API
+  - **Status Monitoring**: Check scheduler status and next scheduled run time
+  - **Background Thread**: Non-blocking operation using Python threading
+- **Settings Reorganization**: Improved tab structure for better organization
+  - Renamed "Data Management" tab to "Roster Management"
+  - Moved CSV Replace Mode from Advanced tab to Roster Management tab
+  - Created dedicated Database tab for all backup/restore operations
+- **Comprehensive Documentation**: New `docs/DATABASE_MANAGEMENT.md` guide covering:
+  - Manual snapshot creation and restoration workflows
+  - Download/upload procedures for off-site backups
+  - Remote database cloning setup and usage
+  - Automatic backup configuration and integration
+  - API reference for all database endpoints
+  - Best practices and troubleshooting guide
+
+### Changed
+- **Settings Tab Organization**: Restructured for better logical grouping
+  - **General**: Display preferences (timezone, theme, auto-refresh)
+  - **Roster Management**: CSV operations and roster table (now includes Replace Mode)
+  - **Database**: All backup/restore operations (NEW)
+  - **Advanced**: Power user settings (calibration, thresholds)
+  - **Danger Zone**: Destructive operations
+- CSV Replace Mode warnings enhanced and moved to Roster Management context
+
+### Technical Details
+- **SQLite Backup API**: Uses native SQLite backup API for concurrent-safe snapshots
+- **Metadata Tracking**: JSON sidecar files store snapshot metadata alongside database files
+- **Atomic Operations**: Database restoration is atomic with automatic rollback on failure
+- **File Structure**: Snapshots stored in `./data/backups/` with timestamped filenames
+- **API Endpoints**: 7 new endpoints for database management operations
+- **Backup Service**: `backend/services/database_backup.py` - Core backup/restore logic
+- **Scheduler Service**: `backend/services/backup_scheduler.py` - Automatic backup automation
+- **Clone Utility**: `scripts/clone_db_to_dev.py` - Remote database synchronization tool
+
+### Security Considerations
+- Snapshots contain full database data and should be secured appropriately
+- Remote cloning supports optional authentication tokens
+- Restore operations require safety backup creation by default
+- All destructive operations remain in Danger Zone with warnings
+
+### Migration Notes
+No database migration is required for v0.4.0. All new features use the existing database structure and add backup management capabilities without modifying the core schema.
+
 ## [0.3.3] - 2025-12-12
 
 ### Changed
@@ -231,6 +293,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Photo management per unit
 - Automated status categorization (OK/Pending/Missing)
 
+[0.4.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.3...v0.4.0
 [0.3.3]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.2...v0.3.3
 [0.3.2]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.1...v0.3.2
 [0.3.1]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.3.0...v0.3.1
 [0.3.0]: https://github.com/serversdwn/seismo-fleet-manager/compare/v0.2.1...v0.3.0

diff --git a/README.md b/README.md
index e34142d..3451713 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# Seismo Fleet Manager v0.3.3
+# Seismo Fleet Manager v0.4.0
 Backend API and HTMX-powered web interface for managing a mixed fleet of seismographs and field modems. Track deployments, monitor health in real time, merge roster intent with incoming telemetry, and control your fleet through a unified database and dashboard.
 
 ## Features
 
@@ -19,6 +19,12 @@ Backend API and HTMX-powered web interface for managing a mixed fleet of seismog
 - **Photo Management**: Upload and view photos for each unit
 - **Interactive Maps**: Leaflet-based maps showing unit locations with tap-to-navigate for mobile
 - **SQLite Storage**: Lightweight, file-based database for easy deployment
+- **Database Management**: Comprehensive backup and restore system
+  - **Manual Snapshots**: Create on-demand backups with descriptions
+  - **Restore from Snapshot**: Restore database with automatic safety backups
+  - **Upload/Download**: Transfer database snapshots for off-site storage
+  - **Remote Cloning**: Copy production database to remote dev servers over WAN
+  - **Automatic Backups**: Scheduled background backups with configurable retention
 
 ## Roster Manager & Settings
 
@@ -26,10 +32,12 @@ Visit [`/settings`](http://localhost:8001/settings) to perform bulk roster opera
 - **CSV export/import**: Download the entire roster, merge updates, or replace all units in one transaction.
 - **Live roster table**: Fetch every unit via HTMX, edit metadata, toggle deployed/retired states, move emitters to the ignore list, or delete records in-place.
+- **Database backups**: Create snapshots, restore from backups, upload/download database files, view database statistics.
+- **Remote cloning**: Clone production database to remote development servers over the network (see `scripts/clone_db_to_dev.py`).
 - **Stats at a glance**: View counts for the roster, emitters, and ignored units to confirm import/cleanup operations worked.
 - **Danger zone controls**: Clear specific tables or wipe all fleet data when resetting a lab/demo environment.
 
-All UI actions call `GET/POST /api/settings/*` endpoints so you can automate the same workflows from scripts.
+All UI actions call `GET/POST /api/settings/*` endpoints so you can automate the same workflows from scripts. See [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md) for comprehensive database backup and restore documentation.
 
 ## Tech Stack
 
@@ -180,6 +188,17 @@ Both migration scripts are idempotent—if the columns/tables already exist, the
 - **POST** `/api/settings/clear-emitters` - Delete auto-discovered emitters
 - **POST** `/api/settings/clear-ignored` - Reset ignore list
 
+### Database Management
+- **GET** `/api/settings/database/stats` - Database size, row counts, and last modified time
+- **POST** `/api/settings/database/snapshot` - Create manual database snapshot with optional description
+- **GET** `/api/settings/database/snapshots` - List all available snapshots with metadata
+- **GET** `/api/settings/database/snapshot/{filename}` - Download a specific snapshot file
+- **DELETE** `/api/settings/database/snapshot/{filename}` - Delete a snapshot
+- **POST** `/api/settings/database/restore` - Restore database from snapshot (creates safety backup)
+- **POST** `/api/settings/database/upload-snapshot` - Upload snapshot file to server
+
+See [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md) for detailed documentation and examples.
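+
+The same endpoints are scriptable. A minimal sketch in Python (the port matches the default local setup; adjust for your deployment):
+
+```python
+import requests
+
+base = "http://localhost:8001"  # assumed local instance
+
+# Create a snapshot, then list everything available on the server
+resp = requests.post(
+    f"{base}/api/settings/database/snapshot",
+    json={"description": "pre-upgrade backup"},
+    timeout=60,
+)
+resp.raise_for_status()
+
+snapshots = requests.get(f"{base}/api/settings/database/snapshots", timeout=30).json()
+for snap in snapshots["snapshots"]:
+    print(snap["filename"], snap["size_mb"], "MB")
+```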
+
 ### CSV Import Format
 
 Create a CSV file with the following columns (only `unit_id` is required, everything else is optional):
 
@@ -368,7 +387,9 @@ seismo-fleet-manager/
 │   │   ├── dashboard_tabs.py   # Dashboard tab endpoints
 │   │   └── settings.py         # Settings, preferences, and data management
 │   ├── services/
-│   │   └── snapshot.py         # Fleet status snapshot logic
+│   │   ├── snapshot.py         # Fleet status snapshot logic
+│   │   ├── database_backup.py  # Database backup and restore service
+│   │   └── backup_scheduler.py # Automatic backup scheduler
 │   ├── migrate_add_device_types.py      # SQLite migration for v0.2 schema
 │   ├── migrate_add_user_preferences.py  # SQLite migration for v0.3 schema
 │   └── static/                 # Static assets (CSS, etc.)
@@ -385,6 +406,11 @@ seismo-fleet-manager/
 │   ├── dashboard.html
 │   ├── ignored_table.html
 │   └── unknown_emitters.html
 ├── data/                       # SQLite database & photos (persisted)
+│   └── backups/                # Database snapshots directory
 ├── scripts/
+│   └── clone_db_to_dev.py      # Remote database cloning utility
 ├── docs/
+│   └── DATABASE_MANAGEMENT.md  # Database backup/restore guide
 ├── requirements.txt            # Python dependencies
 ├── Dockerfile                  # Docker container definition
 ├── docker-compose.yml          # Docker Compose configuration
 
@@ -437,6 +463,14 @@ docker compose down -v
 
 ## Release Highlights
 
+### v0.4.0 — 2025-12-16
+- **Database Management System**: Complete backup and restore functionality with manual snapshots, restore operations, and upload/download capabilities
+- **Remote Database Cloning**: New `clone_db_to_dev.py` script for copying production database to remote dev servers over WAN
+- **Automatic Backup Scheduler**: Background service for scheduled backups with configurable retention management
+- **Database Tab**: New dedicated tab in Settings for all database operations with real-time statistics
+- **Settings Reorganization**: Improved tab structure - renamed "Data Management" to "Roster Management", moved CSV Replace Mode, created Database tab
+- **Comprehensive Documentation**: New `docs/DATABASE_MANAGEMENT.md` with complete guide to backup/restore workflows, API reference, and best practices
+
 ### v0.3.3 — 2025-12-12
 - **Improved Mobile Navigation**: Hamburger menu moved to bottom nav bar (no more floating button covering content)
 - **Better Status Visibility**: Larger status dots (16px) in dashboard fleet overview for easier at-a-glance status checks
 - **Cleaner Roster Cards**: Location navigation links moved to detail modal only, reducing clutter in card view
 
@@ -491,7 +525,6 @@ See [CHANGELOG.md](CHANGELOG.md) for the full release notes.
 - PostgreSQL support for larger deployments
 - Advanced filtering and search
 - Export roster to various formats
-- Automated backup and restore
 
 ## License
 
 MIT
 
 ## Version
 
-**Current: 0.3.3** — Mobile navigation improvements and better status visibility (2025-12-12)
+**Current: 0.4.0** — Database management system with backup/restore and remote cloning (2025-12-16)
 
-Previous: 0.3.2 — Progressive Web App with mobile optimization (2025-12-12)
+Previous: 0.3.3 — Mobile navigation improvements and better status visibility (2025-12-12)
+
+0.3.2 — Progressive Web App with mobile optimization (2025-12-12)
 
 0.3.1 — Dashboard alerts and status fixes (2025-12-12)

diff --git a/backend/main.py b/backend/main.py
index 1c95686..575f7dc 100644
--- a/backend/main.py
+++ b/backend/main.py
@@ -20,7 +20,7 @@ Base.metadata.create_all(bind=engine)
 ENVIRONMENT = os.getenv("ENVIRONMENT", "production")
 
 # Initialize FastAPI app
-VERSION = "0.3.3"
+VERSION = "0.4.0"
 app = FastAPI(
     title="Seismo Fleet Manager",
     description="Backend API for managing seismograph fleet status",

diff --git a/templates/base.html b/templates/base.html
index a741902..90ce252 100644
--- a/templates/base.html
+++ b/templates/base.html
@@ -360,10 +360,10 @@
 
-
+
 
-
+
 
     {% block extra_scripts %}{% endblock %}