Compare commits

...

12 Commits

Author SHA1 Message Date
serversdwn
bf5f222511 Add:
- DB cache dump on diagnostics request.
- Individual device logs (DB and files).
- Device logs API endpoints and diagnostics UI.

Fix:
- SLMM standalone now uses local TZ (was UTC-only before).
- Fixed measurement start time logic.
2026-01-29 18:50:47 +00:00
serversdwn
eb39a9d1d0 add: device communication lock. To send a TCP command, SLMM must now acquire a connection lock, preventing it from flooding the unit.
fixed: background poller intervals increased.
2026-01-29 07:54:49 +00:00
serversdwn
67d63b4173 Merge branch 'main' of ssh://10.0.0.2:2222/serversdown/slmm 2026-01-23 08:29:27 +00:00
serversdwn
25cf9528d0 docs: update to 0.2.1 2026-01-23 08:26:23 +00:00
738ad7878e doc update 2026-01-22 15:30:06 -05:00
serversdwn
152377d608 feat: terra-view scheduler implementation added. start_cycle and stop_cycle functions added. 2026-01-22 20:25:47 +00:00
serversdwn
4868381053 Enhance FTP logging with detailed phases for connection, authentication, and data transfer 2026-01-21 08:05:38 +00:00
serversdwn
b4bbfd2b01 chore: fixed api.md to confirm FTP/TCP interactions are working. 2026-01-17 08:13:19 +00:00
serversdwn
82651f71b5 Add roster management interface and related API endpoints
- Implemented a new `/roster` endpoint to retrieve and manage device configurations.
- Added HTML template for the roster page with a table to display device status and actions.
- Introduced functionality to add, edit, and delete devices via the roster interface.
- Enhanced `ConfigPayload` model to include polling options.
- Updated the main application to serve the new roster page and link to it from the index.
- Added validation for polling interval in the configuration payload.
- Created detailed documentation for the roster management features and API endpoints.
2026-01-17 08:00:05 +00:00
serversdwn
182920809d chore: docs and scripts organized. clutter cleared. 2026-01-16 19:06:38 +00:00
serversdwn
2a3589ca5c Add endpoint to delete device configuration and associated status data 2026-01-16 07:39:26 +00:00
serversdwn
d43ef7427f v0.2.0: async status polling added. 2026-01-16 06:24:13 +00:00
29 changed files with 3750 additions and 165 deletions

151
CHANGELOG.md Normal file

@@ -0,0 +1,151 @@
# Changelog
All notable changes to SLMM (Sound Level Meter Manager) will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.1] - 2026-01-23
### Added
- **Roster management**: UI and API endpoints for managing device rosters.
- **Delete config endpoint**: Remove device configuration alongside cached status data.
- **Scheduler hooks**: `start_cycle` and `stop_cycle` helpers for Terra-View scheduling integration.
### Changed
- **FTP logging**: Connection, authentication, and transfer phases now log explicitly.
- **Documentation**: Reorganized docs/scripts and updated API notes for FTP/TCP verification.
## [0.2.0] - 2026-01-15
### Added
#### Background Polling System
- **Continuous automatic device polling** - Background service that continuously polls configured devices
- **Per-device configurable intervals** - Each device can have a custom polling interval (10-3600 seconds, default 60)
- **Automatic offline detection** - Devices automatically marked unreachable after 3 consecutive failures
- **Reachability tracking** - Database fields track device health with failure counters and error messages
- **Dynamic sleep scheduling** - Polling service adjusts sleep intervals based on device configurations
- **Graceful lifecycle management** - Background poller starts on application startup and stops cleanly on shutdown
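The offline-detection and failure-counter behavior above can be sketched as pure logic (a simplified illustration, not the shipped implementation; `update_reachability` is a hypothetical helper name):

```python
MAX_FAILURES = 3  # documented threshold before a device is marked unreachable

def update_reachability(consecutive_failures: int, poll_ok: bool) -> tuple[int, bool]:
    """Return (new_failure_count, is_reachable) after one poll attempt."""
    if poll_ok:
        return 0, True  # any success resets the counter
    failures = consecutive_failures + 1
    return failures, failures < MAX_FAILURES
```

A device that fails twice is still considered reachable; the third consecutive failure flips it offline, and a single successful poll restores it.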
#### New API Endpoints
- `GET /api/nl43/{unit_id}/polling/config` - Get device polling configuration
- `PUT /api/nl43/{unit_id}/polling/config` - Update polling interval and enable/disable per-device polling
- `GET /api/nl43/_polling/status` - Get global polling status for all devices with reachability info
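A minimal sketch of the request-body check the PUT endpoint implies, based only on the documented 10-3600 second bounds (`validate_polling_config` is a hypothetical helper, not the actual FastAPI handler):

```python
def validate_polling_config(payload: dict) -> dict:
    """Check a PUT /api/nl43/{unit_id}/polling/config body against documented bounds."""
    interval = payload.get("poll_interval_seconds", 60)
    if not isinstance(interval, int) or not 10 <= interval <= 3600:
        raise ValueError("poll_interval_seconds must be an integer in 10-3600")
    return {
        "poll_interval_seconds": interval,
        "poll_enabled": bool(payload.get("poll_enabled", True)),
    }
```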
#### Database Schema Changes
- **NL43Config table**:
- `poll_interval_seconds` (Integer, default 60) - Polling interval in seconds
- `poll_enabled` (Boolean, default true) - Enable/disable background polling per device
- **NL43Status table**:
- `is_reachable` (Boolean, default true) - Current device reachability status
- `consecutive_failures` (Integer, default 0) - Count of consecutive poll failures
- `last_poll_attempt` (DateTime) - Last time background poller attempted to poll
- `last_success` (DateTime) - Last successful poll timestamp
- `last_error` (Text) - Last error message (truncated to 500 chars)
#### New Files
- `app/background_poller.py` - Background polling service implementation
- `migrate_add_polling_fields.py` - Database migration script for v0.2.0 schema changes
- `test_polling.sh` - Comprehensive test script for polling functionality
- `CHANGELOG.md` - This changelog file
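An additive-only migration of the kind `migrate_add_polling_fields.py` describes might look like the following sketch. Column names come from this changelog; the table names are assumptions (see `app/models.py` for the real ones), and the actual script may differ:

```python
import sqlite3

# Table names below are assumptions for illustration; column names/defaults
# match the v0.2.0 schema changes documented in this changelog.
NEW_COLUMNS = [
    ("nl43_config", "poll_interval_seconds", "INTEGER DEFAULT 60"),
    ("nl43_config", "poll_enabled", "BOOLEAN DEFAULT 1"),
    ("nl43_status", "is_reachable", "BOOLEAN DEFAULT 1"),
    ("nl43_status", "consecutive_failures", "INTEGER DEFAULT 0"),
    ("nl43_status", "last_poll_attempt", "TIMESTAMP"),
    ("nl43_status", "last_success", "TIMESTAMP"),
    ("nl43_status", "last_error", "TEXT"),
]

def migrate(conn: sqlite3.Connection) -> None:
    """Add v0.2.0 columns if missing; safe to run repeatedly (additive-only)."""
    cur = conn.cursor()
    for table, column, decl in NEW_COLUMNS:
        existing = [row[1] for row in cur.execute(f"PRAGMA table_info({table})")]
        if column not in existing:
            cur.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
    conn.commit()
```

Because each column is checked via `PRAGMA table_info` before being added, rerunning the migration is a no-op, which is what makes it safe against an already-upgraded database.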
### Changed
- **Enhanced status endpoint** - `GET /api/nl43/{unit_id}/status` now includes polling-related fields (is_reachable, consecutive_failures, last_poll_attempt, last_success, last_error)
- **Application startup** - Added lifespan context manager in `app/main.py` to manage background poller lifecycle
- **Performance improvement** - Terra-View requests now return cached data instantly (<100ms) instead of waiting for device queries (1-2 seconds)
### Technical Details
#### Architecture
- Background poller runs as async task using `asyncio.create_task()`
- Uses existing `NL43Client` and `persist_snapshot()` functions - no code duplication
- Respects existing 1-second rate limiting per device
- Efficient resource usage - skips work when no devices configured
- WebSocket streaming remains unaffected - separate real-time data path
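The lifecycle described above (an `asyncio.create_task()` loop that starts at application startup and drains cleanly at shutdown) reduces to a toy sketch:

```python
import asyncio

class PollerSketch:
    """Toy version of the start/stop lifecycle; not the real BackgroundPoller."""
    def __init__(self) -> None:
        self._task: asyncio.Task | None = None
        self._running = False

    async def start(self) -> None:
        self._running = True
        self._task = asyncio.create_task(self._loop())

    async def stop(self) -> None:
        self._running = False          # signal the loop to exit...
        if self._task:
            await self._task           # ...then wait for it to finish

    async def _loop(self) -> None:
        while self._running:
            await asyncio.sleep(0.01)  # stand-in for one poll cycle

async def main() -> bool:
    poller = PollerSketch()
    await poller.start()
    await asyncio.sleep(0.05)
    await poller.stop()
    return poller._task.done()

print(asyncio.run(main()))  # True: the task completed without cancellation
```

The real poller additionally guards `stop()` with `asyncio.wait_for()` and falls back to `task.cancel()` if the loop does not exit within a timeout, as shown in `app/background_poller.py` below.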
#### Default Behavior
- Existing devices automatically get 60-second polling interval
- Existing status records default to `is_reachable=true`
- Migration is additive-only - no data loss
- Polling can be disabled per-device via `poll_enabled=false`
#### Recommended Intervals
- Critical monitoring: 30 seconds
- Normal monitoring: 60 seconds (default)
- Battery conservation: 300 seconds (5 minutes)
- Development/testing: 10 seconds (minimum allowed)
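Given intervals like these, the dynamic sleep policy used by the poller (half the smallest configured interval, clamped to 30-300 seconds) is a one-liner sketch:

```python
def sleep_interval(device_intervals: list[int]) -> int:
    """Half the smallest per-device interval, clamped to 30-300 s; 60 s when idle."""
    if not device_intervals:
        return 60  # nothing configured: check again in a minute
    return max(30, min(300, min(device_intervals) // 2))
```

Note that with the 10-second development minimum the loop still sleeps at least 30 seconds between cycles, so very short intervals are effectively coarser than configured.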
### Migration Notes
To upgrade from v0.1.x to v0.2.0:
1. **Stop the service** (if running):
```bash
docker compose down slmm
# OR
# Stop your uvicorn process
```
2. **Update code**:
```bash
git pull
# OR copy new files
```
3. **Run migration**:
```bash
cd slmm
python3 migrate_add_polling_fields.py
```
4. **Restart service**:
```bash
docker compose up -d --build slmm
# OR
uvicorn app.main:app --host 0.0.0.0 --port 8100
```
5. **Verify polling is active**:
```bash
curl http://localhost:8100/api/nl43/_polling/status | jq '.'
```
You should see `"poller_running": true` and all configured devices listed.
### Breaking Changes
None. This release is fully backward-compatible with v0.1.x. All existing endpoints and functionality remain unchanged.
---
## [0.1.0] - 2025-12-XX
### Added
- Initial release
- REST API for NL43/NL53 sound level meter control
- TCP command protocol implementation
- FTP file download support
- WebSocket streaming for real-time data (DRD)
- Device configuration management
- Measurement control (start, stop, pause, resume, reset, store)
- Device information endpoints (battery, clock, results)
- Measurement settings management (frequency/time weighting)
- Sleep mode control
- Rate limiting (1-second minimum between commands)
- SQLite database for device configs and status cache
- Health check endpoints
- Comprehensive API documentation
- NL43 protocol documentation
### Database Schema (v0.1.0)
- **NL43Config table** - Device connection configuration
- **NL43Status table** - Measurement snapshot cache
---
## Version History Summary
- **v0.2.1** (2026-01-23) - Roster management, scheduler hooks, FTP logging, doc cleanup
- **v0.2.0** (2026-01-15) - Background Polling System
- **v0.1.0** (2025-12-XX) - Initial Release

110
README.md

@@ -1,15 +1,19 @@
# SLMM - Sound Level Meter Manager
**Version 0.2.1**
Backend API service for controlling and monitoring Rion NL-43/NL-53 Sound Level Meters via TCP and FTP protocols.
## Overview
SLMM is a standalone backend module that provides REST API routing and command translation for NL43/NL53 sound level meters. This service acts as a bridge between the hardware devices and frontend applications, handling all device communication, data persistence, and protocol management.
**Note:** This is a backend-only service. Actual user interfacing is done via customized front ends or a CLI.
## Features
- **Background Polling** ⭐ NEW: Continuous automatic polling of devices with configurable intervals
- **Offline Detection** ⭐ NEW: Automatic device reachability tracking with failure counters
- **Device Management**: Configure and manage multiple NL43/NL53 devices
- **Real-time Monitoring**: Stream live measurement data via WebSocket
- **Measurement Control**: Start, stop, pause, resume, and reset measurements
@@ -22,18 +26,33 @@ SLMM is a standalone backend module that provides REST API routing and command t
## Architecture
```
┌─────────────────┐          ┌──────────────────────────────┐          ┌─────────────────┐
│   (Frontend)    │◄───────►│  SLMM API                    │◄───────►│   NL43/NL53     │
│                 │   HTTP   │  • REST Endpoints            │   TCP    │  Sound Meters   │
└─────────────────┘          │  • WebSocket Streaming       │          └─────────────────┘
                             │  • Background Poller ⭐ NEW  │                   ▲
                             └──────────────┬───────────────┘                   │
                                            │                                   │ Continuous
                                            ▼                                   │ Polling
                                   ┌──────────────┐                             │
                                   │  SQLite DB   │◄────────────────────────────┘
                                   │  • Config    │
                                   │  • Status    │
                                   └──────────────┘
```
### Background Polling (v0.2.0)
SLMM now includes a background polling service that continuously queries devices and updates the status cache:
- **Automatic Updates**: Devices are polled at configurable intervals (10-3600 seconds)
- **Offline Detection**: Devices marked unreachable after 3 consecutive failures
- **Per-Device Configuration**: Each device can have a custom polling interval
- **Resource Efficient**: Dynamic sleep intervals and smart scheduling
- **Graceful Shutdown**: Background task stops cleanly on service shutdown
This makes Terra-View significantly more responsive - status requests return cached data instantly (<100ms) instead of waiting for device queries (1-2 seconds).
## Quick Start
### Prerequisites
@@ -103,10 +122,18 @@ Logs are written to:
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/nl43/{unit_id}/status` | Get cached measurement snapshot (updated by background poller) |
| GET | `/api/nl43/{unit_id}/live` | Request fresh DOD data from device (bypasses cache) |
| WS | `/api/nl43/{unit_id}/stream` | WebSocket stream for real-time DRD data |
### Background Polling Configuration ⭐ NEW
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/nl43/{unit_id}/polling/config` | Get device polling configuration |
| PUT | `/api/nl43/{unit_id}/polling/config` | Update polling interval and enable/disable polling |
| GET | `/api/nl43/_polling/status` | Get global polling status for all devices |
### Measurement Control
| Method | Endpoint | Description |
@@ -167,6 +194,7 @@ slmm/
│   ├── routers.py               # API route definitions
│   ├── models.py                # SQLAlchemy database models
│   ├── services.py              # NL43Client and business logic
│   ├── background_poller.py     # Background polling service ⭐ NEW
│   └── database.py              # Database configuration
├── data/
│   ├── slmm.db                  # SQLite database (auto-created)
@@ -175,9 +203,12 @@ slmm/
├── templates/
│   └── index.html                 # Simple web interface (optional)
├── manuals/                       # Device documentation
├── migrate_add_polling_fields.py  # Database migration for v0.2.0 ⭐ NEW
├── test_polling.sh                # Polling feature test script ⭐ NEW
├── API.md                         # Detailed API documentation
├── COMMUNICATION_GUIDE.md         # NL43 protocol documentation
├── NL43_COMMANDS.md               # Command reference
├── CHANGELOG.md                   # Version history ⭐ NEW
├── requirements.txt               # Python dependencies
└── README.md                      # This file
```
@@ -194,12 +225,16 @@ Stores device connection configuration:
- `ftp_username`: FTP authentication username
- `ftp_password`: FTP authentication password
- `web_enabled`: Enable/disable web interface access
- `poll_interval_seconds`: Polling interval in seconds (10-3600, default: 60) ⭐ NEW
- `poll_enabled`: Enable/disable background polling for this device ⭐ NEW
### NL43Status Table
Caches latest measurement snapshot:
- `unit_id` (PK): Unique device identifier
- `last_seen`: Timestamp of last update
- `measurement_state`: Current state (Measure/Stop)
- `measurement_start_time`: When measurement started (UTC)
- `counter`: Measurement interval counter (1-600)
- `lp`: Instantaneous sound pressure level
- `leq`: Equivalent continuous sound level
- `lmax`: Maximum sound level
@@ -210,6 +245,11 @@ Caches latest measurement snapshot:
- `sd_remaining_mb`: Free SD card space (MB)
- `sd_free_ratio`: SD card free space ratio
- `raw_payload`: Raw device response data
- `is_reachable`: Device reachability status (Boolean) ⭐ NEW
- `consecutive_failures`: Count of consecutive poll failures ⭐ NEW
- `last_poll_attempt`: Last time background poller attempted to poll ⭐ NEW
- `last_success`: Last successful poll timestamp ⭐ NEW
- `last_error`: Last error message (truncated to 500 chars) ⭐ NEW
## Protocol Details
@@ -253,11 +293,33 @@ curl -X PUT http://localhost:8100/api/nl43/meter-001/config \
curl -X POST http://localhost:8100/api/nl43/meter-001/start
```
### Get Cached Status (Fast - from background poller)
```bash
curl http://localhost:8100/api/nl43/meter-001/status
```
### Get Live Status (Bypasses cache)
```bash
curl http://localhost:8100/api/nl43/meter-001/live
```
### Configure Background Polling ⭐ NEW
```bash
# Set polling interval to 30 seconds
curl -X PUT http://localhost:8100/api/nl43/meter-001/polling/config \
-H "Content-Type: application/json" \
-d '{
"poll_interval_seconds": 30,
"poll_enabled": true
}'
# Get polling configuration
curl http://localhost:8100/api/nl43/meter-001/polling/config
# Check global polling status
curl http://localhost:8100/api/nl43/_polling/status
```
### Verify Device Settings
```bash
curl http://localhost:8100/api/nl43/meter-001/settings
```
@@ -356,13 +418,31 @@ pytest
### Database Migrations
```bash
# Migrate to v0.2.0 (add background polling fields)
python3 migrate_add_polling_fields.py
# Legacy: Migrate to add FTP credentials
python migrate_add_ftp_credentials.py
# Set FTP credentials for a device
python set_ftp_credentials.py <unit_id> <username> <password>
```
### Testing Background Polling
```bash
# Run comprehensive polling tests
./test_polling.sh [unit_id]
# Test settings endpoint
python3 test_settings_endpoint.py <unit_id>
# Test sleep mode auto-disable
python3 test_sleep_mode_auto_disable.py <unit_id>
```
### Legacy Scripts
Old migration scripts and manual polling tools have been moved to `archive/` for reference. See [archive/README.md](archive/README.md) for details.
## Contributing
This is a standalone module kept separate from the SFM/Terra-View codebase. When contributing:

343
app/background_poller.py Normal file

@@ -0,0 +1,343 @@
"""
Background polling service for NL43 devices.
This module provides continuous, automatic polling of configured NL43 devices
at configurable intervals. Status snapshots are persisted to the database
for fast API access without querying devices on every request.
"""
import asyncio
import logging
from datetime import datetime, timedelta
from typing import Optional
from sqlalchemy.orm import Session
from app.database import SessionLocal
from app.models import NL43Config, NL43Status
from app.services import NL43Client, persist_snapshot, sync_measurement_start_time_from_ftp
from app.device_logger import log_device_event, cleanup_old_logs
logger = logging.getLogger(__name__)
class BackgroundPoller:
"""
Background task that continuously polls NL43 devices and updates status cache.
Features:
- Per-device configurable poll intervals (30 seconds to 6 hours)
- Automatic offline detection (marks unreachable after 3 consecutive failures)
- Dynamic sleep intervals based on device configurations
- Graceful shutdown on application stop
- Respects existing rate limiting (1-second minimum between commands)
"""
def __init__(self):
self._task: Optional[asyncio.Task] = None
self._running = False
self._logger = logger
self._last_cleanup = None # Track last log cleanup time
async def start(self):
"""Start the background polling task."""
if self._running:
self._logger.warning("Background poller already running")
return
self._running = True
self._task = asyncio.create_task(self._poll_loop())
self._logger.info("Background poller task created")
async def stop(self):
"""Gracefully stop the background polling task."""
if not self._running:
return
self._logger.info("Stopping background poller...")
self._running = False
if self._task:
try:
await asyncio.wait_for(self._task, timeout=5.0)
except asyncio.TimeoutError:
self._logger.warning("Background poller task did not stop gracefully, cancelling...")
self._task.cancel()
try:
await self._task
except asyncio.CancelledError:
pass
self._logger.info("Background poller stopped")
async def _poll_loop(self):
"""Main polling loop that runs continuously."""
self._logger.info("Background polling loop started")
while self._running:
try:
await self._poll_all_devices()
except Exception as e:
self._logger.error(f"Error in poll loop: {e}", exc_info=True)
# Run log cleanup once per hour
try:
now = datetime.utcnow()
if self._last_cleanup is None or (now - self._last_cleanup).total_seconds() > 3600:
cleanup_old_logs()
self._last_cleanup = now
except Exception as e:
self._logger.warning(f"Log cleanup failed: {e}")
# Calculate dynamic sleep interval
sleep_time = self._calculate_sleep_interval()
self._logger.debug(f"Sleeping for {sleep_time} seconds until next poll cycle")
# Sleep in small intervals to allow graceful shutdown
for _ in range(int(sleep_time)):
if not self._running:
break
await asyncio.sleep(1)
self._logger.info("Background polling loop exited")
async def _poll_all_devices(self):
"""Poll all configured devices that are due for polling."""
db: Session = SessionLocal()
try:
# Get all devices with TCP and polling enabled
configs = db.query(NL43Config).filter_by(
tcp_enabled=True,
poll_enabled=True
).all()
if not configs:
self._logger.debug("No devices configured for polling")
return
self._logger.debug(f"Checking {len(configs)} devices for polling")
now = datetime.utcnow()
polled_count = 0
for cfg in configs:
if not self._running:
break
# Get current status
status = db.query(NL43Status).filter_by(unit_id=cfg.unit_id).first()
# Check if device should be polled
if self._should_poll(cfg, status, now):
await self._poll_device(cfg, db)
polled_count += 1
else:
self._logger.debug(f"Skipping {cfg.unit_id} - interval not elapsed")
if polled_count > 0:
self._logger.info(f"Polled {polled_count}/{len(configs)} devices")
finally:
db.close()
def _should_poll(self, cfg: NL43Config, status: Optional[NL43Status], now: datetime) -> bool:
"""
Determine if a device should be polled based on interval and last poll time.
Args:
cfg: Device configuration
status: Current device status (may be None if never polled)
now: Current UTC timestamp
Returns:
True if device should be polled, False otherwise
"""
# If never polled before, poll now
if not status or not status.last_poll_attempt:
self._logger.debug(f"Device {cfg.unit_id} never polled, polling now")
return True
# Calculate elapsed time since last poll attempt
interval = cfg.poll_interval_seconds or 60
elapsed = (now - status.last_poll_attempt).total_seconds()
should_poll = elapsed >= interval
if should_poll:
self._logger.debug(
f"Device {cfg.unit_id} due for polling: {elapsed:.1f}s elapsed, interval={interval}s"
)
return should_poll
async def _poll_device(self, cfg: NL43Config, db: Session):
"""
Poll a single device and update its status in the database.
Args:
cfg: Device configuration
db: Database session
"""
unit_id = cfg.unit_id
self._logger.info(f"Polling device {unit_id} at {cfg.host}:{cfg.tcp_port}")
# Get or create status record
status = db.query(NL43Status).filter_by(unit_id=unit_id).first()
if not status:
status = NL43Status(unit_id=unit_id)
db.add(status)
# Update last_poll_attempt immediately
status.last_poll_attempt = datetime.utcnow()
db.commit()
# Create client and attempt to poll
client = NL43Client(
cfg.host,
cfg.tcp_port,
timeout=5.0,
ftp_username=cfg.ftp_username,
ftp_password=cfg.ftp_password,
ftp_port=cfg.ftp_port or 21
)
try:
# Send DOD? command to get device status
snap = await client.request_dod()
snap.unit_id = unit_id
# Success - persist snapshot and reset failure counter
persist_snapshot(snap, db)
status.is_reachable = True
status.consecutive_failures = 0
status.last_success = datetime.utcnow()
status.last_error = None
db.commit()
self._logger.info(f"✓ Successfully polled {unit_id}")
# Log to device log
log_device_event(
unit_id, "INFO", "POLL",
f"Poll success: state={snap.measurement_state}, Leq={snap.leq}, Lp={snap.lp}",
db
)
# Check if device is measuring but has no start time recorded
# This happens if measurement was started before SLMM began polling
# or after a service restart
status = db.query(NL43Status).filter_by(unit_id=unit_id).first()
# Reset the sync flag when measurement stops (so next measurement can sync)
if status and status.measurement_state != "Start":
if status.start_time_sync_attempted:
status.start_time_sync_attempted = False
db.commit()
self._logger.debug(f"Reset FTP sync flag for {unit_id} (measurement stopped)")
log_device_event(unit_id, "DEBUG", "STATE", "Measurement stopped, reset FTP sync flag", db)
# Attempt FTP sync if:
# - Device is measuring
# - No start time recorded
# - FTP sync not already attempted for this measurement
# - FTP is configured
if (status and
status.measurement_state == "Start" and
status.measurement_start_time is None and
not status.start_time_sync_attempted and
cfg.ftp_enabled and
cfg.ftp_username and
cfg.ftp_password):
self._logger.info(
f"Device {unit_id} is measuring but has no start time - "
f"attempting FTP sync"
)
log_device_event(unit_id, "INFO", "SYNC", "Attempting FTP sync for measurement start time", db)
# Mark that we attempted sync (prevents repeated attempts on failure)
status.start_time_sync_attempted = True
db.commit()
try:
synced = await sync_measurement_start_time_from_ftp(
unit_id=unit_id,
host=cfg.host,
tcp_port=cfg.tcp_port,
ftp_port=cfg.ftp_port or 21,
ftp_username=cfg.ftp_username,
ftp_password=cfg.ftp_password,
db=db
)
if synced:
self._logger.info(f"✓ FTP sync succeeded for {unit_id}")
log_device_event(unit_id, "INFO", "SYNC", "FTP sync succeeded - measurement start time updated", db)
else:
self._logger.warning(f"FTP sync returned False for {unit_id}")
log_device_event(unit_id, "WARNING", "SYNC", "FTP sync returned False", db)
except Exception as sync_err:
self._logger.warning(
f"FTP sync failed for {unit_id}: {sync_err}"
)
log_device_event(unit_id, "ERROR", "SYNC", f"FTP sync failed: {sync_err}", db)
except Exception as e:
# Failure - increment counter and potentially mark offline
status.consecutive_failures += 1
error_msg = str(e)[:500] # Truncate to prevent bloat
status.last_error = error_msg
# Mark unreachable after 3 consecutive failures
if status.consecutive_failures >= 3:
if status.is_reachable: # Only log transition
self._logger.warning(
f"Device {unit_id} marked unreachable after {status.consecutive_failures} failures: {error_msg}"
)
log_device_event(unit_id, "ERROR", "POLL", f"Device marked UNREACHABLE after {status.consecutive_failures} failures: {error_msg}", db)
status.is_reachable = False
else:
self._logger.warning(
f"Poll failed for {unit_id} (attempt {status.consecutive_failures}/3): {error_msg}"
)
log_device_event(unit_id, "WARNING", "POLL", f"Poll failed (attempt {status.consecutive_failures}/3): {error_msg}", db)
db.commit()
def _calculate_sleep_interval(self) -> int:
"""
Calculate the next sleep interval based on all device poll intervals.
Returns a dynamic sleep time that ensures responsive polling:
- Minimum 30 seconds (prevents tight loops)
- Maximum 300 seconds / 5 minutes (ensures reasonable responsiveness for long intervals)
- Generally half the minimum device interval
Returns:
Sleep interval in seconds
"""
db: Session = SessionLocal()
try:
configs = db.query(NL43Config).filter_by(
tcp_enabled=True,
poll_enabled=True
).all()
if not configs:
return 60 # Default sleep when no devices configured
# Get all intervals
intervals = [cfg.poll_interval_seconds or 60 for cfg in configs]
min_interval = min(intervals)
# Use half the minimum interval, but cap between 30-300 seconds
# This allows longer sleep times when polling intervals are long (e.g., hourly)
sleep_time = max(30, min(300, min_interval // 2))
return sleep_time
finally:
db.close()
# Global singleton instance
poller = BackgroundPoller()

277
app/device_logger.py Normal file

@@ -0,0 +1,277 @@
"""
Per-device logging system.
Provides dual output: database entries for structured queries and file logs for backup.
Each device gets its own log file in data/logs/{unit_id}.log with rotation.
"""
import logging
import os
from datetime import datetime, timedelta
from logging.handlers import RotatingFileHandler
from pathlib import Path
from typing import Optional
from sqlalchemy.orm import Session
from app.database import SessionLocal
from app.models import DeviceLog
# Configure base logger
logger = logging.getLogger(__name__)
# Log directory (persisted in Docker volume)
LOG_DIR = Path(os.path.dirname(os.path.dirname(__file__))) / "data" / "logs"
LOG_DIR.mkdir(parents=True, exist_ok=True)
# Per-device file loggers (cached)
_device_file_loggers: dict = {}
# Log retention (days)
LOG_RETENTION_DAYS = int(os.getenv("LOG_RETENTION_DAYS", "7"))
def _get_file_logger(unit_id: str) -> logging.Logger:
"""Get or create a file logger for a specific device."""
if unit_id in _device_file_loggers:
return _device_file_loggers[unit_id]
# Create device-specific logger
device_logger = logging.getLogger(f"device.{unit_id}")
device_logger.setLevel(logging.DEBUG)
# Avoid duplicate handlers
if not device_logger.handlers:
# Create rotating file handler (5 MB max, keep 3 backups)
log_file = LOG_DIR / f"{unit_id}.log"
handler = RotatingFileHandler(
log_file,
maxBytes=5 * 1024 * 1024, # 5 MB
backupCount=3,
encoding="utf-8"
)
handler.setLevel(logging.DEBUG)
# Format: timestamp [LEVEL] [CATEGORY] message
formatter = logging.Formatter(
"%(asctime)s [%(levelname)s] [%(category)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S"
)
handler.setFormatter(formatter)
device_logger.addHandler(handler)
# Don't propagate to root logger
device_logger.propagate = False
_device_file_loggers[unit_id] = device_logger
return device_logger
def log_device_event(
unit_id: str,
level: str,
category: str,
message: str,
db: Optional[Session] = None
):
"""
Log an event for a specific device.
Writes to both:
1. Database (DeviceLog table) for structured queries
2. File (data/logs/{unit_id}.log) for backup/debugging
Args:
unit_id: Device identifier
level: Log level (DEBUG, INFO, WARNING, ERROR)
category: Event category (TCP, FTP, POLL, COMMAND, STATE, SYNC)
message: Log message
db: Optional database session (creates one if not provided)
"""
timestamp = datetime.utcnow()
# Write to file log
try:
file_logger = _get_file_logger(unit_id)
log_func = getattr(file_logger, level.lower(), file_logger.info)
# Pass category as extra for formatter
log_func(message, extra={"category": category})
except Exception as e:
logger.warning(f"Failed to write file log for {unit_id}: {e}")
# Write to database
close_db = False
try:
if db is None:
db = SessionLocal()
close_db = True
log_entry = DeviceLog(
unit_id=unit_id,
timestamp=timestamp,
level=level.upper(),
category=category.upper(),
message=message
)
db.add(log_entry)
db.commit()
except Exception as e:
logger.warning(f"Failed to write DB log for {unit_id}: {e}")
if db:
db.rollback()
finally:
if close_db and db:
db.close()
def cleanup_old_logs(retention_days: Optional[int] = None, db: Optional[Session] = None):
"""
Delete log entries older than retention period.
Args:
retention_days: Days to retain (default: LOG_RETENTION_DAYS env var or 7)
db: Optional database session
"""
if retention_days is None:
retention_days = LOG_RETENTION_DAYS
cutoff = datetime.utcnow() - timedelta(days=retention_days)
close_db = False
try:
if db is None:
db = SessionLocal()
close_db = True
deleted = db.query(DeviceLog).filter(DeviceLog.timestamp < cutoff).delete()
db.commit()
if deleted > 0:
logger.info(f"Cleaned up {deleted} log entries older than {retention_days} days")
except Exception as e:
logger.error(f"Failed to cleanup old logs: {e}")
if db:
db.rollback()
finally:
if close_db and db:
db.close()
def get_device_logs(
unit_id: str,
limit: int = 100,
offset: int = 0,
level: Optional[str] = None,
category: Optional[str] = None,
since: Optional[datetime] = None,
db: Optional[Session] = None
) -> list:
"""
Query log entries for a specific device.
Args:
unit_id: Device identifier
limit: Max entries to return (default: 100)
offset: Number of entries to skip (default: 0)
level: Filter by level (DEBUG, INFO, WARNING, ERROR)
category: Filter by category (TCP, FTP, POLL, COMMAND, STATE, SYNC)
since: Filter entries after this timestamp
db: Optional database session
Returns:
List of log entries as dicts
"""
close_db = False
try:
if db is None:
db = SessionLocal()
close_db = True
query = db.query(DeviceLog).filter(DeviceLog.unit_id == unit_id)
if level:
query = query.filter(DeviceLog.level == level.upper())
if category:
query = query.filter(DeviceLog.category == category.upper())
if since:
query = query.filter(DeviceLog.timestamp >= since)
# Order by newest first
query = query.order_by(DeviceLog.timestamp.desc())
# Apply pagination
entries = query.offset(offset).limit(limit).all()
return [
{
"id": e.id,
"timestamp": e.timestamp.isoformat() + "Z",
"level": e.level,
"category": e.category,
"message": e.message
}
for e in entries
]
finally:
if close_db and db:
db.close()
def get_log_stats(unit_id: str, db: Optional[Session] = None) -> dict:
"""
Get log statistics for a device.
Returns:
Dict with counts by level and category
"""
close_db = False
try:
if db is None:
db = SessionLocal()
close_db = True
total = db.query(DeviceLog).filter(DeviceLog.unit_id == unit_id).count()
# Count by level
level_counts = {}
for level in ["DEBUG", "INFO", "WARNING", "ERROR"]:
count = db.query(DeviceLog).filter(
DeviceLog.unit_id == unit_id,
DeviceLog.level == level
).count()
if count > 0:
level_counts[level] = count
# Count by category
category_counts = {}
for category in ["TCP", "FTP", "POLL", "COMMAND", "STATE", "SYNC", "GENERAL"]:
count = db.query(DeviceLog).filter(
DeviceLog.unit_id == unit_id,
DeviceLog.category == category
).count()
if count > 0:
category_counts[category] = count
# Get oldest and newest
oldest = db.query(DeviceLog).filter(
DeviceLog.unit_id == unit_id
).order_by(DeviceLog.timestamp.asc()).first()
newest = db.query(DeviceLog).filter(
DeviceLog.unit_id == unit_id
).order_by(DeviceLog.timestamp.desc()).first()
return {
"total": total,
"by_level": level_counts,
"by_category": category_counts,
"oldest": oldest.timestamp.isoformat() + "Z" if oldest else None,
"newest": newest.timestamp.isoformat() + "Z" if newest else None
}
finally:
if close_db and db:
db.close()
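`get_log_stats` issues one COUNT query per level and per category. For rows already in memory, the same aggregation is a single pass with `collections.Counter` (a sketch of the aggregation only, not a drop-in replacement for the SQL version):

```python
from collections import Counter

# Hypothetical rows shaped like DeviceLog entries.
rows = [
    {"level": "INFO", "category": "POLL"},
    {"level": "ERROR", "category": "TCP"},
    {"level": "INFO", "category": "FTP"},
]

# One pass per axis instead of one COUNT query per value.
by_level = Counter(r["level"] for r in rows)
by_category = Counter(r["category"] for r in rows)

stats = {
    "total": len(rows),
    "by_level": dict(by_level),
    "by_category": dict(by_category),
}
```

Counter omits zero counts automatically, matching the `if count > 0` filtering above.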

View File

@@ -1,5 +1,6 @@
import os
import logging
from contextlib import asynccontextmanager
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
@@ -7,6 +8,7 @@ from fastapi.templating import Jinja2Templates
from app.database import Base, engine
from app import routers
from app.background_poller import poller
# Configure logging
logging.basicConfig(
@@ -23,10 +25,28 @@ logger = logging.getLogger(__name__)
Base.metadata.create_all(bind=engine)
logger.info("Database tables initialized")
@asynccontextmanager
async def lifespan(app: FastAPI):
"""Manage application lifecycle - startup and shutdown events."""
# Startup
logger.info("Starting background poller...")
await poller.start()
logger.info("Background poller started")
yield # Application runs
# Shutdown
logger.info("Stopping background poller...")
await poller.stop()
logger.info("Background poller stopped")
app = FastAPI(
    title="SLMM NL43 Addon",
    description="Standalone module for NL43 configuration and status APIs with background polling",
    version="0.2.0",
    lifespan=lifespan,
)
# CORS configuration - use environment variable for allowed origins
@@ -52,6 +72,11 @@ def index(request: Request):
return templates.TemplateResponse("index.html", {"request": request})
@app.get("/roster", response_class=HTMLResponse)
def roster(request: Request):
return templates.TemplateResponse("roster.html", {"request": request})
@app.get("/health")
async def health():
    """Basic health check endpoint."""

View File

@@ -19,6 +19,10 @@ class NL43Config(Base):
ftp_password = Column(String, nullable=True)  # FTP login password
web_enabled = Column(Boolean, default=False)
# Background polling configuration
poll_interval_seconds = Column(Integer, nullable=True, default=60) # Polling interval (10-3600 seconds)
poll_enabled = Column(Boolean, default=True) # Enable/disable background polling for this device
class NL43Status(Base):
"""
@@ -42,3 +46,29 @@ class NL43Status(Base):
sd_remaining_mb = Column(String, nullable=True)
sd_free_ratio = Column(String, nullable=True)
raw_payload = Column(Text, nullable=True)
# Background polling status
is_reachable = Column(Boolean, default=True) # Device reachability status
consecutive_failures = Column(Integer, default=0) # Count of consecutive poll failures
last_poll_attempt = Column(DateTime, nullable=True) # Last time background poller attempted to poll
last_success = Column(DateTime, nullable=True) # Last successful poll timestamp
last_error = Column(Text, nullable=True) # Last error message (truncated to 500 chars)
# FTP start time sync tracking
start_time_sync_attempted = Column(Boolean, default=False) # True if FTP sync was attempted for current measurement
class DeviceLog(Base):
"""
Per-device log entries for debugging and audit trail.
Stores events like commands, state changes, errors, and FTP operations.
"""
__tablename__ = "device_logs"
id = Column(Integer, primary_key=True, autoincrement=True)
unit_id = Column(String, index=True, nullable=False)
timestamp = Column(DateTime, default=func.now(), index=True)
level = Column(String, default="INFO") # DEBUG, INFO, WARNING, ERROR
category = Column(String, default="GENERAL") # TCP, FTP, POLL, COMMAND, STATE, SYNC
message = Column(Text, nullable=False)

View File

@@ -2,7 +2,8 @@ from fastapi import APIRouter, Depends, HTTPException, WebSocket, WebSocketDisco
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
from datetime import datetime
from pydantic import BaseModel, field_validator, Field
from typing import Optional
import logging
import ipaddress
import json
@@ -49,6 +50,8 @@ class ConfigPayload(BaseModel):
ftp_username: str | None = None
ftp_password: str | None = None
web_enabled: bool | None = None
poll_enabled: bool | None = None
poll_interval_seconds: int | None = None
@field_validator("host")
@classmethod
@@ -76,6 +79,229 @@ class ConfigPayload(BaseModel):
raise ValueError("Port must be between 1 and 65535")
return v
@field_validator("poll_interval_seconds")
@classmethod
def validate_poll_interval(cls, v):
if v is not None and not (30 <= v <= 21600):
raise ValueError("Poll interval must be between 30 and 21600 seconds (30s to 6 hours)")
return v
class PollingConfigPayload(BaseModel):
"""Payload for updating device polling configuration."""
poll_interval_seconds: int | None = Field(None, ge=30, le=21600, description="Polling interval in seconds (30s to 6 hours)")
poll_enabled: bool | None = Field(None, description="Enable or disable background polling for this device")
# ============================================================================
# GLOBAL POLLING STATUS ENDPOINT (must be before /{unit_id} routes)
# ============================================================================
@router.get("/_polling/status")
def get_global_polling_status(db: Session = Depends(get_db)):
"""
Get global background polling status for all devices.
Returns information about which devices are being polled, their
reachability status, failure counts, and last poll times.
Useful for monitoring the health of the background polling system.
Note: Must be defined before /{unit_id} routes to avoid routing conflicts.
"""
from app.background_poller import poller
configs = db.query(NL43Config).filter_by(
tcp_enabled=True,
poll_enabled=True
).all()
device_statuses = []
for cfg in configs:
status = db.query(NL43Status).filter_by(unit_id=cfg.unit_id).first()
device_statuses.append({
"unit_id": cfg.unit_id,
"poll_interval_seconds": cfg.poll_interval_seconds,
"poll_enabled": cfg.poll_enabled,
"is_reachable": status.is_reachable if status else None,
"consecutive_failures": status.consecutive_failures if status else 0,
"last_poll_attempt": status.last_poll_attempt.isoformat() if status and status.last_poll_attempt else None,
"last_success": status.last_success.isoformat() if status and status.last_success else None,
"last_error": status.last_error if status else None
})
return {
"status": "ok",
"data": {
"poller_running": poller._running,
"total_devices": len(configs),
"devices": device_statuses
}
}
@router.get("/roster")
def get_roster(db: Session = Depends(get_db)):
"""
Get list of all configured devices with their status.
Returns all NL43Config entries along with their associated status information.
Used by the roster page to display all devices in a table.
Note: Must be defined before /{unit_id} routes to avoid routing conflicts.
"""
configs = db.query(NL43Config).all()
devices = []
for cfg in configs:
status = db.query(NL43Status).filter_by(unit_id=cfg.unit_id).first()
device_data = {
"unit_id": cfg.unit_id,
"host": cfg.host,
"tcp_port": cfg.tcp_port,
"ftp_port": cfg.ftp_port,
"tcp_enabled": cfg.tcp_enabled,
"ftp_enabled": cfg.ftp_enabled,
"ftp_username": cfg.ftp_username,
"ftp_password": cfg.ftp_password,
"web_enabled": cfg.web_enabled,
"poll_enabled": cfg.poll_enabled,
"poll_interval_seconds": cfg.poll_interval_seconds,
"status": None
}
if status:
device_data["status"] = {
"last_seen": status.last_seen.isoformat() if status.last_seen else None,
"measurement_state": status.measurement_state,
"is_reachable": status.is_reachable,
"consecutive_failures": status.consecutive_failures,
"last_success": status.last_success.isoformat() if status.last_success else None,
"last_error": status.last_error
}
devices.append(device_data)
return {
"status": "ok",
"devices": devices,
"total": len(devices)
}
class RosterCreatePayload(BaseModel):
"""Payload for creating a new device via roster."""
unit_id: str
host: str
tcp_port: int = 2255
ftp_port: int = 21
tcp_enabled: bool = True
ftp_enabled: bool = False
ftp_username: str | None = None
ftp_password: str | None = None
web_enabled: bool = False
poll_enabled: bool = True
poll_interval_seconds: int = 60
@field_validator("host")
@classmethod
def validate_host(cls, v):
if v is None:
return v
# Try to parse as IP address or hostname
try:
ipaddress.ip_address(v)
except ValueError:
# Not an IP, check if it's a valid hostname format
if not v or len(v) > 253:
raise ValueError("Invalid hostname length")
# Allow hostnames (basic validation)
if not all(c.isalnum() or c in ".-" for c in v):
raise ValueError("Host must be a valid IP address or hostname")
return v
@field_validator("tcp_port", "ftp_port")
@classmethod
def validate_port(cls, v):
if v is None:
return v
if not (1 <= v <= 65535):
raise ValueError("Port must be between 1 and 65535")
return v
@field_validator("poll_interval_seconds")
@classmethod
def validate_poll_interval(cls, v):
if v is not None and not (30 <= v <= 21600):
raise ValueError("Poll interval must be between 30 and 21600 seconds (30s to 6 hours)")
return v
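The host validator accepts either a literal IP address or a restricted hostname. Stripped of the pydantic wrapper, the decision reduces to this predicate (a sketch of the same checks as `validate_host`):

```python
import ipaddress

def is_valid_host(v: str) -> bool:
    # Literal IPv4/IPv6 addresses pass immediately.
    try:
        ipaddress.ip_address(v)
        return True
    except ValueError:
        pass
    # Otherwise require a plausible hostname: non-empty, at most 253
    # characters, alphanumerics plus "." and "-" only.
    if not v or len(v) > 253:
        return False
    return all(c.isalnum() or c in ".-" for c in v)
```

This is deliberately loose hostname validation (it accepts labels starting with "-", for instance); it rejects the characters most likely to indicate a typo or injection attempt without implementing full RFC 1123 rules.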
@router.post("/roster")
async def create_device(payload: RosterCreatePayload, db: Session = Depends(get_db)):
"""
Create a new device configuration via roster.
This endpoint allows creating a new device with all configuration options.
If a device with the same unit_id already exists, returns a 409 conflict.
Note: Must be defined before /{unit_id} routes to avoid routing conflicts.
"""
# Check if device already exists
existing = db.query(NL43Config).filter_by(unit_id=payload.unit_id).first()
if existing:
raise HTTPException(
status_code=409,
detail=f"Device with unit_id '{payload.unit_id}' already exists. Use PUT /{payload.unit_id}/config to update."
)
# Create new config
cfg = NL43Config(
unit_id=payload.unit_id,
host=payload.host,
tcp_port=payload.tcp_port,
ftp_port=payload.ftp_port,
tcp_enabled=payload.tcp_enabled,
ftp_enabled=payload.ftp_enabled,
ftp_username=payload.ftp_username,
ftp_password=payload.ftp_password,
web_enabled=payload.web_enabled,
poll_enabled=payload.poll_enabled,
poll_interval_seconds=payload.poll_interval_seconds
)
db.add(cfg)
db.commit()
db.refresh(cfg)
logger.info(f"Created new device config for {payload.unit_id}")
# If TCP is enabled, automatically disable sleep mode
if cfg.tcp_enabled and cfg.host and cfg.tcp_port:
logger.info(f"TCP enabled for {payload.unit_id}, ensuring sleep mode is disabled")
client = NL43Client(cfg.host, cfg.tcp_port, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password, ftp_port=cfg.ftp_port)
await ensure_sleep_mode_disabled(client, payload.unit_id)
return {
"status": "ok",
"message": f"Device {payload.unit_id} created successfully",
"data": {
"unit_id": cfg.unit_id,
"host": cfg.host,
"tcp_port": cfg.tcp_port,
"tcp_enabled": cfg.tcp_enabled,
"ftp_enabled": cfg.ftp_enabled,
"poll_enabled": cfg.poll_enabled,
"poll_interval_seconds": cfg.poll_interval_seconds
}
}
# ============================================================================
# DEVICE-SPECIFIC ENDPOINTS
# ============================================================================
@router.get("/{unit_id}/config")
def get_config(unit_id: str, db: Session = Depends(get_db)):
@@ -98,6 +324,34 @@ def get_config(unit_id: str, db: Session = Depends(get_db)):
}
@router.delete("/{unit_id}/config")
def delete_config(unit_id: str, db: Session = Depends(get_db)):
"""
Delete device configuration and associated status data.
Used by Terra-View to remove devices from SLMM when deleted from roster.
"""
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
if not cfg:
raise HTTPException(status_code=404, detail="NL43 config not found")
# Also delete associated status record
status = db.query(NL43Status).filter_by(unit_id=unit_id).first()
if status:
db.delete(status)
logger.info(f"Deleted status record for {unit_id}")
db.delete(cfg)
db.commit()
logger.info(f"Deleted device config for {unit_id}")
return {
"status": "ok",
"message": f"Deleted device {unit_id}"
}
@router.put("/{unit_id}/config")
async def upsert_config(unit_id: str, payload: ConfigPayload, db: Session = Depends(get_db)):
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
@@ -121,6 +375,10 @@ async def upsert_config(unit_id: str, payload: ConfigPayload, db: Session = Depe
cfg.ftp_password = payload.ftp_password
if payload.web_enabled is not None:
cfg.web_enabled = payload.web_enabled
if payload.poll_enabled is not None:
cfg.poll_enabled = payload.poll_enabled
if payload.poll_interval_seconds is not None:
cfg.poll_interval_seconds = payload.poll_interval_seconds
db.commit()
db.refresh(cfg)
@@ -142,6 +400,8 @@ async def upsert_config(unit_id: str, payload: ConfigPayload, db: Session = Depe
"tcp_enabled": cfg.tcp_enabled,
"ftp_enabled": cfg.ftp_enabled,
"web_enabled": cfg.web_enabled,
"poll_enabled": cfg.poll_enabled,
"poll_interval_seconds": cfg.poll_interval_seconds,
},
}
@@ -167,6 +427,12 @@ def get_status(unit_id: str, db: Session = Depends(get_db)):
"sd_remaining_mb": status.sd_remaining_mb,
"sd_free_ratio": status.sd_free_ratio,
"raw_payload": status.raw_payload,
# Background polling status
"is_reachable": status.is_reachable,
"consecutive_failures": status.consecutive_failures,
"last_poll_attempt": status.last_poll_attempt.isoformat() if status.last_poll_attempt else None,
"last_success": status.last_success.isoformat() if status.last_success else None,
"last_error": status.last_error,
},
}
@@ -297,6 +563,104 @@ async def stop_measurement(unit_id: str, db: Session = Depends(get_db)):
return {"status": "ok", "message": "Measurement stopped"}
# ============================================================================
# CYCLE COMMANDS (for scheduled automation)
# ============================================================================
class StartCyclePayload(BaseModel):
"""Payload for start_cycle endpoint."""
sync_clock: bool = Field(True, description="Whether to sync device clock to server time")
class StopCyclePayload(BaseModel):
"""Payload for stop_cycle endpoint."""
download: bool = Field(True, description="Whether to download measurement data")
download_path: str | None = Field(None, description="Custom path for ZIP file (optional)")
@router.post("/{unit_id}/start-cycle")
async def start_cycle(unit_id: str, payload: StartCyclePayload | None = None, db: Session = Depends(get_db)):
"""
Execute complete start cycle for scheduled automation:
1. Sync device clock to server time (if sync_clock=True)
2. Find next safe index (increment, check overwrite, repeat if needed)
3. Start measurement
Use this instead of /start when automating scheduled measurements.
This ensures the device is properly prepared before recording begins.
"""
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
if not cfg:
raise HTTPException(status_code=404, detail="NL43 config not found")
if not cfg.tcp_enabled:
raise HTTPException(status_code=403, detail="TCP communication is disabled for this device")
payload = payload or StartCyclePayload()
client = NL43Client(cfg.host, cfg.tcp_port, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password, ftp_port=cfg.ftp_port or 21)
try:
# Ensure sleep mode is disabled before starting
await ensure_sleep_mode_disabled(client, unit_id)
# Execute the full start cycle
result = await client.start_cycle(sync_clock=payload.sync_clock)
# Update status in database
snap = await client.request_dod()
snap.unit_id = unit_id
persist_snapshot(snap, db)
logger.info(f"Start cycle completed for {unit_id}: index {result['old_index']} -> {result['new_index']}")
return {"status": "ok", "unit_id": unit_id, **result}
except Exception as e:
logger.error(f"Start cycle failed for {unit_id}: {e}")
raise HTTPException(status_code=502, detail=str(e))
@router.post("/{unit_id}/stop-cycle")
async def stop_cycle(unit_id: str, payload: StopCyclePayload | None = None, db: Session = Depends(get_db)):
"""
Execute complete stop cycle for scheduled automation:
1. Stop measurement
2. Enable FTP
3. Download measurement folder (matching current index)
4. Verify download succeeded
Use this instead of /stop when automating scheduled measurements.
This ensures data is properly saved and downloaded before the next session.
"""
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
if not cfg:
raise HTTPException(status_code=404, detail="NL43 config not found")
if not cfg.tcp_enabled:
raise HTTPException(status_code=403, detail="TCP communication is disabled for this device")
payload = payload or StopCyclePayload()
client = NL43Client(cfg.host, cfg.tcp_port, ftp_username=cfg.ftp_username, ftp_password=cfg.ftp_password, ftp_port=cfg.ftp_port or 21)
try:
# Execute the full stop cycle
result = await client.stop_cycle(
download=payload.download,
download_path=payload.download_path,
)
# Update status in database
snap = await client.request_dod()
snap.unit_id = unit_id
persist_snapshot(snap, db)
logger.info(f"Stop cycle completed for {unit_id}: folder={result.get('downloaded_folder')}, success={result.get('download_success')}")
return {"status": "ok", "unit_id": unit_id, **result}
except Exception as e:
logger.error(f"Stop cycle failed for {unit_id}: {e}")
raise HTTPException(status_code=502, detail=str(e))
@router.post("/{unit_id}/store")
async def manual_store(unit_id: str, db: Session = Depends(get_db)):
"""Manually store measurement data to SD card."""
@@ -1479,4 +1843,208 @@ async def run_diagnostics(unit_id: str, db: Session = Depends(get_db)):
# All tests passed
diagnostics["overall_status"] = "pass"
# Add database dump: config and status cache
diagnostics["database_dump"] = {
"config": {
"unit_id": cfg.unit_id,
"host": cfg.host,
"tcp_port": cfg.tcp_port,
"tcp_enabled": cfg.tcp_enabled,
"ftp_enabled": cfg.ftp_enabled,
"ftp_port": cfg.ftp_port,
"ftp_username": cfg.ftp_username,
"ftp_password": "***" if cfg.ftp_password else None, # Mask password
"web_enabled": cfg.web_enabled,
"poll_interval_seconds": cfg.poll_interval_seconds,
"poll_enabled": cfg.poll_enabled
},
"status_cache": None
}
# Get cached status if available
status = db.query(NL43Status).filter_by(unit_id=unit_id).first()
if status:
# Helper to format datetime as ISO with Z suffix to indicate UTC
def to_utc_iso(dt):
return dt.isoformat() + 'Z' if dt else None
diagnostics["database_dump"]["status_cache"] = {
"unit_id": status.unit_id,
"last_seen": to_utc_iso(status.last_seen),
"measurement_state": status.measurement_state,
"measurement_start_time": to_utc_iso(status.measurement_start_time),
"counter": status.counter,
"lp": status.lp,
"leq": status.leq,
"lmax": status.lmax,
"lmin": status.lmin,
"lpeak": status.lpeak,
"battery_level": status.battery_level,
"power_source": status.power_source,
"sd_remaining_mb": status.sd_remaining_mb,
"sd_free_ratio": status.sd_free_ratio,
"is_reachable": status.is_reachable,
"consecutive_failures": status.consecutive_failures,
"last_poll_attempt": to_utc_iso(status.last_poll_attempt),
"last_success": to_utc_iso(status.last_success),
"last_error": status.last_error,
"raw_payload": status.raw_payload
}
return diagnostics
# ============================================================================
# DEVICE LOGS ENDPOINTS
# ============================================================================
@router.get("/{unit_id}/logs")
def get_device_logs(
unit_id: str,
limit: int = 100,
offset: int = 0,
level: Optional[str] = None,
category: Optional[str] = None,
db: Session = Depends(get_db)
):
"""
Get log entries for a specific device.
Query parameters:
- limit: Max entries to return (default: 100, max: 1000)
- offset: Number of entries to skip (for pagination)
- level: Filter by level (DEBUG, INFO, WARNING, ERROR)
- category: Filter by category (TCP, FTP, POLL, COMMAND, STATE, SYNC)
Returns newest entries first.
"""
from app.device_logger import get_device_logs as fetch_logs, get_log_stats
# Validate limit
limit = min(limit, 1000)
logs = fetch_logs(
unit_id=unit_id,
limit=limit,
offset=offset,
level=level,
category=category,
db=db
)
stats = get_log_stats(unit_id, db)
return {
"status": "ok",
"unit_id": unit_id,
"logs": logs,
"count": len(logs),
"stats": stats,
"filters": {
"level": level,
"category": category
},
"pagination": {
"limit": limit,
"offset": offset
}
}
@router.delete("/{unit_id}/logs")
def clear_device_logs(unit_id: str, db: Session = Depends(get_db)):
"""
Clear all log entries for a specific device.
"""
from app.models import DeviceLog
deleted = db.query(DeviceLog).filter(DeviceLog.unit_id == unit_id).delete()
db.commit()
logger.info(f"Cleared {deleted} log entries for device {unit_id}")
return {
"status": "ok",
"message": f"Cleared {deleted} log entries for {unit_id}",
"deleted_count": deleted
}
# ============================================================================
# BACKGROUND POLLING CONFIGURATION ENDPOINTS
# ============================================================================
@router.get("/{unit_id}/polling/config")
def get_polling_config(unit_id: str, db: Session = Depends(get_db)):
"""
Get background polling configuration for a device.
Returns the current polling interval and enabled status for automatic
background status polling.
"""
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
if not cfg:
raise HTTPException(status_code=404, detail="Device configuration not found")
return {
"status": "ok",
"data": {
"unit_id": unit_id,
"poll_interval_seconds": cfg.poll_interval_seconds,
"poll_enabled": cfg.poll_enabled
}
}
@router.put("/{unit_id}/polling/config")
def update_polling_config(
unit_id: str,
payload: PollingConfigPayload,
db: Session = Depends(get_db)
):
"""
Update background polling configuration for a device.
Allows configuring the polling interval (30-21600 seconds, i.e. 30s to 6 hours) and
enabling/disabling automatic background polling per device.
Changes take effect on the next polling cycle.
"""
cfg = db.query(NL43Config).filter_by(unit_id=unit_id).first()
if not cfg:
raise HTTPException(status_code=404, detail="Device configuration not found")
# Update interval if provided
if payload.poll_interval_seconds is not None:
if payload.poll_interval_seconds < 30:
raise HTTPException(
status_code=400,
detail="Polling interval must be at least 30 seconds"
)
if payload.poll_interval_seconds > 21600:
raise HTTPException(
status_code=400,
detail="Polling interval must be at most 21600 seconds (6 hours)"
)
cfg.poll_interval_seconds = payload.poll_interval_seconds
# Update enabled status if provided
if payload.poll_enabled is not None:
cfg.poll_enabled = payload.poll_enabled
db.commit()
logger.info(
f"Updated polling config for {unit_id}: "
f"interval={cfg.poll_interval_seconds}s, enabled={cfg.poll_enabled}"
)
return {
"status": "ok",
"data": {
"unit_id": unit_id,
"poll_interval_seconds": cfg.poll_interval_seconds,
"poll_enabled": cfg.poll_enabled
}
}

View File

@@ -14,7 +14,7 @@ import zipfile
import tempfile
from dataclasses import dataclass
from datetime import datetime, timezone, timedelta
from typing import Optional, List, Dict
from sqlalchemy.orm import Session
from ftplib import FTP
from pathlib import Path
@@ -76,10 +76,22 @@ def persist_snapshot(s: NL43Snapshot, db: Session):
# Measurement just started - record the start time
row.measurement_start_time = datetime.utcnow()
logger.info(f"✓ Measurement started on {s.unit_id} at {row.measurement_start_time}")
# Log state change (lazy import to avoid circular dependency)
try:
from app.device_logger import log_device_event
log_device_event(s.unit_id, "INFO", "STATE", f"Measurement STARTED at {row.measurement_start_time}", db)
except Exception:
pass
elif was_measuring and not is_measuring:
# Measurement stopped - clear the start time
row.measurement_start_time = None
logger.info(f"✓ Measurement stopped on {s.unit_id}")
# Log state change
try:
from app.device_logger import log_device_event
log_device_event(s.unit_id, "INFO", "STATE", "Measurement STOPPED", db)
except Exception:
pass
row.measurement_state = new_state
row.counter = s.counter
@@ -101,10 +113,126 @@ def persist_snapshot(s: NL43Snapshot, db: Session):
raise
async def sync_measurement_start_time_from_ftp(
unit_id: str,
host: str,
tcp_port: int,
ftp_port: int,
ftp_username: str,
ftp_password: str,
db: Session
) -> bool:
"""
Sync measurement start time from the FTP folder timestamp.
This is called when SLMM detects a device is already measuring but doesn't
have a recorded start time (e.g., after service restart or if measurement
was started before SLMM began polling).
The workflow:
1. Disable FTP (reset)
2. Enable FTP
3. List NL-43 folder to get measurement folder timestamps
4. Use the most recent folder's timestamp as the start time
5. Update the database
Args:
unit_id: Device identifier
host: Device IP/hostname
tcp_port: TCP control port
ftp_port: FTP port (usually 21)
ftp_username: FTP username (usually "USER")
ftp_password: FTP password (usually "0000")
db: Database session
Returns:
True if sync succeeded, False otherwise
"""
logger.info(f"[FTP-SYNC] Attempting to sync measurement start time for {unit_id} via FTP")
client = NL43Client(
host, tcp_port,
ftp_username=ftp_username,
ftp_password=ftp_password,
ftp_port=ftp_port
)
try:
# Step 1: Disable FTP to reset it
logger.info(f"[FTP-SYNC] Step 1: Disabling FTP on {unit_id}")
await client.disable_ftp()
await asyncio.sleep(1.5) # Wait for device to process
# Step 2: Enable FTP
logger.info(f"[FTP-SYNC] Step 2: Enabling FTP on {unit_id}")
await client.enable_ftp()
await asyncio.sleep(2.0) # Wait for FTP server to start
# Step 3: List NL-43 folder
logger.info(f"[FTP-SYNC] Step 3: Listing /NL-43 folder on {unit_id}")
files = await client.list_ftp_files("/NL-43")
# Filter for directories only (measurement folders)
folders = [f for f in files if f.get('is_dir', False)]
if not folders:
logger.warning(f"[FTP-SYNC] No measurement folders found on {unit_id}")
return False
# Sort by modified timestamp (newest first)
folders.sort(key=lambda f: f.get('modified_timestamp', ''), reverse=True)
latest_folder = folders[0]
folder_name = latest_folder['name']
logger.info(f"[FTP-SYNC] Found latest measurement folder: {folder_name}")
# Step 4: Parse timestamp
if 'modified_timestamp' in latest_folder and latest_folder['modified_timestamp']:
timestamp_str = latest_folder['modified_timestamp']
# Parse ISO format timestamp (already in UTC from SLMM FTP listing)
start_time = datetime.fromisoformat(timestamp_str.replace('Z', ''))
# Step 5: Update database
status = db.query(NL43Status).filter_by(unit_id=unit_id).first()
if status:
old_time = status.measurement_start_time
status.measurement_start_time = start_time
db.commit()
logger.info(f"[FTP-SYNC] ✓ Successfully synced start time for {unit_id}")
logger.info(f"[FTP-SYNC] Folder: {folder_name}")
logger.info(f"[FTP-SYNC] Old start time: {old_time}")
logger.info(f"[FTP-SYNC] New start time: {start_time}")
return True
else:
logger.warning(f"[FTP-SYNC] Status record not found for {unit_id}")
return False
else:
logger.warning(f"[FTP-SYNC] Could not parse timestamp from folder {folder_name}")
return False
except Exception as e:
logger.error(f"[FTP-SYNC] Failed to sync start time for {unit_id}: {e}")
return False
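The sync routine above derives the measurement start time by parsing the latest folder's ISO-8601 `modified_timestamp`, stripping a trailing `Z`. A minimal standalone sketch of that Step 4 parsing (the helper name is illustrative, not part of the codebase; note that `datetime.fromisoformat` only accepts a literal `Z` suffix from Python 3.11 on, so normalizing it to `+00:00` is the portable form):

```python
from datetime import datetime, timezone
from typing import Optional

def parse_folder_timestamp(ts: str) -> Optional[datetime]:
    """Parse an ISO-8601 timestamp from the FTP listing.

    A trailing 'Z' (UTC designator) is normalized to an explicit
    +00:00 offset so the result carries timezone info on all
    supported Python versions. Returns None on malformed input.
    """
    if not ts:
        return None
    try:
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    except ValueError:
        return None

print(parse_folder_timestamp("2026-01-29T18:50:47Z"))
print(parse_folder_timestamp("not-a-date"))
```

This keeps the offset attached instead of discarding it as `replace('Z', '')` does, which makes later local/UTC conversions explicit.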
# Rate limiting: NL43 requires ≥1 second between commands
_last_command_time = {}
_rate_limit_lock = asyncio.Lock()
# Per-device connection locks: NL43 devices only support one TCP connection at a time
# This prevents concurrent connections from fighting for the device
_device_locks: Dict[str, asyncio.Lock] = {}
_device_locks_lock = asyncio.Lock()
async def _get_device_lock(device_key: str) -> asyncio.Lock:
"""Get or create a lock for a specific device."""
async with _device_locks_lock:
if device_key not in _device_locks:
_device_locks[device_key] = asyncio.Lock()
return _device_locks[device_key]
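The lock registry above can be exercised in isolation. A self-contained sketch that reproduces it with a simulated command round-trip, showing that two concurrent commands to the same device key serialize rather than interleave (the `send` helper and the device key string are illustrative only):

```python
import asyncio
from typing import Dict, List

_device_locks: Dict[str, asyncio.Lock] = {}
_device_locks_lock = asyncio.Lock()

async def _get_device_lock(device_key: str) -> asyncio.Lock:
    """Get or create a lock for a specific device."""
    async with _device_locks_lock:
        if device_key not in _device_locks:
            _device_locks[device_key] = asyncio.Lock()
        return _device_locks[device_key]

async def send(device_key: str, cmd: str, log: List[str]) -> None:
    # Hypothetical command sender: hold the device lock across the round-trip
    lock = await _get_device_lock(device_key)
    async with lock:
        log.append(f"start {cmd}")
        await asyncio.sleep(0.01)  # simulated device round-trip
        log.append(f"end {cmd}")

async def main() -> List[str]:
    log: List[str] = []
    # Two concurrent commands to the same device must not interleave
    await asyncio.gather(send("10.0.0.5:2300", "DOD?", log),
                         send("10.0.0.5:2300", "DRD?", log))
    return log

log = asyncio.run(main())
print(log)
```

The first task acquires the lock before yielding, so the second task blocks until the first round-trip completes: the log shows strictly nested start/end pairs.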
class NL43Client:
    def __init__(self, host: str, port: int, timeout: float = 5.0, ftp_username: str = None, ftp_password: str = None, ftp_port: int = 21):
@@ -133,7 +261,17 @@ class NL43Client:
        NL43 protocol returns two lines for query commands:
            Line 1: Result code (R+0000 for success, error codes otherwise)
            Line 2: Actual data (for query commands ending with '?')

        This method acquires a per-device lock to ensure only one TCP connection
        is active at a time (NL43 devices only support single connections).
        """
        # Acquire per-device lock to prevent concurrent connections
        device_lock = await _get_device_lock(self.device_key)
        async with device_lock:
            return await self._send_command_unlocked(cmd)

    async def _send_command_unlocked(self, cmd: str) -> str:
        """Internal: send command without acquiring device lock (lock must be held by caller)."""
        await self._enforce_rate_limit()
        logger.info(f"Sending command to {self.device_key}: {cmd.strip()}")
@@ -429,105 +567,112 @@ class NL43Client:
        The stream continues until an exception occurs or the connection is closed.
        Send SUB character (0x1A) to stop the stream.

        NOTE: This method holds the device lock for the entire duration of streaming,
        blocking other commands to this device. This is intentional since NL43 devices
        only support one TCP connection at a time.
        """
        # Acquire per-device lock - held for entire streaming session
        device_lock = await _get_device_lock(self.device_key)
        async with device_lock:
            await self._enforce_rate_limit()
            logger.info(f"Starting DRD stream for {self.device_key}")
            try:
                reader, writer = await asyncio.wait_for(
                    asyncio.open_connection(self.host, self.port), timeout=self.timeout
                )
            except asyncio.TimeoutError:
                logger.error(f"DRD stream connection timeout to {self.device_key}")
                raise ConnectionError(f"Failed to connect to device at {self.host}:{self.port}")
            except Exception as e:
                logger.error(f"DRD stream connection failed to {self.device_key}: {e}")
                raise ConnectionError(f"Failed to connect to device: {str(e)}")
            try:
                # Start DRD streaming
                writer.write(b"DRD?\r\n")
                await writer.drain()
                # Read initial result code
                first_line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=self.timeout)
                result_code = first_line_data.decode(errors="ignore").strip()
                if result_code.startswith("$"):
                    result_code = result_code[1:].strip()
                logger.debug(f"DRD stream result code from {self.device_key}: {result_code}")
                if result_code != "R+0000":
                    raise ValueError(f"DRD stream failed to start: {result_code}")
                logger.info(f"DRD stream started successfully for {self.device_key}")
                # Continuously read data lines
                while True:
                    try:
                        line_data = await asyncio.wait_for(reader.readuntil(b"\n"), timeout=30.0)
                        line = line_data.decode(errors="ignore").strip()
                        if not line:
                            continue
                        # Remove leading $ if present
                        if line.startswith("$"):
                            line = line[1:].strip()
                        # Parse the DRD data (same format as DOD)
                        parts = [p.strip() for p in line.split(",") if p.strip() != ""]
                        if len(parts) < 2:
                            logger.warning(f"Malformed DRD data from {self.device_key}: {line}")
                            continue
                        snap = NL43Snapshot(unit_id="", raw_payload=line, measurement_state="Measure")
                        # Parse known positions (DRD format - same as DOD)
                        # DRD format: d0=counter, d1=Lp, d2=Leq, d3=Lmax, d4=Lmin, d5=Lpeak, d6=LIeq, ...
                        try:
                            # Capture d0 (counter) for timer synchronization
                            if len(parts) >= 1:
                                snap.counter = parts[0]  # d0: Measurement interval counter (1-600)
                            if len(parts) >= 2:
                                snap.lp = parts[1]  # d1: Instantaneous sound pressure level
                            if len(parts) >= 3:
                                snap.leq = parts[2]  # d2: Equivalent continuous sound level
                            if len(parts) >= 4:
                                snap.lmax = parts[3]  # d3: Maximum level
                            if len(parts) >= 5:
                                snap.lmin = parts[4]  # d4: Minimum level
                            if len(parts) >= 6:
                                snap.lpeak = parts[5]  # d5: Peak level
                        except (IndexError, ValueError) as e:
                            logger.warning(f"Error parsing DRD data points: {e}")
                        # Call the callback with the snapshot
                        await callback(snap)
                    except asyncio.TimeoutError:
                        logger.warning(f"DRD stream timeout (no data for 30s) from {self.device_key}")
                        break
                    except asyncio.IncompleteReadError:
                        logger.info(f"DRD stream closed by device {self.device_key}")
                        break
            finally:
                # Send SUB character to stop streaming
                try:
                    writer.write(b"\x1A")
                    await writer.drain()
                except Exception:
                    pass
                writer.close()
                with contextlib.suppress(Exception):
                    await writer.wait_closed()
                logger.info(f"DRD stream ended for {self.device_key}")
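The per-line parsing inside the stream loop can be distilled into a pure function, which is easier to unit-test than the socket loop itself. A sketch using the documented field layout (the sample payload is illustrative, not captured from a real device):

```python
from typing import Dict

def parse_drd_line(line: str) -> Dict[str, str]:
    """Split one DRD data line into its documented positions.

    Field layout per the DRD/DOD format: d0=counter, d1=Lp, d2=Leq,
    d3=Lmax, d4=Lmin, d5=Lpeak. The leading '$' is protocol framing.
    Values are kept as strings, as in the stream handler.
    """
    if line.startswith("$"):
        line = line[1:].strip()
    parts = [p.strip() for p in line.split(",") if p.strip()]
    names = ["counter", "lp", "leq", "lmax", "lmin", "lpeak"]
    return dict(zip(names, parts))

sample = "$ 0042, 63.1, 61.8, 70.2, 55.4, 82.0"
print(parse_drd_line(sample))
```

Because `zip` stops at the shorter sequence, short lines simply yield fewer keys, mirroring the `len(parts) >= n` guards in the stream handler.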
    async def set_measurement_time(self, preset: str):
        """Set measurement time preset.
@@ -717,29 +862,89 @@ class NL43Client:
        Returns:
            List of file info dicts with 'name', 'size', 'modified', 'is_dir'
        """
        logger.info(f"[FTP-LIST] === Starting FTP file listing for {self.device_key} ===")
        logger.info(f"[FTP-LIST] Target path: {remote_path}")
        logger.info(f"[FTP-LIST] Host: {self.host}, Port: {self.ftp_port}, User: {self.ftp_username}")

        def _list_ftp_sync():
            """Synchronous FTP listing using ftplib for NL-43 devices."""
            import socket
            ftp = FTP()
            ftp.set_debuglevel(2)  # Enable FTP debugging
            try:
                # Phase 1: TCP connection
                logger.info(f"[FTP-LIST] Phase 1: Initiating TCP connection to {self.host}:{self.ftp_port}")
                logger.info(f"[FTP-LIST] Connection timeout: 10 seconds")
                try:
                    ftp.connect(self.host, self.ftp_port, timeout=10)
                    logger.info(f"[FTP-LIST] Phase 1 SUCCESS: TCP connection established")
                    # Log socket details
                    try:
                        local_addr = ftp.sock.getsockname()
                        remote_addr = ftp.sock.getpeername()
                        logger.info(f"[FTP-LIST] Control channel - Local: {local_addr[0]}:{local_addr[1]}, Remote: {remote_addr[0]}:{remote_addr[1]}")
                    except Exception as sock_info_err:
                        logger.warning(f"[FTP-LIST] Could not get socket info: {sock_info_err}")
                except socket.timeout:
                    logger.error(f"[FTP-LIST] Phase 1 FAILED: TCP connection TIMEOUT after 10s to {self.host}:{self.ftp_port}")
                    logger.error(f"[FTP-LIST] This means the device is unreachable or the FTP port is blocked/closed")
                    raise
                except socket.error as sock_err:
                    logger.error(f"[FTP-LIST] Phase 1 FAILED: Socket error to {self.host}:{self.ftp_port}")
                    logger.error(f"[FTP-LIST] Socket error: {type(sock_err).__name__}: {sock_err}, errno={getattr(sock_err, 'errno', 'N/A')}")
                    raise
                except Exception as conn_err:
                    logger.error(f"[FTP-LIST] Phase 1 FAILED: {type(conn_err).__name__}: {conn_err}")
                    raise

                # Phase 2: Authentication
                logger.info(f"[FTP-LIST] Phase 2: Authenticating as '{self.ftp_username}'")
                try:
                    ftp.login(self.ftp_username, self.ftp_password)
                    logger.info(f"[FTP-LIST] Phase 2 SUCCESS: Authentication successful")
                except Exception as auth_err:
                    logger.error(f"[FTP-LIST] Phase 2 FAILED: Auth error for user '{self.ftp_username}': {auth_err}")
                    raise

                # Phase 3: Set active mode
                logger.info(f"[FTP-LIST] Phase 3: Setting ACTIVE mode (PASV=False)")
                logger.info(f"[FTP-LIST] NOTE: Active mode requires the NL-43 device to connect BACK to this server on a data port")
                logger.info(f"[FTP-LIST] If a firewall blocks incoming connections, data transfer will time out")
                ftp.set_pasv(False)
                logger.info(f"[FTP-LIST] Phase 3 SUCCESS: Active mode enabled")

                # Phase 4: Change directory
                if remote_path != "/":
                    logger.info(f"[FTP-LIST] Phase 4: Changing to directory: {remote_path}")
                    try:
                        ftp.cwd(remote_path)
                        logger.info(f"[FTP-LIST] Phase 4 SUCCESS: Changed to {remote_path}")
                    except Exception as cwd_err:
                        logger.error(f"[FTP-LIST] Phase 4 FAILED: Could not change to '{remote_path}': {cwd_err}")
                        raise
                else:
                    logger.info(f"[FTP-LIST] Phase 4: Staying in root directory")

                # Phase 5: Get directory listing (this is where the data channel is used)
                logger.info(f"[FTP-LIST] Phase 5: Sending LIST command (data channel required)")
                logger.info(f"[FTP-LIST] This step opens a data channel - the device must connect back in active mode")
                files = []
                lines = []
                try:
                    ftp.retrlines('LIST', lines.append)
                    logger.info(f"[FTP-LIST] Phase 5 SUCCESS: LIST command completed, received {len(lines)} lines")
                except socket.timeout:
                    logger.error(f"[FTP-LIST] Phase 5 FAILED: DATA CHANNEL TIMEOUT during LIST command")
                    logger.error(f"[FTP-LIST] This usually means:")
                    logger.error(f"[FTP-LIST]   1. A firewall is blocking incoming data connections from the NL-43")
                    logger.error(f"[FTP-LIST]   2. NAT is preventing the device from connecting back")
                    logger.error(f"[FTP-LIST]   3. The network route between device and server is blocked")
                    logger.error(f"[FTP-LIST] In active FTP mode, the server sends a PORT command with its IP:port,")
                    logger.error(f"[FTP-LIST] and the device initiates a connection TO the server for data transfer")
                    raise
                except Exception as list_err:
                    logger.error(f"[FTP-LIST] Phase 5 FAILED: Error during LIST: {type(list_err).__name__}: {list_err}")
                    raise

                for line in lines:
                    # Parse Unix-style ls output
@@ -799,20 +1004,24 @@ class NL43Client:
                        files.append(file_info)
                        logger.debug(f"Found file: {file_info}")

                logger.info(f"[FTP-LIST] === COMPLETE: Found {len(files)} files/directories on {self.device_key} ===")
                return files
            finally:
                logger.info(f"[FTP-LIST] Closing FTP connection")
                try:
                    ftp.quit()
                    logger.info(f"[FTP-LIST] FTP connection closed cleanly")
                except Exception as quit_err:
                    logger.warning(f"[FTP-LIST] Error during FTP quit (non-fatal): {quit_err}")

        try:
            # Run synchronous FTP in thread pool
            return await asyncio.to_thread(_list_ftp_sync)
        except Exception as e:
            logger.error(f"[FTP-LIST] === FAILED: {self.device_key} - {type(e).__name__}: {e} ===")
            import traceback
            logger.error(f"[FTP-LIST] Full traceback:\n{traceback.format_exc()}")
            raise ConnectionError(f"FTP connection failed: {str(e)}")
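The "Parse Unix-style ls output" step elided by the hunk above boils down to splitting a `LIST` line into permission, size, and name columns. A minimal sketch assuming the common 9-column `ls -l` layout (the helper name and the sample filenames are illustrative; real FTP servers vary, so production parsing should tolerate other layouts):

```python
from typing import Dict

def parse_list_line(line: str) -> Dict[str, object]:
    """Parse one Unix-style LIST line into name/size/is_dir.

    Assumes the common 9-column format:
    perms links owner group size month day time name
    The final split field keeps names containing spaces intact.
    """
    cols = line.split(None, 8)
    if len(cols) < 9:
        raise ValueError(f"unrecognized LIST line: {line!r}")
    return {
        "name": cols[8],
        "size": int(cols[4]),
        "is_dir": cols[0].startswith("d"),
    }

print(parse_list_line("drwxr-xr-x   2 root  root      4096 Jan 29 18:50 Auto_0007"))
```

`split(None, 8)` caps the split at nine fields, so a filename with embedded spaces survives as one string.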
    async def download_ftp_file(self, remote_path: str, local_path: str):
@@ -822,35 +1031,86 @@ class NL43Client:
            remote_path: Full path to file on the device
            local_path: Local path where file will be saved
        """
        logger.info(f"[FTP-DOWNLOAD] === Starting FTP download for {self.device_key} ===")
        logger.info(f"[FTP-DOWNLOAD] Remote path: {remote_path}")
        logger.info(f"[FTP-DOWNLOAD] Local path: {local_path}")
        logger.info(f"[FTP-DOWNLOAD] Host: {self.host}, Port: {self.ftp_port}, User: {self.ftp_username}")

        def _download_ftp_sync():
            """Synchronous FTP download using ftplib (supports active mode)."""
            import socket
            ftp = FTP()
            ftp.set_debuglevel(2)  # Enable verbose FTP debugging
            try:
                # Phase 1: TCP connection
                logger.info(f"[FTP-DOWNLOAD] Phase 1: Connecting to {self.host}:{self.ftp_port}")
                try:
                    ftp.connect(self.host, self.ftp_port, timeout=10)
                    logger.info(f"[FTP-DOWNLOAD] Phase 1 SUCCESS: TCP connection established")
                    try:
                        local_addr = ftp.sock.getsockname()
                        remote_addr = ftp.sock.getpeername()
                        logger.info(f"[FTP-DOWNLOAD] Control channel - Local: {local_addr[0]}:{local_addr[1]}, Remote: {remote_addr[0]}:{remote_addr[1]}")
                    except Exception as sock_info_err:
                        logger.warning(f"[FTP-DOWNLOAD] Could not get socket info: {sock_info_err}")
                except socket.timeout:
                    logger.error(f"[FTP-DOWNLOAD] Phase 1 FAILED: TCP connection TIMEOUT to {self.host}:{self.ftp_port}")
                    raise
                except socket.error as sock_err:
                    logger.error(f"[FTP-DOWNLOAD] Phase 1 FAILED: Socket error: {type(sock_err).__name__}: {sock_err}")
                    raise
                except Exception as conn_err:
                    logger.error(f"[FTP-DOWNLOAD] Phase 1 FAILED: {type(conn_err).__name__}: {conn_err}")
                    raise

                # Phase 2: Authentication
                logger.info(f"[FTP-DOWNLOAD] Phase 2: Authenticating as '{self.ftp_username}'")
                try:
                    ftp.login(self.ftp_username, self.ftp_password)
                    logger.info(f"[FTP-DOWNLOAD] Phase 2 SUCCESS: Authentication successful")
                except Exception as auth_err:
                    logger.error(f"[FTP-DOWNLOAD] Phase 2 FAILED: Auth error: {auth_err}")
                    raise

                # Phase 3: Set active mode
                logger.info(f"[FTP-DOWNLOAD] Phase 3: Setting ACTIVE mode (PASV=False)")
                ftp.set_pasv(False)
                logger.info(f"[FTP-DOWNLOAD] Phase 3 SUCCESS: Active mode enabled")

                # Phase 4: Download file (this is where the data channel is used)
                logger.info(f"[FTP-DOWNLOAD] Phase 4: Starting RETR {remote_path}")
                logger.info(f"[FTP-DOWNLOAD] Data channel will be established - the device connects back in active mode")
                try:
                    with open(local_path, 'wb') as f:
                        ftp.retrbinary(f'RETR {remote_path}', f.write)
                    file_size = os.path.getsize(local_path)
                    logger.info(f"[FTP-DOWNLOAD] Phase 4 SUCCESS: Downloaded {file_size} bytes to {local_path}")
                except socket.timeout:
                    logger.error(f"[FTP-DOWNLOAD] Phase 4 FAILED: DATA CHANNEL TIMEOUT during download")
                    logger.error(f"[FTP-DOWNLOAD] This usually means a firewall/NAT is blocking the data connection")
                    raise
                except Exception as dl_err:
                    logger.error(f"[FTP-DOWNLOAD] Phase 4 FAILED: {type(dl_err).__name__}: {dl_err}")
                    raise

                logger.info(f"[FTP-DOWNLOAD] === COMPLETE: {remote_path} downloaded successfully ===")
            finally:
                logger.info(f"[FTP-DOWNLOAD] Closing FTP connection")
                try:
                    ftp.quit()
                    logger.info(f"[FTP-DOWNLOAD] FTP connection closed cleanly")
                except Exception as quit_err:
                    logger.warning(f"[FTP-DOWNLOAD] Error during FTP quit (non-fatal): {quit_err}")

        try:
            # Run synchronous FTP in thread pool
            await asyncio.to_thread(_download_ftp_sync)
        except Exception as e:
            logger.error(f"[FTP-DOWNLOAD] === FAILED: {self.device_key} - {type(e).__name__}: {e} ===")
            import traceback
            logger.error(f"[FTP-DOWNLOAD] Full traceback:\n{traceback.format_exc()}")
            raise ConnectionError(f"FTP download failed: {str(e)}")
    async def download_ftp_folder(self, remote_path: str, zip_path: str):
@@ -864,24 +1124,52 @@ class NL43Client:
            remote_path: Full path to folder on the device (e.g., "/NL-43/Auto_0000")
            zip_path: Local path where the ZIP file will be saved
        """
        logger.info(f"[FTP-FOLDER] === Starting FTP folder download for {self.device_key} ===")
        logger.info(f"[FTP-FOLDER] Remote folder: {remote_path}")
        logger.info(f"[FTP-FOLDER] ZIP destination: {zip_path}")
        logger.info(f"[FTP-FOLDER] Host: {self.host}, Port: {self.ftp_port}, User: {self.ftp_username}")

        def _download_folder_sync():
            """Synchronous FTP folder download and ZIP creation."""
            import socket
            ftp = FTP()
            ftp.set_debuglevel(2)  # Enable verbose FTP debugging
            files_downloaded = 0
            folders_processed = 0

            # Create a temporary directory for downloaded files
            with tempfile.TemporaryDirectory() as temp_dir:
                try:
                    # Phase 1: Connect and authenticate
                    logger.info(f"[FTP-FOLDER] Phase 1: Connecting to {self.host}:{self.ftp_port}")
                    try:
                        ftp.connect(self.host, self.ftp_port, timeout=10)
                        logger.info(f"[FTP-FOLDER] Phase 1 SUCCESS: TCP connection established")
                        try:
                            local_addr = ftp.sock.getsockname()
                            remote_addr = ftp.sock.getpeername()
                            logger.info(f"[FTP-FOLDER] Control channel - Local: {local_addr[0]}:{local_addr[1]}, Remote: {remote_addr[0]}:{remote_addr[1]}")
                        except Exception as sock_info_err:
                            logger.warning(f"[FTP-FOLDER] Could not get socket info: {sock_info_err}")
                    except socket.timeout:
                        logger.error(f"[FTP-FOLDER] Phase 1 FAILED: TCP connection TIMEOUT")
                        raise
                    except Exception as conn_err:
                        logger.error(f"[FTP-FOLDER] Phase 1 FAILED: {type(conn_err).__name__}: {conn_err}")
                        raise

                    logger.info(f"[FTP-FOLDER] Authenticating as '{self.ftp_username}'")
                    ftp.login(self.ftp_username, self.ftp_password)
                    logger.info(f"[FTP-FOLDER] Authentication successful")
                    ftp.set_pasv(False)  # Force active mode
                    logger.info(f"[FTP-FOLDER] Active mode enabled (PASV=False)")

                    def download_recursive(ftp_path: str, local_path: str):
                        """Recursively download files and directories."""
                        nonlocal files_downloaded, folders_processed
                        folders_processed += 1
                        logger.info(f"[FTP-FOLDER] Processing folder #{folders_processed}: {ftp_path}")

                        # Create local directory
                        os.makedirs(local_path, exist_ok=True)
@@ -889,10 +1177,16 @@ class NL43Client:
                        # List contents
                        try:
                            items = []
                            logger.info(f"[FTP-FOLDER] Changing to directory: {ftp_path}")
                            ftp.cwd(ftp_path)
                            logger.info(f"[FTP-FOLDER] Listing contents of {ftp_path}")
                            ftp.retrlines('LIST', items.append)
                            logger.info(f"[FTP-FOLDER] Found {len(items)} items in {ftp_path}")
                        except socket.timeout:
                            logger.error(f"[FTP-FOLDER] TIMEOUT listing {ftp_path} - data channel issue")
                            return
                        except Exception as e:
                            logger.error(f"[FTP-FOLDER] Failed to list {ftp_path}: {type(e).__name__}: {e}")
                            return

                        for item in items:
@@ -918,19 +1212,26 @@ class NL43Client:
                        else:
                            # Download file
                            try:
                                logger.info(f"[FTP-FOLDER] Downloading file #{files_downloaded + 1}: {full_remote_path}")
                                with open(full_local_path, 'wb') as f:
                                    ftp.retrbinary(f'RETR {full_remote_path}', f.write)
                                files_downloaded += 1
                                file_size = os.path.getsize(full_local_path)
                                logger.info(f"[FTP-FOLDER] Downloaded: {full_remote_path} ({file_size} bytes)")
                            except socket.timeout:
                                logger.error(f"[FTP-FOLDER] TIMEOUT downloading {full_remote_path}")
                            except Exception as e:
                                logger.error(f"[FTP-FOLDER] Failed to download {full_remote_path}: {type(e).__name__}: {e}")

                    # Download entire folder structure
                    folder_name = os.path.basename(remote_path.rstrip('/'))
                    local_folder = os.path.join(temp_dir, folder_name)
                    download_recursive(remote_path, local_folder)
                    logger.info(f"[FTP-FOLDER] Download complete: {files_downloaded} files from {folders_processed} folders")

                    # Create ZIP archive
                    logger.info(f"[FTP-FOLDER] Creating ZIP archive: {zip_path}")
                    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
                        for root, dirs, files in os.walk(local_folder):
                            for file in files:
@@ -938,19 +1239,185 @@ class NL43Client:
                                # Calculate relative path for ZIP archive
                                arcname = os.path.relpath(file_path, temp_dir)
                                zipf.write(file_path, arcname)
                                logger.debug(f"[FTP-FOLDER] Added to ZIP: {arcname}")

                    zip_size = os.path.getsize(zip_path)
                    logger.info(f"[FTP-FOLDER] === COMPLETE: ZIP created ({zip_size} bytes) ===")
                finally:
                    logger.info(f"[FTP-FOLDER] Closing FTP connection")
                    try:
                        ftp.quit()
                        logger.info(f"[FTP-FOLDER] FTP connection closed cleanly")
                    except Exception as quit_err:
                        logger.warning(f"[FTP-FOLDER] Error during FTP quit (non-fatal): {quit_err}")

        try:
            # Run synchronous FTP folder download in thread pool
            await asyncio.to_thread(_download_folder_sync)
        except Exception as e:
            logger.error(f"[FTP-FOLDER] === FAILED: {self.device_key} - {type(e).__name__}: {e} ===")
            import traceback
            logger.error(f"[FTP-FOLDER] Full traceback:\n{traceback.format_exc()}")
            raise ConnectionError(f"FTP folder download failed: {str(e)}")
# ========================================================================
# Cycle Commands (for scheduled automation)
# ========================================================================
async def start_cycle(self, sync_clock: bool = True, max_index_attempts: int = 100) -> dict:
"""
Execute complete start cycle for scheduled automation:
1. Sync device clock to server time
2. Find next safe index (increment, check overwrite, repeat if needed)
3. Start measurement
Args:
sync_clock: Whether to sync device clock to server time (default: True)
max_index_attempts: Maximum attempts to find an unused index (default: 100)
Returns:
dict with clock_synced, old_index, new_index, attempts_made, started
"""
logger.info(f"[START-CYCLE] === Starting measurement cycle on {self.device_key} ===")
result = {
"clock_synced": False,
"server_time": None,
"old_index": None,
"new_index": None,
"attempts_made": 0,
"started": False,
}
# Step 1: Sync clock to server time
if sync_clock:
# Use configured timezone
server_now = datetime.now(timezone.utc) + TIMEZONE_OFFSET
server_time = server_now.strftime("%Y/%m/%d %H:%M:%S")
logger.info(f"[START-CYCLE] Step 1: Syncing clock to {server_time} ({TIMEZONE_NAME})")
await self.set_clock(server_time)
result["clock_synced"] = True
result["server_time"] = server_time
logger.info(f"[START-CYCLE] Clock synced successfully")
else:
logger.info(f"[START-CYCLE] Step 1: Skipping clock sync (sync_clock=False)")
# Step 2: Find next safe index with overwrite protection
logger.info(f"[START-CYCLE] Step 2: Finding safe index with overwrite protection")
current_index_str = await self.get_index_number()
current_index = int(current_index_str)
result["old_index"] = current_index
logger.info(f"[START-CYCLE] Current index: {current_index}")
test_index = current_index + 1
attempts = 0
while attempts < max_index_attempts:
test_index = test_index % 10000 # Wrap from 9999 back to 0
await self.set_index_number(test_index)
attempts += 1
# Check if this index is safe (no existing data)
overwrite_status = await self.get_overwrite_status()
logger.info(f"[START-CYCLE] Index {test_index:04d}: overwrite status = {overwrite_status}")
if overwrite_status == "None":
# Safe to use this index
result["new_index"] = test_index
result["attempts_made"] = attempts
logger.info(f"[START-CYCLE] Found safe index {test_index:04d} after {attempts} attempt(s)")
break
# Data exists, try next index
test_index += 1
if test_index == current_index:
# Wrapped around completely - all indices have data
logger.error(f"[START-CYCLE] All indices have data! Device storage is full.")
raise Exception("All indices have data. Download and clear device storage.")
if result["new_index"] is None:
logger.error(f"[START-CYCLE] Could not find empty index after {max_index_attempts} attempts")
raise Exception(f"Could not find empty index after {max_index_attempts} attempts")
# Step 3: Start measurement
logger.info(f"[START-CYCLE] Step 3: Starting measurement")
await self.start()
result["started"] = True
logger.info(f"[START-CYCLE] === Measurement started successfully ===")
return result
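The index search in Step 2 can be isolated as a pure function, which makes the wrap-around and exhaustion cases easy to test without a device. A sketch where `has_data(idx)` is a hypothetical stand-in for the set-index/check-overwrite round-trip, not a real NL43 command:

```python
from typing import Callable

def find_free_index(current: int, has_data: Callable[[int], bool],
                    max_attempts: int = 100) -> int:
    """Search for the next index with no stored data, wrapping 9999 -> 0.

    Mirrors start_cycle Step 2: begin at current+1, probe each index,
    and fail if we wrap all the way back to the starting index or
    exhaust max_attempts.
    """
    idx = (current + 1) % 10000
    for _ in range(max_attempts):
        if not has_data(idx):
            return idx
        idx = (idx + 1) % 10000
        if idx == current:
            raise RuntimeError("all indices hold data; clear device storage")
    raise RuntimeError(f"no free index within {max_attempts} attempts")

used = {1, 2, 3}
print(find_free_index(1, lambda i: i in used))
```

Each `has_data` probe corresponds to one set-index plus overwrite-status query, so `max_attempts` also bounds the number of device round-trips.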
async def stop_cycle(self, download: bool = True, download_path: str = None) -> dict:
"""
Execute complete stop cycle for scheduled automation:
1. Stop measurement
2. Enable FTP
3. Download measurement folder (matching current index)
4. Verify download succeeded
Args:
download: Whether to download measurement data (default: True)
download_path: Custom path for ZIP file (default: data/downloads/{device_key}/Auto_XXXX.zip)
Returns:
dict with stopped, ftp_enabled, download_attempted, download_success, etc.
"""
logger.info(f"[STOP-CYCLE] === Stopping measurement cycle on {self.device_key} ===")
result = {
"stopped": False,
"ftp_enabled": False,
"download_attempted": False,
"download_success": False,
"downloaded_folder": None,
"local_path": None,
}
# Step 1: Stop measurement
logger.info(f"[STOP-CYCLE] Step 1: Stopping measurement")
await self.stop()
result["stopped"] = True
logger.info(f"[STOP-CYCLE] Measurement stopped")
# Step 2: Enable FTP
logger.info(f"[STOP-CYCLE] Step 2: Enabling FTP")
await self.enable_ftp()
result["ftp_enabled"] = True
logger.info(f"[STOP-CYCLE] FTP enabled")
if not download:
logger.info(f"[STOP-CYCLE] === Cycle complete (download=False) ===")
return result
# Step 3: Get current index to know which folder to download
logger.info(f"[STOP-CYCLE] Step 3: Determining folder to download")
current_index_str = await self.get_index_number()
# Pad to 4 digits for folder name
folder_name = f"Auto_{current_index_str.zfill(4)}"
remote_path = f"/NL-43/{folder_name}"
result["downloaded_folder"] = folder_name
result["download_attempted"] = True
logger.info(f"[STOP-CYCLE] Will download folder: {remote_path}")
# Step 4: Download the folder
if download_path is None:
# Default path: data/downloads/{device_key}/Auto_XXXX.zip
download_dir = f"data/downloads/{self.device_key}"
os.makedirs(download_dir, exist_ok=True)
download_path = os.path.join(download_dir, f"{folder_name}.zip")
logger.info(f"[STOP-CYCLE] Step 4: Downloading to {download_path}")
try:
await self.download_ftp_folder(remote_path, download_path)
result["download_success"] = True
result["local_path"] = download_path
logger.info(f"[STOP-CYCLE] Download successful: {download_path}")
except Exception as e:
logger.error(f"[STOP-CYCLE] Download failed: {e}")
# Don't raise - the stop was successful, just the download failed
result["download_error"] = str(e)
logger.info("[STOP-CYCLE] === Cycle complete ===")
return result

67
archive/README.md Normal file
View File

@@ -0,0 +1,67 @@
# SLMM Archive
This directory contains legacy scripts that are no longer needed for normal operation but are preserved for reference.
## Legacy Migrations (`legacy_migrations/`)
These migration scripts were used during SLMM development (v0.1.x) to incrementally add database fields. They are **no longer needed** because:
1. **Fresh databases** get the complete schema automatically from `app/models.py`
2. **Existing databases** should already have these fields from previous runs
3. **Current migration** is `migrate_add_polling_fields.py` (v0.2.0) in the parent directory
### Archived Migration Files
- `migrate_add_counter.py` - Added `counter` field to NL43Status
- `migrate_add_measurement_start_time.py` - Added `measurement_start_time` field
- `migrate_add_ftp_port.py` - Added `ftp_port` field to NL43Config
- `migrate_field_names.py` - Renamed fields for consistency (one-time fix)
- `migrate_revert_field_names.py` - Rollback for the rename migration
**Do not delete** - These provide historical context for database schema evolution.
---
## Legacy Tools
### `nl43_dod_poll.py`
Manual polling script that queries a single NL-43 device for DOD (Device On-Demand) data.
**Status**: Replaced by background polling system in v0.2.0
**Why archived**:
- Background poller (`app/background_poller.py`) now handles continuous polling automatically
- No need for manual polling scripts
- Kept for reference in case manual querying is needed for debugging
**How to use** (if needed):
```bash
cd /home/serversdown/tmi/slmm/archive
python3 nl43_dod_poll.py <host> <port> <unit_id>
```
---
## Active Scripts (Still in Parent Directory)
These scripts are **actively used** and documented in the main README:
### Migrations
- `migrate_add_polling_fields.py` - **v0.2.0 migration** - Adds background polling fields
- `migrate_add_ftp_credentials.py` - **Legacy FTP migration** - Adds FTP auth fields
### Testing
- `test_polling.sh` - Comprehensive test suite for background polling features
- `test_settings_endpoint.py` - Tests device settings API
- `test_sleep_mode_auto_disable.py` - Tests automatic sleep mode handling
### Utilities
- `set_ftp_credentials.py` - Command-line tool to set FTP credentials for a device
---
## Version History
- **v0.2.0** (2026-01-15) - Background polling system added, manual polling scripts archived
- **v0.1.0** (2025-12-XX) - Initial release with incremental migrations

View File

@@ -483,7 +483,7 @@ POST /{unit_id}/ftp/enable
``` ```
Enables FTP server on the device. Enables FTP server on the device.
**Note:** FTP and TCP are mutually exclusive. Enabling FTP will temporarily disable TCP control. **Note:** ~~FTP and TCP are mutually exclusive. Enabling FTP will temporarily disable TCP control.~~ As of v0.2.0, FTP and TCP have been confirmed to work in tandem; just avoid sending rapid back-to-back requests to the device.
### Disable FTP ### Disable FTP
``` ```

246
docs/ROSTER.md Normal file
View File

@@ -0,0 +1,246 @@
# SLMM Roster Management
The SLMM standalone application now includes a roster management interface for viewing and configuring all Sound Level Meter devices.
## Features
### Web Interface
Access the roster at: **http://localhost:8100/roster**
The roster page provides:
- **Device List Table**: View all configured SLMs with their connection details
- **Real-time Status**: See device connectivity status (Online/Offline/Stale)
- **Add Device**: Create new device configurations with a user-friendly modal form
- **Edit Device**: Modify existing device configurations
- **Delete Device**: Remove device configurations (does not affect physical devices)
- **Test Connection**: Run diagnostics on individual devices
### Table Columns
| Column | Description |
|--------|-------------|
| Unit ID | Unique identifier for the device |
| Host / IP | Device IP address or hostname |
| TCP Port | TCP control port (default: 2255) |
| FTP Port | FTP file transfer port (default: 21) |
| TCP | Whether TCP control is enabled |
| FTP | Whether FTP file transfer is enabled |
| Polling | Whether background polling is enabled |
| Status | Device connectivity status (Online/Offline/Stale) |
| Actions | Test, Edit, Delete buttons |
### Status Indicators
- **Online** (green): Device responded within the last 5 minutes
- **Stale** (yellow): Device hasn't responded recently but was seen before
- **Offline** (red): Device is unreachable or has consecutive failures
- **Unknown** (gray): No status data available yet
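The badge rules above can be expressed as a small classifier. The 5-minute "online" window comes from the list; treating any older `last_seen` as "stale" and unreachable devices as "offline" is an assumption about the badge logic, not a copy of the real implementation.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

ONLINE_WINDOW = timedelta(minutes=5)  # "Online" threshold from the doc above

def classify_status(last_seen: Optional[datetime], is_reachable: bool,
                    now: Optional[datetime] = None) -> str:
    """Map polling data to a roster badge: online / stale / offline / unknown."""
    if last_seen is None:
        return "unknown"          # no status data yet
    if not is_reachable:
        return "offline"          # unreachable / consecutive failures
    now = now or datetime.now(timezone.utc)
    return "online" if now - last_seen <= ONLINE_WINDOW else "stale"
```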
## API Endpoints
### List All Devices
```bash
GET /api/nl43/roster
```
Returns all configured devices with their status information.
**Response:**
```json
{
"status": "ok",
"devices": [
{
"unit_id": "SLM-43-01",
"host": "192.168.1.100",
"tcp_port": 2255,
"ftp_port": 21,
"tcp_enabled": true,
"ftp_enabled": true,
"ftp_username": "USER",
"ftp_password": "0000",
"web_enabled": false,
"poll_enabled": true,
"poll_interval_seconds": 60,
"status": {
"last_seen": "2026-01-16T20:00:00",
"measurement_state": "Start",
"is_reachable": true,
"consecutive_failures": 0,
"last_success": "2026-01-16T20:00:00",
"last_error": null
}
}
],
"total": 1
}
```
### Create New Device
```bash
POST /api/nl43/roster
Content-Type: application/json
{
"unit_id": "SLM-43-01",
"host": "192.168.1.100",
"tcp_port": 2255,
"ftp_port": 21,
"tcp_enabled": true,
"ftp_enabled": false,
"poll_enabled": true,
"poll_interval_seconds": 60
}
```
**Required Fields:**
- `unit_id`: Unique device identifier
- `host`: IP address or hostname
**Optional Fields:**
- `tcp_port`: TCP control port (default: 2255)
- `ftp_port`: FTP port (default: 21)
- `tcp_enabled`: Enable TCP control (default: true)
- `ftp_enabled`: Enable FTP transfers (default: false)
- `ftp_username`: FTP username (only if ftp_enabled)
- `ftp_password`: FTP password (only if ftp_enabled)
- `poll_enabled`: Enable background polling (default: true)
- `poll_interval_seconds`: Polling interval 10-3600 seconds (default: 60)
**Response:**
```json
{
"status": "ok",
"message": "Device SLM-43-01 created successfully",
"data": {
"unit_id": "SLM-43-01",
"host": "192.168.1.100",
"tcp_port": 2255,
"tcp_enabled": true,
"ftp_enabled": false,
"poll_enabled": true,
"poll_interval_seconds": 60
}
}
```
### Update Device
```bash
PUT /api/nl43/{unit_id}/config
Content-Type: application/json
{
"host": "192.168.1.101",
"tcp_port": 2255,
"poll_interval_seconds": 120
}
```
All fields are optional. Only include fields you want to update.
### Delete Device
```bash
DELETE /api/nl43/{unit_id}/config
```
Removes the device configuration and associated status data. Does not affect the physical device.
**Response:**
```json
{
"status": "ok",
"message": "Deleted device SLM-43-01"
}
```
## Usage Examples
### Via Web Interface
1. Navigate to http://localhost:8100/roster
2. Click "Add Device" to create a new configuration
3. Fill in the device details (unit ID, IP address, ports)
4. Configure TCP, FTP, and polling settings
5. Click "Save Device"
6. Use "Test" button to verify connectivity
7. Edit or delete devices as needed
### Via API (curl)
**Add a new device:**
```bash
curl -X POST http://localhost:8100/api/nl43/roster \
-H "Content-Type: application/json" \
-d '{
"unit_id": "slm-site-a",
"host": "192.168.1.100",
"tcp_port": 2255,
"tcp_enabled": true,
"ftp_enabled": true,
"ftp_username": "USER",
"ftp_password": "0000",
"poll_enabled": true,
"poll_interval_seconds": 60
}'
```
**Update device host:**
```bash
curl -X PUT http://localhost:8100/api/nl43/slm-site-a/config \
-H "Content-Type: application/json" \
-d '{"host": "192.168.1.101"}'
```
**Delete device:**
```bash
curl -X DELETE http://localhost:8100/api/nl43/slm-site-a/config
```
**List all devices:**
```bash
curl http://localhost:8100/api/nl43/roster | python3 -m json.tool
```
## Integration with Terra-View
When SLMM is used as a module within Terra-View:
1. Terra-View manages device configurations in its own database
2. Terra-View syncs configurations to SLMM via `PUT /api/nl43/{unit_id}/config`
3. Terra-View can query device status via `GET /api/nl43/{unit_id}/status`
4. SLMM's roster page can be used for standalone testing and diagnostics
## Background Polling
Devices with `poll_enabled: true` are automatically polled at their configured interval:
- Polls device status every `poll_interval_seconds` (10-3600 seconds)
- Updates `NL43Status` table with latest measurements
- Tracks device reachability and failure counts
- Provides real-time status updates in the roster
**Note**: Polling respects the NL43 protocol's 1-second rate limit between commands.
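One way to honor that 1-second spacing is a small per-device helper like the sketch below. The class name and interface are illustrative, not the actual `background_poller` code; only the 1-second minimum comes from the note above.

```python
MIN_COMMAND_SPACING = 1.0  # seconds; the NL43 rate limit noted above

class CommandSpacer:
    """Illustrative per-device spacing helper: callers ask how long to
    wait before the next TCP command and record each send."""

    def __init__(self, min_interval: float = MIN_COMMAND_SPACING):
        self.min_interval = min_interval
        self._last_sent = None  # timestamp (seconds) of the last send

    def wait_time(self, now: float) -> float:
        """Seconds to sleep before it is safe to send the next command."""
        if self._last_sent is None:
            return 0.0
        return max(0.0, self.min_interval - (now - self._last_sent))

    def mark_sent(self, now: float) -> None:
        self._last_sent = now
```

A poller would call `wait_time()` with a monotonic clock, sleep for the returned duration, then `mark_sent()` after each command.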
## Validation
The roster system validates:
- **Unit ID**: Must be unique across all devices
- **Host**: Valid IP address or hostname format
- **Ports**: Must be between 1-65535
- **Poll Interval**: Must be between 10-3600 seconds
- **Duplicate Check**: Returns 409 Conflict if unit_id already exists
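The rules above translate to checks along these lines. This is a sketch of a subset of the validation (host format checking omitted); the function name and error strings are illustrative, not the server's actual code.

```python
def validate_device_payload(payload: dict, existing_ids: set) -> list:
    """Return a list of validation errors for a roster create payload."""
    errors = []
    unit_id = payload.get("unit_id")
    if not unit_id:
        errors.append("unit_id is required")
    elif unit_id in existing_ids:
        errors.append("409 Conflict: unit_id already exists")
    if not payload.get("host"):
        errors.append("host is required")
    for key in ("tcp_port", "ftp_port"):
        port = payload.get(key)
        if port is not None and not (1 <= port <= 65535):
            errors.append(f"{key} must be 1-65535")
    interval = payload.get("poll_interval_seconds")
    if interval is not None and not (10 <= interval <= 3600):
        errors.append("poll_interval_seconds must be 10-3600")
    return errors
```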
## Notes
- Deleting a device from the roster does NOT affect the physical device
- Device configurations are stored in the SLMM database (`data/slmm.db`)
- Status information is updated by the background polling system
- The roster page auto-refreshes status indicators
- Test button runs full diagnostics (connectivity, TCP, FTP if enabled)

26
docs/features/README.md Normal file
View File

@@ -0,0 +1,26 @@
# SLMM Feature Documentation
This directory contains detailed documentation for specific SLMM features and enhancements.
## Feature Documents
### FEATURE_SUMMARY.md
Overview of all major features in SLMM.
### SETTINGS_ENDPOINT.md
Documentation of the device settings endpoint and verification system.
### TIMEZONE_CONFIGURATION.md
Timezone handling and configuration for SLMM timestamps.
### SLEEP_MODE_AUTO_DISABLE.md
Automatic sleep mode wake-up system for background polling.
### UI_UPDATE.md
UI/UX improvements and interface updates.
## Related Documentation
- [../README.md](../../README.md) - Main SLMM documentation
- [../CHANGELOG.md](../../CHANGELOG.md) - Version history
- [../API.md](../../API.md) - Complete API reference

View File

@@ -0,0 +1,73 @@
#!/usr/bin/env python3
"""
Database migration: Add device_logs table.
This table stores per-device log entries for debugging and audit trail.
Run this once to add the new table.
"""
import sqlite3
import os
# Path to the SLMM database
DB_PATH = os.path.join(os.path.dirname(__file__), "data", "slmm.db")
def migrate():
print(f"Adding device_logs table to: {DB_PATH}")
if not os.path.exists(DB_PATH):
print("Database does not exist yet. Table will be created automatically on first run.")
return
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
try:
# Check if table already exists
cursor.execute("""
SELECT name FROM sqlite_master
WHERE type='table' AND name='device_logs'
""")
if cursor.fetchone():
print("✓ device_logs table already exists, no migration needed")
return
# Create the table
print("Creating device_logs table...")
cursor.execute("""
CREATE TABLE device_logs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
unit_id VARCHAR NOT NULL,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
level VARCHAR DEFAULT 'INFO',
category VARCHAR DEFAULT 'GENERAL',
message TEXT NOT NULL
)
""")
# Create indexes for efficient querying
print("Creating indexes...")
cursor.execute("CREATE INDEX ix_device_logs_unit_id ON device_logs (unit_id)")
cursor.execute("CREATE INDEX ix_device_logs_timestamp ON device_logs (timestamp)")
conn.commit()
print("✓ Created device_logs table with indexes")
# Verify
cursor.execute("""
SELECT name FROM sqlite_master
WHERE type='table' AND name='device_logs'
""")
if not cursor.fetchone():
raise Exception("device_logs table was not created successfully")
print("✓ Migration completed successfully")
finally:
conn.close()
if __name__ == "__main__":
migrate()
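Once the table exists, recent per-device entries can be inspected with a query like the one below (this mirrors the `/logs?limit=50` endpoint's shape; shown here against an in-memory database using the same schema so the example is self-contained).

```python
import sqlite3

# Recreate the device_logs schema from the migration above, in memory.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE device_logs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        unit_id VARCHAR NOT NULL,
        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
        level VARCHAR DEFAULT 'INFO',
        category VARCHAR DEFAULT 'GENERAL',
        message TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO device_logs (unit_id, level, message) VALUES (?, ?, ?)",
    ("SLM-43-01", "ERROR", "FTP login failed"))

# Newest-first fetch for one device; the id tie-breaker keeps ordering
# stable when multiple rows share a CURRENT_TIMESTAMP second.
rows = conn.execute(
    "SELECT unit_id, level, message FROM device_logs "
    "WHERE unit_id = ? ORDER BY timestamp DESC, id DESC LIMIT 50",
    ("SLM-43-01",),
).fetchall()
```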

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""
Migration script to add polling-related fields to nl43_config and nl43_status tables.
Adds to nl43_config:
- poll_interval_seconds (INTEGER, default 60)
- poll_enabled (BOOLEAN, default 1/True)
Adds to nl43_status:
- is_reachable (BOOLEAN, default 1/True)
- consecutive_failures (INTEGER, default 0)
- last_poll_attempt (DATETIME, nullable)
- last_success (DATETIME, nullable)
- last_error (TEXT, nullable)
Usage:
python migrate_add_polling_fields.py
"""
import sqlite3
import sys
from pathlib import Path
def migrate():
db_path = Path("data/slmm.db")
if not db_path.exists():
print(f"❌ Database not found at {db_path}")
print(" Run this script from the slmm directory")
return False
try:
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
# Check nl43_config columns
cursor.execute("PRAGMA table_info(nl43_config)")
config_columns = [row[1] for row in cursor.fetchall()]
# Check nl43_status columns
cursor.execute("PRAGMA table_info(nl43_status)")
status_columns = [row[1] for row in cursor.fetchall()]
changes_made = False
# Add nl43_config columns
if "poll_interval_seconds" not in config_columns:
print("Adding poll_interval_seconds to nl43_config...")
cursor.execute("""
ALTER TABLE nl43_config
ADD COLUMN poll_interval_seconds INTEGER DEFAULT 60
""")
changes_made = True
else:
print("✓ poll_interval_seconds already exists in nl43_config")
if "poll_enabled" not in config_columns:
print("Adding poll_enabled to nl43_config...")
cursor.execute("""
ALTER TABLE nl43_config
ADD COLUMN poll_enabled BOOLEAN DEFAULT 1
""")
changes_made = True
else:
print("✓ poll_enabled already exists in nl43_config")
# Add nl43_status columns
if "is_reachable" not in status_columns:
print("Adding is_reachable to nl43_status...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN is_reachable BOOLEAN DEFAULT 1
""")
changes_made = True
else:
print("✓ is_reachable already exists in nl43_status")
if "consecutive_failures" not in status_columns:
print("Adding consecutive_failures to nl43_status...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN consecutive_failures INTEGER DEFAULT 0
""")
changes_made = True
else:
print("✓ consecutive_failures already exists in nl43_status")
if "last_poll_attempt" not in status_columns:
print("Adding last_poll_attempt to nl43_status...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN last_poll_attempt DATETIME
""")
changes_made = True
else:
print("✓ last_poll_attempt already exists in nl43_status")
if "last_success" not in status_columns:
print("Adding last_success to nl43_status...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN last_success DATETIME
""")
changes_made = True
else:
print("✓ last_success already exists in nl43_status")
if "last_error" not in status_columns:
print("Adding last_error to nl43_status...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN last_error TEXT
""")
changes_made = True
else:
print("✓ last_error already exists in nl43_status")
if changes_made:
conn.commit()
print("\n✓ Migration completed successfully")
print(" Added polling-related fields to nl43_config and nl43_status")
else:
print("\n✓ All polling fields already exist - no changes needed")
conn.close()
return True
except Exception as e:
print(f"❌ Migration failed: {e}")
return False
if __name__ == "__main__":
success = migrate()
sys.exit(0 if success else 1)

View File

@@ -0,0 +1,60 @@
#!/usr/bin/env python3
"""
Database migration: Add start_time_sync_attempted field to nl43_status table.
This field tracks whether FTP sync has been attempted for the current measurement,
preventing repeated sync attempts when FTP fails.
Run this once to add the new column.
"""
import sqlite3
import os
# Path to the SLMM database
DB_PATH = os.path.join(os.path.dirname(__file__), "data", "slmm.db")
def migrate():
print(f"Adding start_time_sync_attempted field to: {DB_PATH}")
if not os.path.exists(DB_PATH):
print("Database does not exist yet. Column will be created automatically.")
return
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
try:
# Check if column already exists
cursor.execute("PRAGMA table_info(nl43_status)")
columns = [col[1] for col in cursor.fetchall()]
if 'start_time_sync_attempted' in columns:
print("✓ start_time_sync_attempted column already exists, no migration needed")
return
# Add the column
print("Adding start_time_sync_attempted column...")
cursor.execute("""
ALTER TABLE nl43_status
ADD COLUMN start_time_sync_attempted BOOLEAN DEFAULT 0
""")
conn.commit()
print("✓ Added start_time_sync_attempted column")
# Verify
cursor.execute("PRAGMA table_info(nl43_status)")
columns = [col[1] for col in cursor.fetchall()]
if 'start_time_sync_attempted' not in columns:
raise Exception("start_time_sync_attempted column was not added successfully")
print("✓ Migration completed successfully")
finally:
conn.close()
if __name__ == "__main__":
migrate()

View File

@@ -31,6 +31,11 @@
<body> <body>
<h1>SLMM NL43 Standalone</h1> <h1>SLMM NL43 Standalone</h1>
<p>Configure a unit (host/port), then use controls to Start/Stop and fetch live status.</p> <p>Configure a unit (host/port), then use controls to Start/Stop and fetch live status.</p>
<p style="margin-bottom: 16px;">
<a href="/roster" style="color: #0969da; text-decoration: none; font-weight: 600;">📊 View Device Roster</a>
<span style="margin: 0 8px; color: #d0d7de;">|</span>
<a href="/docs" style="color: #0969da; text-decoration: none;">API Documentation</a>
</p>
<fieldset> <fieldset>
<legend>🔍 Connection Diagnostics</legend> <legend>🔍 Connection Diagnostics</legend>
@@ -40,13 +45,34 @@
</fieldset> </fieldset>
<fieldset> <fieldset>
<legend>Unit Config</legend> <legend>Unit Selection & Config</legend>
<label>Unit ID</label>
<input id="unitId" value="nl43-1" /> <div style="display: flex; gap: 8px; align-items: flex-end; margin-bottom: 12px;">
<label>Host</label> <div style="flex: 1;">
<input id="host" value="127.0.0.1" /> <label>Select Device</label>
<label>Port</label> <select id="deviceSelector" onchange="loadSelectedDevice()" style="width: 100%; padding: 8px; margin-bottom: 0;">
<input id="port" type="number" value="80" /> <option value="">-- Select a device --</option>
</select>
</div>
<button onclick="refreshDeviceList()" style="padding: 8px 12px;">↻ Refresh</button>
</div>
<div style="padding: 12px; background: #f6f8fa; border: 1px solid #d0d7de; border-radius: 4px; margin-bottom: 12px;">
<div style="display: flex; gap: 16px;">
<div style="flex: 1;">
<label>Unit ID</label>
<input id="unitId" value="nl43-1" />
</div>
<div style="flex: 2;">
<label>Host</label>
<input id="host" value="127.0.0.1" />
</div>
<div style="flex: 1;">
<label>TCP Port</label>
<input id="port" type="number" value="2255" />
</div>
</div>
</div>
<div style="margin: 12px 0;"> <div style="margin: 12px 0;">
<label style="display: inline-flex; align-items: center; margin-right: 16px;"> <label style="display: inline-flex; align-items: center; margin-right: 16px;">
@@ -66,8 +92,10 @@
<input id="ftpPassword" type="password" value="0000" /> <input id="ftpPassword" type="password" value="0000" />
</div> </div>
<button onclick="saveConfig()" style="margin-top: 12px;">Save Config</button> <div style="margin-top: 12px;">
<button onclick="loadConfig()">Load Config</button> <button onclick="saveConfig()">Save Config</button>
<button onclick="loadConfig()">Load Config</button>
</div>
</fieldset> </fieldset>
<fieldset> <fieldset>
@@ -148,6 +176,7 @@
let ws = null; let ws = null;
let streamUpdateCount = 0; let streamUpdateCount = 0;
let availableDevices = [];
function log(msg) { function log(msg) {
logEl.textContent += msg + "\n"; logEl.textContent += msg + "\n";
@@ -160,9 +189,97 @@
ftpCredentials.style.display = ftpEnabled ? 'block' : 'none'; ftpCredentials.style.display = ftpEnabled ? 'block' : 'none';
} }
// Add event listener for FTP checkbox // Load device list from roster
async function refreshDeviceList() {
try {
const res = await fetch('/api/nl43/roster');
const data = await res.json();
if (!res.ok) {
log('Failed to load device list');
return;
}
availableDevices = data.devices || [];
const selector = document.getElementById('deviceSelector');
// Save current selection
const currentSelection = selector.value;
// Clear and rebuild options
selector.innerHTML = '<option value="">-- Select a device --</option>';
availableDevices.forEach(device => {
const option = document.createElement('option');
option.value = device.unit_id;
// Add status indicator
let statusIcon = '⚪';
if (device.status) {
if (device.status.is_reachable === false) {
statusIcon = '🔴';
} else if (device.status.last_success) {
const lastSeen = new Date(device.status.last_success);
const ageMinutes = Math.floor((Date.now() - lastSeen) / 60000);
statusIcon = ageMinutes < 5 ? '🟢' : '🟡';
}
}
option.textContent = `${statusIcon} ${device.unit_id} (${device.host})`;
selector.appendChild(option);
});
// Restore selection if it still exists
if (currentSelection && availableDevices.find(d => d.unit_id === currentSelection)) {
selector.value = currentSelection;
}
log(`Loaded ${availableDevices.length} device(s) from roster`);
} catch (err) {
log(`Error loading device list: ${err.message}`);
}
}
// Load selected device configuration
function loadSelectedDevice() {
const selector = document.getElementById('deviceSelector');
const unitId = selector.value;
if (!unitId) {
return;
}
const device = availableDevices.find(d => d.unit_id === unitId);
if (!device) {
log(`Device ${unitId} not found in list`);
return;
}
// Populate form fields
document.getElementById('unitId').value = device.unit_id;
document.getElementById('host').value = device.host;
document.getElementById('port').value = device.tcp_port || 2255;
document.getElementById('tcpEnabled').checked = device.tcp_enabled || false;
document.getElementById('ftpEnabled').checked = device.ftp_enabled || false;
if (device.ftp_username) {
document.getElementById('ftpUsername').value = device.ftp_username;
}
if (device.ftp_password) {
document.getElementById('ftpPassword').value = device.ftp_password;
}
toggleFtpCredentials();
log(`Loaded configuration for ${device.unit_id}`);
}
// Add event listeners
document.addEventListener('DOMContentLoaded', function() { document.addEventListener('DOMContentLoaded', function() {
document.getElementById('ftpEnabled').addEventListener('change', toggleFtpCredentials); document.getElementById('ftpEnabled').addEventListener('change', toggleFtpCredentials);
// Load device list on page load
refreshDeviceList();
}); });
async function runDiagnostics() { async function runDiagnostics() {
@@ -216,6 +333,134 @@
html += `<p style="margin-top: 12px; font-size: 0.9em; color: #666;">Last run: ${new Date(data.timestamp).toLocaleString()}</p>`; html += `<p style="margin-top: 12px; font-size: 0.9em; color: #666;">Last run: ${new Date(data.timestamp).toLocaleString()}</p>`;
// Add database dump section if available
if (data.database_dump) {
html += `<div style="margin-top: 16px; border-top: 1px solid #d0d7de; padding-top: 12px;">`;
html += `<h4 style="margin: 0 0 12px 0;">📦 Database Dump</h4>`;
// Config section
if (data.database_dump.config) {
const cfg = data.database_dump.config;
html += `<div style="background: #f0f4f8; padding: 12px; border-radius: 4px; margin-bottom: 12px;">`;
html += `<strong>Configuration (nl43_config)</strong>`;
html += `<table style="width: 100%; margin-top: 8px; font-size: 0.9em;">`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Host</td><td>${cfg.host}:${cfg.tcp_port}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">TCP Enabled</td><td>${cfg.tcp_enabled ? '✓' : '✗'}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">FTP Enabled</td><td>${cfg.ftp_enabled ? '✓' : '✗'}${cfg.ftp_enabled ? ` (port ${cfg.ftp_port}, user: ${cfg.ftp_username || 'none'})` : ''}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Background Polling</td><td>${cfg.poll_enabled ? `✓ every ${cfg.poll_interval_seconds}s` : '✗ disabled'}</td></tr>`;
html += `</table></div>`;
}
// Status cache section
if (data.database_dump.status_cache) {
const cache = data.database_dump.status_cache;
html += `<div style="background: #f0f8f4; padding: 12px; border-radius: 4px; margin-bottom: 12px;">`;
html += `<strong>Status Cache (nl43_status)</strong>`;
html += `<table style="width: 100%; margin-top: 8px; font-size: 0.9em;">`;
// Measurement state and timing
html += `<tr><td style="padding: 2px 8px; color: #666;">Measurement State</td><td><strong>${cache.measurement_state || 'unknown'}</strong></td></tr>`;
if (cache.measurement_start_time) {
const startTime = new Date(cache.measurement_start_time);
const elapsed = Math.floor((Date.now() - startTime) / 1000);
const elapsedStr = elapsed > 3600 ? `${Math.floor(elapsed/3600)}h ${Math.floor((elapsed%3600)/60)}m` : elapsed > 60 ? `${Math.floor(elapsed/60)}m ${elapsed%60}s` : `${elapsed}s`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Measurement Started</td><td>${startTime.toLocaleString()} (${elapsedStr} ago)</td></tr>`;
}
html += `<tr><td style="padding: 2px 8px; color: #666;">Counter (d0)</td><td>${cache.counter || 'N/A'}</td></tr>`;
// Sound levels
html += `<tr><td colspan="2" style="padding: 8px 8px 2px 8px; font-weight: 600; border-top: 1px solid #d0d7de;">Sound Levels (dB)</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Lp (Instantaneous)</td><td>${cache.lp || 'N/A'}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Leq (Equivalent)</td><td>${cache.leq || 'N/A'}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Lmax / Lmin</td><td>${cache.lmax || 'N/A'} / ${cache.lmin || 'N/A'}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Lpeak</td><td>${cache.lpeak || 'N/A'}</td></tr>`;
// Device status
html += `<tr><td colspan="2" style="padding: 8px 8px 2px 8px; font-weight: 600; border-top: 1px solid #d0d7de;">Device Status</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Battery</td><td>${cache.battery_level || 'N/A'}${cache.power_source ? ` (${cache.power_source})` : ''}</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">SD Card</td><td>${cache.sd_remaining_mb ? `${cache.sd_remaining_mb} MB` : 'N/A'}${cache.sd_free_ratio ? ` (${cache.sd_free_ratio} free)` : ''}</td></tr>`;
// Polling status
html += `<tr><td colspan="2" style="padding: 8px 8px 2px 8px; font-weight: 600; border-top: 1px solid #d0d7de;">Polling Status</td></tr>`;
html += `<tr><td style="padding: 2px 8px; color: #666;">Reachable</td><td>${cache.is_reachable ? '🟢 Yes' : '🔴 No'}</td></tr>`;
if (cache.last_seen) {
html += `<tr><td style="padding: 2px 8px; color: #666;">Last Seen</td><td>${new Date(cache.last_seen).toLocaleString()}</td></tr>`;
}
if (cache.last_success) {
html += `<tr><td style="padding: 2px 8px; color: #666;">Last Success</td><td>${new Date(cache.last_success).toLocaleString()}</td></tr>`;
}
if (cache.last_poll_attempt) {
html += `<tr><td style="padding: 2px 8px; color: #666;">Last Poll Attempt</td><td>${new Date(cache.last_poll_attempt).toLocaleString()}</td></tr>`;
}
html += `<tr><td style="padding: 2px 8px; color: #666;">Consecutive Failures</td><td>${cache.consecutive_failures || 0}</td></tr>`;
if (cache.last_error) {
html += `<tr><td style="padding: 2px 8px; color: #666;">Last Error</td><td style="color: #d00; font-size: 0.85em;">${cache.last_error}</td></tr>`;
}
html += `</table></div>`;
// Raw payload (collapsible)
if (cache.raw_payload) {
html += `<details style="margin-top: 8px;"><summary style="cursor: pointer; color: #666; font-size: 0.9em;">📄 Raw Payload</summary>`;
html += `<pre style="background: #f6f8fa; padding: 8px; border-radius: 4px; font-size: 0.8em; overflow-x: auto; margin-top: 8px;">${cache.raw_payload}</pre></details>`;
}
} else {
html += `<p style="color: #888; font-style: italic;">No cached status available for this unit.</p>`;
}
html += `</div>`;
}
// Fetch and display device logs
try {
const logsRes = await fetch(`/api/nl43/${unitId}/logs?limit=50`);
if (logsRes.ok) {
const logsData = await logsRes.json();
if (logsData.logs && logsData.logs.length > 0) {
html += `<div style="margin-top: 16px; border-top: 1px solid #d0d7de; padding-top: 12px;">`;
html += `<h4 style="margin: 0 0 12px 0;">📋 Device Logs (${logsData.stats.total} total)</h4>`;
// Stats summary
if (logsData.stats.by_level) {
html += `<div style="margin-bottom: 8px; font-size: 0.85em; color: #666;">`;
const levels = logsData.stats.by_level;
const parts = [];
if (levels.ERROR) parts.push(`<span style="color: #d00;">${levels.ERROR} errors</span>`);
if (levels.WARNING) parts.push(`<span style="color: #fa0;">${levels.WARNING} warnings</span>`);
if (levels.INFO) parts.push(`${levels.INFO} info`);
html += parts.join(' · ');
html += `</div>`;
}
// Log entries (collapsible)
html += `<details open><summary style="cursor: pointer; font-size: 0.9em; margin-bottom: 8px;">Recent entries (${logsData.logs.length})</summary>`;
html += `<div style="max-height: 300px; overflow-y: auto; background: #f6f8fa; border: 1px solid #d0d7de; border-radius: 4px; padding: 8px; font-size: 0.8em; font-family: monospace;">`;
logsData.logs.forEach(entry => {
const levelColor = {
'ERROR': '#d00',
'WARNING': '#b86e00',
'INFO': '#0969da',
'DEBUG': '#888'
}[entry.level] || '#666';
const time = new Date(entry.timestamp).toLocaleString();
html += `<div style="margin-bottom: 4px; border-bottom: 1px solid #eee; padding-bottom: 4px;">`;
html += `<span style="color: #888;">${time}</span> `;
html += `<span style="color: ${levelColor}; font-weight: 600;">[${entry.level}]</span> `;
html += `<span style="color: #666;">[${entry.category}]</span> `;
html += `${entry.message}`;
html += `</div>`;
});
html += `</div></details>`;
html += `</div>`;
}
}
} catch (logErr) {
console.log('Could not fetch device logs:', logErr);
}
resultsEl.innerHTML = html; resultsEl.innerHTML = html;
log(`Diagnostics complete: ${data.overall_status}`); log(`Diagnostics complete: ${data.overall_status}`);

624
templates/roster.html Normal file
View File

@@ -0,0 +1,624 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>SLMM Roster - Sound Level Meter Configuration</title>
<style>
* { box-sizing: border-box; }
body {
font-family: system-ui, -apple-system, sans-serif;
margin: 0;
padding: 24px;
background: #f6f8fa;
}
.container { max-width: 1400px; margin: 0 auto; }
.header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 24px;
padding: 16px;
background: white;
border-radius: 6px;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
}
h1 { margin: 0; font-size: 24px; }
.nav { display: flex; gap: 12px; }
.btn {
padding: 8px 16px;
border: 1px solid #d0d7de;
background: white;
border-radius: 6px;
cursor: pointer;
text-decoration: none;
color: #24292f;
font-size: 14px;
transition: background 0.2s;
}
.btn:hover { background: #f6f8fa; }
.btn-primary {
background: #2da44e;
color: white;
border-color: #2da44e;
}
.btn-primary:hover { background: #2c974b; }
.btn-danger {
background: #cf222e;
color: white;
border-color: #cf222e;
}
.btn-danger:hover { background: #a40e26; }
.btn-small {
padding: 4px 8px;
font-size: 12px;
margin-right: 4px;
}
.table-container {
background: white;
border-radius: 6px;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
overflow-x: auto;
}
table {
width: 100%;
border-collapse: collapse;
}
th {
background: #f6f8fa;
padding: 12px;
text-align: left;
font-weight: 600;
border-bottom: 2px solid #d0d7de;
font-size: 13px;
white-space: nowrap;
}
td {
padding: 12px;
border-bottom: 1px solid #d0d7de;
font-size: 13px;
}
tr:hover { background: #f6f8fa; }
.status-badge {
display: inline-block;
padding: 2px 8px;
border-radius: 12px;
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
}
.status-ok {
background: #dafbe1;
color: #1a7f37;
}
.status-unknown {
background: #eaeef2;
color: #57606a;
}
.status-error {
background: #ffebe9;
color: #cf222e;
}
.checkbox-cell {
text-align: center;
width: 80px;
}
.checkbox-cell input[type="checkbox"] {
cursor: pointer;
width: 16px;
height: 16px;
}
.actions-cell {
white-space: nowrap;
width: 200px;
}
.empty-state {
text-align: center;
padding: 48px;
color: #57606a;
}
.empty-state-icon {
font-size: 48px;
margin-bottom: 16px;
}
.modal {
display: none;
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: rgba(0,0,0,0.5);
z-index: 1000;
align-items: center;
justify-content: center;
}
.modal.active { display: flex; }
.modal-content {
background: white;
padding: 24px;
border-radius: 6px;
max-width: 600px;
width: 90%;
max-height: 80vh;
overflow-y: auto;
}
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 16px;
}
.modal-header h2 {
margin: 0;
font-size: 20px;
}
.close-btn {
background: none;
border: none;
font-size: 24px;
cursor: pointer;
color: #57606a;
padding: 0;
width: 32px;
height: 32px;
}
.close-btn:hover { color: #24292f; }
.form-group {
margin-bottom: 16px;
}
.form-group label {
display: block;
margin-bottom: 6px;
font-weight: 600;
font-size: 14px;
}
.form-group input[type="text"],
.form-group input[type="number"],
.form-group input[type="password"] {
width: 100%;
padding: 8px 12px;
border: 1px solid #d0d7de;
border-radius: 6px;
font-size: 14px;
}
.form-group input[type="checkbox"] {
width: auto;
margin-right: 8px;
}
.checkbox-label {
display: flex;
align-items: center;
font-weight: normal;
cursor: pointer;
}
.form-actions {
display: flex;
justify-content: flex-end;
gap: 8px;
margin-top: 24px;
}
.toast {
position: fixed;
top: 24px;
right: 24px;
padding: 12px 16px;
background: #24292f;
color: white;
border-radius: 6px;
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
z-index: 2000;
display: none;
min-width: 300px;
}
.toast.active {
display: block;
animation: slideIn 0.3s ease-out;
}
@keyframes slideIn {
from {
transform: translateX(400px);
opacity: 0;
}
to {
transform: translateX(0);
opacity: 1;
}
}
.toast-success { background: #2da44e; }
.toast-error { background: #cf222e; }
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>📊 Sound Level Meter Roster</h1>
<div class="nav">
<a href="/" class="btn">← Back to Control Panel</a>
<button class="btn btn-primary" onclick="openAddModal()">+ Add Device</button>
</div>
</div>
<div class="table-container">
<table id="rosterTable">
<thead>
<tr>
<th>Unit ID</th>
<th>Host / IP</th>
<th>TCP Port</th>
<th>FTP Port</th>
<th class="checkbox-cell">TCP</th>
<th class="checkbox-cell">FTP</th>
<th class="checkbox-cell">Polling</th>
<th>Status</th>
<th class="actions-cell">Actions</th>
</tr>
</thead>
<tbody id="rosterBody">
<tr>
<td colspan="9" style="text-align: center; padding: 24px;">
Loading...
</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- Add/Edit Modal -->
<div id="deviceModal" class="modal">
<div class="modal-content">
<div class="modal-header">
<h2 id="modalTitle">Add Device</h2>
<button class="close-btn" onclick="closeModal()">&times;</button>
</div>
<form id="deviceForm" onsubmit="saveDevice(event)">
<div class="form-group">
<label for="unitId">Unit ID *</label>
<input type="text" id="unitId" required placeholder="e.g., nl43-1, slm-site-a" />
</div>
<div class="form-group">
<label for="host">Host / IP Address *</label>
<input type="text" id="host" required placeholder="e.g., 192.168.1.100" />
</div>
<div class="form-group">
<label for="tcpPort">TCP Port *</label>
<input type="number" id="tcpPort" required value="2255" min="1" max="65535" />
</div>
<div class="form-group">
<label for="ftpPort">FTP Port</label>
<input type="number" id="ftpPort" value="21" min="1" max="65535" />
</div>
<div class="form-group">
<label class="checkbox-label">
<input type="checkbox" id="tcpEnabled" checked />
TCP Enabled (required for remote control)
</label>
</div>
<div class="form-group">
<label class="checkbox-label">
<input type="checkbox" id="ftpEnabled" onchange="toggleFtpCredentials()" />
FTP Enabled (for file downloads)
</label>
</div>
<div id="ftpCredentialsSection" style="display: none; padding: 12px; background: #f6f8fa; border-radius: 6px; margin-bottom: 16px;">
<div class="form-group">
<label for="ftpUsername">FTP Username</label>
<input type="text" id="ftpUsername" placeholder="Default: USER" />
</div>
<div class="form-group">
<label for="ftpPassword">FTP Password</label>
<input type="password" id="ftpPassword" placeholder="Default: 0000" />
</div>
</div>
<div class="form-group">
<label class="checkbox-label">
<input type="checkbox" id="pollEnabled" checked />
Enable background polling (status updates)
</label>
</div>
<div class="form-group">
<label for="pollInterval">Polling Interval (seconds)</label>
<input type="number" id="pollInterval" value="60" min="10" max="3600" />
</div>
<div class="form-actions">
<button type="button" class="btn" onclick="closeModal()">Cancel</button>
<button type="submit" class="btn btn-primary">Save Device</button>
</div>
</form>
</div>
</div>
<!-- Toast Notification -->
<div id="toast" class="toast"></div>
<script>
let devices = [];
let editingDeviceId = null;
// Load roster on page load
document.addEventListener('DOMContentLoaded', () => {
loadRoster();
});
async function loadRoster() {
try {
const res = await fetch('/api/nl43/roster');
const data = await res.json();
if (!res.ok) {
showToast('Failed to load roster', 'error');
return;
}
devices = data.devices || [];
renderRoster();
} catch (err) {
showToast('Error loading roster: ' + err.message, 'error');
console.error('Load roster error:', err);
}
}
function renderRoster() {
const tbody = document.getElementById('rosterBody');
if (devices.length === 0) {
tbody.innerHTML = `
<tr>
<td colspan="9" class="empty-state">
<div class="empty-state-icon">📭</div>
<div><strong>No devices configured</strong></div>
<div style="margin-top: 8px; font-size: 14px;">Click "Add Device" to configure your first sound level meter</div>
</td>
</tr>
`;
return;
}
tbody.innerHTML = devices.map(device => `
<tr>
<td><strong>${escapeHtml(device.unit_id)}</strong></td>
<td>${escapeHtml(device.host)}</td>
<td>${device.tcp_port}</td>
<td>${device.ftp_port || 21}</td>
<td class="checkbox-cell">
<input type="checkbox" ${device.tcp_enabled ? 'checked' : ''} disabled />
</td>
<td class="checkbox-cell">
<input type="checkbox" ${device.ftp_enabled ? 'checked' : ''} disabled />
</td>
<td class="checkbox-cell">
<input type="checkbox" ${device.poll_enabled ? 'checked' : ''} disabled />
</td>
<td>
${getStatusBadge(device)}
</td>
<td class="actions-cell">
<button class="btn btn-small" onclick="testDevice('${escapeHtml(device.unit_id)}')">Test</button>
<button class="btn btn-small" onclick="openEditModal('${escapeHtml(device.unit_id)}')">Edit</button>
<button class="btn btn-small btn-danger" onclick="deleteDevice('${escapeHtml(device.unit_id)}')">Delete</button>
</td>
</tr>
`).join('');
}
function getStatusBadge(device) {
if (!device.status) {
return '<span class="status-badge status-unknown">Unknown</span>';
}
if (device.status.is_reachable === false) {
return '<span class="status-badge status-error">Offline</span>';
}
if (device.status.last_success) {
const lastSeen = new Date(device.status.last_success);
const ago = Math.floor((Date.now() - lastSeen) / 1000);
if (ago < 300) { // Less than 5 minutes
return '<span class="status-badge status-ok">Online</span>';
} else {
return `<span class="status-badge status-unknown">Stale (${Math.floor(ago / 60)}m ago)</span>`;
}
}
return '<span class="status-badge status-unknown">Unknown</span>';
}
function escapeHtml(text) {
const map = {
'&': '&amp;',
'<': '&lt;',
'>': '&gt;',
'"': '&quot;',
"'": '&#039;'
};
return String(text).replace(/[&<>"']/g, m => map[m]);
}
function openAddModal() {
editingDeviceId = null;
document.getElementById('modalTitle').textContent = 'Add Device';
document.getElementById('deviceForm').reset();
document.getElementById('unitId').disabled = false;
document.getElementById('tcpEnabled').checked = true;
document.getElementById('ftpEnabled').checked = false;
document.getElementById('pollEnabled').checked = true;
document.getElementById('tcpPort').value = 2255;
document.getElementById('ftpPort').value = 21;
document.getElementById('pollInterval').value = 60;
toggleFtpCredentials();
document.getElementById('deviceModal').classList.add('active');
}
function openEditModal(unitId) {
const device = devices.find(d => d.unit_id === unitId);
if (!device) {
showToast('Device not found', 'error');
return;
}
editingDeviceId = unitId;
document.getElementById('modalTitle').textContent = 'Edit Device';
document.getElementById('unitId').value = device.unit_id;
document.getElementById('unitId').disabled = true;
document.getElementById('host').value = device.host;
document.getElementById('tcpPort').value = device.tcp_port;
document.getElementById('ftpPort').value = device.ftp_port || 21;
document.getElementById('tcpEnabled').checked = device.tcp_enabled;
document.getElementById('ftpEnabled').checked = device.ftp_enabled;
document.getElementById('ftpUsername').value = device.ftp_username || '';
document.getElementById('ftpPassword').value = device.ftp_password || '';
document.getElementById('pollEnabled').checked = device.poll_enabled;
document.getElementById('pollInterval').value = device.poll_interval_seconds || 60;
toggleFtpCredentials();
document.getElementById('deviceModal').classList.add('active');
}
function closeModal() {
document.getElementById('deviceModal').classList.remove('active');
editingDeviceId = null;
}
function toggleFtpCredentials() {
const ftpEnabled = document.getElementById('ftpEnabled').checked;
document.getElementById('ftpCredentialsSection').style.display = ftpEnabled ? 'block' : 'none';
}
async function saveDevice(event) {
event.preventDefault();
const unitId = document.getElementById('unitId').value.trim();
const payload = {
host: document.getElementById('host').value.trim(),
tcp_port: parseInt(document.getElementById('tcpPort').value),
ftp_port: parseInt(document.getElementById('ftpPort').value),
tcp_enabled: document.getElementById('tcpEnabled').checked,
ftp_enabled: document.getElementById('ftpEnabled').checked,
poll_enabled: document.getElementById('pollEnabled').checked,
poll_interval_seconds: parseInt(document.getElementById('pollInterval').value)
};
if (payload.ftp_enabled) {
const username = document.getElementById('ftpUsername').value.trim();
const password = document.getElementById('ftpPassword').value.trim();
if (username) payload.ftp_username = username;
if (password) payload.ftp_password = password;
}
try {
const url = editingDeviceId
? `/api/nl43/${editingDeviceId}/config`
: `/api/nl43/roster`;
const method = editingDeviceId ? 'PUT' : 'POST';
const body = editingDeviceId
? payload
: { unit_id: unitId, ...payload };
const res = await fetch(url, {
method,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body)
});
const data = await res.json();
if (!res.ok) {
showToast(data.detail || 'Failed to save device', 'error');
return;
}
showToast(editingDeviceId ? 'Device updated successfully' : 'Device added successfully', 'success');
closeModal();
await loadRoster();
} catch (err) {
showToast('Error saving device: ' + err.message, 'error');
console.error('Save device error:', err);
}
}
async function deleteDevice(unitId) {
if (!confirm(`Are you sure you want to delete "${unitId}"?\n\nThis will remove the device configuration but will not affect the physical device.`)) {
return;
}
try {
const res = await fetch(`/api/nl43/${unitId}/config`, {
method: 'DELETE'
});
const data = await res.json();
if (!res.ok) {
showToast(data.detail || 'Failed to delete device', 'error');
return;
}
showToast('Device deleted successfully', 'success');
await loadRoster();
} catch (err) {
showToast('Error deleting device: ' + err.message, 'error');
console.error('Delete device error:', err);
}
}
async function testDevice(unitId) {
showToast('Testing device connection...', 'success');
try {
const res = await fetch(`/api/nl43/${unitId}/diagnostics`);
const data = await res.json();
if (!res.ok) {
showToast('Device test failed', 'error');
return;
}
const statusText = {
'pass': 'All systems operational ✓',
'fail': 'Connection failed ✗',
'degraded': 'Partial connectivity ⚠'
};
showToast(statusText[data.overall_status] || 'Test complete',
data.overall_status === 'pass' ? 'success' : 'error');
// Reload to update status
await loadRoster();
} catch (err) {
showToast('Error testing device: ' + err.message, 'error');
console.error('Test device error:', err);
}
}
function showToast(message, type = 'success') {
const toast = document.getElementById('toast');
toast.textContent = message;
toast.className = `toast toast-${type} active`;
setTimeout(() => {
toast.classList.remove('active');
}, 3000);
}
// Close modal when clicking outside
document.getElementById('deviceModal').addEventListener('click', (e) => {
if (e.target.id === 'deviceModal') {
closeModal();
}
});
</script>
</body>
</html>

test_polling.sh (new executable file, 167 lines)

@@ -0,0 +1,167 @@
#!/bin/bash
# Manual test script for background polling functionality
# Usage: ./test_polling.sh [UNIT_ID]
BASE_URL="http://localhost:8100/api/nl43"
UNIT_ID="${1:-NL43-001}"
echo "=========================================="
echo "Background Polling Test Script"
echo "=========================================="
echo "Testing device: $UNIT_ID"
echo "Base URL: $BASE_URL"
echo ""
# Color codes for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Function to print test header
test_header() {
echo ""
echo "=========================================="
echo "$1"
echo "=========================================="
}
# Function to print success
success() {
echo -e "${GREEN}✓${NC} $1"
}
# Function to print warning
warning() {
echo -e "${YELLOW}⚠${NC} $1"
}
# Function to print error
error() {
echo -e "${RED}✗${NC} $1"
}
# Test 1: Get current polling configuration
test_header "Test 1: Get Current Polling Configuration"
RESPONSE=$(curl -s "$BASE_URL/$UNIT_ID/polling/config")
echo "$RESPONSE" | jq '.'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
success "Successfully retrieved polling configuration"
CURRENT_INTERVAL=$(echo "$RESPONSE" | jq -r '.data.poll_interval_seconds')
CURRENT_ENABLED=$(echo "$RESPONSE" | jq -r '.data.poll_enabled')
echo " Current interval: ${CURRENT_INTERVAL}s"
echo " Polling enabled: $CURRENT_ENABLED"
else
error "Failed to retrieve polling configuration"
exit 1
fi
# Test 2: Update polling interval to 30 seconds
test_header "Test 2: Update Polling Interval to 30 Seconds"
RESPONSE=$(curl -s -X PUT "$BASE_URL/$UNIT_ID/polling/config" \
-H "Content-Type: application/json" \
-d '{"poll_interval_seconds": 30}')
echo "$RESPONSE" | jq '.'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
success "Successfully updated polling interval to 30s"
else
error "Failed to update polling interval"
fi
# Test 3: Check global polling status
test_header "Test 3: Check Global Polling Status"
RESPONSE=$(curl -s "$BASE_URL/_polling/status")
echo "$RESPONSE" | jq '.'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
success "Successfully retrieved global polling status"
POLLER_RUNNING=$(echo "$RESPONSE" | jq -r '.data.poller_running')
TOTAL_DEVICES=$(echo "$RESPONSE" | jq -r '.data.total_devices')
echo " Poller running: $POLLER_RUNNING"
echo " Total devices: $TOTAL_DEVICES"
else
error "Failed to retrieve global polling status"
fi
# Test 4: Wait for automatic poll to occur
test_header "Test 4: Wait for Automatic Poll (35 seconds)"
warning "Waiting 35 seconds for automatic poll to occur..."
for i in {35..1}; do
echo -ne " ${i}s remaining...\r"
sleep 1
done
echo ""
success "Wait complete"
# Test 5: Check if status was updated by background poller
test_header "Test 5: Verify Background Poll Occurred"
RESPONSE=$(curl -s "$BASE_URL/$UNIT_ID/status")
echo "$RESPONSE" | jq '.data | {last_poll_attempt, last_success, is_reachable, consecutive_failures}'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
LAST_POLL=$(echo "$RESPONSE" | jq -r '.data.last_poll_attempt')
IS_REACHABLE=$(echo "$RESPONSE" | jq -r '.data.is_reachable')
FAILURES=$(echo "$RESPONSE" | jq -r '.data.consecutive_failures')
if [ "$LAST_POLL" != "null" ]; then
success "Device was polled by background poller"
echo " Last poll: $LAST_POLL"
echo " Reachable: $IS_REACHABLE"
echo " Failures: $FAILURES"
else
warning "No automatic poll detected yet"
fi
else
error "Failed to retrieve device status"
fi
# Test 6: Disable polling
test_header "Test 6: Disable Background Polling"
RESPONSE=$(curl -s -X PUT "$BASE_URL/$UNIT_ID/polling/config" \
-H "Content-Type: application/json" \
-d '{"poll_enabled": false}')
echo "$RESPONSE" | jq '.'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
success "Successfully disabled background polling"
else
error "Failed to disable polling"
fi
# Test 7: Verify polling is disabled
test_header "Test 7: Verify Polling Disabled in Global Status"
RESPONSE=$(curl -s "$BASE_URL/_polling/status")
DEVICE_ENABLED=$(echo "$RESPONSE" | jq --arg uid "$UNIT_ID" '.data.devices[] | select(.unit_id == $uid) | .poll_enabled')
if [ "$DEVICE_ENABLED" == "false" ]; then
success "Polling correctly shows as disabled for $UNIT_ID"
else
warning "Device still appears in polling list or shows as enabled"
fi
# Test 8: Re-enable polling with original interval
test_header "Test 8: Re-enable Polling with Original Interval"
RESPONSE=$(curl -s -X PUT "$BASE_URL/$UNIT_ID/polling/config" \
-H "Content-Type: application/json" \
-d "{\"poll_enabled\": true, \"poll_interval_seconds\": $CURRENT_INTERVAL}")
echo "$RESPONSE" | jq '.'
if echo "$RESPONSE" | jq -e '.status == "ok"' > /dev/null; then
success "Successfully re-enabled polling with ${CURRENT_INTERVAL}s interval"
else
error "Failed to re-enable polling"
fi
# Summary
test_header "Test Summary"
echo "All tests completed!"
echo ""
echo "Key endpoints tested:"
echo " GET $BASE_URL/{unit_id}/polling/config"
echo " PUT $BASE_URL/{unit_id}/polling/config"
echo " GET $BASE_URL/_polling/status"
echo " GET $BASE_URL/{unit_id}/status (with polling fields)"
echo ""
success "Background polling feature is working correctly"