# Compare commits — `154a11d057...main` (22 commits)

Commits: `9f52745bb4`, `6a0422a6fc`, `1078576023`, `8074bf0fee`, `de02f9cccf`, `da446cb2e3`, `51d1aa917a`, `b8032e0578`, `3f142ce1c0`, `88adcbcb81`, `8e985154a7`, `f8f590b19b`, `58a35a3afd`, `45f4fb5a68`, `99d66453fe`, `41606d2f31`, `8d06492dbc`, `6be434e65f`, `6d99f86502`, `5eb5499034`, `0db3780e65`, `d7a0e1b501`
### .gitignore (vendored)

```
@@ -1,5 +1,7 @@
/bridges/captures/
/manuals/

# Python bytecode
__pycache__/
*.py[cod]
```
### CHANGELOG.md (new file, 84 lines)
# Changelog

All notable changes to seismo-relay are documented here.

---

## v0.5.0 — 2026-03-31

### Added

- **Console tab in `seismo_lab.py`** — direct device connection without the bridge subprocess.
  - Serial and TCP transport selectable via radio buttons.
  - Four one-click commands: POLL, Serial #, Full Config, Event Index.
  - Colour-coded scrolling output: TX (blue), RX raw hex (teal), parsed/decoded (green), errors (red).
  - Save Log and Send to Analyzer buttons; logs auto-saved to `bridges/captures/console_<ts>.log`.
  - Queue/`after(100)` pattern — no UI blocking or performance impact.
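The queue/`after(100)` pattern mentioned above keeps blocking serial I/O off the Tk main loop: a worker thread pushes received lines into a `queue.Queue`, and the UI drains it on a 100 ms timer. A minimal sketch of the pattern (class and method names here are illustrative, not the actual seismo_lab code):

```python
import queue
import threading

class ConsoleFeed:
    """Worker thread pushes lines into a queue; the UI thread drains it
    on a timer (e.g. scheduled via root.after(100, self.drain) in Tk)."""

    def __init__(self) -> None:
        self.q: "queue.Queue[str]" = queue.Queue()
        self.lines: list[str] = []

    def rx_worker(self, payloads: list[str]) -> None:
        # Runs on a background thread: blocking serial/TCP reads go here.
        for p in payloads:
            self.q.put(p)

    def drain(self) -> None:
        # Runs on the UI thread: grab everything queued since the last
        # tick without ever blocking.
        try:
            while True:
                self.lines.append(self.q.get_nowait())
        except queue.Empty:
            pass

feed = ConsoleFeed()
t = threading.Thread(target=feed.rx_worker, args=(["POLL ok", "SN 12345"],), daemon=True)
t.start()
t.join()
feed.drain()
print(feed.lines)  # → ['POLL ok', 'SN 12345']
```

Because `drain()` only ever calls `get_nowait()`, the UI tick costs microseconds even when the queue is empty.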
- **`minimateplus` package** — clean Python client library for the MiniMate Plus S3 protocol.
  - `SerialTransport` and `TcpTransport` (the latter for Sierra Wireless RV50/RV55 cellular modems).
  - `MiniMateProtocol` — DLE frame parser/builder, two-step paged reads, checksum validation.
  - `MiniMateClient` — high-level client: `connect()`, `get_serial()`, `get_config()`, `get_events()`.
- **TCP/cellular transport** (`TcpTransport`) — connect to field units via Sierra Wireless RV50/RV55 modems over cellular.
  - `read_until_idle(idle_gap=1.5)` (seconds) to handle the modem's data-forwarding buffer delay.
  - Confirmed working end-to-end: TCP → RV50/RV55 → RS-232 → MiniMate Plus.
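The `read_until_idle` behaviour can be sketched roughly as follows (a simplified illustration of the idea, not the library's actual implementation):

```python
import socket

def read_until_idle(sock: socket.socket, idle_gap: float = 1.5, chunk: int = 4096) -> bytes:
    """Read until no new bytes arrive for `idle_gap` seconds.

    Cellular modems (RV50/RV55) buffer serial data and forward it in
    bursts, so "response complete" is detected by silence, not by a
    known length.
    """
    sock.settimeout(idle_gap)
    buf = bytearray()
    while True:
        try:
            data = sock.recv(chunk)
        except socket.timeout:
            break  # idle_gap elapsed with no data: response is done
        if not data:
            break  # peer closed the connection
        buf += data
    return bytes(buf)
```

The trade-off is latency: every read costs at least one `idle_gap` of waiting at the end, which is why the ACEmanager Data Forwarding Timeout is tuned low.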
- **`bridges/tcp_serial_bridge.py`** — local TCP-to-serial bridge for bench testing `TcpTransport` without a cellular modem.
- **SFM REST server** (`sfm/server.py`) — FastAPI server with device info, event list, and event record endpoints over both serial and TCP.

### Fixed

- `protocol.py` `startup()` was using a hardcoded `POLL_RECV_TIMEOUT = 10.0` constant, ignoring the configurable `self._recv_timeout`. Fixed to use `self._recv_timeout` throughout.
- `sfm/server.py` now retries once on `ProtocolError` for TCP connections to handle cold-boot timing on first connect.

### Protocol / Documentation

- **Sierra Wireless RV50/RV55 modem config** — confirmed required ACEmanager settings: Quiet Mode = Enable, Data Forwarding Timeout = 1, TCP Connect Response Delay = 0. With Quiet Mode disabled, the modem injects `RING\r\nCONNECT\r\n` onto the serial line, breaking the S3 handshake.
- **Calibration year** confirmed at SUB FE (Full Config) destuffed payload offset 0x56–0x57 (uint16 BE). `0x07E7` = 2023, `0x07E9` = 2025.
- **`"Operating System"` boot string** — 16-byte UART boot message emitted on cold start before the unit enters DLE-framed mode. The parser handles it by scanning for DLE+STX.
- RV50/RV55 sends `RING`/`CONNECT` over TCP to the calling client even with Quiet Mode enabled — this is normal behaviour; the parser discards it.
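The calibration-year field documented above can be pulled out of a destuffed SUB FE payload like this (offsets as documented; the payload below is a fabricated placeholder just to exercise the offset):

```python
import struct

def calibration_year(destuffed_payload: bytes) -> int:
    # uint16 big-endian at offset 0x56 of the destuffed Full Config payload.
    (year,) = struct.unpack_from(">H", destuffed_payload, 0x56)
    return year

# Fabricated payload: zero padding up to 0x56, then 0x07E9 (= 2025).
payload = bytes(0x56) + struct.pack(">H", 0x07E9) + bytes(16)
print(calibration_year(payload))  # → 2025
```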
---

## v0.4.0 — 2026-03-12

### Added

- **`seismo_lab.py`** — combined Bridge + Analyzer GUI. Single window with two tabs; starting the bridge auto-wires live mode in the Analyzer.
- **`frame_db.py`** — SQLite frame database. Captures accumulate over time; the Query DB tab searches across all sessions.
- **`bridges/s3-bridge/proxy.py`** — bridge proxy module.
- Large BW→S3 write-frame checksum algorithm confirmed and implemented (`SUM8` of payload `[2:-1]` skipping `0x10` bytes, plus constant `0x10`, mod 256).
- SUB `A4` identified as a composite container frame with embedded inner frames; `_extract_a4_inner_frames()` and `_diff_a4_payloads()` reduce diff noise from 2300 to 17 meaningful entries.
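The checksum rule stated above can be sketched as follows. This is my literal reading of the stated algorithm ("SUM8 of payload `[2:-1]` skipping `0x10` bytes, plus constant `0x10`, mod 256"), so verify it against real captures before relying on it:

```python
def bw_write_checksum(payload: bytes) -> int:
    """SUM8 over payload[2:-1], skipping DLE (0x10) bytes,
    plus the constant 0x10, modulo 256."""
    total = sum(b for b in payload[2:-1] if b != 0x10)
    return (total + 0x10) % 256

# payload[2:-1] here is [0x82, 0x10, 0x01, 0x02]; the 0x10 is skipped,
# so the sum is 0x85, plus 0x10 gives 0x95.
print(hex(bw_write_checksum(bytes([0x10, 0x00, 0x82, 0x10, 0x01, 0x02, 0xFF]))))  # → 0x95
```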
### Fixed

- BAD CHK false positives on BW POLL frames — the BW frame terminator `03 41` was being included in the de-stuffed payload. Fixed to strip it correctly.
- Aux Trigger read location confirmed at SUB FE offset `0x0109`.

---

## v0.3.0 — 2026-03-09

### Added

- Record time confirmed at SUB E5 page 2 offset `+0x28` as float32 BE.
- Trigger Sample Width confirmed at BW→S3 write frame SUB `0x82`, destuffed payload offset `[22]`.
- Mode-gating documented: several settings only appear on the wire when the appropriate mode is active.

### Fixed

- `0x082A` mystery resolved — it is the fixed-size E5 payload length (2090 bytes), not a record-time field.

---

## v0.2.0 — 2026-03-01

### Added

- Channel config float layout fully confirmed: trigger level, alarm level, and unit string per channel (IEEE 754 BE floats).
- Blastware `.set` file format decoded — a little-endian binary struct mirroring the wire payload.
- Operator manual (716U0101 Rev 15) added as a cross-reference source.

---

## v0.1.0 — 2026-02-26

### Added

- Initial `s3_bridge.py` serial bridge — a transparent RS-232 tap between Blastware and the MiniMate Plus.
- `s3_parser.py` — deterministic DLE state-machine frame extractor.
- `s3_analyzer.py` — session parser, frame differ, Claude export.
- `gui_bridge.py` and `gui_analyzer.py` — Tkinter GUIs.
- DLE framing confirmed: `DLE+STX` / `DLE+ETX`, `0x41` = ACK (not STX), DLE stuffing rule.
- Response SUB rule confirmed: `response_SUB = 0xFF - request_SUB`.
- Year `0x07CB` = 1995 confirmed as the MiniMate factory RTC default.
- Full write command family documented (SUBs `68`–`83`).
### README.md (new file, 251 lines)
# seismo-relay `v0.5.0`

A ground-up replacement for **Blastware** — Instantel's aging Windows-only
software for managing MiniMate Plus seismographs.

Built in Python. Runs on Windows. Connects to instruments over direct RS-232
or cellular modem (Sierra Wireless RV50 / RV55).

> **Status:** Active development. Core read pipeline working (device info,
> config, event index). Event download and write commands in progress.
> See [CHANGELOG.md](CHANGELOG.md) for version history.

---

## What's in here

```
seismo-relay/
├── seismo_lab.py            ← Main GUI (Bridge + Analyzer + Console tabs)
│
├── minimateplus/            ← MiniMate Plus client library
│   ├── transport.py         ← SerialTransport and TcpTransport
│   ├── protocol.py          ← DLE frame layer (read/write/parse)
│   ├── client.py            ← High-level client (connect, get_config, etc.)
│   ├── framing.py           ← Frame builder/parser primitives
│   └── models.py            ← DeviceInfo, EventRecord, etc.
│
├── sfm/                     ← SFM REST API server (FastAPI)
│   └── server.py            ← /device/info, /device/events, /device/event
│
├── bridges/
│   ├── s3-bridge/
│   │   └── s3_bridge.py     ← RS-232 serial bridge (capture tool)
│   ├── tcp_serial_bridge.py ← Local TCP↔serial bridge (bench testing)
│   ├── gui_bridge.py        ← Standalone bridge GUI (legacy)
│   └── raw_capture.py       ← Simple raw capture tool
│
├── parsers/
│   ├── s3_parser.py         ← DLE frame extractor
│   ├── s3_analyzer.py       ← Session parser, differ, Claude export
│   ├── gui_analyzer.py      ← Standalone analyzer GUI (legacy)
│   └── frame_db.py          ← SQLite frame database
│
└── docs/
    └── instantel_protocol_reference.md ← Reverse-engineered protocol spec
```
---

## Quick start

### Seismo Lab (main GUI)

The all-in-one tool. Three tabs: **Bridge**, **Analyzer**, **Console**.

```
python seismo_lab.py
```

### SFM REST server

Exposes MiniMate Plus commands as a REST API for integration with other systems.

```
cd sfm
uvicorn server:app --reload
```

**Endpoints:**

| Method | URL | Description |
|--------|-----|-------------|
| `GET` | `/device/info?port=COM5` | Device info via serial |
| `GET` | `/device/info?host=1.2.3.4&tcp_port=9034` | Device info via cellular modem |
| `GET` | `/device/events?port=COM5` | Event index |
| `GET` | `/device/event?port=COM5&index=0` | Single event record |
---

## Seismo Lab tabs

### Bridge tab

Captures live RS-232 traffic between Blastware and the seismograph. Sits in
the middle as a transparent pass-through while logging everything to disk.

```
Blastware → COM4 (virtual) ↔ s3_bridge ↔ COM5 (physical) → MiniMate Plus
```

Set your COM ports and log directory, then hit **Start Bridge**. Use
**Add Mark** to annotate the capture at specific moments (e.g. "changed
trigger level"). When the bridge starts, the Analyzer tab automatically wires
up to the live files and starts updating in real time.

### Analyzer tab

Parses raw captures into DLE-framed protocol sessions, diffs consecutive
sessions to show exactly which bytes changed, and lets you query across all
historical captures via the built-in SQLite database.

- **Inventory** — all frames in a session; click to drill in
- **Hex Dump** — full payload hex dump with changed-byte annotations
- **Diff** — byte-level before/after diff between sessions
- **Full Report** — plain-text session report
- **Query DB** — search across all captures by SUB, direction, or byte value

Use **Export for Claude** to generate a self-contained `.md` report for
AI-assisted field mapping.
### Console tab

Direct connection to a MiniMate Plus — no bridge, no Blastware. Useful for
diagnosing field units over cellular without a full capture session.

**Connection:** choose Serial (COM port + baud) or TCP (IP + port for a
cellular modem).

**Commands:**

| Button | What it does |
|--------|-------------|
| POLL | Startup handshake — confirms the unit is alive and identifies the model |
| Serial # | Reads the unit serial number |
| Full Config | Reads the full 166-byte config block (firmware version, channel scales, etc.) |
| Event Index | Reads the stored event list |

Output is colour-coded: TX in blue, raw RX bytes in teal, decoded fields in
green, errors in red. **Save Log** writes a timestamped `.log` file to
`bridges/captures/`. **Send to Analyzer** injects the captured bytes into the
Analyzer tab for deeper inspection.
---

## Connecting over cellular (RV50 / RV55 modems)

Field units connect via Sierra Wireless RV50 or RV55 cellular modems. Use
TCP mode in the Console or SFM:

```
# Console tab
Transport: TCP
Host: <modem public IP>
Port: 9034        ← Device Port in ACEmanager (call-up mode)
```

```python
# In code
from minimateplus.transport import TcpTransport
from minimateplus.client import MiniMateClient

client = MiniMateClient(transport=TcpTransport("1.2.3.4", 9034))
info = client.connect()
```
### Required ACEmanager settings (Serial tab)

These must match exactly — a single wrong setting causes the unit to beep
on connect but never respond:

| Setting | Value | Why |
|---------|-------|-----|
| Configure Serial Port | `38400,8N1` | Must match the MiniMate baud rate |
| Flow Control | `None` | Hardware flow control blocks unit TX if the pins are unconnected |
| **Quiet Mode** | **Enable** | **Critical.** Disabled → the modem injects `RING`/`CONNECT` onto the serial line, corrupting the S3 handshake |
| Data Forwarding Timeout | `1` (= 0.1 s) | Lower latency; `5` works but is sluggish |
| TCP Connect Response Delay | `0` | Non-zero silently drops the first POLL frame |
| TCP Idle Timeout | `2` (minutes) | Prevents premature disconnect |
| DB9 Serial Echo | `Disable` | Echo corrupts the data stream |
---

## minimateplus library

```python
from minimateplus import MiniMateClient
from minimateplus.transport import SerialTransport, TcpTransport

# Serial
client = MiniMateClient(port="COM5")

# TCP (cellular modem)
client = MiniMateClient(transport=TcpTransport("1.2.3.4", 9034), timeout=30.0)

with client:
    info = client.connect()        # DeviceInfo — model, serial, firmware
    serial = client.get_serial()   # Serial number string
    config = client.get_config()   # Full config block (bytes)
    events = client.get_events()   # Event index
```
---

## Protocol quick-reference

| Term | Value | Meaning |
|------|-------|---------|
| DLE | `0x10` | Data Link Escape |
| STX | `0x02` | Start of frame |
| ETX | `0x03` | End of frame |
| ACK | `0x41` (`'A'`) | Frame-start marker sent before every frame |
| DLE stuffing | `10 10` on wire | Literal `0x10` in payload |

**S3-side frame** (seismograph → Blastware): `ACK DLE+STX [payload] CHK DLE+ETX`
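The stuffing rule above can be sketched in a few lines (an illustrative helper, not the library's actual parser; checksum and frame-boundary handling omitted):

```python
DLE = 0x10

def dle_stuff(payload: bytes) -> bytes:
    """Double every literal 0x10 so it can't be read as a framing DLE."""
    out = bytearray()
    for b in payload:
        out.append(b)
        if b == DLE:
            out.append(DLE)
    return bytes(out)

def dle_destuff(stuffed: bytes) -> bytes:
    """Collapse each 10 10 pair on the wire back to a single 0x10."""
    out = bytearray()
    i = 0
    while i < len(stuffed):
        out.append(stuffed[i])
        i += 2 if stuffed[i] == DLE else 1
    return bytes(out)

print(dle_stuff(b"\x01\x10\x02").hex())               # → 01101002
print(dle_destuff(bytes.fromhex("01101002")).hex())   # → 011002
```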
**De-stuffed payload header:**

```
[0]  CMD      0x10 = BW request, 0x00 = S3 response
[1]  ?        unknown (0x00 BW / 0x10 S3)
[2]  SUB      Command/response identifier ← the key field
[3]  PAGE_HI  Page address high byte
[4]  PAGE_LO  Page address low byte
[5+] DATA     Payload content
```

**Response SUB rule:** `response_SUB = 0xFF - request_SUB`
Example: request SUB `0x08` (Event Index) → response SUB `0xF7`
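Putting the header layout and the response-SUB rule together, a destuffed payload can be sanity-checked like this (a sketch using the documented offsets; the sample bytes are fabricated):

```python
def parse_header(destuffed: bytes) -> dict:
    # Unpack the five documented header bytes, then keep the rest as data.
    cmd, _unk, sub, page_hi, page_lo = destuffed[:5]
    return {
        "direction": "BW request" if cmd == 0x10 else "S3 response",
        "sub": sub,
        "page": (page_hi << 8) | page_lo,
        "data": destuffed[5:],
    }

def is_response_to(request_sub: int, response_sub: int) -> bool:
    # response_SUB = 0xFF - request_SUB
    return response_sub == 0xFF - request_sub

hdr = parse_header(bytes([0x00, 0x10, 0xF7, 0x00, 0x00, 0xAA]))
print(hdr["direction"], hex(hdr["sub"]))  # → S3 response 0xf7
print(is_response_to(0x08, hdr["sub"]))   # → True
```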
Full protocol documentation: [`docs/instantel_protocol_reference.md`](docs/instantel_protocol_reference.md)

---

## Requirements

```
pip install pyserial fastapi uvicorn
```

Python 3.10+. Tkinter is included with the standard Python installer on
Windows (make sure "tcl/tk and IDLE" is checked during install).

---

## Virtual COM ports (bridge capture)

The bridge needs two COM ports on the same PC — one that Blastware connects
to, and one wired to the seismograph. Use a virtual COM port pair
(**com0com** or **VSPD**) to give Blastware a port to talk to.

```
Blastware → COM4 (virtual) ↔ s3_bridge.py ↔ COM5 (physical) → MiniMate Plus
```

---

## Roadmap

- [ ] Event download — pull waveform records from the unit (SUBs `1E` → `0A` → `0C` → `5A`)
- [ ] Write commands — push config changes to the unit (compliance setup, channel config, trigger settings)
- [ ] ACH inbound server — accept call-home connections from field units
- [ ] Modem manager — push standard configs to the RV50/RV55 fleet via the Sierra Wireless API
- [ ] Full Blastware parity — complete read/write/download cycle without Blastware
### Bridge GUI hunks (file header lost in the capture)

```diff
@@ -13,6 +13,7 @@ Requires only the stdlib (Tkinter is bundled on Windows/Python).
 from __future__ import annotations
 
 import datetime
 import os
 import queue
 import subprocess
@@ -125,11 +126,22 @@ class BridgeGUI(tk.Tk):
         args = [sys.executable, BRIDGE_PATH, "--bw", bw, "--s3", s3, "--baud", baud, "--logdir", logdir]
 
         ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
 
         raw_bw = self.raw_bw_var.get().strip()
         raw_s3 = self.raw_s3_var.get().strip()
 
         # If the user left the default generic name, replace with a timestamped one
         # so each session gets its own file.
         if raw_bw:
             if os.path.basename(raw_bw) in ("raw_bw.bin", "raw_bw"):
                 raw_bw = os.path.join(os.path.dirname(raw_bw) or logdir, f"raw_bw_{ts}.bin")
                 self.raw_bw_var.set(raw_bw)
             args += ["--raw-bw", raw_bw]
         if raw_s3:
             if os.path.basename(raw_s3) in ("raw_s3.bin", "raw_s3"):
                 raw_s3 = os.path.join(os.path.dirname(raw_s3) or logdir, f"raw_s3_{ts}.bin")
                 self.raw_s3_var.set(raw_s3)
             args += ["--raw-s3", raw_s3]
 
         try:
```
### s3_bridge `main()` hunk (file header lost in the capture)

```diff
@@ -345,14 +345,25 @@ def main() -> int:
     ts = _dt.datetime.now().strftime("%Y%m%d_%H%M%S")
     log_path = os.path.join(args.logdir, f"s3_session_{ts}.log")
     bin_path = os.path.join(args.logdir, f"s3_session_{ts}.bin")
-    logger = SessionLogger(log_path, bin_path, raw_bw_path=args.raw_bw, raw_s3_path=args.raw_s3)
 
+    # If raw tap flags were passed without a path (bare --raw-bw / --raw-s3),
+    # or if the sentinel value "auto" is used, generate a timestamped name.
+    # If a specific path was provided, use it as-is (caller's responsibility).
+    raw_bw_path = args.raw_bw
+    raw_s3_path = args.raw_s3
+    if raw_bw_path in (None, "", "auto"):
+        raw_bw_path = os.path.join(args.logdir, f"raw_bw_{ts}.bin") if args.raw_bw is not None else None
+    if raw_s3_path in (None, "", "auto"):
+        raw_s3_path = os.path.join(args.logdir, f"raw_s3_{ts}.bin") if args.raw_s3 is not None else None
+
+    logger = SessionLogger(log_path, bin_path, raw_bw_path=raw_bw_path, raw_s3_path=raw_s3_path)
 
     print(f"[LOG] Writing hex log to {log_path}")
     print(f"[LOG] Writing binary log to {bin_path}")
-    if args.raw_bw:
-        print(f"[LOG] Raw tap BW->S3 -> {args.raw_bw}")
-    if args.raw_s3:
-        print(f"[LOG] Raw tap S3->BW -> {args.raw_s3}")
+    if raw_bw_path:
+        print(f"[LOG] Raw tap BW->S3 -> {raw_bw_path}")
+    if raw_s3_path:
+        print(f"[LOG] Raw tap S3->BW -> {raw_s3_path}")
 
     logger.log_info(f"s3_bridge {VERSION} start")
     logger.log_info(f"BW={args.bw} S3={args.s3} baud={args.baud}")
```
### bridges/tcp_serial_bridge.py (new file, 205 lines)
```python
"""
tcp_serial_bridge.py — Local TCP-to-serial bridge for bench testing TcpTransport.

Listens on a TCP port and, when a client connects, opens a serial port and
bridges bytes bidirectionally. This lets you test the SFM server's TCP
endpoint (?host=127.0.0.1&tcp_port=12345) against a locally-attached MiniMate
Plus without needing a field modem.

The bridge simulates an RV55 cellular modem in transparent TCP passthrough mode:
- No handshake bytes on connect
- Raw bytes forwarded in both directions
- One connection at a time (a new connection closes any existing serial session)

Usage:
    python bridges/tcp_serial_bridge.py --serial COM5 --tcp-port 12345

Then in another window:
    python -m uvicorn sfm.server:app --port 8200
    curl "http://localhost:8200/device/info?host=127.0.0.1&tcp_port=12345"

Or just hit http://localhost:8200/device/info?host=127.0.0.1&tcp_port=12345
in a browser.

Requirements:
    pip install pyserial
"""

from __future__ import annotations

import argparse
import logging
import socket
import sys
import threading
import time

try:
    import serial  # type: ignore
except ImportError:
    print("pyserial required: pip install pyserial", file=sys.stderr)
    sys.exit(1)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)-7s %(message)s",
    datefmt="%H:%M:%S",
)
log = logging.getLogger("tcp_serial_bridge")

# ── Constants ─────────────────────────────────────────────────────────────────

DEFAULT_BAUD = 38_400
DEFAULT_TCP_PORT = 12345
CHUNK = 256            # bytes per read call
SERIAL_TIMEOUT = 0.02  # serial read timeout (s) — non-blocking in practice
TCP_TIMEOUT = 0.02     # socket recv timeout (s)
BOOT_DELAY = 8.0       # seconds to wait after opening the serial port before
                       # forwarding data — unit cold-boot (beep + OS init)
                       # takes 5-10 s from first RS-232 line assertion.
                       # Set to 0 if the unit was already running before connect.


# ── Bridge session ────────────────────────────────────────────────────────────

def _pipe_tcp_to_serial(sock: socket.socket, ser: serial.Serial, stop: threading.Event) -> None:
    """Forward bytes from TCP socket → serial port."""
    sock.settimeout(TCP_TIMEOUT)
    while not stop.is_set():
        try:
            data = sock.recv(CHUNK)
            if not data:
                log.info("TCP peer closed connection")
                stop.set()
                break
            log.debug("TCP→SER %d bytes: %s", len(data), data.hex())
            ser.write(data)
        except socket.timeout:
            pass
        except OSError as exc:
            if not stop.is_set():
                log.warning("TCP read error: %s", exc)
            stop.set()
            break


def _pipe_serial_to_tcp(sock: socket.socket, ser: serial.Serial, stop: threading.Event) -> None:
    """Forward bytes from serial port → TCP socket."""
    while not stop.is_set():
        try:
            data = ser.read(CHUNK)
            if data:
                log.debug("SER→TCP %d bytes: %s", len(data), data.hex())
                try:
                    sock.sendall(data)
                except OSError as exc:
                    if not stop.is_set():
                        log.warning("TCP send error: %s", exc)
                    stop.set()
                    break
        except serial.SerialException as exc:
            if not stop.is_set():
                log.warning("Serial read error: %s", exc)
            stop.set()
            break


def _run_session(conn: socket.socket, addr: tuple, serial_port: str, baud: int, boot_delay: float) -> None:
    """Handle one TCP client connection."""
    peer = f"{addr[0]}:{addr[1]}"
    log.info("Connection from %s", peer)

    try:
        ser = serial.Serial(
            port=serial_port,
            baudrate=baud,
            bytesize=8,
            parity="N",
            stopbits=1,
            timeout=SERIAL_TIMEOUT,
        )
    except serial.SerialException as exc:
        log.error("Cannot open serial port %s: %s", serial_port, exc)
        conn.close()
        return

    log.info("Opened %s at %d baud — waiting %.1fs for unit boot", serial_port, baud, boot_delay)
    ser.reset_input_buffer()
    ser.reset_output_buffer()

    if boot_delay > 0:
        time.sleep(boot_delay)
        ser.reset_input_buffer()  # discard any boot noise

    log.info("Bridge active: TCP %s ↔ %s", peer, serial_port)

    stop = threading.Event()
    t_tcp_to_ser = threading.Thread(
        target=_pipe_tcp_to_serial, args=(conn, ser, stop), daemon=True
    )
    t_ser_to_tcp = threading.Thread(
        target=_pipe_serial_to_tcp, args=(conn, ser, stop), daemon=True
    )
    t_tcp_to_ser.start()
    t_ser_to_tcp.start()

    stop.wait()  # block until either thread sets the stop flag

    log.info("Session ended, cleaning up")
    try:
        conn.close()
    except OSError:
        pass
    try:
        ser.close()
    except OSError:
        pass

    t_tcp_to_ser.join(timeout=2.0)
    t_ser_to_tcp.join(timeout=2.0)
    log.info("Session with %s closed", peer)


# ── Server ────────────────────────────────────────────────────────────────────

def run_bridge(serial_port: str, baud: int, tcp_port: int, boot_delay: float) -> None:
    """Accept TCP connections forever and bridge each one to the serial port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", tcp_port))
    srv.listen(1)
    log.info(
        "Listening on TCP :%d — will bridge to %s at %d baud",
        tcp_port, serial_port, baud,
    )
    log.info("Send test: curl 'http://localhost:8200/device/info?host=127.0.0.1&tcp_port=%d'", tcp_port)

    try:
        while True:
            conn, addr = srv.accept()
            # Handle one session at a time (synchronous) — matches modem behaviour
            _run_session(conn, addr, serial_port, baud, boot_delay)
    except KeyboardInterrupt:
        log.info("Shutting down")
    finally:
        srv.close()


# ── Entry point ───────────────────────────────────────────────────────────────

if __name__ == "__main__":
    ap = argparse.ArgumentParser(description="TCP-to-serial bridge for bench testing TcpTransport")
    ap.add_argument("--serial", default="COM5", help="Serial port (default: COM5)")
    ap.add_argument("--baud", type=int, default=DEFAULT_BAUD, help="Baud rate (default: 38400)")
    ap.add_argument("--tcp-port", type=int, default=DEFAULT_TCP_PORT, help="TCP listen port (default: 12345)")
    ap.add_argument("--boot-delay", type=float, default=BOOT_DELAY,
                    help=f"Seconds to wait after opening serial before forwarding (default: {BOOT_DELAY:g}). "
                         "Set to 0 if unit is already powered on.")
    ap.add_argument("--debug", action="store_true", help="Show individual byte transfers")
    args = ap.parse_args()

    if args.debug:
        logging.getLogger().setLevel(logging.DEBUG)

    run_bridge(args.serial, args.baud, args.tcp_port, args.boot_delay)
```
*(One file's diff suppressed because it is too large.)*

### minimateplus/__init__.py (new file, 27 lines)
```python
"""
minimateplus — Instantel MiniMate Plus protocol library.

Provides a clean Python API for communicating with MiniMate Plus seismographs
over RS-232 serial (direct cable) or TCP (modem / ACH Auto Call Home).

Typical usage (serial):
    from minimateplus import MiniMateClient

    with MiniMateClient("COM5") as device:
        info = device.connect()
        events = device.get_events()

Typical usage (TCP / modem):
    from minimateplus import MiniMateClient
    from minimateplus.transport import TcpTransport

    with MiniMateClient(transport=TcpTransport("203.0.113.5", 12345)) as device:
        info = device.connect()
"""

from .client import MiniMateClient
from .models import DeviceInfo, Event
from .transport import SerialTransport, TcpTransport

__version__ = "0.1.0"
__all__ = ["MiniMateClient", "DeviceInfo", "Event", "SerialTransport", "TcpTransport"]
```
### minimateplus/client.py (new file, 533 lines)
"""
|
||||
client.py — MiniMateClient: the top-level public API for the library.
|
||||
|
||||
Combines transport, protocol, and model decoding into a single easy-to-use
|
||||
class. This is the only layer that the SFM server (sfm/server.py) imports
|
||||
directly.
|
||||
|
||||
Design: stateless per-call (connect → do work → disconnect).
|
||||
The client does not hold an open connection between calls. This keeps the
|
||||
first implementation simple and matches Blastware's observed behaviour.
|
||||
Persistent connections can be added later without changing the public API.
|
||||
|
||||
Example (serial):
|
||||
from minimateplus import MiniMateClient
|
||||
|
||||
with MiniMateClient("COM5") as device:
|
||||
info = device.connect() # POLL handshake + identity read
|
||||
events = device.get_events() # download all events
|
||||
|
||||
Example (TCP / modem):
|
||||
from minimateplus import MiniMateClient
|
||||
from minimateplus.transport import TcpTransport
|
||||
|
||||
transport = TcpTransport("203.0.113.5", port=12345)
|
||||
with MiniMateClient(transport=transport) as device:
|
||||
info = device.connect()
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import struct
|
||||
from typing import Optional
|
||||
|
||||
from .framing import S3Frame
|
||||
from .models import (
|
||||
DeviceInfo,
|
||||
Event,
|
||||
PeakValues,
|
||||
ProjectInfo,
|
||||
Timestamp,
|
||||
)
|
||||
from .protocol import MiniMateProtocol, ProtocolError
|
||||
from .protocol import (
|
||||
SUB_SERIAL_NUMBER,
|
||||
SUB_FULL_CONFIG,
|
||||
)
|
||||
from .transport import SerialTransport, BaseTransport
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# ── MiniMateClient ────────────────────────────────────────────────────────────
|
||||
|
||||
class MiniMateClient:
|
||||
"""
|
||||
High-level client for a single MiniMate Plus device.
|
||||
|
||||
Args:
|
||||
port: Serial port name (e.g. "COM5", "/dev/ttyUSB0").
|
||||
Not required when a pre-built transport is provided.
|
||||
baud: Baud rate (default 38400, ignored when transport is provided).
|
||||
timeout: Per-request receive timeout in seconds (default 15.0).
|
||||
transport: Pre-built transport (SerialTransport or TcpTransport).
|
||||
If None, a SerialTransport is constructed from port/baud.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
port: str = "",
|
||||
baud: int = 38_400,
|
||||
timeout: float = 15.0,
|
||||
transport: Optional[BaseTransport] = None,
|
||||
) -> None:
|
||||
self.port = port
|
||||
self.baud = baud
|
||||
self.timeout = timeout
|
||||
self._transport: Optional[BaseTransport] = transport
|
||||
self._proto: Optional[MiniMateProtocol] = None
|
||||
|
||||
# ── Connection lifecycle ──────────────────────────────────────────────────
|
||||
|
||||
def open(self) -> None:
|
||||
"""Open the transport connection."""
|
||||
if self._transport is None:
|
||||
self._transport = SerialTransport(self.port, self.baud)
|
||||
if not self._transport.is_connected:
|
||||
self._transport.connect()
|
||||
self._proto = MiniMateProtocol(self._transport, recv_timeout=self.timeout)
|
||||
|
||||
def close(self) -> None:
|
||||
"""Close the transport connection."""
|
||||
if self._transport and self._transport.is_connected:
|
||||
self._transport.disconnect()
|
||||
self._proto = None
|
||||
|
||||
@property
|
||||
def is_open(self) -> bool:
|
||||
return bool(self._transport and self._transport.is_connected)
|
||||
|
||||
# ── Context manager ───────────────────────────────────────────────────────
|
||||
|
||||
def __enter__(self) -> "MiniMateClient":
|
||||
self.open()
|
||||
return self
|
||||
|
||||
def __exit__(self, *_) -> None:
|
||||
self.close()
|
||||
|
||||
# ── Public API ────────────────────────────────────────────────────────────

    def connect(self) -> DeviceInfo:
        """
        Perform the startup handshake and read device identity.

        Opens the connection if not already open.

        Reads:
          1. POLL handshake (startup)
          2. SUB 15 — serial number
          3. SUB 01 — full config block (firmware, model strings)

        Returns:
            Populated DeviceInfo.

        Raises:
            ProtocolError: on any communication failure.
        """
        if not self.is_open:
            self.open()

        proto = self._require_proto()

        log.info("connect: POLL startup")
        proto.startup()

        log.info("connect: reading serial number (SUB 15)")
        sn_data = proto.read(SUB_SERIAL_NUMBER)
        device_info = _decode_serial_number(sn_data)

        log.info("connect: reading full config (SUB 01)")
        cfg_data = proto.read(SUB_FULL_CONFIG)
        _decode_full_config_into(cfg_data, device_info)

        log.info("connect: %s", device_info)
        return device_info

    def get_events(self, include_waveforms: bool = True) -> list[Event]:
        """
        Download all stored events from the device using the confirmed
        1E → 0A → 0C → 1F event-iterator protocol.

        Sequence (confirmed from 3-31-26 Blastware capture):
          1. SUB 1E — get first waveform key
          2. For each key until b'\\x00\\x00\\x00\\x00':
             a. SUB 0A — waveform header (first event only, to confirm full record)
             b. SUB 0C — full waveform record (peak values, project strings)
             c. SUB 1F — advance to next key (token=0xFE skips partial bins)

        Subsequent keys returned by 1F (token=0xFE) are guaranteed to be full
        records, so 0A is only called for the first event. This exactly
        matches Blastware's observed behaviour.

        Raw ADC waveform samples (SUB 5A bulk stream) are NOT downloaded
        here — they are large (several MB per event) and fetched separately.
        include_waveforms is reserved for a future call.

        Returns:
            List of Event objects, one per stored waveform record.

        Raises:
            ProtocolError: on unrecoverable communication failure.
        """
        proto = self._require_proto()

        log.info("get_events: requesting first event (SUB 1E)")
        try:
            key4, _event_data8 = proto.read_event_first()
        except ProtocolError as exc:
            raise ProtocolError(f"get_events: 1E failed: {exc}") from exc

        if key4 == b"\x00\x00\x00\x00":
            log.info("get_events: device reports no stored events")
            return []

        events: list[Event] = []
        idx = 0
        is_first = True

        while key4 != b"\x00\x00\x00\x00":
            log.info("get_events: record %d key=%s", idx, key4.hex())
            ev = Event(index=idx)

            # First event: call 0A to verify it's a full record (0x30 length).
            # Subsequent keys come from 1F(0xFE) which guarantees full records,
            # so we skip 0A for those — exactly matching Blastware behaviour.
            proceed = True
            if is_first:
                try:
                    _hdr, rec_len = proto.read_waveform_header(key4)
                    if rec_len < 0x30:
                        log.warning(
                            "get_events: first key=%s is partial (len=0x%02X) — skipping",
                            key4.hex(), rec_len,
                        )
                        proceed = False
                except ProtocolError as exc:
                    log.warning(
                        "get_events: 0A failed for key=%s: %s — skipping 0C",
                        key4.hex(), exc,
                    )
                    proceed = False
                is_first = False

            if proceed:
                # SUB 0C — full waveform record (peak values, project strings)
                try:
                    record = proto.read_waveform_record(key4)
                    _decode_waveform_record_into(record, ev)
                except ProtocolError as exc:
                    log.warning(
                        "get_events: 0C failed for key=%s: %s", key4.hex(), exc
                    )

            events.append(ev)
            idx += 1

            # SUB 1F — advance to the next full waveform record key
            try:
                key4 = proto.advance_event()
            except ProtocolError as exc:
                log.warning("get_events: 1F failed: %s — stopping iteration", exc)
                break

        log.info("get_events: downloaded %d event(s)", len(events))
        return events
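The 1E → 1F key iteration can be restated against a scripted stand-in for the protocol object. This is a sketch: `FakeProto` and its key values are made up, and only the loop shape mirrors `get_events` (the 0A/0C fetches are elided).

```python
class FakeProto:
    """Hypothetical stand-in for MiniMateProtocol's event-iterator calls."""
    def __init__(self):
        # Two stored events, then the all-zero sentinel key.
        self.keys = [b"\x00\x00\x00\x01", b"\x00\x00\x00\x02", b"\x00\x00\x00\x00"]
        self.i = 0

    def read_event_first(self):
        # SUB 1E: first waveform key plus an 8-byte data block
        return self.keys[0], b"\x00" * 8

    def advance_event(self):
        # SUB 1F: next full-record key, or the zero sentinel when exhausted
        self.i += 1
        return self.keys[self.i]


proto = FakeProto()
key, _data8 = proto.read_event_first()
seen = []
while key != b"\x00\x00\x00\x00":
    seen.append(key)          # a real client would fetch 0A/0C here
    key = proto.advance_event()

assert seen == [b"\x00\x00\x00\x01", b"\x00\x00\x00\x02"]
```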

    # ── Internal helpers ──────────────────────────────────────────────────────

    def _require_proto(self) -> MiniMateProtocol:
        if self._proto is None:
            raise RuntimeError("MiniMateClient is not connected. Call open() first.")
        return self._proto


# ── Decoder functions ─────────────────────────────────────────────────────────
#
# Pure functions: bytes → model field population.
# Kept here (not in models.py) to isolate protocol knowledge from data shapes.

def _decode_serial_number(data: bytes) -> DeviceInfo:
    """
    Decode SUB EA (SERIAL_NUMBER_RESPONSE) payload into a new DeviceInfo.

    Layout (10 bytes total per §7.2):
      bytes 0–7: serial string, null-terminated, null-padded ("BE18189\\x00")
      byte 8:    unit-specific trailing byte (purpose unknown ❓)
      byte 9:    firmware minor version (0x11 = 17) ✅

    Returns:
        New DeviceInfo with serial, firmware_minor, serial_trail_0 populated.
    """
    if len(data) < 9:
        # Short payload — gracefully degrade
        serial = data.rstrip(b"\x00").decode("ascii", errors="replace")
        return DeviceInfo(serial=serial, firmware_minor=0)

    serial = data[:8].rstrip(b"\x00").decode("ascii", errors="replace")
    trail_0 = data[8] if len(data) > 8 else None
    fw_minor = data[9] if len(data) > 9 else 0

    return DeviceInfo(
        serial=serial,
        firmware_minor=fw_minor,
        serial_trail_0=trail_0,
    )
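The §7.2 layout can be checked by hand on a synthetic payload. Illustrative sketch only: the trailing-byte value 0x07 below is made up (its meaning is unknown), while the serial string and firmware minor byte follow the documented example.

```python
# 8-byte null-padded serial + unit-specific trail byte + firmware minor
data = b"BE18189\x00" + bytes([0x07, 0x11])
assert data[:8].rstrip(b"\x00") == b"BE18189"
assert data[8] == 0x07   # trailing byte (purpose unknown; value hypothetical)
assert data[9] == 17     # firmware minor 0x11 = 17
```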


def _decode_full_config_into(data: bytes, info: DeviceInfo) -> None:
    """
    Decode SUB FE (FULL_CONFIG_RESPONSE) payload into an existing DeviceInfo.

    The FE response arrives as a composite S3 outer frame whose data section
    contains inner DLE-framed sub-frames. Because of this nesting the §7.3
    fixed offsets (0x34, 0x3C, 0x44, 0x6D) are unreliable — they assume a
    clean non-nested payload starting at byte 0.

    Instead we search the whole byte array for known ASCII patterns. The
    strings are long enough to be unique in any reasonable payload.

    Modifies info in-place.
    """
    def _extract(needle: bytes, max_len: int = 32) -> Optional[str]:
        """Return the null-terminated ASCII string that starts with *needle*."""
        pos = data.find(needle)
        if pos < 0:
            return None
        end = pos
        while end < len(data) and data[end] != 0 and (end - pos) < max_len:
            end += 1
        s = data[pos:end].decode("ascii", errors="replace").strip()
        return s or None

    # ── Manufacturer and model are straightforward literal matches ────────────
    info.manufacturer = _extract(b"Instantel")
    info.model = _extract(b"MiniMate Plus")

    # ── Firmware version: "S3xx.xx" — scan for the 'S3' prefix ───────────────
    for i in range(len(data) - 5):
        if data[i] == ord('S') and data[i + 1] == ord('3') and chr(data[i + 2]).isdigit():
            end = i
            while end < len(data) and data[end] not in (0, 0x20) and (end - i) < 12:
                end += 1
            candidate = data[i:end].decode("ascii", errors="replace").strip()
            if "." in candidate and len(candidate) >= 5:
                info.firmware_version = candidate
                break

    # ── DSP version: numeric "xx.xx" — search for known prefixes ─────────────
    for prefix in (b"10.", b"11.", b"12.", b"9.", b"8."):
        pos = data.find(prefix)
        if pos < 0:
            continue
        end = pos
        while end < len(data) and data[end] not in (0, 0x20) and (end - pos) < 8:
            end += 1
        candidate = data[pos:end].decode("ascii", errors="replace").strip()
        # Accept only strings that look like "digits.digits"
        if "." in candidate and all(c in "0123456789." for c in candidate):
            info.dsp_version = candidate
            break


def _decode_event_count(data: bytes) -> int:
    """
    Extract stored event count from SUB F7 (EVENT_INDEX_RESPONSE) payload.

    Layout per §7.4 (offsets from data section start):
      +00: 00 58 09    — total index size or record count ❓
      +03: 00 00 00 01 — possibly stored event count = 1 ❓

    We use bytes +03..+06 interpreted as uint32 BE as the event count.
    This is inferred (🔶) — the exact meaning of the first 3 bytes is unclear.
    """
    if len(data) < 7:
        log.warning("event index payload too short (%d bytes), assuming 0 events", len(data))
        return 0

    # Try the uint32 at +3 first
    count = struct.unpack_from(">I", data, 3)[0]

    # Sanity check: MiniMate Plus manual says max ~1000 events
    if count > 1000:
        log.warning(
            "event count %d looks unreasonably large — clamping to 0", count
        )
        return 0

    return count
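The §7.4 read restated standalone, for illustration. The synthetic payload mirrors the example bytes in the docstring above (first three header bytes still undecoded).

```python
import struct

# [00 58 09] header (meaning unclear) + uint32 BE event count at offset +3
payload = bytes([0x00, 0x58, 0x09]) + struct.pack(">I", 1)
count = struct.unpack_from(">I", payload, 3)[0]
assert count == 1
```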


def _decode_event_header_into(data: bytes, event: Event) -> None:
    """
    Decode SUB E1 (EVENT_HEADER_RESPONSE) raw data section into an Event.

    The waveform key is at data[11:15] (extracted separately in
    MiniMateProtocol.read_event_first). The remaining 4 bytes at
    data[15:19] are not yet decoded (❓ — possibly sample rate or flags).

    Date information (year/month/day) lives in the waveform record (SUB 0C),
    not in the 1E response. This function is a placeholder for any future
    metadata we decode from the 8-byte 1E data block.

    Modifies event in-place.
    """
    # Nothing confirmed yet from the 8-byte data block beyond the key at [0:4].
    # Leave event.timestamp as None — it will be populated from the 0C record.
    pass


def _decode_waveform_record_into(data: bytes, event: Event) -> None:
    """
    Decode a 210-byte SUB F3 (FULL_WAVEFORM_RECORD) record into an Event.

    The *data* argument is the raw record bytes returned by
    MiniMateProtocol.read_waveform_record() — i.e. data_rsp.data[11:11+0xD2].

    Extracts:
      - record_type:  "Histogram" or "Waveform" (string search) 🔶
      - peak_values:  label-based float32 lookup (confirmed ✅)
      - project_info: "Project:", "Client:", etc. string search ✅

    Timestamp in the waveform record:
      7-byte format: [0x09][year:2 BE][0x00][hour][minute][second]
      Month and day come from a separate source (not yet fully mapped ❓).
      For now we leave event.timestamp as None.

    Modifies event in-place.
    """
    # ── Record type ───────────────────────────────────────────────────────────
    try:
        event.record_type = _extract_record_type(data)
    except Exception as exc:
        log.warning("waveform record type decode failed: %s", exc)

    # ── Peak values ───────────────────────────────────────────────────────────
    try:
        peak_values = _extract_peak_floats(data)
        if peak_values:
            event.peak_values = peak_values
    except Exception as exc:
        log.warning("waveform record peak decode failed: %s", exc)

    # ── Project strings ───────────────────────────────────────────────────────
    try:
        project_info = _extract_project_strings(data)
        if project_info:
            event.project_info = project_info
    except Exception as exc:
        log.warning("waveform record project strings decode failed: %s", exc)


def _extract_record_type(data: bytes) -> Optional[str]:
    """
    Search the waveform record for a record-type indicator string.

    Confirmed types from 3-31-26 capture: "Histogram", "Waveform".
    Returns the first match, or None if neither is found.
    """
    for rtype in (b"Histogram", b"Waveform"):
        if data.find(rtype) >= 0:
            return rtype.decode()
    return None


def _extract_peak_floats(data: bytes) -> Optional[PeakValues]:
    """
    Locate per-channel peak particle velocity values in the 210-byte
    waveform record by searching for the embedded channel label strings
    ("Tran", "Vert", "Long", "MicL") and reading the IEEE 754 BE float
    at label_offset + 6.

    The floats are NOT 4-byte aligned in the record (confirmed from
    3-31-26 capture), so the previous step-4 scan missed Tran, Long, and
    MicL entirely. Label-based lookup is the correct approach.

    Channel labels are separated by inner-frame bytes (0x10 0x03 = DLE ETX),
    which the S3FrameParser preserves as literal data. Searching for the
    4-byte ASCII label strings is robust to this structure.

    Returns PeakValues if at least one channel label is found, else None.
    """
    # (label_bytes, field_name)
    channels = (
        (b"Tran", "tran"),
        (b"Vert", "vert"),
        (b"Long", "long_"),
        (b"MicL", "micl"),
    )
    vals: dict[str, float] = {}

    for label_bytes, field in channels:
        pos = data.find(label_bytes)
        if pos < 0:
            continue
        float_off = pos + 6
        if float_off + 4 > len(data):
            log.debug("peak float: label %s at %d but float runs past end", label_bytes, pos)
            continue
        try:
            val = struct.unpack_from(">f", data, float_off)[0]
        except struct.error:
            continue
        log.debug("peak float: %s at label+6 (%d) = %.6f", label_bytes.decode(), float_off, val)
        vals[field] = val

    if not vals:
        return None

    return PeakValues(
        tran=vals.get("tran"),
        vert=vals.get("vert"),
        long=vals.get("long_"),
        micl=vals.get("micl"),
    )
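The label+6 rule can be exercised on a synthetic record. Sketch only: the bytes below are made up, but follow the documented shape, including a 0x10 0x03 (DLE ETX) inner-frame separator between the label and the float.

```python
import struct

# [padding][4-byte label][DLE ETX separator][big-endian float32]
rec = b"\x00\x00Tran\x10\x03" + struct.pack(">f", 0.625) + b"\x00"
pos = rec.find(b"Tran")                          # label offset = 2
val = struct.unpack_from(">f", rec, pos + 6)[0]  # float at label_offset + 6
assert abs(val - 0.625) < 1e-6
```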


def _extract_project_strings(data: bytes) -> Optional[ProjectInfo]:
    """
    Search the waveform record payload for known ASCII label strings
    ("Project:", "Client:", "User Name:", "Seis Loc:", "Extended Notes")
    and extract the associated value strings that follow them.

    Layout (per §7.5): each entry is [label ~16 bytes][value ~32 bytes],
    null-padded. We find the label, then read the next non-null chars.
    """
    def _find_string_after(needle: bytes, max_value_len: int = 64) -> Optional[str]:
        pos = data.find(needle)
        if pos < 0:
            return None
        # Skip the label (including null padding) until we find a non-null value.
        # The value starts at pos+len(needle), but may have a gap of null bytes.
        value_start = pos + len(needle)
        # Skip nulls
        while value_start < len(data) and data[value_start] == 0:
            value_start += 1
        if value_start >= len(data):
            return None
        # Read until null terminator or max_value_len
        end = value_start
        while end < len(data) and data[end] != 0 and (end - value_start) < max_value_len:
            end += 1
        value = data[value_start:end].decode("ascii", errors="replace").strip()
        return value or None

    project = _find_string_after(b"Project:")
    client = _find_string_after(b"Client:")
    operator = _find_string_after(b"User Name:")
    location = _find_string_after(b"Seis Loc:")
    notes = _find_string_after(b"Extended Notes")

    if not any([project, client, operator, location, notes]):
        return None

    return ProjectInfo(
        project=project,
        client=client,
        operator=operator,
        sensor_location=location,
        notes=notes,
    )
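The label-then-value scan restated standalone (sketch; the payload bytes and the "Highway 7 Blast" / "Acme" values are invented for illustration):

```python
# Find the label, skip the null padding after it, read up to the next NUL.
payload = b"Project:\x00\x00Highway 7 Blast\x00\x00Client:\x00Acme\x00"
pos = payload.find(b"Project:") + len(b"Project:")
while payload[pos] == 0:          # skip null gap between label and value
    pos += 1
end = payload.index(0, pos)       # value is null-terminated
value = payload[pos:end]
assert value == b"Highway 7 Blast"
```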
333
minimateplus/framing.py
Normal file
@@ -0,0 +1,333 @@
"""
framing.py — DLE frame codec for the Instantel MiniMate Plus RS-232 protocol.

Wire format:
  BW→S3 (our requests):   [ACK=0x41] [STX=0x02] [stuffed payload+chk] [ETX=0x03]
  S3→BW (device replies): [DLE=0x10] [STX=0x02] [stuffed payload+chk] [bare ETX=0x03]

The ACK 0x41 byte often precedes S3 frames too — it is silently discarded
by the streaming parser. Note that the terminating ETX is bare (not
DLE-prefixed); see S3FrameParser below, which relies on this.

De-stuffed payload layout:
  BW→S3 request frame:
    [0] CMD     0x10 (BW request marker)
    [1] flags   0x00
    [2] SUB     command sub-byte
    [3] 0x00    always zero in captured frames
    [4] 0x00    always zero in captured frames
    [5] OFFSET  two-step offset: 0x00 = length-probe, DATA_LEN = data-request
    [6-15]      zero padding (total de-stuffed payload = 16 bytes)

  S3→BW response frame:
    [0]  CMD      0x00 (S3 response marker)
    [1]  flags    0x10
    [2]  SUB      response sub-byte (= 0xFF - request SUB)
    [3]  PAGE_HI  high byte of page address (always 0x00 in observed frames)
    [4]  PAGE_LO  low byte (always 0x00 in observed frames)
    [5+] data     payload data section (composite inner frames for large responses)

DLE stuffing rule: any 0x10 byte in the payload is doubled on the wire
(0x10 → 0x10 0x10). This applies to the checksum byte too.

Confirmed from live captures (s3_parser.py validation + raw_bw.bin / raw_s3.bin).
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import Optional

# ── Protocol byte constants ───────────────────────────────────────────────────

DLE = 0x10  # Data Link Escape
STX = 0x02  # Start of text
ETX = 0x03  # End of text
ACK = 0x41  # Acknowledgement / frame-start marker (BW side)

BW_CMD = 0x10    # CMD byte value in BW→S3 frames
S3_CMD = 0x00    # CMD byte value in S3→BW frames
S3_FLAGS = 0x10  # flags byte value in S3→BW frames
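The request/response SUB relationship noted in the module docstring (response SUB = 0xFF minus request SUB) can be spot-checked against the pairs that appear in this codebase (POLL 5B→A4, full config 01→FE, serial 15→EA, event header 1E→E1):

```python
for req, rsp in ((0x5B, 0xA4), (0x01, 0xFE), (0x15, 0xEA), (0x1E, 0xE1)):
    assert rsp == 0xFF - req   # response SUB is the one's complement
```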

# BW read-command payload size: 5 header bytes + 11 padding bytes = 16 total.
# Confirmed from captured raw_bw.bin: all read-command frames carry exactly 16
# de-stuffed bytes (excluding the appended checksum).
_BW_PAYLOAD_SIZE = 16


# ── DLE stuffing / de-stuffing ────────────────────────────────────────────────

def dle_stuff(data: bytes) -> bytes:
    """Escape literal 0x10 bytes: 0x10 → 0x10 0x10."""
    out = bytearray()
    for b in data:
        if b == DLE:
            out.append(DLE)
        out.append(b)
    return bytes(out)


def dle_unstuff(data: bytes) -> bytes:
    """Remove DLE stuffing: 0x10 0x10 → 0x10."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == DLE and i + 1 < len(data) and data[i + 1] == DLE:
            out.append(DLE)
            i += 2
        else:
            out.append(b)
            i += 1
    return bytes(out)


# ── Checksum ──────────────────────────────────────────────────────────────────

def checksum(payload: bytes) -> int:
    """SUM8: sum of all de-stuffed payload bytes, mod 256."""
    return sum(payload) & 0xFF
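A quick self-check of the stuffing rule and SUM8 checksum. This is a standalone restatement of the codec functions above, for illustration (not the module's own definitions):

```python
def stuff(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b == 0x10:          # literal DLE is doubled on the wire
            out.append(0x10)
        out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0x10 and i + 1 < len(data) and data[i + 1] == 0x10:
            out.append(0x10)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

payload = bytes([0x00, 0x10, 0x02, 0x10])
wire = stuff(payload)
assert wire == bytes([0x00, 0x10, 0x10, 0x02, 0x10, 0x10])  # both DLEs doubled
assert unstuff(wire) == payload                              # round-trips
assert sum(payload) & 0xFF == 0x22                           # SUM8 of de-stuffed bytes
```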


# ── BW→S3 frame builder ───────────────────────────────────────────────────────

def build_bw_frame(sub: int, offset: int = 0, params: bytes = bytes(10)) -> bytes:
    """
    Build a BW→S3 read-command frame.

    The payload is always 16 de-stuffed bytes:
      [BW_CMD, 0x00, sub, 0x00, 0x00, offset] + params(10 bytes)

    Confirmed from BW capture analysis: payload[3] and payload[4] are always
    0x00 across all observed read commands. The two-step offset lives at
    payload[5]: 0x00 for the length-probe step, DATA_LEN for the data-fetch step.

    The 10 params bytes (payload[6..15]) are zero for standard reads. For
    keyed reads (SUBs 0A, 0C) the 4-byte waveform key lives at params[4..7]
    (= payload[10..13]). For token-based reads (SUBs 1E, 1F) a single token
    byte lives at params[6] (= payload[12]). Use waveform_key_params() and
    token_params() helpers to build these safely.

    Wire output: [ACK] [STX] dle_stuff(payload + checksum) [ETX]

    Args:
        sub: SUB command byte (e.g. 0x01 = FULL_CONFIG_READ)
        offset: Value placed at payload[5].
            Pass 0 for the probe step; pass DATA_LENGTHS[sub] for the data step.
        params: 10 bytes placed at payload[6..15]. Default: all zeros.

    Returns:
        Complete frame bytes ready to write to the serial port / socket.
    """
    if len(params) != 10:
        raise ValueError(f"params must be exactly 10 bytes, got {len(params)}")
    payload = bytes([BW_CMD, 0x00, sub, 0x00, 0x00, offset]) + params
    chk = checksum(payload)
    wire = bytes([ACK, STX]) + dle_stuff(payload + bytes([chk])) + bytes([ETX])
    return wire
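A worked example of the frame layout, computed by hand for the POLL length-probe (SUB 0x5B, offset 0). This is an illustrative sketch that re-derives the wire bytes from the rules documented above, independently of the module's helpers:

```python
# 16-byte payload: CMD 0x10, flags 0x00, SUB 0x5B, two zeros, offset 0, 10 zero params
payload = bytes([0x10, 0x00, 0x5B, 0x00, 0x00, 0x00]) + bytes(10)
chk = sum(payload) & 0xFF          # SUM8 checksum: 0x10 + 0x5B = 0x6B
body = payload + bytes([chk])

stuffed = bytearray()
for b in body:
    if b == 0x10:                  # only the CMD byte needs stuffing here
        stuffed.append(0x10)
    stuffed.append(b)

wire = bytes([0x41, 0x02]) + bytes(stuffed) + bytes([0x03])  # ACK STX ... ETX
assert chk == 0x6B
assert len(wire) == 21             # 2 framing + 17 body + 1 doubled DLE + ETX
```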


def waveform_key_params(key4: bytes) -> bytes:
    """
    Build the 10-byte params block that carries a 4-byte waveform key.

    Used for SUBs 0A (WAVEFORM_HEADER) and 0C (WAVEFORM_RECORD).
    The key goes at params[4..7], which maps to payload[10..13].

    Confirmed from 3-31-26 capture: 0A and 0C request frames carry the
    4-byte record address at payload[10..13]. Probe and data-fetch steps
    carry the same key in both frames.

    Args:
        key4: exactly 4 bytes — the opaque waveform record address returned
            by the EVENT_HEADER (1E) or EVENT_ADVANCE (1F) response.

    Returns:
        10-byte params block with key embedded at positions [4..7].
    """
    if len(key4) != 4:
        raise ValueError(f"waveform key must be 4 bytes, got {len(key4)}")
    p = bytearray(10)
    p[4:8] = key4
    return bytes(p)


def token_params(token: int = 0) -> bytes:
    """
    Build the 10-byte params block that carries a single token byte.

    Used for SUBs 1E (EVENT_HEADER) and 1F (EVENT_ADVANCE).
    The token goes at params[6], which maps to payload[12].

    Confirmed from 3-31-26 capture:
      - token=0x00: first-event read / browse mode (no download marking)
      - token=0xFE: download mode (causes 1F to skip partial bins and
        advance to the next full record)

    Args:
        token: single byte to place at params[6] / payload[12].

    Returns:
        10-byte params block with token at position [6].
    """
    p = bytearray(10)
    p[6] = token
    return bytes(p)


# ── Pre-built POLL frames ─────────────────────────────────────────────────────
#
# POLL (SUB 0x5B) uses the same two-step pattern as all other reads — the
# hardcoded length 0x30 lives at payload[5], exactly as in build_bw_frame().

POLL_PROBE = build_bw_frame(0x5B, 0x00)  # length-probe POLL (offset = 0)
POLL_DATA = build_bw_frame(0x5B, 0x30)   # data-request POLL (offset = 0x30)


# ── S3 response dataclass ─────────────────────────────────────────────────────

@dataclass
class S3Frame:
    """A fully parsed and de-stuffed S3→BW response frame."""
    sub: int              # response SUB byte (e.g. 0xA4 = POLL_RESPONSE)
    page_hi: int          # PAGE_HI from header (= data length on step-2 length response)
    page_lo: int          # PAGE_LO from header
    data: bytes           # payload data section (payload[5:], checksum already stripped)
    checksum_valid: bool

    @property
    def page_key(self) -> int:
        """Combined 16-bit page address / length: (page_hi << 8) | page_lo."""
        return (self.page_hi << 8) | self.page_lo


# ── Streaming S3 frame parser ─────────────────────────────────────────────────

class S3FrameParser:
    """
    Incremental byte-stream parser for S3→BW response frames.

    Feed incoming bytes with feed(). Complete, valid frames are returned
    immediately and also accumulated in self.frames.

    State machine:
      IDLE         — scanning for DLE (0x10)
      SEEN_DLE     — saw DLE, waiting for STX (0x02) to start a frame
      IN_FRAME     — collecting de-stuffed payload bytes; bare ETX ends frame
      IN_FRAME_DLE — inside frame, saw DLE; DLE continues stuffing;
                     DLE+ETX is treated as literal data (NOT a frame end),
                     which lets inner-frame terminators pass through intact

    Wire format confirmed from captures:
      [DLE=0x10] [STX=0x02] [stuffed payload+chk] [bare ETX=0x03]
    The ETX is NOT preceded by a DLE on the wire. DLE+ETX sequences that
    appear inside the payload are inner-frame terminators and must be
    treated as literal data.

    ACK (0x41) bytes and arbitrary non-DLE bytes in IDLE state are silently
    discarded (covers device boot string "Operating System" and keepalive ACKs).
    """

    _IDLE = 0
    _SEEN_DLE = 1
    _IN_FRAME = 2
    _IN_FRAME_DLE = 3

    def __init__(self) -> None:
        self._state = self._IDLE
        self._body = bytearray()  # accumulates de-stuffed frame bytes
        self.frames: list[S3Frame] = []

    def reset(self) -> None:
        self._state = self._IDLE
        self._body.clear()

    def feed(self, data: bytes) -> list[S3Frame]:
        """
        Process a chunk of incoming bytes.

        Returns a list of S3Frame objects completed during this call.
        All completed frames are also appended to self.frames.
        """
        completed: list[S3Frame] = []
        for b in data:
            frame = self._step(b)
            if frame is not None:
                completed.append(frame)
                self.frames.append(frame)
        return completed

    def _step(self, b: int) -> Optional[S3Frame]:
        """Process one byte. Returns a completed S3Frame or None."""

        if self._state == self._IDLE:
            if b == DLE:
                self._state = self._SEEN_DLE
            # ACK, boot strings, garbage — silently ignored

        elif self._state == self._SEEN_DLE:
            if b == STX:
                self._body.clear()
                self._state = self._IN_FRAME
            else:
                # Stray DLE not followed by STX — back to idle
                self._state = self._IDLE

        elif self._state == self._IN_FRAME:
            if b == DLE:
                self._state = self._IN_FRAME_DLE
            elif b == ETX:
                # Bare ETX = real frame terminator (confirmed from captures)
                frame = self._finalise()
                self._state = self._IDLE
                return frame
            else:
                self._body.append(b)

        elif self._state == self._IN_FRAME_DLE:
            if b == DLE:
                # DLE DLE → literal 0x10 in payload
                self._body.append(DLE)
                self._state = self._IN_FRAME
            elif b == ETX:
                # DLE+ETX inside a frame is an inner-frame terminator, NOT
                # the outer frame end. Treat as literal data and continue.
                self._body.append(DLE)
                self._body.append(ETX)
                self._state = self._IN_FRAME
            else:
                # Unexpected DLE + byte — treat both as literal data and continue
                self._body.append(DLE)
                self._body.append(b)
                self._state = self._IN_FRAME

        return None

    def _finalise(self) -> Optional[S3Frame]:
        """
        Called when a bare ETX terminator is seen. Validates the checksum and
        builds an S3Frame. Returns None if the frame is too short or
        structurally invalid.
        """
        body = bytes(self._body)

        # Minimum valid frame: 5-byte header + at least 1 checksum byte = 6
        if len(body) < 6:
            return None

        raw_payload = body[:-1]  # everything except the trailing checksum byte
        chk_received = body[-1]
        chk_computed = checksum(raw_payload)

        if len(raw_payload) < 5:
            return None

        # Validate CMD byte — we only accept S3→BW response frames here
        if raw_payload[0] != S3_CMD:
            return None

        return S3Frame(
            sub=raw_payload[2],
            page_hi=raw_payload[3],
            page_lo=raw_payload[4],
            data=raw_payload[5:],
            checksum_valid=(chk_received == chk_computed),
        )
215
minimateplus/models.py
Normal file
@@ -0,0 +1,215 @@
"""
models.py — Plain-Python data models for the MiniMate Plus protocol library.

All models are intentionally simple dataclasses with no protocol logic.
They represent *decoded* device data — the client layer translates raw frame
bytes into these objects, and the SFM API layer serialises them to JSON.

Notes on certainty:
  Fields marked ✅ are confirmed from captured data.
  Fields marked 🔶 are strongly inferred but not formally proven.
  Fields marked ❓ are present in the captured payload but not yet decoded.
  See docs/instantel_protocol_reference.md for full derivation details.
"""

from __future__ import annotations

import struct
from dataclasses import dataclass, field
from typing import Optional


# ── Timestamp ─────────────────────────────────────────────────────────────────

@dataclass
class Timestamp:
    """
    6-byte event timestamp decoded from the MiniMate Plus wire format.

    Wire layout: [flag:1] [year:2 BE] [unknown:1] [month:1] [day:1]

    The year 1995 is the device's factory-default RTC date — it appears
    whenever the battery has been disconnected. Treat 1995 as "clock not set".
    """
    raw: bytes         # raw 6-byte sequence for round-tripping
    flag: int          # byte 0 — validity/type flag (usually 0x01) 🔶
    year: int          # bytes 1–2 big-endian uint16 ✅
    unknown_byte: int  # byte 3 — likely hours/minutes ❓
    month: int         # byte 4 ✅
    day: int           # byte 5 ✅

    @classmethod
    def from_bytes(cls, data: bytes) -> "Timestamp":
        """
        Decode a 6-byte timestamp sequence.

        Args:
            data: exactly 6 bytes from the device payload.

        Returns:
            Decoded Timestamp.

        Raises:
            ValueError: if data is not exactly 6 bytes.
        """
        if len(data) != 6:
            raise ValueError(f"Timestamp requires exactly 6 bytes, got {len(data)}")
        flag = data[0]
        year = struct.unpack_from(">H", data, 1)[0]
        unknown_byte = data[3]
        month = data[4]
        day = data[5]
        return cls(
            raw=bytes(data),
            flag=flag,
            year=year,
            unknown_byte=unknown_byte,
            month=month,
            day=day,
        )

    @property
    def clock_set(self) -> bool:
        """False when year == 1995 (factory default / battery-lost state)."""
        return self.year != 1995

    def __str__(self) -> str:
        if not self.clock_set:
            return f"CLOCK_NOT_SET ({self.year}-{self.month:02d}-{self.day:02d})"
        return f"{self.year}-{self.month:02d}-{self.day:02d}"
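The documented 6-byte layout decoded by hand (sketch; the example bytes are hypothetical, chosen to represent 2019-06-21 with the flag byte 0x01 noted above):

```python
import struct

# [flag][year:2 BE][unknown][month][day]
raw = bytes([0x01, 0x07, 0xE3, 0x00, 0x06, 0x15])
year = struct.unpack_from(">H", raw, 1)[0]   # 0x07E3 = 2019
assert (year, raw[4], raw[5]) == (2019, 6, 21)
assert year != 1995                          # clock considered set
```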


# ── Device identity ───────────────────────────────────────────────────────────

@dataclass
class DeviceInfo:
    """
    Combined device identity information gathered during the startup sequence.

    Populated from three response SUBs:
      - SUB EA (SERIAL_NUMBER_RESPONSE): serial, firmware_minor
      - SUB FE (FULL_CONFIG_RESPONSE):   serial (repeat), firmware_version,
                                         dsp_version, manufacturer, model
      - SUB A4 (POLL_RESPONSE):          manufacturer (repeat), model (repeat)

    All string fields are stripped of null padding before storage.
    """

    # ── From SUB EA (SERIAL_NUMBER_RESPONSE) ─────────────────────────────────
    serial: str                           # e.g. "BE18189" ✅
    firmware_minor: int                   # 0x11 = 17 for S337.17 ✅
    serial_trail_0: Optional[int] = None  # unit-specific byte — purpose unknown ❓

    # ── From SUB FE (FULL_CONFIG_RESPONSE) ────────────────────────────────────
    firmware_version: Optional[str] = None  # e.g. "S337.17" ✅
    dsp_version: Optional[str] = None       # e.g. "10.72" ✅
    manufacturer: Optional[str] = None      # e.g. "Instantel" ✅
    model: Optional[str] = None             # e.g. "MiniMate Plus" ✅

    def __str__(self) -> str:
        fw = self.firmware_version or f"?.{self.firmware_minor}"
        mdl = self.model or "MiniMate Plus"
        return f"{mdl} S/N:{self.serial} FW:{fw}"
|
||||
|
||||
|
||||
# ── Channel threshold / scaling ───────────────────────────────────────────────
|
||||
|
||||
@dataclass
|
||||
class ChannelConfig:
|
||||
"""
|
||||
Per-channel threshold and scaling values from SUB E5 / SUB 71.
|
||||
|
||||
Floats are stored in the device in imperial units (in/s for geo channels,
|
||||
psi for MicL). Unit strings embedded in the payload confirm this.
|
||||
|
||||
Certainty: ✅ CONFIRMED for trigger_level, alarm_level, unit strings.
|
||||
"""
|
||||
label: str # e.g. "Tran", "Vert", "Long", "MicL" ✅
|
||||
trigger_level: float # in/s (geo) or psi (MicL) ✅
|
||||
alarm_level: float # in/s (geo) or psi (MicL) ✅
|
||||
max_range: float # full-scale calibration constant (e.g. 6.206) 🔶
|
||||
unit_label: str # e.g. "in./s" or "psi" ✅
|
||||
|
||||
|
||||
# ── Peak values for one event ─────────────────────────────────────────────────


@dataclass
class PeakValues:
    """
    Per-channel peak particle velocity / pressure for a single event.

    Extracted from the Full Waveform Record (SUB F3), stored as IEEE 754
    big-endian floats in the device's native units (in/s / psi).
    """

    tran: Optional[float] = None  # Transverse PPV (in/s) ✅
    vert: Optional[float] = None  # Vertical PPV (in/s) ✅
    long: Optional[float] = None  # Longitudinal PPV (in/s) ✅
    micl: Optional[float] = None  # Air overpressure (psi) 🔶 (units uncertain)


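# The PPV floats above are packed as 4-byte IEEE 754 big-endian values inside
# the SUB F3 record. Minimal decode sketch; the offset argument is purely
# illustrative, since the real field offsets depend on the record layout:
import struct


def _decode_be_float(payload: bytes, offset: int) -> float:
    """Read one big-endian float32 out of *payload* at *offset*."""
    return struct.unpack_from(">f", payload, offset)[0]

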
# ── Project / operator metadata ───────────────────────────────────────────────


@dataclass
class ProjectInfo:
    """
    Operator-supplied project and location strings from the Full Waveform
    Record (SUB F3) and compliance config block (SUB E5 / SUB 71).

    All fields are optional — they may be blank if the operator did not fill
    them in through Blastware.
    """

    setup_name: Optional[str] = None       # "Standard Recording Setup"
    project: Optional[str] = None          # project description
    client: Optional[str] = None           # client name ✅ confirmed offset
    operator: Optional[str] = None         # operator / user name
    sensor_location: Optional[str] = None  # sensor location string
    notes: Optional[str] = None            # extended notes


# ── Event ─────────────────────────────────────────────────────────────────────


@dataclass
class Event:
    """
    A single seismic event record downloaded from the device.

    Populated progressively across several request/response pairs:
      1. SUB 1E (EVENT_HEADER) → index, timestamp, sample_rate
      2. SUB 0C (FULL_WAVEFORM_RECORD) → peak_values, project_info, record_type
      3. SUB 5A (BULK_WAVEFORM_STREAM) → raw_samples (downloaded on demand)

    Fields not yet retrieved are None.
    """

    # ── Identity ──────────────────────────────────────────────────────────────
    index: int  # 0-based event number on device

    # ── From EVENT_HEADER (SUB 1E) ────────────────────────────────────────────
    timestamp: Optional[Timestamp] = None  # 6-byte timestamp ✅
    sample_rate: Optional[int] = None      # samples/sec (e.g. 1024) 🔶

    # ── From FULL_WAVEFORM_RECORD (SUB F3) ───────────────────────────────────
    peak_values: Optional[PeakValues] = None
    project_info: Optional[ProjectInfo] = None
    record_type: Optional[str] = None  # e.g. "Histogram", "Waveform" 🔶

    # ── From BULK_WAVEFORM_STREAM (SUB 5A) ───────────────────────────────────
    # Raw ADC samples keyed by channel label. Not fetched unless explicitly
    # requested (large data transfer — up to several MB per event).
    raw_samples: Optional[dict] = None  # {"Tran": [...], "Vert": [...], ...}

    def __str__(self) -> str:
        ts = str(self.timestamp) if self.timestamp else "no timestamp"
        ppv = ""
        if self.peak_values:
            pv = self.peak_values
            parts = []
            if pv.tran is not None:
                parts.append(f"T={pv.tran:.4f}")
            if pv.vert is not None:
                parts.append(f"V={pv.vert:.4f}")
            if pv.long is not None:
                parts.append(f"L={pv.long:.4f}")
            if pv.micl is not None:
                parts.append(f"M={pv.micl:.6f}")
            # Geo channels are in/s but MicL is psi, so no single unit suffix.
            ppv = " [" + ", ".join(parts) + "]"
        return f"Event#{self.index} {ts}{ppv}"

485 minimateplus/protocol.py Normal file
@@ -0,0 +1,485 @@
"""
|
||||
protocol.py — High-level MiniMate Plus request/response protocol.
|
||||
|
||||
Implements the request/response patterns documented in
|
||||
docs/instantel_protocol_reference.md on top of:
|
||||
- minimateplus.framing — DLE codec, frame builder, S3 streaming parser
|
||||
- minimateplus.transport — byte I/O (SerialTransport / future TcpTransport)
|
||||
|
||||
This module knows nothing about pyserial or TCP — it only calls
|
||||
transport.write() and transport.read_until_idle().
|
||||
|
||||
Key patterns implemented:
|
||||
- POLL startup handshake (two-step, special payload[5] format)
|
||||
- Generic two-step paged read (probe → get length → fetch data)
|
||||
- Response timeout + checksum validation
|
||||
- Boot-string drain (device sends "Operating System" ASCII before framing)
|
||||
|
||||
All public methods raise ProtocolError on timeout, bad checksum, or
|
||||
unexpected response SUB.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import time
|
||||
from typing import Optional
|
||||
|
||||
from .framing import (
|
||||
S3Frame,
|
||||
S3FrameParser,
|
||||
build_bw_frame,
|
||||
waveform_key_params,
|
||||
token_params,
|
||||
POLL_PROBE,
|
||||
POLL_DATA,
|
||||
)
|
||||
from .transport import BaseTransport
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# ── Constants ─────────────────────────────────────────────────────────────────
|
||||
|
||||
# Response SUB = 0xFF - Request SUB (confirmed pattern, no known exceptions
|
||||
# among read commands; one write-path exception documented for SUB 1C→6E).
|
||||
def _expected_rsp_sub(req_sub: int) -> int:
|
||||
return (0xFF - req_sub) & 0xFF
|
||||
|
||||
|
||||
# SUB byte constants (request side) — see protocol reference §5.1
|
||||
SUB_POLL = 0x5B
|
||||
SUB_SERIAL_NUMBER = 0x15
|
||||
SUB_FULL_CONFIG = 0x01
|
||||
SUB_EVENT_INDEX = 0x08
|
||||
SUB_CHANNEL_CONFIG = 0x06
|
||||
SUB_TRIGGER_CONFIG = 0x1C
|
||||
SUB_EVENT_HEADER = 0x1E
|
||||
SUB_EVENT_ADVANCE = 0x1F
|
||||
SUB_WAVEFORM_HEADER = 0x0A
|
||||
SUB_WAVEFORM_RECORD = 0x0C
|
||||
SUB_BULK_WAVEFORM = 0x5A
|
||||
SUB_COMPLIANCE = 0x1A
|
||||
SUB_UNKNOWN_2E = 0x2E
|
||||
|
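
# Worked example of the complement rule: applying _expected_rsp_sub to the
# request SUBs above yields the response SUBs named in models.py
# (SUB A4 = POLL_RESPONSE, SUB EA = SERIAL_NUMBER_RESPONSE,
# SUB FE = FULL_CONFIG_RESPONSE). Written out with literals so the check
# stands alone:
assert ((0xFF - 0x5B) & 0xFF) == 0xA4  # POLL          -> POLL_RESPONSE
assert ((0xFF - 0x15) & 0xFF) == 0xEA  # SERIAL_NUMBER -> SERIAL_NUMBER_RESPONSE
assert ((0xFF - 0x01) & 0xFF) == 0xFE  # FULL_CONFIG   -> FULL_CONFIG_RESPONSE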

# Hardcoded data lengths for the two-step read protocol.
#
# The S3 probe response page_key is always 0x0000 — it does NOT carry the
# data length back to us. Instead, each SUB has a fixed known payload size
# confirmed from BW capture analysis (offset at payload[5] of the data-request
# frame).
#
# Key: request SUB byte. Value: offset/length byte sent in the data-request.
# Entries marked 🔶 are inferred from captured frames and may need adjustment.
DATA_LENGTHS: dict[int, int] = {
    SUB_POLL: 0x30,            # POLL startup data block ✅
    SUB_SERIAL_NUMBER: 0x0A,   # 10-byte serial number block ✅
    SUB_FULL_CONFIG: 0x98,     # 152-byte full config block ✅
    SUB_EVENT_INDEX: 0x58,     # 88-byte event index ✅
    SUB_TRIGGER_CONFIG: 0x2C,  # 44-byte trigger config 🔶
    SUB_EVENT_HEADER: 0x08,    # 8-byte event header (waveform key + event data) ✅
    SUB_EVENT_ADVANCE: 0x08,   # 8-byte next-key response ✅
    # SUB_WAVEFORM_HEADER (0x0A) is VARIABLE — length read from probe response
    # data[4]. Do NOT add it here; use read_waveform_header() instead. ✅
    SUB_WAVEFORM_RECORD: 0xD2,  # 210-byte waveform/histogram record ✅
    SUB_UNKNOWN_2E: 0x1A,       # 26 bytes, purpose TBD 🔶
    0x09: 0xCA,                 # 202 bytes, purpose TBD 🔶
    # SUB_COMPLIANCE (0x1A) uses a multi-step sequence with a 2090-byte total;
    # NOT handled here — requires specialised read logic.
}

# Default timeout values (seconds).
# MiniMate Plus is a slow device — keep these generous.
DEFAULT_RECV_TIMEOUT = 10.0
POLL_RECV_TIMEOUT = 10.0


# ── Exceptions ────────────────────────────────────────────────────────────────


class ProtocolError(Exception):
    """Raised when the device violates the expected protocol."""


class TimeoutError(ProtocolError):
    """Raised when no response is received within the allowed time.

    Note: intentionally shadows the builtin TimeoutError inside this module.
    """


class ChecksumError(ProtocolError):
    """Raised when a received frame has a bad checksum.

    Currently unused: _validate_frame logs checksum mismatches instead of
    raising (see its docstring for why).
    """


class UnexpectedResponse(ProtocolError):
    """Raised when the response SUB doesn't match what we requested."""


# ── MiniMateProtocol ──────────────────────────────────────────────────────────

class MiniMateProtocol:
    """
    Protocol state machine for one open connection to a MiniMate Plus device.

    Does not own the transport — transport lifetime is managed by MiniMateClient.

    Typical usage (via MiniMateClient — not directly):

        proto = MiniMateProtocol(transport)
        proto.startup()                      # POLL handshake, drain boot string
        data = proto.read(SUB_FULL_CONFIG)
        sn_data = proto.read(SUB_SERIAL_NUMBER)
    """

    def __init__(
        self,
        transport: BaseTransport,
        recv_timeout: float = DEFAULT_RECV_TIMEOUT,
    ) -> None:
        self._transport = transport
        self._recv_timeout = recv_timeout
        self._parser = S3FrameParser()

    # ── Public API ────────────────────────────────────────────────────────────

    def startup(self) -> S3Frame:
        """
        Perform the POLL startup handshake and return the POLL data frame.

        Steps (matching §6 Session Startup Sequence):
          1. Drain any boot-string bytes ("Operating System" ASCII)
          2. Send POLL_PROBE (SUB 5B, offset=0x00)
          3. Receive probe ack (page_key is 0x0000; data length 0x30 is hardcoded)
          4. Send POLL_DATA (SUB 5B, offset=0x30)
          5. Receive data frame with "Instantel" + "MiniMate Plus" strings

        Returns:
            The data-phase POLL response S3Frame.

        Raises:
            ProtocolError: if either POLL step fails.
        """
        log.debug("startup: draining boot string")
        self._drain_boot_string()

        log.debug("startup: POLL probe")
        self._send(POLL_PROBE)
        probe_rsp = self._recv_one(
            expected_sub=_expected_rsp_sub(SUB_POLL),
            timeout=self._recv_timeout,
        )
        log.debug(
            "startup: POLL probe response page_key=0x%04X", probe_rsp.page_key
        )

        log.debug("startup: POLL data request")
        self._send(POLL_DATA)
        data_rsp = self._recv_one(
            expected_sub=_expected_rsp_sub(SUB_POLL),
            timeout=self._recv_timeout,
        )
        log.debug("startup: POLL data received, %d bytes", len(data_rsp.data))
        return data_rsp

    def read(self, sub: int) -> bytes:
        """
        Execute a two-step paged read and return the data payload bytes.

        Step 1: send probe frame (offset=0x00) → device sends a short ack
        Step 2: send data-request (offset=DATA_LEN) → device sends the data block

        The S3 probe response does NOT carry the data length — page_key is always
        0x0000 in observed frames. DATA_LENGTHS holds the known fixed lengths
        derived from BW capture analysis.

        Args:
            sub: Request SUB byte (e.g. SUB_FULL_CONFIG = 0x01).

        Returns:
            De-stuffed data payload bytes (payload[5:] of the response frame,
            with the checksum already stripped by the parser).

        Raises:
            ProtocolError: on timeout, wrong response SUB, or a SUB missing
                from DATA_LENGTHS (caller should add it).
        """
        rsp_sub = _expected_rsp_sub(sub)

        # Step 1 — probe (offset = 0)
        log.debug("read SUB=0x%02X: probe", sub)
        self._send(build_bw_frame(sub, 0))
        _probe = self._recv_one(expected_sub=rsp_sub)  # ack; page_key always 0

        # Look up the hardcoded data length for this SUB
        if sub not in DATA_LENGTHS:
            raise ProtocolError(
                f"No known data length for SUB=0x{sub:02X}. "
                "Add it to DATA_LENGTHS in protocol.py."
            )
        length = DATA_LENGTHS[sub]
        log.debug("read SUB=0x%02X: data request offset=0x%02X", sub, length)

        if length == 0:
            log.warning("read SUB=0x%02X: DATA_LENGTHS entry is zero", sub)
            return b""

        # Step 2 — data-request (offset = length)
        self._send(build_bw_frame(sub, length))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        log.debug("read SUB=0x%02X: received %d data bytes", sub, len(data_rsp.data))
        return data_rsp.data

    def send_keepalive(self) -> None:
        """
        Send a single POLL_PROBE keepalive without waiting for a response.

        Blastware sends these every ~80 ms during idle. Useful if you need to
        hold the session open between real requests.
        """
        self._send(POLL_PROBE)

    # ── Event download API ────────────────────────────────────────────────────

    def read_event_first(self) -> tuple[bytes, bytes]:
        """
        Send the SUB 1E (EVENT_HEADER) two-step read and return the first
        waveform key and accompanying 8-byte event data block.

        This always uses all-zero params — the device returns the first stored
        event's waveform key unconditionally.

        Returns:
            (key4, event_data8) where:
              key4        — 4-byte opaque waveform record address (data[11:15])
              event_data8 — full 8-byte data section (data[11:19])

        Raises:
            ProtocolError: on timeout or wrong response SUB.

        Confirmed from 3-31-26 capture: 1E request uses all-zero params;
        response data section layout is:
            [LENGTH_ECHO:1][00×4][KEY_ECHO:4][00×2][KEY4:4][EXTRA:4] …
        Actual data starts at data[11]; first 4 bytes are the waveform key.
        """
        rsp_sub = _expected_rsp_sub(SUB_EVENT_HEADER)
        length = DATA_LENGTHS[SUB_EVENT_HEADER]  # 0x08

        log.debug("read_event_first: 1E probe")
        self._send(build_bw_frame(SUB_EVENT_HEADER, 0))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("read_event_first: 1E data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_EVENT_HEADER, length))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        event_data8 = data_rsp.data[11:19]
        key4 = data_rsp.data[11:15]
        log.debug("read_event_first: key=%s", key4.hex())
        return key4, event_data8

    def read_waveform_header(self, key4: bytes) -> tuple[bytes, int]:
        """
        Send the SUB 0A (WAVEFORM_HEADER) two-step read for *key4*.

        The data length for 0A is VARIABLE and must be read from the probe
        response at data[4]. Two known values:
            0x30 — full histogram bin (has a waveform record to follow)
            0x26 — partial histogram bin (no waveform record)

        Args:
            key4: 4-byte waveform record address from 1E or 1F.

        Returns:
            (header_bytes, record_length) where:
              header_bytes  — raw data section starting at data[11]
              record_length — DATA_LENGTH read from probe (0x30 or 0x26)

        Raises:
            ProtocolError: on timeout or wrong response SUB.

        Confirmed from 3-31-26 capture: 0A probe response data[4] carries
        the variable length; data-request uses that length as the offset byte.
        """
        rsp_sub = _expected_rsp_sub(SUB_WAVEFORM_HEADER)
        params = waveform_key_params(key4)

        log.debug("read_waveform_header: 0A probe key=%s", key4.hex())
        self._send(build_bw_frame(SUB_WAVEFORM_HEADER, 0, params))
        probe_rsp = self._recv_one(expected_sub=rsp_sub)

        # Variable length — read from probe response data[4]
        length = probe_rsp.data[4] if len(probe_rsp.data) > 4 else 0x30
        log.debug("read_waveform_header: 0A data request offset=0x%02X", length)

        if length == 0:
            return b"", 0

        self._send(build_bw_frame(SUB_WAVEFORM_HEADER, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        header_bytes = data_rsp.data[11:11 + length]
        log.debug(
            "read_waveform_header: key=%s length=0x%02X is_full=%s",
            key4.hex(), length, length == 0x30,
        )
        return header_bytes, length

    def read_waveform_record(self, key4: bytes) -> bytes:
        """
        Send the SUB 0C (WAVEFORM_RECORD / FULL_WAVEFORM_RECORD) two-step read.

        Returns the 210-byte waveform/histogram record containing:
          - Record type string ("Histogram" or "Waveform") at a variable offset
          - Per-channel labels ("Tran", "Vert", "Long", "MicL") with PPV floats
            at label_offset + 6

        Args:
            key4: 4-byte waveform record address.

        Returns:
            210-byte record bytes (data[11:11+0xD2]).

        Raises:
            ProtocolError: on timeout or wrong response SUB.

        Confirmed from 3-31-26 capture: 0C always uses offset=0xD2 (210 bytes).
        """
        rsp_sub = _expected_rsp_sub(SUB_WAVEFORM_RECORD)
        length = DATA_LENGTHS[SUB_WAVEFORM_RECORD]  # 0xD2
        params = waveform_key_params(key4)

        log.debug("read_waveform_record: 0C probe key=%s", key4.hex())
        self._send(build_bw_frame(SUB_WAVEFORM_RECORD, 0, params))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("read_waveform_record: 0C data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_WAVEFORM_RECORD, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        record = data_rsp.data[11:11 + length]
        log.debug("read_waveform_record: received %d record bytes", len(record))
        return record

    def advance_event(self) -> bytes:
        """
        Send the SUB 1F (EVENT_ADVANCE) two-step read with download-mode token
        (0xFE) and return the next waveform key.

        In download mode (token=0xFE), the device skips partial histogram bins
        and returns the key of the next FULL record directly. This is the
        Blastware-observed behaviour for iterating through all stored events.

        Returns:
            key4 — 4-byte next waveform key from data[11:15].
            Returns b'\\x00\\x00\\x00\\x00' when there are no more events.

        Raises:
            ProtocolError: on timeout or wrong response SUB.

        Confirmed from 3-31-26 capture: 1F uses token=0xFE at params[6];
        loop termination is key4 == b'\\x00\\x00\\x00\\x00'.
        """
        rsp_sub = _expected_rsp_sub(SUB_EVENT_ADVANCE)
        length = DATA_LENGTHS[SUB_EVENT_ADVANCE]  # 0x08
        params = token_params(0xFE)

        log.debug("advance_event: 1F probe")
        self._send(build_bw_frame(SUB_EVENT_ADVANCE, 0, params))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("advance_event: 1F data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_EVENT_ADVANCE, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        key4 = data_rsp.data[11:15]
        log.debug(
            "advance_event: next key=%s done=%s",
            key4.hex(), key4 == b"\x00\x00\x00\x00",
        )
        return key4

    # ── Internal helpers ──────────────────────────────────────────────────────

    def _send(self, frame: bytes) -> None:
        """Write a pre-built frame to the transport."""
        log.debug("TX %d bytes: %s", len(frame), frame.hex())
        self._transport.write(frame)

    def _recv_one(
        self,
        expected_sub: Optional[int] = None,
        timeout: Optional[float] = None,
    ) -> S3Frame:
        """
        Read bytes from the transport until one complete S3 frame is parsed.

        Feeds bytes through the streaming S3FrameParser. Keeps reading until
        a frame arrives or the deadline expires.

        Args:
            expected_sub: If provided, raises UnexpectedResponse if the
                received frame's SUB doesn't match.
            timeout: Seconds to wait. Defaults to self._recv_timeout.

        Returns:
            The first complete S3Frame received.

        Raises:
            TimeoutError: if no frame arrives within the timeout.
            UnexpectedResponse: if expected_sub is set and doesn't match.

        Note:
            Checksum mismatches are logged, not raised (see _validate_frame).
        """
        deadline = time.monotonic() + (timeout or self._recv_timeout)
        self._parser.reset()

        while time.monotonic() < deadline:
            chunk = self._transport.read(256)
            if chunk:
                log.debug("RX %d bytes: %s", len(chunk), chunk.hex())
                frames = self._parser.feed(chunk)
                if frames:
                    frame = frames[0]
                    self._validate_frame(frame, expected_sub)
                    return frame
            else:
                time.sleep(0.005)

        raise TimeoutError(
            f"No S3 frame received within {timeout or self._recv_timeout:.1f}s"
            + (f" (expected SUB 0x{expected_sub:02X})" if expected_sub is not None else "")
        )

    @staticmethod
    def _validate_frame(frame: S3Frame, expected_sub: Optional[int]) -> None:
        """Validate SUB; log but do not raise on bad checksum.

        S3 response checksums frequently fail SUM8 validation due to inner-frame
        delimiter bytes being captured as the checksum byte. The original
        s3_parser.py deliberately never validates S3 checksums for exactly this
        reason. We log and continue.
        """
        if not frame.checksum_valid:
            # Informational only; see the docstring for why this is not an error.
            log.debug("S3 frame SUB=0x%02X: checksum mismatch (ignoring)", frame.sub)
        if expected_sub is not None and frame.sub != expected_sub:
            raise UnexpectedResponse(
                f"Expected SUB=0x{expected_sub:02X}, got 0x{frame.sub:02X}"
            )

    def _drain_boot_string(self, drain_ms: int = 200) -> None:
        """
        Read and discard any boot-string bytes ("Operating System") the device
        may send before entering binary protocol mode.

        We simply read with a short timeout and throw the bytes away. The
        S3FrameParser's IDLE state already handles non-frame bytes gracefully,
        but it's cleaner to drain them explicitly before the first real frame.
        """
        deadline = time.monotonic() + (drain_ms / 1000)
        discarded = 0
        while time.monotonic() < deadline:
            chunk = self._transport.read(256)
            if chunk:
                discarded += len(chunk)
            else:
                time.sleep(0.005)
        if discarded:
            log.debug("drain_boot_string: discarded %d bytes", discarded)
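

# ── Usage sketch: downloading every stored event ─────────────────────────────
# The event-download methods above compose into a simple driver loop:
# read_event_first() seeds the key, read_waveform_header() classifies each
# bin, full bins (length 0x30) get a read_waveform_record() fetch, and
# advance_event() steps until the all-zero key. Standalone illustration
# against a scripted stand-in for the protocol object (the canned keys and
# payload sizes below are illustrative, not captured values):

class _ScriptedProtoDemo:
    """Stand-in for MiniMateProtocol: two full events, then end-of-list."""

    def __init__(self) -> None:
        self._keys = [b"\x00\x01\x00\x10", b"\x00\x01\x00\x20", b"\x00\x00\x00\x00"]

    def read_event_first(self) -> tuple:
        return self._keys[0], self._keys[0] + b"\x00" * 4

    def read_waveform_header(self, key4: bytes) -> tuple:
        return b"\xaa" * 0x30, 0x30  # pretend every bin is a full bin

    def read_waveform_record(self, key4: bytes) -> bytes:
        return b"\x00" * 0xD2

    def advance_event(self) -> bytes:
        self._keys.pop(0)
        return self._keys[0]


def _download_all(proto) -> list:
    """Collect every full waveform record until the all-zero key appears."""
    key4, _event8 = proto.read_event_first()
    records = []
    while key4 != b"\x00\x00\x00\x00":
        _header, length = proto.read_waveform_header(key4)
        if length == 0x30:  # full bin: a 210-byte record follows
            records.append(proto.read_waveform_record(key4))
        key4 = proto.advance_event()
    return records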
420 minimateplus/transport.py Normal file
@@ -0,0 +1,420 @@
"""
|
||||
transport.py — Serial and TCP transport layer for the MiniMate Plus protocol.
|
||||
|
||||
Provides a thin I/O abstraction so that protocol.py never imports pyserial or
|
||||
socket directly. Two concrete implementations:
|
||||
|
||||
SerialTransport — direct RS-232 cable connection (pyserial)
|
||||
TcpTransport — TCP socket to a modem or ACH relay (stdlib socket)
|
||||
|
||||
The MiniMate Plus protocol bytes are identical over both transports. TCP is used
|
||||
when field units call home via the ACH (Auto Call Home) server, or when SFM
|
||||
"calls up" a unit by connecting to the modem's IP address directly.
|
||||
|
||||
Field hardware: Sierra Wireless RV55 / RX55 (4G LTE) cellular modem, replacing
|
||||
the older 3G-only Raven X (now decommissioned). All run ALEOS firmware with an
|
||||
ACEmanager web UI. Serial port must be configured 38400,8N1, no flow control,
|
||||
Data Forwarding Timeout = 1 s.
|
||||
|
||||
Typical usage:
|
||||
from minimateplus.transport import SerialTransport, TcpTransport
|
||||
|
||||
# Direct serial connection
|
||||
with SerialTransport("COM5") as t:
|
||||
t.write(frame_bytes)
|
||||
|
||||
# Modem / ACH TCP connection (Blastware port 12345)
|
||||
with TcpTransport("192.168.1.50", 12345) as t:
|
||||
t.write(frame_bytes)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import socket
|
||||
import time
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Optional
|
||||
|
||||
# pyserial is the only non-stdlib dependency in this project.
|
||||
# Import lazily so unit-tests that mock the transport can run without it.
|
||||
try:
|
||||
import serial # type: ignore
|
||||
except ImportError: # pragma: no cover
|
||||
serial = None # type: ignore
|
||||
|
||||
|
||||
# ── Abstract base ─────────────────────────────────────────────────────────────


class BaseTransport(ABC):
    """Common interface for all transport implementations."""

    @abstractmethod
    def connect(self) -> None:
        """Open the underlying connection."""

    @abstractmethod
    def disconnect(self) -> None:
        """Close the underlying connection."""

    @property
    @abstractmethod
    def is_connected(self) -> bool:
        """True while the connection is open."""

    @abstractmethod
    def write(self, data: bytes) -> None:
        """Write *data* bytes to the wire."""

    @abstractmethod
    def read(self, n: int) -> bytes:
        """
        Read up to *n* bytes. Returns immediately with whatever is available
        (may return fewer than *n* bytes, or b"" if nothing is ready).
        """

    # ── Context manager ───────────────────────────────────────────────────────

    def __enter__(self) -> "BaseTransport":
        self.connect()
        return self

    def __exit__(self, *_) -> None:
        self.disconnect()

    # ── Higher-level read helpers ─────────────────────────────────────────────

    def read_until_idle(
        self,
        timeout: float = 2.0,
        idle_gap: float = 0.05,
        chunk: int = 256,
    ) -> bytes:
        """
        Read bytes until the line goes quiet.

        Keeps reading in *chunk*-sized bursts. Returns when either:
          - *timeout* seconds have elapsed since the call started, or
          - *idle_gap* seconds pass with no new bytes (line went quiet).

        This mirrors how Blastware behaves: it waits for the seismograph to
        stop transmitting rather than counting bytes.

        Args:
            timeout: Hard deadline (seconds) from the moment the read starts.
            idle_gap: How long to wait after the last byte before declaring done.
            chunk: How many bytes to request per low-level read() call.

        Returns:
            All bytes received as a single bytes object (may be b"" if nothing
            arrived within *timeout*).
        """
        buf = bytearray()
        deadline = time.monotonic() + timeout
        last_rx = None

        while time.monotonic() < deadline:
            got = self.read(chunk)
            if got:
                buf.extend(got)
                last_rx = time.monotonic()
            else:
                # Nothing ready — check idle gap
                if last_rx is not None and (time.monotonic() - last_rx) >= idle_gap:
                    break
                time.sleep(0.005)

        return bytes(buf)

    def read_exact(self, n: int, timeout: float = 2.0) -> bytes:
        """
        Read exactly *n* bytes or raise TimeoutError.

        Useful when the caller already knows the expected response length
        (e.g. fixed-size ACK packets).
        """
        buf = bytearray()
        deadline = time.monotonic() + timeout
        while len(buf) < n:
            if time.monotonic() >= deadline:
                raise TimeoutError(
                    f"read_exact: wanted {n} bytes, got {len(buf)} "
                    f"after {timeout:.1f}s"
                )
            got = self.read(n - len(buf))
            if got:
                buf.extend(got)
            else:
                time.sleep(0.005)
        return bytes(buf)


# ── Serial transport ──────────────────────────────────────────────────────────

# Default baud rate confirmed from Blastware / MiniMate Plus documentation.
DEFAULT_BAUD = 38_400

# pyserial serial port config matching the MiniMate Plus RS-232 spec:
# 8 data bits, no parity, 1 stop bit (8N1).
_SERIAL_BYTESIZE = 8   # serial.EIGHTBITS
_SERIAL_PARITY = "N"   # serial.PARITY_NONE
_SERIAL_STOPBITS = 1   # serial.STOPBITS_ONE


class SerialTransport(BaseTransport):
    """
    pyserial-backed transport for a direct RS-232 cable connection.

    The port is opened with a very short read timeout (10 ms) so that
    read() returns quickly and the caller can implement its own framing /
    timeout logic without blocking the whole process.

    Args:
        port: COM port name (e.g. "COM5" on Windows, "/dev/ttyUSB0" on Linux).
        baud: Baud rate (default 38400).
        rts_cts: Enable RTS/CTS hardware flow control (default False — MiniMate
            typically uses no flow control).
    """

    # Internal read timeout (seconds). Short so read() is non-blocking in practice.
    _READ_TIMEOUT = 0.01

    def __init__(
        self,
        port: str,
        baud: int = DEFAULT_BAUD,
        rts_cts: bool = False,
    ) -> None:
        if serial is None:
            raise ImportError(
                "pyserial is required for SerialTransport. "
                "Install it with: pip install pyserial"
            )
        self.port = port
        self.baud = baud
        self.rts_cts = rts_cts
        self._ser: Optional[serial.Serial] = None

    # ── BaseTransport interface ───────────────────────────────────────────────

    def connect(self) -> None:
        """Open the serial port. Raises serial.SerialException on failure."""
        if self._ser and self._ser.is_open:
            return  # Already open — idempotent
        self._ser = serial.Serial(
            port=self.port,
            baudrate=self.baud,
            bytesize=_SERIAL_BYTESIZE,
            parity=_SERIAL_PARITY,
            stopbits=_SERIAL_STOPBITS,
            timeout=self._READ_TIMEOUT,
            rtscts=self.rts_cts,
            xonxoff=False,
            dsrdtr=False,
        )
        # Flush any stale bytes left in device / OS buffers from a previous session
        self._ser.reset_input_buffer()
        self._ser.reset_output_buffer()

    def disconnect(self) -> None:
        """Close the serial port. Safe to call even if already closed."""
        if self._ser:
            try:
                self._ser.close()
            except Exception:
                pass
            self._ser = None

    @property
    def is_connected(self) -> bool:
        return bool(self._ser and self._ser.is_open)

    def write(self, data: bytes) -> None:
        """
        Write *data* to the serial port.

        Raises:
            RuntimeError: if not connected.
            serial.SerialException: on I/O error.
        """
        if not self.is_connected:
            raise RuntimeError("SerialTransport.write: not connected")
        self._ser.write(data)  # type: ignore[union-attr]
        self._ser.flush()  # type: ignore[union-attr]

    def read(self, n: int) -> bytes:
        """
        Read up to *n* bytes from the serial port.

        Returns b"" immediately if no data is available (non-blocking in
        practice thanks to the 10 ms read timeout).

        Raises:
            RuntimeError: if not connected.
        """
        if not self.is_connected:
            raise RuntimeError("SerialTransport.read: not connected")
        return self._ser.read(n)  # type: ignore[union-attr]

    # ── Extras ────────────────────────────────────────────────────────────────

    def flush_input(self) -> None:
        """Discard any unread bytes in the OS receive buffer."""
        if self.is_connected:
            self._ser.reset_input_buffer()  # type: ignore[union-attr]

    def __repr__(self) -> str:
        state = "open" if self.is_connected else "closed"
        return f"SerialTransport({self.port!r}, baud={self.baud}, {state})"


# ── TCP transport ─────────────────────────────────────────────────────────────

# Default TCP port for Blastware modem communications / ACH relay.
# Confirmed from field setup: Blastware → Communication Setup → TCP/IP uses 12345.
DEFAULT_TCP_PORT = 12345


class TcpTransport(BaseTransport):
    """
    TCP socket transport for MiniMate Plus units in the field.

    The protocol bytes over TCP are identical to RS-232 — TCP is simply a
    different physical layer. The modem (Sierra Wireless RV50 / RV55, or older
    Raven X) bridges the unit's RS-232 serial port to a TCP socket transparently.
    No application-layer handshake or framing is added.

    Two usage scenarios:

    "Call up" (outbound): SFM connects to the unit's modem IP directly.
        TcpTransport(host="203.0.113.5", port=12345)

    "Call home" / ACH relay: The unit has already dialled in to the office
        ACH server, which bridged the modem to a TCP socket. In this case
        the host/port identifies the relay's listening socket, not the modem.
        (ACH inbound mode is handled by a separate AchServer — not this class.)

    IMPORTANT — modem data forwarding delay:
        Sierra Wireless (and Raven) modems buffer RS-232 bytes for up to 1 second
        before forwarding them as a TCP segment ("Data Forwarding Timeout" in
        ACEmanager). read_until_idle() is overridden to use idle_gap=1.5 s rather
        than the serial default of 0.05 s — without this, the parser would declare
        a frame complete mid-stream during the modem's buffering pause.

    Args:
        host: IP address or hostname of the modem / ACH relay.
        port: TCP port number (default 12345).
        connect_timeout: Seconds to wait for the TCP handshake (default 10.0).
    """

    # Internal recv timeout — short so read() returns promptly if no data.
    _RECV_TIMEOUT = 0.01

    def __init__(
        self,
        host: str,
        port: int = DEFAULT_TCP_PORT,
        connect_timeout: float = 10.0,
    ) -> None:
        self.host = host
        self.port = port
        self.connect_timeout = connect_timeout
        self._sock: Optional[socket.socket] = None

    # ── BaseTransport interface ───────────────────────────────────────────────

    def connect(self) -> None:
        """
        Open a TCP connection to host:port.

        Idempotent — does nothing if already connected.

        Raises:
            OSError / socket.timeout: if the connection cannot be established.
        """
        if self._sock is not None:
            return  # Already connected — idempotent
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(self.connect_timeout)
        sock.connect((self.host, self.port))
        # Switch to short timeout so read() is non-blocking in practice
        sock.settimeout(self._RECV_TIMEOUT)
        self._sock = sock

    def disconnect(self) -> None:
        """Close the TCP socket. Safe to call even if already closed."""
        if self._sock:
            try:
                self._sock.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass
            try:
                self._sock.close()
            except OSError:
                pass
            self._sock = None

    @property
    def is_connected(self) -> bool:
        return self._sock is not None

    def write(self, data: bytes) -> None:
        """
        Send all bytes to the peer.

        Raises:
            RuntimeError: if not connected.
            OSError: on network I/O error.
        """
        if not self.is_connected:
            raise RuntimeError("TcpTransport.write: not connected")
        self._sock.sendall(data)  # type: ignore[union-attr]

    def read(self, n: int) -> bytes:
        """
        Read up to *n* bytes from the socket.

        Returns b"" immediately if no data is available (non-blocking in
        practice thanks to the short socket timeout).

        Raises:
            RuntimeError: if not connected.
        """
        if not self.is_connected:
            raise RuntimeError("TcpTransport.read: not connected")
        try:
            return self._sock.recv(n)  # type: ignore[union-attr]
        except socket.timeout:
            return b""

    def read_until_idle(
        self,
        timeout: float = 2.0,
        idle_gap: float = 1.5,
        chunk: int = 256,
    ) -> bytes:
        """
        TCP-aware version of read_until_idle.

        Overrides the BaseTransport default to use a much longer idle_gap (1.5 s
        vs 0.05 s for serial). This is necessary because the Raven modem (and
        similar cellular modems) buffer serial-port bytes for up to 1 second
        before forwarding them over TCP (the "Data Forwarding Timeout" setting).

        If read_until_idle returned after a 50 ms quiet period, it would trigger
        mid-frame while the modem is still accumulating bytes — causing frame
        parse failures on every call.

        Args:
            timeout: Hard deadline from first byte (default 2.0 s — callers
                typically pass a longer value for S3 frames).
            idle_gap: Quiet-line threshold (default 1.5 s to survive modem
                buffering). Pass a smaller value only if you are
                connecting directly to a unit's Ethernet port with no
                modem buffering in the path.
            chunk: Bytes per low-level recv() call.
        """
        return super().read_until_idle(timeout=timeout, idle_gap=idle_gap, chunk=chunk)

    def __repr__(self) -> str:
        state = "connected" if self.is_connected else "disconnected"
        return f"TcpTransport({self.host!r}, port={self.port}, {state})"

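The buffering hazard the docstring describes can be reproduced with a toy transport. This is a self-contained sketch of the idle-gap logic, not the library's actual BaseTransport implementation; names and timings are invented and scaled down from the real 1 s pause / 1.5 s gap:

```python
import time

def read_until_idle(read, timeout=2.0, idle_gap=0.1, chunk=256):
    """Accumulate bytes until the line has been quiet for idle_gap seconds."""
    buf = bytearray()
    start = last_rx = time.monotonic()
    while True:
        data = read(chunk)
        now = time.monotonic()
        if data:
            buf += data
            last_rx = now
        if buf and (now - last_rx >= idle_gap or now - start >= timeout):
            return bytes(buf)
        if not buf and now - start >= timeout:
            return b""
        time.sleep(0.005)

class PausingReader:
    """Emits b"AB", stays silent 50 ms (the modem's buffering pause), then b"CD"."""
    def __init__(self):
        self.t0 = time.monotonic()
        self.stage = 0
    def read(self, n):
        if self.stage == 0:
            self.stage = 1
            return b"AB"
        if self.stage == 1 and time.monotonic() - self.t0 >= 0.05:
            self.stage = 2
            return b"CD"
        return b""

# A short idle_gap declares the frame done during the pause — split frame:
assert read_until_idle(PausingReader().read, idle_gap=0.02) == b"AB"
# An idle_gap longer than the buffering pause keeps the frame intact:
assert read_until_idle(PausingReader().read, idle_gap=0.15) == b"ABCD"
```

The same trade-off motivates the 1.5 s default above: it must exceed the modem's worst-case forwarding delay, at the cost of slower frame completion.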
@@ -12,6 +12,7 @@ Usage:
from __future__ import annotations

import argparse
import struct
import sys
import time
from dataclasses import dataclass
@@ -139,6 +140,15 @@ class Session:
    index: int
    bw_frames: list[AnnotatedFrame]
    s3_frames: list[AnnotatedFrame]
    # None = infer from SUB 0x74 presence; True/False = explicitly set by splitter
    complete: Optional[bool] = None

    def is_complete(self) -> bool:
        """A session is complete if explicitly marked, or if it contains SUB 0x74."""
        if self.complete is not None:
            return self.complete
        return any(af.header is not None and af.header.sub == SESSION_CLOSE_SUB
                   for af in self.bw_frames)

    @property
    def all_frames(self) -> list[AnnotatedFrame]:
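The tri-state completeness rule can be restated standalone (a sketch; `explicit` and `subs_seen` are invented stand-ins for the dataclass field and the frame headers, with SESSION_CLOSE_SUB = 0x74):

```python
from typing import Optional

def is_complete(explicit: Optional[bool], subs_seen: list[int]) -> bool:
    if explicit is not None:     # splitter decided (e.g. mark-bounded segment)
        return explicit
    return 0x74 in subs_seen     # otherwise infer from the closing SUB

assert is_complete(True, []) is True            # explicit value overrides
assert is_complete(None, [0x64, 0x74]) is True  # inferred from 0x74
assert is_complete(None, [0x64]) is False       # no close seen yet
```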
@@ -294,6 +304,129 @@ def split_into_sessions(
    return sessions


# ──────────────────────────────────────────────────────────────────────────────
# Mark-based session splitting (using structured .bin log)
# ──────────────────────────────────────────────────────────────────────────────

# Structured .bin record types (from s3_bridge.py)
_REC_BW = 0x01
_REC_S3 = 0x02
_REC_MARK = 0x03
_REC_INFO = 0x04


@dataclass
class MarkSplit:
    """A session boundary derived from a MARK record in the structured .bin log."""
    label: str
    bw_byte_offset: int  # byte position in the flat raw_bw stream at mark time
    s3_byte_offset: int  # byte position in the flat raw_s3 stream at mark time


def parse_structured_bin(bin_blob: bytes) -> list[MarkSplit]:
    """
    Read a structured s3_session_*.bin file and return one MarkSplit per MARK
    record, containing the cumulative BW and S3 byte counts at that point.

    Record format: [type:1][ts_us:8 LE][len:4 LE][payload:len]
    """
    marks: list[MarkSplit] = []
    bw_bytes = 0
    s3_bytes = 0
    pos = 0

    while pos + 13 <= len(bin_blob):
        rec_type = bin_blob[pos]
        # ts_us: 8 bytes LE (we don't need it, just skip)
        length = struct.unpack_from("<I", bin_blob, pos + 9)[0]
        payload_start = pos + 13
        payload_end = payload_start + length

        if payload_end > len(bin_blob):
            break  # truncated record

        payload = bin_blob[payload_start:payload_end]

        if rec_type == _REC_BW:
            bw_bytes += length
        elif rec_type == _REC_S3:
            s3_bytes += length
        elif rec_type == _REC_MARK:
            label = payload.decode("utf-8", errors="replace")
            # Skip auto-generated bridge lifecycle marks — only keep user marks
            if label.startswith("SESSION START") or label.startswith("SESSION END"):
                pass
            else:
                marks.append(MarkSplit(label=label,
                                       bw_byte_offset=bw_bytes,
                                       s3_byte_offset=s3_bytes))

        pos = payload_end

    return marks
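The record layout in the docstring can be exercised directly with `struct`. A sketch with invented payloads and timestamps — pack two records, then walk them the same way `parse_structured_bin` does:

```python
import struct

# Pack one structured-log record: [type:1][ts_us:8 LE][len:4 LE][payload]
def pack_record(rec_type: int, ts_us: int, payload: bytes) -> bytes:
    return struct.pack("<BQI", rec_type, ts_us, len(payload)) + payload

blob = (
    pack_record(0x01, 1_000, b"\x10\x02\x00")         # BW traffic, 3 bytes
    + pack_record(0x03, 2_000, b"before cal change")  # user MARK
)

# First record: type at [0], length at [9:13], payload from [13]
rec_type = blob[0]
length = struct.unpack_from("<I", blob, 9)[0]
assert (rec_type, length) == (0x01, 3)

second = 13 + length  # start of the MARK record
assert blob[second] == 0x03
mark_len = struct.unpack_from("<I", blob, second + 9)[0]
assert blob[second + 13 : second + 13 + mark_len] == b"before cal change"
```

Note the fixed 13-byte header ("<BQI" = 1 + 8 + 4 bytes) is exactly the `pos + 13` stride the parser uses.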
def split_sessions_at_marks(
    bw_blob: bytes,
    s3_blob: bytes,
    marks: list[MarkSplit],
) -> list[Session]:
    """
    Split raw byte streams into sessions using mark byte offsets, then apply
    the standard 0x74-based sub-splitting within each mark segment.

    Each mark creates a new session boundary: session 0 = bytes before mark 0,
    session 1 = bytes between mark 0 and mark 1, etc.
    """
    if not marks:
        # No marks — fall back to standard session detection
        bw_frames = annotate_frames(parse_bw(bw_blob, trailer_len=0,
                                             validate_checksum=True), "BW")
        s3_frames = annotate_frames(parse_s3(s3_blob, trailer_len=0), "S3")
        return split_into_sessions(bw_frames, s3_frames)

    # Build slice boundaries: [0 .. mark0.bw, mark0.bw .. mark1.bw, ...]
    bw_cuts = [m.bw_byte_offset for m in marks] + [len(bw_blob)]
    s3_cuts = [m.s3_byte_offset for m in marks] + [len(s3_blob)]

    all_sessions: list[Session] = []
    session_offset = 0
    bw_prev = s3_prev = 0

    n_segments = len(bw_cuts)
    for seg_i, (bw_end, s3_end) in enumerate(zip(bw_cuts, s3_cuts)):
        bw_chunk = bw_blob[bw_prev:bw_end]
        s3_chunk = s3_blob[s3_prev:s3_end]

        bw_frames = annotate_frames(parse_bw(bw_chunk, trailer_len=0,
                                             validate_checksum=True), "BW")
        s3_frames = annotate_frames(parse_s3(s3_chunk, trailer_len=0), "S3")

        seg_sessions = split_into_sessions(bw_frames, s3_frames)

        # A mark-bounded segment is complete by definition — the user placed the
        # mark after the read finished. Only the last segment (trailing, unbounded)
        # may be genuinely in-progress.
        is_last_segment = (seg_i == n_segments - 1)

        # Re-index sessions so they are globally unique
        for sess in seg_sessions:
            sess.index = session_offset
            for f in sess.all_frames:
                f.session_idx = session_offset
            # Explicitly mark completeness: mark-bounded segments are complete;
            # the trailing segment falls back to 0x74 inference.
            if not is_last_segment:
                sess.complete = True
            session_offset += 1
            all_sessions.append(sess)

        bw_prev = bw_end
        s3_prev = s3_end

    return all_sessions
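The cut-list construction above always yields len(marks) + 1 segments that tile the stream exactly. A minimal sketch with invented offsets:

```python
# Two marks at byte offsets 10 and 25 in a 40-byte stream (values invented)
blob = bytes(range(40))
mark_offsets = [10, 25]
cuts = mark_offsets + [len(blob)]   # same shape as bw_cuts / s3_cuts

segments, prev = [], 0
for end in cuts:
    segments.append(blob[prev:end])
    prev = end

# Segment 0 = before mark 0, segment 1 = between marks, segment 2 = trailing
assert [len(s) for s in segments] == [10, 15, 15]
assert b"".join(segments) == blob   # nothing lost, nothing duplicated
```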
# ──────────────────────────────────────────────────────────────────────────────
# Diff engine
# ──────────────────────────────────────────────────────────────────────────────
@@ -341,6 +474,140 @@ def lookup_field_name(sub: int, page_key: int, payload_offset: int) -> Optional[
    return None


def _extract_a4_inner_frames(payload: bytes) -> list[tuple[int, int, bytes]]:
    """
    Parse the inner sub-frame stream packed inside an A4 (POLL_RESPONSE) payload.

    The payload is a sequence of inner frames, each starting with DLE STX (10 02)
    and delimited by ACK (41) before the next DLE STX. The inner frame body
    (after the 10 02 preamble) has the same 5-byte header layout as outer frames:
        [0] 00
        [1] 10
        [2] SUB
        [3] OFFSET_HI (page_key high byte)
        [4] OFFSET_LO (page_key low byte)
        [5+] data

    Returns a list of (sub, page_key, data_bytes) — one entry per inner frame,
    keeping ALL occurrences (not deduped), so the caller can decide how to match.
    """
    DLE, STX, ACK = 0x10, 0x02, 0x41
    results: list[tuple[int, int, bytes]] = []

    # Collect start positions of each inner frame (offset of the DLE STX)
    starts: list[int] = []
    i = 0
    # First frame may begin at offset 0 with DLE STX directly
    if len(payload) >= 2 and payload[0] == DLE and payload[1] == STX:
        starts.append(0)
        i = 2
    while i < len(payload) - 2:
        if payload[i] == ACK and payload[i + 1] == DLE and payload[i + 2] == STX:
            starts.append(i + 1)  # point at the DLE
            i += 3
        else:
            i += 1

    for k, s in enumerate(starts):
        # Body starts after DLE STX (2 bytes)
        body_start = s + 2
        body_end = starts[k + 1] - 1 if k + 1 < len(starts) else len(payload)
        body = payload[body_start:body_end]
        if len(body) < 5:
            continue
        # body[0]=0x00, body[1]=0x10, body[2]=SUB, body[3]=OFFSET_HI, body[4]=OFFSET_LO
        sub = body[2]
        page_key = (body[3] << 8) | body[4]
        data = body[5:]
        results.append((sub, page_key, data))

    return results
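The framing can be exercised with a synthetic payload. This is a compact restatement of the scan loop above (the SUB, page_key, and data values are invented for illustration):

```python
def extract_inner(payload: bytes) -> list[tuple[int, int, bytes]]:
    """Minimal restatement of the A4 scan: DLE STX starts, ACK separators."""
    DLE, STX, ACK = 0x10, 0x02, 0x41
    starts, i = [], 0
    if payload[:2] == bytes([DLE, STX]):
        starts.append(0)
        i = 2
    while i < len(payload) - 2:
        if payload[i] == ACK and payload[i + 1] == DLE and payload[i + 2] == STX:
            starts.append(i + 1)
            i += 3
        else:
            i += 1
    out = []
    for k, s in enumerate(starts):
        end = starts[k + 1] - 1 if k + 1 < len(starts) else len(payload)
        body = payload[s + 2:end]       # skip the 10 02 preamble
        if len(body) >= 5:
            out.append((body[2], (body[3] << 8) | body[4], body[5:]))
    return out

# Two invented inner frames: SUB 0x64 pk 0x0001 data AA, SUB 0x65 pk 0x0002 data BB
inner1 = bytes([0x10, 0x02, 0x00, 0x10, 0x64, 0x00, 0x01, 0xAA])
inner2 = bytes([0x10, 0x02, 0x00, 0x10, 0x65, 0x00, 0x02, 0xBB])
payload = inner1 + bytes([0x41]) + inner2

assert extract_inner(payload) == [(0x64, 0x0001, b"\xaa"), (0x65, 0x0002, b"\xbb")]
```

Note the heuristic nature of the delimiter scan: a data byte sequence of ACK DLE STX inside an inner frame would false-split, which is inherent to the format rather than a bug in the scan.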
def _diff_a4_payloads(payload_a: bytes, payload_b: bytes) -> list[ByteDiff]:
    """
    Diff two A4 container payloads at the inner sub-frame level.

    Inner frames are matched by (sub, page_key). For each pair of matching
    inner frames whose data differs, the changed bytes are reported with
    payload_offset encoded as: (sub << 16) | byte_offset_in_data.

    Inner frames present in one payload but not the other are reported as a
    single synthetic ByteDiff entry with before/after = -1 / -2 respectively,
    and field_name describing the missing inner SUB (payload_offset is then
    (sub << 16) | page_key).

    The high-16 / low-16 split in payload_offset lets the GUI render these
    differently if desired, but they degrade gracefully in the existing renderer.
    """
    frames_a = _extract_a4_inner_frames(payload_a)
    frames_b = _extract_a4_inner_frames(payload_b)

    # Build multimap: (sub, page_key) → list of data blobs, preserving order
    def index(frames):
        idx: dict[tuple[int, int], list[bytes]] = {}
        for sub, pk, data in frames:
            idx.setdefault((sub, pk), []).append(data)
        return idx

    idx_a = index(frames_a)
    idx_b = index(frames_b)

    all_keys = sorted(set(idx_a) | set(idx_b))
    diffs: list[ByteDiff] = []

    for sub, pk in all_keys:
        list_a = idx_a.get((sub, pk), [])
        list_b = idx_b.get((sub, pk), [])

        # Pair up by position; extras are treated as added/removed
        n = max(len(list_a), len(list_b))
        for pos in range(n):
            da = list_a[pos] if pos < len(list_a) else None
            db = list_b[pos] if pos < len(list_b) else None

            if da is None:
                # Inner frame added in B
                entry = SUB_TABLE.get(sub)
                name = entry[0] if entry else f"UNKNOWN_{sub:02X}"
                diffs.append(ByteDiff(
                    payload_offset=(sub << 16) | (pk & 0xFFFF),
                    before=-1,
                    after=-2,
                    field_name=f"[A4 inner] SUB {sub:02X} ({name}) pk={pk:04X} added",
                ))
                continue
            if db is None:
                # Inner frame removed in B
                entry = SUB_TABLE.get(sub)
                name = entry[0] if entry else f"UNKNOWN_{sub:02X}"
                diffs.append(ByteDiff(
                    payload_offset=(sub << 16) | (pk & 0xFFFF),
                    before=-2,
                    after=-1,
                    field_name=f"[A4 inner] SUB {sub:02X} ({name}) pk={pk:04X} removed",
                ))
                continue

            # Both present — byte-diff the data sections
            da_m = _mask_noisy(sub, da)
            db_m = _mask_noisy(sub, db)
            if da_m == db_m:
                continue
            max_len = max(len(da_m), len(db_m))
            for off in range(max_len):
                ba = da_m[off] if off < len(da_m) else None
                bb = db_m[off] if off < len(db_m) else None
                if ba != bb:
                    field = lookup_field_name(sub, pk, off + HEADER_LEN)
                    diffs.append(ByteDiff(
                        payload_offset=(sub << 16) | (off & 0xFFFF),
                        before=ba if ba is not None else -1,
                        after=bb if bb is not None else -1,
                        field_name=field or f"[A4:{sub:02X} pk={pk:04X}] off={off}",
                    ))

    return diffs
def diff_sessions(sess_a: Session, sess_b: Session) -> list[FrameDiff]:
    """
    Compare two sessions frame-by-frame, matched by (sub, page_key).
@@ -370,6 +637,16 @@ def diff_sessions(sess_a: Session, sess_b: Session) -> list[FrameDiff]:
        af_a = idx_a[key]
        af_b = idx_b[key]

        # A4 is a container frame — diff at the inner sub-frame level to avoid
        # phase-shift noise when the number of embedded records differs.
        if sub == 0xA4:
            diffs = _diff_a4_payloads(af_a.frame.payload, af_b.frame.payload)
            if diffs:
                entry = SUB_TABLE.get(sub)
                sub_name = entry[0] if entry else f"UNKNOWN_{sub:02X}"
                results.append(FrameDiff(sub=sub, page_key=page_key, sub_name=sub_name, diffs=diffs))
            continue

        data_a = _mask_noisy(sub, _get_data_section(af_a))
        data_b = _mask_noisy(sub, _get_data_section(af_b))

@@ -425,11 +702,7 @@ def render_session_report(
    n_bw = len(session.bw_frames)
    n_s3 = len(session.s3_frames)
    total = n_bw + n_s3
    is_complete = any(
        af.header is not None and af.header.sub == SESSION_CLOSE_SUB
        for af in session.bw_frames
    )
    status = "" if is_complete else " [IN PROGRESS]"
    status = "" if session.is_complete() else " [IN PROGRESS]"

    lines.append(f"{'='*72}")
    lines.append(f"SESSION {session.index}{status}")
@@ -589,11 +862,7 @@ def render_claude_export(
    lines += ["## Capture Summary", ""]
    lines.append(f"Sessions found: {len(sessions)}")
    for sess in sessions:
        is_complete = any(
            af.header is not None and af.header.sub == SESSION_CLOSE_SUB
            for af in sess.bw_frames
        )
        status = "complete" if is_complete else "partial/in-progress"
        status = "complete" if sess.is_complete() else "partial/in-progress"
        n_bw, n_s3 = len(sess.bw_frames), len(sess.s3_frames)
        changed = len(diffs[sess.index] or []) if sess.index < len(diffs) else 0
        changed_str = f" ({changed} SUBs changed vs prev)" if sess.index > 0 else " (baseline)"
@@ -861,14 +1130,7 @@ def live_loop(

        # Check for session close
        all_sessions = split_into_sessions(bw_annotated, s3_annotated)
        # A complete session has the closing 0x74
        complete_sessions = [
            s for s in all_sessions
            if any(
                af.header is not None and af.header.sub == SESSION_CLOSE_SUB
                for af in s.bw_frames
            )
        ]
        complete_sessions = [s for s in all_sessions if s.is_complete()]

        # Emit reports for newly completed sessions
        for sess in complete_sessions[len(sessions):]:
@@ -899,13 +1161,7 @@ def live_loop(
        s3_annotated = annotate_frames(s3_frames_raw, "S3")
        bw_annotated = annotate_frames(bw_frames_raw, "BW")
        all_sessions = split_into_sessions(bw_annotated, s3_annotated)
        incomplete = [
            s for s in all_sessions
            if not any(
                af.header is not None and af.header.sub == SESSION_CLOSE_SUB
                for af in s.bw_frames
            )
        ]
        incomplete = [s for s in all_sessions if not s.is_complete()]
        for sess in incomplete:
            report = render_session_report(sess, diffs=None, prev_session_index=None)
            out_path = write_report(sess, report, outdir)

@@ -109,6 +109,28 @@ def _try_validate_sum8(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    return None


def _try_validate_sum8_large(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    Large BW->S3 write frame checksum (SUBs 68, 69, 71, 82, 1A with data).

    Formula: (sum(b for b in body[2:-1] if b != 0x10) + 0x10) & 0xFF
      - Starts from byte [2], skipping CMD (0x10) and DLE (0x10) at [0][1]
      - Skips all 0x10 bytes in the covered range
      - Adds 0x10 as a constant offset
      - body[-1] is the checksum byte

    Confirmed across 20 frames from two independent captures (2026-03-12).
    """
    if len(body) < 3:
        return None
    payload = body[:-1]
    chk = body[-1]
    calc = (sum(b for b in payload[2:] if b != 0x10) + 0x10) & 0xFF
    if calc == chk:
        return payload, bytes([chk]), "SUM8_LARGE"
    return None

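A worked instance of the formula above. The frame bytes are invented; only the arithmetic matters:

```python
def sum8_large(body: bytes) -> int:
    """SUM8_LARGE as defined above: skip the two 0x10 lead bytes and all DLEs."""
    return (sum(b for b in body[2:-1] if b != 0x10) + 0x10) & 0xFF

# Invented frame: CMD DLE, then SUB 0x68 with data bytes 01 10 02, then checksum
frame = bytes([0x10, 0x10, 0x68, 0x01, 0x10, 0x02, 0x7B])
# Covered bytes: 68 01 02 (the 0x10 data byte is skipped) → 0x6B + 0x10 = 0x7B
assert sum8_large(frame) == frame[-1]
```

Skipping 0x10 in the sum makes the checksum invariant under DLE stuffing, which is presumably why the firmware computes it this way.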
def _try_validate_crc16(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    body = payload + crc16(2 bytes)
@@ -137,11 +159,16 @@ def validate_bw_body_auto(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    Try to interpret the tail of body as a checksum in several ways.
    Return (payload, checksum_bytes, checksum_type) if any match; else None.
    """
    # Prefer SUM8 first (it fits small frames and is cheap)
    # Prefer plain SUM8 first (small frames: POLL, read commands)
    hit = _try_validate_sum8(body)
    if hit:
        return hit

    # Large BW->S3 write frames (SUBs 68, 69, 71, 82, 1A with data)
    hit = _try_validate_sum8_large(body)
    if hit:
        return hit

    # Then CRC16 variants
    hit = _try_validate_crc16(body)
    if hit:
@@ -321,13 +348,8 @@ def parse_bw(blob: bytes, trailer_len: int, validate_checksum: bool) -> List[Fra
            i += 1
            continue

        # AFTER_DLE
        if b == DLE:
            body.append(DLE)  # 10 10 => literal 10
        else:
            # Robust recovery: treat as literal DLE + byte
            body.append(DLE)
            body.append(b)
        # AFTER_DLE: DLE XX => literal XX for any XX (full DLE stuffing)
        body.append(b)
        state = IN_FRAME
        i += 1

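The "DLE XX => literal XX" rule adopted in the last hunk can be sketched in isolation. This is only the AFTER_DLE unstuffing step, not the full frame state machine (frame delimiters are handled before this rule applies):

```python
DLE = 0x10

def unstuff(stream: bytes) -> bytes:
    """DLE XX → literal XX, per the full-DLE-stuffing rule above."""
    out = bytearray()
    i = 0
    while i < len(stream):
        if stream[i] == DLE and i + 1 < len(stream):
            out.append(stream[i + 1])  # 10 10 → 10, and 10 XX → XX generally
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return bytes(out)

# Classic doubled-DLE case:
assert unstuff(bytes([0x41, 0x10, 0x10, 0x42])) == bytes([0x41, 0x10, 0x42])
# Under the new rule any escaped byte is taken literally, not re-prefixed:
assert unstuff(bytes([0x10, 0x43, 0x44])) == bytes([0x43, 0x44])
```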
1538  seismo_lab.py    Normal file — diff suppressed because it is too large

0     sfm/__init__.py  Normal file

351   sfm/server.py    Normal file
@@ -0,0 +1,351 @@
"""
sfm/server.py — Seismograph Field Module REST API

Wraps the minimateplus library in a small FastAPI service.
Terra-view proxies /api/sfm/* to this service (same pattern as SLMM at :8100).

Default port: 8200

Endpoints
---------
GET  /health              Service heartbeat — no device I/O
GET  /device/info         POLL + serial number + full config read
GET  /device/events       Download all stored events (headers + peak values)
POST /device/connect      Explicit connect/identify (same as /device/info)
GET  /device/event/{idx}  Single event by index (header + waveform record)

Transport query params (supply one set):
    Serial (direct RS-232 cable):
        port — serial port name (e.g. COM5, /dev/ttyUSB0)
        baud — baud rate (default 38400)

    TCP (modem / ACH Auto Call Home):
        host     — IP address or hostname of the modem or ACH relay
        tcp_port — TCP port number (default 12345, Blastware default)

Each call opens the connection, does its work, then closes it.
(Stateless / reconnect-per-call, matching Blastware's observed behaviour.)

Run with:
    python -m uvicorn sfm.server:app --host 0.0.0.0 --port 8200 --reload
or:
    python sfm/server.py
"""

from __future__ import annotations

import logging
import sys
from typing import Optional

# FastAPI / Pydantic
try:
    from fastapi import FastAPI, HTTPException, Query
    from fastapi.responses import JSONResponse
    import uvicorn
except ImportError:
    print(
        "fastapi and uvicorn are required for the SFM server.\n"
        "Install them with: pip install fastapi uvicorn",
        file=sys.stderr,
    )
    sys.exit(1)

from minimateplus import MiniMateClient
from minimateplus.protocol import ProtocolError
from minimateplus.models import DeviceInfo, Event, PeakValues, ProjectInfo, Timestamp
from minimateplus.transport import TcpTransport, DEFAULT_TCP_PORT

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)-7s %(name)s %(message)s",
    datefmt="%H:%M:%S",
)
log = logging.getLogger("sfm.server")

# ── FastAPI app ────────────────────────────────────────────────────────────────

app = FastAPI(
    title="Seismograph Field Module (SFM)",
    description=(
        "REST API for Instantel MiniMate Plus seismographs.\n"
        "Wraps the minimateplus RS-232 protocol library.\n"
        "Proxied by terra-view at /api/sfm/*."
    ),
    version="0.1.0",
)

# ── Serialisers ────────────────────────────────────────────────────────────────
# Plain dict helpers — avoids a Pydantic dependency in the library layer.

def _serialise_timestamp(ts: Optional[Timestamp]) -> Optional[dict]:
    if ts is None:
        return None
    return {
        "year": ts.year,
        "month": ts.month,
        "day": ts.day,
        "clock_set": ts.clock_set,
        "display": str(ts),
    }


def _serialise_peak_values(pv: Optional[PeakValues]) -> Optional[dict]:
    if pv is None:
        return None
    return {
        "tran_in_s": pv.tran,
        "vert_in_s": pv.vert,
        "long_in_s": pv.long,
        "micl_psi": pv.micl,
    }


def _serialise_project_info(pi: Optional[ProjectInfo]) -> Optional[dict]:
    if pi is None:
        return None
    return {
        "setup_name": pi.setup_name,
        "project": pi.project,
        "client": pi.client,
        "operator": pi.operator,
        "sensor_location": pi.sensor_location,
        "notes": pi.notes,
    }


def _serialise_device_info(info: DeviceInfo) -> dict:
    return {
        "serial": info.serial,
        "firmware_version": info.firmware_version,
        "firmware_minor": info.firmware_minor,
        "dsp_version": info.dsp_version,
        "manufacturer": info.manufacturer,
        "model": info.model,
    }


def _serialise_event(ev: Event) -> dict:
    return {
        "index": ev.index,
        "timestamp": _serialise_timestamp(ev.timestamp),
        "sample_rate": ev.sample_rate,
        "record_type": ev.record_type,
        "peak_values": _serialise_peak_values(ev.peak_values),
        "project_info": _serialise_project_info(ev.project_info),
    }

# ── Transport factory ─────────────────────────────────────────────────────────

def _build_client(
    port: Optional[str],
    baud: int,
    host: Optional[str],
    tcp_port: int,
) -> MiniMateClient:
    """
    Return a MiniMateClient configured for either serial or TCP transport.

    TCP takes priority if *host* is supplied; otherwise *port* (serial) is used.
    Raises HTTPException(422) if neither is provided.
    """
    if host:
        # TCP / modem / ACH path — use a longer timeout to survive cold boots
        # (unit takes 5-15 s to wake from RS-232 line assertion over cellular)
        transport = TcpTransport(host, port=tcp_port)
        log.debug("TCP transport: %s:%d", host, tcp_port)
        return MiniMateClient(transport=transport, timeout=30.0)
    elif port:
        # Direct serial path
        log.debug("Serial transport: %s baud=%d", port, baud)
        return MiniMateClient(port, baud)
    else:
        raise HTTPException(
            status_code=422,
            detail=(
                "Specify either 'port' (serial, e.g. ?port=COM5) "
                "or 'host' (TCP, e.g. ?host=192.168.1.50&tcp_port=12345)"
            ),
        )
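From the client side, the transport choice is just a matter of which query parameters are sent. A sketch of building those URLs with the standard library — the base URL is an assumption (service running locally on its default port), and no request is actually made:

```python
from urllib.parse import urlencode

BASE = "http://localhost:8200"  # assumed local SFM instance

def sfm_url(endpoint: str, **params) -> str:
    return f"{BASE}{endpoint}?{urlencode(params)}"

# Serial transport selected via ?port=…
assert sfm_url("/device/info", port="COM5", baud=38400) == \
    "http://localhost:8200/device/info?port=COM5&baud=38400"
# TCP/modem transport selected via ?host=… (takes priority in _build_client)
assert sfm_url("/device/events", host="203.0.113.5", tcp_port=12345) == \
    "http://localhost:8200/device/events?host=203.0.113.5&tcp_port=12345"
```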

def _is_tcp(host: Optional[str]) -> bool:
    return bool(host)


def _run_with_retry(fn, *, is_tcp: bool):
    """
    Call fn() and, for TCP connections only, retry once on ProtocolError.

    Rationale: when a MiniMate Plus is cold (just had its serial lines asserted
    by the modem or a local bridge), it takes 5-10 seconds to boot before it
    will respond to POLL_PROBE. The first request may time out during that boot
    window; a single automatic retry is enough to recover once the unit is up.

    Serial connections are NOT retried — a timeout there usually means a real
    problem (wrong port, wrong baud, cable unplugged).
    """
    try:
        return fn()
    except ProtocolError:
        if not is_tcp:
            raise
        log.info("TCP poll timed out (unit may have been cold) — retrying once")
        return fn()  # let any second failure propagate normally

# ── Endpoints ──────────────────────────────────────────────────────────────────
|
||||
|
||||
@app.get("/health")
|
||||
def health() -> dict:
|
||||
"""Service heartbeat. No device I/O."""
|
||||
return {"status": "ok", "service": "sfm", "version": "0.1.0"}
|
||||
|
||||
|
||||
@app.get("/device/info")
|
||||
def device_info(
|
||||
port: Optional[str] = Query(None, description="Serial port (e.g. COM5, /dev/ttyUSB0)"),
|
||||
baud: int = Query(38400, description="Serial baud rate (default 38400)"),
|
||||
host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay (e.g. 203.0.113.5)"),
|
||||
tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
|
||||
) -> dict:
|
||||
"""
|
||||
Connect to the device, perform the POLL startup handshake, and return
|
||||
identity information (serial number, firmware version, model).
|
||||
|
||||
Supply either *port* (serial) or *host* (TCP/modem).
|
||||
Equivalent to POST /device/connect — provided as GET for convenience.
|
||||
"""
|
||||
log.info("GET /device/info port=%s host=%s tcp_port=%d", port, host, tcp_port)
|
||||
|
||||
try:
|
||||
def _do():
|
||||
with _build_client(port, baud, host, tcp_port) as client:
|
||||
return client.connect()
|
||||
info = _run_with_retry(_do, is_tcp=_is_tcp(host))
|
||||
except HTTPException:
|
||||
raise
|
||||
except ProtocolError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
|
||||
except OSError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
|
||||
except Exception as exc:
|
||||
raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc
|
||||
|
||||
return _serialise_device_info(info)
|
||||
|
||||
|
||||
@app.post("/device/connect")
def device_connect(
    port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
    baud: int = Query(38400, description="Serial baud rate"),
    host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
    tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
) -> dict:
    """
    Connect to the device and return identity. POST variant for terra-view
    compatibility with the SLMM proxy pattern.
    """
    return device_info(port=port, baud=baud, host=host, tcp_port=tcp_port)

@app.get("/device/events")
def device_events(
    port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
    baud: int = Query(38400, description="Serial baud rate"),
    host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
    tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
) -> dict:
    """
    Connect to the device, read the event index, and download all stored
    events (event headers + full waveform records with peak values).

    Supply either *port* (serial) or *host* (TCP/modem).

    This does NOT download raw ADC waveform samples — those are large and
    fetched separately via GET /device/event/{idx}/waveform (future endpoint).
    """
    log.info("GET /device/events port=%s host=%s", port, host)

    try:
        def _do():
            with _build_client(port, baud, host, tcp_port) as client:
                return client.connect(), client.get_events()

        info, events = _run_with_retry(_do, is_tcp=_is_tcp(host))
    except HTTPException:
        raise
    except ProtocolError as exc:
        raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
    except OSError as exc:
        raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc

    return {
        "device": _serialise_device_info(info),
        "event_count": len(events),
        "events": [_serialise_event(ev) for ev in events],
    }

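Each endpoint repeats the same except ladder. Isolated as a plain function, with `ProtocolError` as a local stand-in for the client library's exception, the status mapping looks like this:

```python
class ProtocolError(Exception):
    """Local stand-in for the minimateplus protocol exception."""


def map_device_error(exc: Exception) -> tuple[int, str]:
    # Protocol faults and transport faults are upstream failures, so they map
    # to 502 Bad Gateway; anything unexpected maps to 500 Internal Server Error.
    # This mirrors the except ladder in the endpoints above (HTTPException is
    # re-raised untouched before this mapping applies).
    if isinstance(exc, ProtocolError):
        return 502, f"Protocol error: {exc}"
    if isinstance(exc, OSError):
        return 502, f"Connection error: {exc}"
    return 500, f"Device error: {exc}"


status, detail = map_device_error(ProtocolError("bad checksum"))
print(status, detail)  # 502 Protocol error: bad checksum
```

Treating device-side failures as 502 rather than 500 lets a proxy or monitoring layer distinguish "the field unit misbehaved" from "the API server itself has a bug".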
@app.get("/device/event/{index}")
def device_event(
    index: int,
    port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
    baud: int = Query(38400, description="Serial baud rate"),
    host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
    tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
) -> dict:
    """
    Download a single event by index (0-based).

    Supply either *port* (serial) or *host* (TCP/modem).
    Performs: POLL startup → event index → event header → waveform record.
    """
    log.info("GET /device/event/%d port=%s host=%s", index, port, host)

    try:
        def _do():
            with _build_client(port, baud, host, tcp_port) as client:
                client.connect()
                return client.get_events()

        events = _run_with_retry(_do, is_tcp=_is_tcp(host))
    except HTTPException:
        raise
    except ProtocolError as exc:
        raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
    except OSError as exc:
        raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc

    matching = [ev for ev in events if ev.index == index]
    if not matching:
        raise HTTPException(
            status_code=404,
            detail=f"Event index {index} not found on device",
        )

    return _serialise_event(matching[0])

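The 0-based lookup and 404 behaviour above can be sketched in isolation. The `Event` dataclass here is a hypothetical stand-in for the library's event record, and `LookupError` stands in for `HTTPException(404)`:

```python
from dataclasses import dataclass


@dataclass
class Event:
    index: int  # 0-based index as reported by the device's event index


def find_event(events: list[Event], index: int) -> Event:
    # Same lookup as the endpoint: filter on the device-reported index rather
    # than positional access, then treat an empty match as "not found".
    matching = [ev for ev in events if ev.index == index]
    if not matching:
        raise LookupError(f"Event index {index} not found on device")
    return matching[0]


# Device indices need not be contiguous (events can be deleted on the unit),
# which is why events[index] would be wrong here.
events = [Event(0), Event(2), Event(3)]
print(find_event(events, 2).index)  # 2
```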
# ── Entry point ────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import argparse

    ap = argparse.ArgumentParser(description="SFM — Seismograph Field Module API server")
    ap.add_argument("--host", default="0.0.0.0", help="Bind address (default: 0.0.0.0)")
    ap.add_argument("--port", type=int, default=8200, help="Port (default: 8200)")
    ap.add_argument("--reload", action="store_true", help="Enable auto-reload (dev mode)")
    args = ap.parse_args()

    log.info("Starting SFM server on %s:%d", args.host, args.port)
    uvicorn.run(
        "sfm.server:app",
        host=args.host,
        port=args.port,
        reload=args.reload,
    )