# Compare commits — `43c9c8b3a3...main` (85 commits)
## .gitignore (new file, vendored, +28 lines)
```
/bridges/captures/
/example-events/

/manuals/

# Python bytecode
__pycache__/
*.py[cod]

# Virtual environments
.venv/
venv/
env/

# Editor / OS
.vscode/
*.swp
.DS_Store
Thumbs.db

# Analyzer outputs
*.report
claude_export_*.md

# Frame database
*.db
*.db-wal
*.db-shm
```
## CHANGELOG.md (new file, +149 lines)
# Changelog

All notable changes to seismo-relay are documented here.

---
## v0.7.0 — 2026-04-03

### Added

- **Raw ADC waveform decode — `_decode_a5_waveform(frames_data, event)`** in `client.py`.
  Parses the complete set of SUB 5A A5 response frames into per-channel time-series:
  - Reads the STRT record from A5[0] (bytes 7+): extracts `total_samples` (BE uint16 at +8),
    `pretrig_samples` (BE uint16 at +16), and `rectime_seconds` (uint8 at +18) into
    `event.total_samples / pretrig_samples / rectime_seconds`.
  - Skips the 6-byte preamble (`00 00 ff ff ff ff`) that follows the 21-byte STRT header;
    waveform data begins at `strt_pos + 27`.
  - Strips the 8-byte per-frame counter header from A5[1–6, 8] before appending waveform bytes.
  - Skips A5[7] (metadata-only) and A5[9] (terminator).
  - **Cross-frame alignment correction**: accumulates `running_offset % 8` across all frames
    and discards `(8 − align) % 8` leading bytes per frame to re-align to a T/V/L/M boundary.
    Required because individual frame waveform payloads are not always multiples of 8 bytes.
  - Decodes as 4-channel interleaved signed 16-bit LE at 8 bytes per sample-set:
    bytes 0–1 = Tran, 2–3 = Vert, 4–5 = Long, 6–7 = Mic.
  - Stores the result in `event.raw_samples = {"Tran": [...], "Vert": [...], "Long": [...], "Mic": [...]}`.
- **`download_waveform(event)` public method** on `MiniMateClient`.
  Issues a full SUB 5A stream with `stop_after_metadata=False`, then calls
  `_decode_a5_waveform()` to populate `event.raw_samples` and `event.total_samples /
  pretrig_samples / rectime_seconds`. Previously only metadata frames were fetched during
  `get_events()`; raw waveform data is now available on demand.
- **`Event` model new fields** (`models.py`): `total_samples`, `pretrig_samples`,
  `rectime_seconds` (from the STRT record), and `_waveform_key` (4-byte key stored during
  `get_events()` for later use by `download_waveform()`).
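The per-sample-set layout above can be sketched as a small decoder. This is illustrative only — the real `_decode_a5_waveform` also handles the STRT header, per-frame counter stripping, and the cross-frame alignment correction, which are omitted here:

```python
import struct

def decode_interleaved(waveform: bytes) -> dict:
    """Split 4-channel interleaved signed 16-bit LE data into per-channel lists.

    Per 8-byte sample-set: bytes 0-1 = Tran, 2-3 = Vert, 4-5 = Long, 6-7 = Mic.
    """
    channels = {"Tran": [], "Vert": [], "Long": [], "Mic": []}
    usable = len(waveform) - (len(waveform) % 8)  # ignore a trailing partial set
    for off in range(0, usable, 8):
        t, v, l, m = struct.unpack_from("<4h", waveform, off)
        channels["Tran"].append(t)
        channels["Vert"].append(v)
        channels["Long"].append(l)
        channels["Mic"].append(m)
    return channels
```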
### Protocol / Documentation

- **SUB 5A A5[0] STRT record layout confirmed** (✅ 2026-04-03, 4-2-26 blast capture):
  - STRT header is 21 bytes: `b"STRT"` + length fields + `total_samples` (BE uint16 at +8) +
    `pretrig_samples` (BE uint16 at +16) + `rectime_seconds` (uint8 at +18).
  - Followed by a 6-byte preamble: `00 00 ff ff ff ff`. Waveform begins at `strt_pos + 27`.
  - Confirmed: 4-2-26 blast → `total_samples=9306`, `pretrig_samples=298`, `rectime_seconds=70`.
- **Blast/waveform mode A5 format confirmed** (✅ 2026-04-03, 4-2-26 blast capture):
  4-channel interleaved int16 LE at 8 bytes per sample-set; cross-frame alignment correction
  required. 948 of 9306 total sample-sets captured via `stop_after_metadata=True` (10 frames).
- **Noise/histogram mode A5 format — endianness corrected** (✅ 2026-04-03, 3-31-26 capture):
  32-byte block samples are signed 16-bit **little-endian** (previously documented as BE).
  `0a 00` → LE int16 = 10 (correct noise floor); BE would give 2560 (wrong).
- Protocol reference §7.6 rewritten — split into §7.6.1 (Blast/Waveform mode) and §7.6.2
  (Noise/Histogram mode), each with confirmed field layouts and open questions noted.
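The STRT layout above can be sketched as a standalone parse (a sketch, not the library code — field offsets are taken relative to the start of `b"STRT"` as described, and the surrounding A5 frame handling is omitted):

```python
import struct

def parse_strt(a5_frame0: bytes):
    """Extract total_samples / pretrig_samples / rectime_seconds from the
    21-byte STRT record, plus the waveform start offset
    (21-byte header + 6-byte preamble = STRT start + 27)."""
    pos = a5_frame0.find(b"STRT")
    if pos < 0:
        return None
    total_samples = struct.unpack_from(">H", a5_frame0, pos + 8)[0]
    pretrig_samples = struct.unpack_from(">H", a5_frame0, pos + 16)[0]
    rectime_seconds = a5_frame0[pos + 18]
    return total_samples, pretrig_samples, rectime_seconds, pos + 27
```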
---

## v0.6.0 — 2026-04-02

### Added

- **True event-time metadata via SUB 5A bulk waveform stream** — `get_events()` now issues a SUB 5A request after each SUB 0C download, reads the A5 response frames, and extracts the `Client:`, `User Name:`, and `Seis Loc:` fields as they existed at the moment the event was recorded. Previously these fields were backfilled from the current compliance config (SUB 1A), which reflects today's setup, not the setup active when the event triggered.
  - `build_5a_frame(offset_word, raw_params)` in `framing.py` — reproduces Blastware's exact wire format for SUB 5A requests: a raw (non-DLE-stuffed) `offset_hi`, DLE-stuffed params, and a DLE-aware checksum where `10 XX` pairs count only `XX`.
  - `bulk_waveform_params()` returns 11 bytes (extra trailing `0x00` confirmed from the 1-2-26 BW wire capture).
  - `read_bulk_waveform_stream(key4, *, stop_after_metadata=True, max_chunks=32)` in `protocol.py` — loops sending chunk requests (counter increments `0x0400` per chunk), stops early when `b"Project:"` is found, then sends a termination frame.
  - `_decode_a5_metadata_into(frames_data, event)` in `client.py` — needle-searches A5 frame data for `Project:`, `Client:`, `User Name:`, `Seis Loc:`, and `Extended Notes`, and overwrites `event.project_info`.
- **`get_events()` sequence extended** — now `1E → 0A → 0C → 5A → 1F` per event.

### Fixed

- **Compliance config (SUB 1A) channel block missing** — an orphaned `self._send(build_bw_frame(SUB_COMPLIANCE, 0x2A, _DATA_PARAMS))` before the B/C/D receive loop had no corresponding `recv_one()`, shifting all subsequent receives one step behind and leaving frame D's channel-block data (trigger_level_geo, alarm_level_geo, max_range_geo) unread. Removed the orphaned send. Total config bytes received are now correctly ~2126 (was ~1071).
- **Compliance config anchor search range** — `_decode_compliance_config_into()` searched `cfg[40:100]` for the sample-rate/record-time anchor. With the orphaned-send bug fixed, the 44 bytes of padding it had been adding are gone, and the anchor now appears at `cfg[11]`. The search was widened to `cfg[0:150]` to be robust to future layout shifts.
- Removed byte-content deduplication from `read_compliance_config()` — it was masking the real receive-ordering bug.

### Protocol / Documentation

- **SUB 5A frame format confirmed** — the `offset_hi` byte (`0x10`) must be sent raw (not DLE-stuffed); the checksum is DLE-aware (only the second byte of a `10 XX` pair is summed). The standard `build_bw_frame` DLE-stuffs `0x10` incorrectly for 5A — a dedicated `build_5a_frame` is required.
- **Event-time metadata source confirmed** — the `Client:`, `User Name:`, and `Seis Loc:` strings are present in A5 frame 7 of the bulk waveform stream (SUB 5A), not in the 210-byte SUB 0C waveform record. They reflect the compliance setup as it was when the event was stored on the device.
---

## v0.5.0 — 2026-03-31

### Added

- **Console tab in `seismo_lab.py`** — direct device connection without the bridge subprocess.
  - Serial and TCP transport selectable via radio buttons.
  - Four one-click commands: POLL, Serial #, Full Config, Event Index.
  - Colour-coded scrolling output: TX (blue), RX raw hex (teal), parsed/decoded (green), errors (red).
  - Save Log and Send to Analyzer buttons; logs auto-saved to `bridges/captures/console_<ts>.log`.
  - Queue/`after(100)` pattern — no UI blocking or performance impact.
- **`minimateplus` package** — clean Python client library for the MiniMate Plus S3 protocol.
  - `SerialTransport` and `TcpTransport` (for Sierra Wireless RV50/RV55 cellular modems).
  - `MiniMateProtocol` — DLE frame parser/builder, two-step paged reads, checksum validation.
  - `MiniMateClient` — high-level client: `connect()`, `get_serial()`, `get_config()`, `get_events()`.
- **TCP/cellular transport** (`TcpTransport`) — connect to field units via Sierra Wireless RV50/RV55 modems over cellular.
  - `read_until_idle(idle_gap=1.5s)` to handle modem data-forwarding buffer delay.
  - Confirmed working end-to-end: TCP → RV50/RV55 → RS-232 → MiniMate Plus.
- **`bridges/tcp_serial_bridge.py`** — local TCP-to-serial bridge for bench testing `TcpTransport` without a cellular modem.
- **SFM REST server** (`sfm/server.py`) — FastAPI server with device info, event list, and event record endpoints over both serial and TCP.

### Fixed

- `protocol.py` `startup()` was using a hardcoded `POLL_RECV_TIMEOUT = 10.0` constant, ignoring the configurable `self._recv_timeout`. Fixed to use `self._recv_timeout` throughout.
- `sfm/server.py` now retries once on `ProtocolError` for TCP connections to handle cold-boot timing on first connect.

### Protocol / Documentation

- **Sierra Wireless RV50/RV55 modem config** — confirmed required ACEmanager settings: Quiet Mode = Enable, Data Forwarding Timeout = 1, TCP Connect Response Delay = 0. With Quiet Mode disabled, the modem injects `RING\r\nCONNECT\r\n` onto the serial line, breaking the S3 handshake.
- **Calibration year** confirmed at SUB FE (Full Config) destuffed payload offset 0x56–0x57 (uint16 BE). `0x07E7` = 2023, `0x07E9` = 2025.
- **`"Operating System"` boot string** — 16-byte UART boot message captured on cold start before the unit enters DLE-framed mode. The parser handles it correctly by scanning for DLE+STX.
- The RV50/RV55 sends `RING`/`CONNECT` over TCP to the calling client even with Quiet Mode enabled — this is normal behaviour; the parser discards it.
---

## v0.4.0 — 2026-03-12

### Added

- **`seismo_lab.py`** — combined Bridge + Analyzer GUI. Single window with two tabs; bridge start auto-wires live mode in the Analyzer.
- **`frame_db.py`** — SQLite frame database. Captures accumulate over time; the Query DB tab searches across all sessions.
- **`bridges/s3-bridge/proxy.py`** — bridge proxy module.
- Large BW→S3 write frame checksum algorithm confirmed and implemented (`SUM8` of payload `[2:-1]` skipping `0x10` bytes, plus constant `0x10`, mod 256).
- SUB `A4` identified as a composite container frame with embedded inner frames; `_extract_a4_inner_frames()` and `_diff_a4_payloads()` reduce diff noise from 2300 → 17 meaningful entries.

### Fixed

- BAD CHK false positives on BW POLL frames — the BW frame terminator `03 41` was being included in the de-stuffed payload. Fixed to strip it correctly.
- Aux Trigger read location confirmed at SUB FE offset `0x0109`.

---

## v0.3.0 — 2026-03-09

### Added

- Record time confirmed at SUB E5 page 2 offset `+0x28` as float32 BE.
- Trigger Sample Width confirmed at BW→S3 write frame SUB `0x82`, destuffed payload offset `[22]`.
- Mode-gating documented: several settings only appear on the wire when the appropriate mode is active.

### Fixed

- `0x082A` mystery resolved — it is the fixed E5 payload length (2090 bytes), not a record-time field.

---

## v0.2.0 — 2026-03-01

### Added

- Channel config float layout fully confirmed: trigger level, alarm level, and unit string per channel (IEEE 754 BE floats).
- Blastware `.set` file format decoded — a little-endian binary struct mirroring the wire payload.
- Operator manual (716U0101 Rev 15) added as a cross-reference source.

---

## v0.1.0 — 2026-02-26

### Added

- Initial `s3_bridge.py` serial bridge — transparent RS-232 tap between Blastware and the MiniMate Plus.
- `s3_parser.py` — deterministic DLE state-machine frame extractor.
- `s3_analyzer.py` — session parser, frame differ, Claude export.
- `gui_bridge.py` and `gui_analyzer.py` — Tkinter GUIs.
- DLE framing confirmed: `DLE+STX` / `DLE+ETX`, `0x41` = ACK (not STX), DLE stuffing rule.
- Response SUB rule confirmed: `response_SUB = 0xFF - request_SUB`.
- Year `0x07CB` = 1995 confirmed as the MiniMate factory RTC default.
- Full write command family documented (SUBs `68`–`83`).
## CLAUDE.md (new file, +312 lines)
# CLAUDE.md — seismo-relay

Ground-up Python replacement for **Blastware**, Instantel's Windows-only software for
managing MiniMate Plus seismographs. Connects over direct RS-232 or cellular modem
(Sierra Wireless RV50 / RV55). Current version: **v0.7.0**.

---

## Project layout

```
minimateplus/       ← Python client library (primary focus)
  transport.py      ← SerialTransport, TcpTransport
  framing.py        ← DLE codec, frame builders, S3FrameParser
  protocol.py       ← MiniMateProtocol — wire-level read/write methods
  client.py         ← MiniMateClient — high-level API (connect, get_events, …)
  models.py         ← DeviceInfo, EventRecord, ComplianceConfig, …

sfm/server.py       ← FastAPI REST server exposing device data over HTTP
seismo_lab.py       ← Tkinter GUI (Bridge + Analyzer + Console tabs)
docs/
  instantel_protocol_reference.md ← reverse-engineered protocol spec ("the Rosetta Stone")
CHANGELOG.md        ← version history
```
---

## Current implementation state (v0.6.0)

Full read pipeline working end-to-end over TCP/cellular:

| Step | SUB | Status |
|---|---|---|
| POLL / startup handshake | 5B | ✅ |
| Serial number | 15 | ✅ |
| Full config (firmware, calibration date, etc.) | FE | ✅ |
| Compliance config (record time, sample rate, geo thresholds) | 1A | ✅ |
| Event index | 08 | ✅ |
| Event header / first key | 1E | ✅ |
| Waveform header | 0A | ✅ |
| Waveform record (peaks, timestamp, project) | 0C | ✅ |
| **Bulk waveform stream (event-time metadata)** | **5A** | ✅ **new v0.6.0** |
| Event advance / next key | 1F | ✅ |
| Write commands (push config to device) | 68–83 | ❌ not yet implemented |

`get_events()` sequence per event: `1E → 0A → 0C → 5A → 1F`
---

## Protocol fundamentals

### DLE framing

```
BW→S3 (our requests):   [ACK=0x41] [STX=0x02] [stuffed payload+chk] [ETX=0x03]
S3→BW (device replies): [DLE=0x10] [STX=0x02] [stuffed payload+chk] [bare ETX=0x03]
```

- **DLE stuffing rule:** any literal `0x10` byte in the payload is doubled on the wire
  (`0x10` → `0x10 0x10`). This includes the checksum byte.
- **Inner-frame terminators:** large S3 responses (A4, E5) contain embedded sub-frames
  using `DLE+ETX` as inner terminators. The outer parser treats `DLE+ETX` inside a frame
  as literal data — the bare ETX is the ONLY real frame terminator.
- **Response SUB rule:** `response_SUB = 0xFF - request_SUB`
  (one known exception: SUB `1C` → response `6E`, not `0xE3`)
- **Two-step read pattern:** every read command is sent twice — a probe step (`offset=0x00`,
  get length) then a data step (`offset=DATA_LENGTH`, get payload). All data lengths are
  hardcoded constants, not read from the probe response.
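The stuffing rule pins down to a few lines. This is a sketch of the rule as stated above — the library's actual codec in `framing.py` may be organized differently:

```python
def dle_stuff(payload: bytes) -> bytes:
    """Double every literal 0x10 (DLE) byte for the wire."""
    out = bytearray()
    for b in payload:
        out.append(b)
        if b == 0x10:
            out.append(0x10)  # stuff: 0x10 -> 0x10 0x10
    return bytes(out)

def dle_unstuff(wire: bytes) -> bytes:
    """Collapse each 0x10 0x10 pair back to a single 0x10."""
    out = bytearray()
    i = 0
    while i < len(wire):
        out.append(wire[i])
        i += 2 if wire[i] == 0x10 else 1  # skip the doubled DLE
    return bytes(out)
```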
### De-stuffed payload header

```
BW→S3 (request):
  [0]    CMD     0x10
  [1]    flags   0x00
  [2]    SUB     command byte
  [3]    0x00    always zero
  [4]    0x00    always zero
  [5]    OFFSET  0x00 for probe step; DATA_LENGTH for data step
  [6-15] params  (key, token, etc. — see helpers in framing.py)

S3→BW (response):
  [0]    CMD     0x00
  [1]    flags   0x10
  [2]    SUB     response sub byte
  [3]    PAGE_HI
  [4]    PAGE_LO
  [5+]   data
```

---
## Critical protocol gotchas (hard-won — do not re-derive)

### SUB 5A — bulk waveform stream — NON-STANDARD frame format

**Always use `build_5a_frame()` for SUB 5A. Never use `build_bw_frame()` for SUB 5A.**

`build_bw_frame` produces WRONG output for 5A for two reasons:

1. **`offset_hi = 0x10` must NOT be DLE-stuffed.** Blastware sends the offset field raw.
   `build_bw_frame` would stuff it to `10 10` on the wire — the device silently ignores
   the frame. `build_5a_frame` writes it as a bare `10`.

2. **DLE-aware checksum.** When computing the checksum, `10 XX` pairs in the stuffed
   section contribute only `XX` to the running sum; lone bytes contribute normally. This
   differs from the standard SUM8-of-destuffed-payload that all other commands use.

Both differences were confirmed by reproducing Blastware's exact wire bytes from the
1-2-26 BW TX capture. All 10 frames verified.
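The DLE-aware sum can be sketched as follows (illustrative only — exactly which bytes of the frame the sum covers, and where it is placed, follow `build_5a_frame` in `framing.py`):

```python
def dle_aware_sum8(stuffed_section: bytes) -> int:
    """SUM8 over a DLE-stuffed byte run where each `10 XX` pair
    contributes only XX to the sum; lone bytes contribute normally."""
    total, i = 0, 0
    while i < len(stuffed_section):
        if stuffed_section[i] == 0x10 and i + 1 < len(stuffed_section):
            total += stuffed_section[i + 1]  # only the byte after the DLE counts
            i += 2
        else:
            total += stuffed_section[i]
            i += 1
    return total & 0xFF
```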
### SUB 5A — params are 11 bytes for chunk frames, 10 for termination

`bulk_waveform_params()` returns 11 bytes (extra trailing `0x00`). The 11th byte was
confirmed from the BW wire capture. `bulk_waveform_term_params()` returns 10 bytes.
Do not swap them.

### SUB 5A — event-time metadata lives in A5 frame 7

The bulk stream sends 9+ A5 response frames. Frame 7 (0-indexed) contains the compliance
setup as it existed when the event was recorded:

```
"Project:"       → project description
"Client:"        → client name       ← NOT in the 0C record
"User Name:"     → operator name     ← NOT in the 0C record
"Seis Loc:"      → sensor location   ← NOT in the 0C record
"Extended Notes" → notes
```

These strings are **NOT** present in the 210-byte SUB 0C waveform record. They reflect
the setup at record time, not the current device config — this is why we fetch them from
5A instead of backfilling from the current compliance config.

`stop_after_metadata=True` (default) stops the 5A loop as soon as `b"Project:"` appears,
then sends the termination frame.
### SUB 1E / 1F — event iteration null sentinel and token position (FIXED, do not re-introduce)

**token_params bug (FIXED):** The token byte was at `params[6]` (wrong). Both the 3-31-26 and
4-3-26 BW TX captures confirm it belongs at **`params[7]`** (raw: `00 00 00 00 00 00 00 fe 00 00`).
With the wrong position the device ignores the token and 1F returns null immediately.

**all-zero params required (empirically confirmed):** Even with the correct token position,
sending `token=0xFE` causes the device to return null from 1F in multi-event sessions.
All callers (`count_events`, `get_events`) must use `advance_event(browse=True)`, which
sends all-zero params. The 3-31-26 capture that "confirmed" token=0xFE had only one event
stored — 1F always returns null at end-of-events, so we never actually observed 1F
successfully returning a second key with token=0xFE. Empirical evidence from live device
testing with 2+ events is definitive: **always use all-zero params for 1F.**

**0A context requirement:** `advance_event()` (1F) only returns a valid next-event key
when a preceding `read_waveform_header()` (0A) call has established device waveform
context for the current key. Call 0A before every event in the loop, not just the first.
Calling 1F cold (after only 1E, with no 0A) returns the null sentinel regardless of how
many events are stored.

**1F response layout:** The next event's key IS at `data_rsp.data[11:15]` (= payload[16:20]).
Confirmed from the 4-3-26 browse-mode S3 captures:

```
1F after 0A(key0=01110000):  data[11:15]=0111245a  data[15:19]=00001e36  ← valid
1F after 0A(key1=0111245a):  data[11:15]=01114290  data[15:19]=00000046  ← valid
1F null sentinel:            data[11:15]=00000000  data[15:19]=00000000  ← done
```

**Null sentinel:** `data8[4:8] == b"\x00\x00\x00\x00"` (= `data_rsp.data[15:19]`)
works for BOTH the 1E trailing field (offset to the next event key) and the 1F response
(null key echo) — in both cases, all zeros means "no more events."

**1E response layout:** `data_rsp.data[11:15]` = event 0's actual key; `data_rsp.data[15:19]`
= sample-count offset to the next event key (key1 = key0 + this offset). If the offset is 0,
there is only one event.

**Correct iteration pattern (confirmed empirically with a live device, 2+ events):**

```
1E(all zeros)               → key0, trailing0  ← trailing0 non-zero if event 1 exists
0A(key0)                                       ← REQUIRED: establishes device context
0C(key0) [+ 5A(key0) for get_events]           ← read record data
1F(all zeros / browse=True) → key1             ← use all-zero params, NOT token=0xFE
0A(key1)                                       ← REQUIRED before each advance
0C(key1) [+ 5A(key1) for get_events]
1F(all zeros)               → null             ← done
```

`advance_event(browse=True)` sends all-zero params; the `advance_event()` default
(browse=False) sends token=0xFE and is NOT used by any caller.
`advance_event()` returns `(key4, event_data8)`.
Callers (`count_events`, `get_events`) loop while `data8[4:8] != b"\x00\x00\x00\x00"`.
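The sentinel check the callers perform can be kept in one place (a small sketch; the helper name is not from the library):

```python
NULL_KEY = b"\x00\x00\x00\x00"

def has_more_events(data8: bytes) -> bool:
    """True while data8[4:8] — the 1E trailing offset or the 1F key echo —
    is non-zero; all zeros means "no more events"."""
    return data8[4:8] != NULL_KEY
```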
### SUB 1A — compliance config — orphaned send bug (FIXED, do not re-introduce)

`read_compliance_config()` sends a 4-frame sequence (A, B, C, D) where:
- Frame A is a probe (no `recv_one` needed — the device ACKs but returns no data page)
- Frames B, C, D each need a `recv_one` to collect the response

**There must be NO extra `self._send(...)` call before the B/C/D recv loop without a
matching `recv_one()`.** An orphaned send shifts all receives one step behind, leaving
frame D's channel block (trigger_level_geo, alarm_level_geo, max_range_geo) unread and
producing only ~1071 bytes instead of ~2126.
### SUB 1A — anchor search range

`_decode_compliance_config_into()` locates sample_rate and record_time via the anchor
`b'\x01\x2c\x00\x00\xbe\x80\x00\x00\x00\x00'`. The search range is `cfg[0:150]`.

Do not narrow this to `cfg[40:100]` — the old range was only accidentally correct because
the orphaned-send bug was prepending a spurious 44-byte header, pushing the anchor from
its real position (`cfg[11]`) into the 40–100 window.

### Sample rate and DLE jitter in cfg data

Sample rate 4096 (`0x1000`) causes DLE jitter: the frame carries `10 10 00` on the wire,
which unstuffs to `10 00` — 2 bytes instead of 3. This makes frame C 1 byte shorter and
shifts all subsequent absolute offsets by −1. The anchor approach is immune to this.
Do NOT use fixed absolute offsets for sample_rate or record_time.
### TCP / cellular transport

- Protocol bytes over TCP are bit-for-bit identical to RS-232. No wrapping.
- The modem (RV50/RV55) forwards bytes with up to ~1 s of buffering. `TcpTransport` uses
  `read_until_idle(idle_gap=1.5s)` to drain the buffer completely before parsing.
- Cold boot: the unit sends the 16-byte ASCII string `"Operating System"` before entering
  DLE-framed mode. The parser discards it (scans for DLE+STX).
- The RV50/RV55 sends `\r\nRING\r\n\r\nCONNECT\r\n` over TCP to the caller even with
  Quiet Mode enabled. The parser handles this — do not strip it manually before feeding
  `S3FrameParser`.
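The idle-gap drain can be sketched like this (illustrative — `TcpTransport`'s real implementation may differ in buffering, timeouts, and error handling):

```python
import socket

def read_until_idle(sock: socket.socket, idle_gap: float = 1.5,
                    chunk: int = 4096) -> bytes:
    """Drain a modem-buffered stream: keep reading until no new bytes
    arrive for idle_gap seconds, then return everything collected."""
    sock.settimeout(idle_gap)
    buf = bytearray()
    while True:
        try:
            data = sock.recv(chunk)
        except socket.timeout:
            break  # idle gap elapsed with no new bytes
        if not data:
            break  # peer closed the connection
        buf += data
    return bytes(buf)
```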
### Required ACEmanager settings (Sierra Wireless RV50/RV55)

| Setting | Value | Why |
|---|---|---|
| Configure Serial Port | `38400,8N1` | Must match the MiniMate baud |
| Flow Control | `None` | Hardware FC blocks TX if pins are unconnected |
| **Quiet Mode** | **Enable** | **Critical.** Disabled, the modem injects `RING`/`CONNECT` onto serial, corrupting the S3 handshake |
| Data Forwarding Timeout | `1` (= 0.1 s) | Lower latency |
| TCP Connect Response Delay | `0` | Non-zero silently drops the first POLL frame |
| TCP Idle Timeout | `2` (minutes) | Prevents premature disconnect |
| DB9 Serial Echo | `Disable` | Echo corrupts the data stream |

---
## Key confirmed field locations

### SUB FE — Full Config (166 destuffed bytes)

| Offset | Field | Type | Notes |
|---|---|---|---|
| 0x34 | firmware version string | ASCII | e.g. `"S338.17"` |
| 0x56–0x57 | calibration year | uint16 BE | `0x07E9` = 2025 |
| 0x0109 | aux trigger enabled | uint8 | `0x00` = off, `0x01` = on |

### SUB 1A — Compliance Config (~2126 bytes total after the 4-frame sequence)

| Field | How to find it |
|---|---|
| sample_rate | uint16 BE at anchor − 2 |
| record_time | float32 BE at anchor + 10 |
| trigger_level_geo | float32 BE, located in the channel block |
| alarm_level_geo | float32 BE, adjacent to trigger_level_geo |
| max_range_geo | float32 BE, adjacent to alarm_level_geo |
| setup_name | ASCII, null-padded, in the cfg body |
| project / client / operator / sensor_location | ASCII, label-value pairs |

Anchor: `b'\x01\x2c\x00\x00\xbe\x80\x00\x00\x00\x00'`, searched in `cfg[0:150]`
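Taken together, the anchor rows above translate to roughly the following (a sketch; `_decode_compliance_config_into` is the authoritative implementation, and the anchor-relative offsets are read as written in the table):

```python
import struct

ANCHOR = b"\x01\x2c\x00\x00\xbe\x80\x00\x00\x00\x00"

def decode_rate_and_time(cfg: bytes):
    """Locate the anchor within cfg[0:150] and read sample_rate
    (uint16 BE ending at the anchor start) and record_time
    (float32 BE at anchor start + 10)."""
    pos = cfg.find(ANCHOR, 0, 150)
    if pos < 0:
        return None  # anchor not found — never fall back to fixed offsets
    sample_rate = struct.unpack_from(">H", cfg, pos - 2)[0]
    record_time = struct.unpack_from(">f", cfg, pos + 10)[0]
    return sample_rate, record_time
```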
### SUB 0C — Waveform Record (210 bytes = data[11:11+0xD2])

| Offset | Field | Type |
|---|---|---|
| 0 | day | uint8 |
| 1 | sub_code | uint8 (`0x10` = Waveform single-shot, `0x03` = Waveform continuous) |
| 2 | month | uint8 |
| 3–4 | year | uint16 BE |
| 5 | unknown | uint8 (always 0) |
| 6 | hour | uint8 |
| 7 | minute | uint8 |
| 8 | second | uint8 |
| 87 | peak_vector_sum | float32 BE |
| label+6 | PPV per channel | float32 BE (search for `"Tran"`, `"Vert"`, `"Long"`, `"MicL"`) |

PPV labels are NOT 4-byte aligned. The label-offset+6 approach is the only reliable method.
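The label-offset+6 read sketches to the following (an illustrative helper; the name is not from the library):

```python
import struct

def read_ppv(record: bytes) -> dict:
    """Find each channel label in the 0C record and read the float32 BE
    located 6 bytes past the label start. Labels are not 4-byte aligned,
    so searching for them is the only reliable approach."""
    ppv = {}
    for label in (b"Tran", b"Vert", b"Long", b"MicL"):
        pos = record.find(label)
        if pos >= 0:
            ppv[label.decode()] = struct.unpack_from(">f", record, pos + 6)[0]
    return ppv
```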
---

## SFM REST API (sfm/server.py)

```
GET /device/info?port=COM5                        ← serial
GET /device/info?host=1.2.3.4&tcp_port=9034       ← cellular
GET /device/events?host=1.2.3.4&tcp_port=9034&baud=38400
GET /device/event?host=1.2.3.4&tcp_port=9034&index=0
```

The server retries once on `ProtocolError` for TCP connections (handles cold-boot timing).

---

## Key wire captures (reference material)

| Capture | Location | Contents |
|---|---|---|
| 1-2-26 | `bridges/captures/1-2-26/` | SUB 5A BW TX frames — used to confirm the 5A frame format, 11-byte params, DLE-aware checksum |
| 3-11-26 | `bridges/captures/3-11-26/` | Full compliance setup write, Aux Trigger capture |
| 3-31-26 | `bridges/captures/3-31-26/` | Complete event download cycle (148 BW / 147 S3 frames) — confirmed the 1E/0A/0C/1F sequence |

---

## What's next

- Write commands (SUBs 68–83) — push compliance config, channel config, and trigger settings to the device
- ACH inbound server — accept call-home connections from field units
- Modem manager — push RV50/RV55 configs via the Sierra Wireless API
## README.md (new file, +259 lines)
# seismo-relay `v0.6.0`

A ground-up replacement for **Blastware** — Instantel's aging Windows-only
software for managing MiniMate Plus seismographs.

Built in Python. Runs on Windows. Connects to instruments over direct RS-232
or cellular modem (Sierra Wireless RV50 / RV55).

> **Status:** Active development. Full read pipeline working end-to-end:
> device info, compliance config (with geo thresholds), event download with
> true event-time metadata (project / client / operator / sensor location
> sourced from the device at record time via SUB 5A). Write commands in progress.
> See [CHANGELOG.md](CHANGELOG.md) for version history.

---

## What's in here

```
seismo-relay/
├── seismo_lab.py            ← Main GUI (Bridge + Analyzer + Console tabs)
│
├── minimateplus/            ← MiniMate Plus client library
│   ├── transport.py         ← SerialTransport and TcpTransport
│   ├── protocol.py          ← DLE frame layer (read/write/parse)
│   ├── client.py            ← High-level client (connect, get_config, etc.)
│   ├── framing.py           ← Frame builder/parser primitives
│   └── models.py            ← DeviceInfo, EventRecord, etc.
│
├── sfm/                     ← SFM REST API server (FastAPI)
│   └── server.py            ← /device/info, /device/events, /device/event
│
├── bridges/
│   ├── s3-bridge/
│   │   └── s3_bridge.py     ← RS-232 serial bridge (capture tool)
│   ├── tcp_serial_bridge.py ← Local TCP↔serial bridge (bench testing)
│   ├── gui_bridge.py        ← Standalone bridge GUI (legacy)
│   └── raw_capture.py       ← Simple raw capture tool
│
├── parsers/
│   ├── s3_parser.py         ← DLE frame extractor
│   ├── s3_analyzer.py       ← Session parser, differ, Claude export
│   ├── gui_analyzer.py      ← Standalone analyzer GUI (legacy)
│   └── frame_db.py          ← SQLite frame database
│
└── docs/
    └── instantel_protocol_reference.md ← Reverse-engineered protocol spec
```
---
|
||||
|
||||
## Quick start
|
||||
|
||||
### Seismo Lab (main GUI)
|
||||
|
||||
The all-in-one tool. Three tabs: **Bridge**, **Analyzer**, **Console**.
|
||||
|
||||
```
|
||||
python seismo_lab.py
|
||||
```
|
||||
|
||||
### SFM REST server
|
||||
|
||||
Exposes MiniMate Plus commands as a REST API for integration with other systems.
|
||||
|
||||
```
|
||||
cd sfm
|
||||
uvicorn server:app --reload
|
||||
```
|
||||
|
||||
**Endpoints:**
|
||||
|
||||
| Method | URL | Description |
|
||||
|--------|-----|-------------|
|
||||
| `GET` | `/device/info?port=COM5` | Device info via serial |
|
||||
| `GET` | `/device/info?host=1.2.3.4&tcp_port=9034` | Device info via cellular modem |
|
||||
| `GET` | `/device/events?port=COM5` | Event index |
|
||||
| `GET` | `/device/event?port=COM5&index=0` | Single event record |
|
||||
|
||||
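For scripted access, the same endpoints can be hit from Python with the stdlib alone. A minimal sketch — the `BASE` URL assumes uvicorn's default port 8000, and `build_url`/`sfm_get` are illustrative helper names, not part of this repo:

```python
# Minimal SFM client sketch (stdlib only). BASE assumes uvicorn's default
# port 8000; build_url/sfm_get are illustrative names, not part of the repo.
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8000"

def build_url(path: str, **params) -> str:
    """Assemble an endpoint URL, e.g. build_url('/device/info', port='COM5')."""
    return f"{BASE}{path}?{urllib.parse.urlencode(params)}"

def sfm_get(path: str, **params) -> dict:
    # Generous timeout: a cellular round-trip plus device reads can be slow.
    with urllib.request.urlopen(build_url(path, **params), timeout=35) as resp:
        return json.load(resp)

# Examples (require a running server and a reachable unit):
# info   = sfm_get("/device/info", port="COM5")
# events = sfm_get("/device/events", host="1.2.3.4", tcp_port=9034)
```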
---

## Seismo Lab tabs

### Bridge tab

Captures live RS-232 traffic between Blastware and the seismograph. Sits in
the middle as a transparent pass-through while logging everything to disk.

```
Blastware → COM4 (virtual) ↔ s3_bridge ↔ COM5 (physical) → MiniMate Plus
```

Set your COM ports and log directory, then hit **Start Bridge**. Use
**Add Mark** to annotate the capture at specific moments (e.g. "changed
trigger level"). When the bridge starts, the Analyzer tab automatically wires
up to the live files and starts updating in real time.

### Analyzer tab

Parses raw captures into DLE-framed protocol sessions, diffs consecutive
sessions to show exactly which bytes changed, and lets you query across all
historical captures via the built-in SQLite database.

- **Inventory** — all frames in a session, click to drill in
- **Hex Dump** — full payload hex dump with changed-byte annotations
- **Diff** — byte-level before/after diff between sessions
- **Full Report** — plain text session report
- **Query DB** — search across all captures by SUB, direction, or byte value

Use **Export for Claude** to generate a self-contained `.md` report for
AI-assisted field mapping.

### Console tab

Direct connection to a MiniMate Plus — no bridge, no Blastware. Useful for
diagnosing field units over cellular without a full capture session.

**Connection:** choose Serial (COM port + baud) or TCP (IP + port for
cellular modem).

**Commands:**

| Button | What it does |
|--------|-------------|
| POLL | Startup handshake — confirms unit is alive and identifies model |
| Serial # | Reads unit serial number |
| Full Config | Reads full 166-byte config block (firmware version, channel scales, etc.) |
| Event Index | Reads stored event list |

Output is colour-coded: TX in blue, raw RX bytes in teal, decoded fields in
green, errors in red. **Save Log** writes a timestamped `.log` file to
`bridges/captures/`. **Send to Analyzer** injects the captured bytes into the
Analyzer tab for deeper inspection.

---

## Connecting over cellular (RV50 / RV55 modems)

Field units connect via Sierra Wireless RV50 or RV55 cellular modems. Use
TCP mode in the Console or SFM:

```
# Console tab
Transport: TCP
Host:      <modem public IP>
Port:      9034   ← Device Port in ACEmanager (call-up mode)
```

```python
# In code
from minimateplus.transport import TcpTransport
from minimateplus.client import MiniMateClient

client = MiniMateClient(transport=TcpTransport("1.2.3.4", 9034), timeout=30.0)
info = client.connect()
```

### Required ACEmanager settings (Serial tab)

These must match exactly — a single wrong setting causes the unit to beep
on connect but never respond:

| Setting | Value | Why |
|---------|-------|-----|
| Configure Serial Port | `38400,8N1` | Must match MiniMate baud rate |
| Flow Control | `None` | Hardware flow control blocks unit TX if pins unconnected |
| **Quiet Mode** | **Enable** | **Critical.** Disabled → modem injects `RING`/`CONNECT` onto serial line, corrupting the S3 handshake |
| Data Forwarding Timeout | `1` (= 0.1 s) | Lower latency; `5` works but is sluggish |
| TCP Connect Response Delay | `0` | Non-zero silently drops the first POLL frame |
| TCP Idle Timeout | `2` (minutes) | Prevents premature disconnect |
| DB9 Serial Echo | `Disable` | Echo corrupts the data stream |

---

## minimateplus library

```python
from minimateplus import MiniMateClient
from minimateplus.transport import SerialTransport, TcpTransport

# Serial
client = MiniMateClient(port="COM5")

# TCP (cellular modem)
client = MiniMateClient(transport=TcpTransport("1.2.3.4", 9034), timeout=30.0)

with client:
    info = client.connect()       # DeviceInfo — model, serial, firmware, compliance config
    serial = client.get_serial()  # Serial number string
    config = client.get_config()  # Full config block (bytes)
    events = client.get_events()  # List[EventRecord] with true event-time metadata
```

`get_events()` runs the full download sequence per event: `1E → 0A → 0C → 5A → 1F`.
The SUB 5A bulk waveform stream is used to retrieve `client`, `operator`, and
`sensor_location` as they existed at record time — not backfilled from the current
compliance config.
---

## Protocol quick-reference

| Term | Value | Meaning |
|------|-------|---------|
| DLE | `0x10` | Data Link Escape |
| STX | `0x02` | Start of frame |
| ETX | `0x03` | End of frame |
| ACK | `0x41` (`'A'`) | Frame-start marker sent before every frame |
| DLE stuffing | `10 10` on wire | Literal `0x10` in payload |

**S3-side frame** (seismograph → Blastware): `ACK DLE+STX [payload] CHK DLE+ETX`

**De-stuffed payload header:**
```
[0]  CMD      0x10 = BW request, 0x00 = S3 response
[1]  ?        unknown (0x00 BW / 0x10 S3)
[2]  SUB      Command/response identifier   ← the key field
[3]  PAGE_HI  Page address high byte
[4]  PAGE_LO  Page address low byte
[5+] DATA     Payload content
```

**Response SUB rule:** `response_SUB = 0xFF - request_SUB`
Example: request SUB `0x08` (Event Index) → response SUB `0xF7`
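The stuffing and response-SUB rules above are easy to sanity-check in isolation. A sketch — `dle_stuff`, `dle_unstuff`, and `response_sub` are illustrative names, not functions from the `minimateplus` library:

```python
# Illustrative helpers for the wire rules described above; these names
# are NOT part of the minimateplus library.
DLE = 0x10

def dle_stuff(payload: bytes) -> bytes:
    """Escape literal DLE bytes: 0x10 in the payload becomes 0x10 0x10 on the wire."""
    out = bytearray()
    for b in payload:
        out.append(b)
        if b == DLE:
            out.append(DLE)  # stuffing: doubled on the wire
    return bytes(out)

def dle_unstuff(wire: bytes) -> bytes:
    """Collapse 0x10 0x10 pairs back to a single 0x10."""
    out = bytearray()
    i = 0
    while i < len(wire):
        out.append(wire[i])
        if wire[i] == DLE and i + 1 < len(wire) and wire[i + 1] == DLE:
            i += 1  # skip the stuffed duplicate
        i += 1
    return bytes(out)

def response_sub(request_sub: int) -> int:
    """Response SUB rule: response_SUB = 0xFF - request_SUB."""
    return 0xFF - request_sub
```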
Full protocol documentation: [`docs/instantel_protocol_reference.md`](docs/instantel_protocol_reference.md)

---

## Requirements

```
pip install pyserial fastapi uvicorn
```

Python 3.10+. Tkinter is included with the standard Python installer on
Windows (make sure "tcl/tk and IDLE" is checked during install).

---

## Virtual COM ports (bridge capture)

The bridge needs two COM ports on the same PC — one that Blastware connects
to, and one wired to the seismograph. Use a virtual COM port pair
(**com0com** or **VSPD**) to give Blastware a port to talk to.

```
Blastware → COM4 (virtual) ↔ s3_bridge.py ↔ COM5 (physical) → MiniMate Plus
```

---

## Roadmap

- [x] Event download — pull waveform records from the unit (`1E → 0A → 0C → 5A → 1F`)
- [x] True event-time metadata — project / client / operator / sensor location from SUB 5A
- [ ] Write commands — push config changes to the unit (compliance setup, channel config, trigger settings)
- [ ] ACH inbound server — accept call-home connections from field units
- [ ] Modem manager — push standard configs to RV50/RV55 fleet via Sierra Wireless API
- [ ] Full Blastware parity — complete read/write/download cycle without Blastware
226
bridges/gui_bridge.py
Normal file
@@ -0,0 +1,226 @@
#!/usr/bin/env python3
"""
gui_bridge.py — simple Tk GUI wrapper for s3_bridge.py (Windows-friendly).

Features:
- Select BW and S3 COM ports, baud, log directory.
- Optional raw taps (BW->S3, S3->BW).
- Start/Stop buttons spawn/terminate s3_bridge as a subprocess.
- Live stdout view from the bridge process.

Requires only the stdlib (Tkinter is bundled on Windows/Python).
"""

from __future__ import annotations

import datetime
import os
import queue
import subprocess
import sys
import threading
import tkinter as tk
from tkinter import filedialog, messagebox, scrolledtext, simpledialog

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
BRIDGE_PATH = os.path.join(SCRIPT_DIR, "s3-bridge", "s3_bridge.py")


class BridgeGUI(tk.Tk):
    def __init__(self) -> None:
        super().__init__()
        self.title("S3 Bridge GUI")
        self.process: subprocess.Popen | None = None
        self.stdout_q: queue.Queue[str] = queue.Queue()
        self._build_widgets()
        self._poll_stdout()

    def _build_widgets(self) -> None:
        pad = {"padx": 6, "pady": 4}

        # Row 0: Ports
        tk.Label(self, text="BW COM:").grid(row=0, column=0, sticky="e", **pad)
        self.bw_var = tk.StringVar(value="COM4")
        tk.Entry(self, textvariable=self.bw_var, width=10).grid(row=0, column=1, sticky="w", **pad)

        tk.Label(self, text="S3 COM:").grid(row=0, column=2, sticky="e", **pad)
        self.s3_var = tk.StringVar(value="COM5")
        tk.Entry(self, textvariable=self.s3_var, width=10).grid(row=0, column=3, sticky="w", **pad)

        # Row 1: Baud
        tk.Label(self, text="Baud:").grid(row=1, column=0, sticky="e", **pad)
        self.baud_var = tk.StringVar(value="38400")
        tk.Entry(self, textvariable=self.baud_var, width=10).grid(row=1, column=1, sticky="w", **pad)

        # Row 1: Logdir chooser
        tk.Label(self, text="Log dir:").grid(row=1, column=2, sticky="e", **pad)
        self.logdir_var = tk.StringVar(value=".")
        tk.Entry(self, textvariable=self.logdir_var, width=24).grid(row=1, column=3, sticky="we", **pad)
        tk.Button(self, text="Browse", command=self._choose_dir).grid(row=1, column=4, sticky="w", **pad)

        # Row 2: Raw taps
        self.raw_bw_var = tk.StringVar(value="")
        self.raw_s3_var = tk.StringVar(value="")
        tk.Checkbutton(self, text="Save BW->S3 raw", command=self._toggle_raw_bw, onvalue="1", offvalue="").grid(row=2, column=0, sticky="w", **pad)
        tk.Entry(self, textvariable=self.raw_bw_var, width=28).grid(row=2, column=1, columnspan=3, sticky="we", **pad)
        tk.Button(self, text="...", command=lambda: self._choose_file(self.raw_bw_var, "bw")).grid(row=2, column=4, **pad)

        tk.Checkbutton(self, text="Save S3->BW raw", command=self._toggle_raw_s3, onvalue="1", offvalue="").grid(row=3, column=0, sticky="w", **pad)
        tk.Entry(self, textvariable=self.raw_s3_var, width=28).grid(row=3, column=1, columnspan=3, sticky="we", **pad)
        tk.Button(self, text="...", command=lambda: self._choose_file(self.raw_s3_var, "s3")).grid(row=3, column=4, **pad)

        # Row 4: Status + buttons
        self.status_var = tk.StringVar(value="Idle")
        tk.Label(self, textvariable=self.status_var, anchor="w").grid(row=4, column=0, columnspan=5, sticky="we", **pad)

        tk.Button(self, text="Start", command=self.start_bridge, width=12).grid(row=5, column=0, columnspan=2, **pad)
        tk.Button(self, text="Stop", command=self.stop_bridge, width=12).grid(row=5, column=2, columnspan=2, **pad)
        self.mark_btn = tk.Button(self, text="Add Mark", command=self.add_mark, width=12, state="disabled")
        self.mark_btn.grid(row=5, column=4, **pad)

        # Row 6: Log view
        self.log_view = scrolledtext.ScrolledText(self, height=20, width=90, state="disabled")
        self.log_view.grid(row=6, column=0, columnspan=5, sticky="nsew", **pad)

        # Grid weights
        for c in range(5):
            self.grid_columnconfigure(c, weight=1)
        self.grid_rowconfigure(6, weight=1)

    def _choose_dir(self) -> None:
        path = filedialog.askdirectory()
        if path:
            self.logdir_var.set(path)

    def _choose_file(self, var: tk.StringVar, direction: str) -> None:
        filename = filedialog.asksaveasfilename(
            title=f"Raw tap file for {direction}",
            defaultextension=".bin",
            filetypes=[("Binary", "*.bin"), ("All files", "*.*")],
        )
        if filename:
            var.set(filename)

    def _toggle_raw_bw(self) -> None:
        if not self.raw_bw_var.get():
            # default name
            self.raw_bw_var.set(os.path.join(self.logdir_var.get(), "raw_bw.bin"))

    def _toggle_raw_s3(self) -> None:
        if not self.raw_s3_var.get():
            self.raw_s3_var.set(os.path.join(self.logdir_var.get(), "raw_s3.bin"))

    def start_bridge(self) -> None:
        if self.process and self.process.poll() is None:
            messagebox.showinfo("Bridge", "Bridge is already running.")
            return

        bw = self.bw_var.get().strip()
        s3 = self.s3_var.get().strip()
        baud = self.baud_var.get().strip()
        logdir = self.logdir_var.get().strip() or "."

        if not bw or not s3:
            messagebox.showerror("Error", "Please enter both BW and S3 COM ports.")
            return

        args = [sys.executable, BRIDGE_PATH, "--bw", bw, "--s3", s3, "--baud", baud, "--logdir", logdir]

        ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

        raw_bw = self.raw_bw_var.get().strip()
        raw_s3 = self.raw_s3_var.get().strip()

        # If the user left the default generic name, replace with a timestamped one
        # so each session gets its own file.
        if raw_bw:
            if os.path.basename(raw_bw) in ("raw_bw.bin", "raw_bw"):
                raw_bw = os.path.join(os.path.dirname(raw_bw) or logdir, f"raw_bw_{ts}.bin")
                self.raw_bw_var.set(raw_bw)
            args += ["--raw-bw", raw_bw]
        if raw_s3:
            if os.path.basename(raw_s3) in ("raw_s3.bin", "raw_s3"):
                raw_s3 = os.path.join(os.path.dirname(raw_s3) or logdir, f"raw_s3_{ts}.bin")
                self.raw_s3_var.set(raw_s3)
            args += ["--raw-s3", raw_s3]

        try:
            self.process = subprocess.Popen(
                args,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                stdin=subprocess.PIPE,
                text=True,
                bufsize=1,
            )
        except Exception as e:
            messagebox.showerror("Error", f"Failed to start bridge: {e}")
            return

        threading.Thread(target=self._reader_thread, daemon=True).start()
        self.status_var.set("Running...")
        self._append_log("== Bridge started ==\n")
        self.mark_btn.configure(state="normal")

    def stop_bridge(self) -> None:
        if self.process and self.process.poll() is None:
            self.process.terminate()
            try:
                self.process.wait(timeout=3)
            except subprocess.TimeoutExpired:
                self.process.kill()
        self.status_var.set("Stopped")
        self._append_log("== Bridge stopped ==\n")
        self.mark_btn.configure(state="disabled")

    def _reader_thread(self) -> None:
        if not self.process or not self.process.stdout:
            return
        for line in self.process.stdout:
            self.stdout_q.put(line)
        self.stdout_q.put("<<process-exit>>")

    def add_mark(self) -> None:
        if not self.process or not self.process.stdin or self.process.poll() is not None:
            return
        label = simpledialog.askstring("Mark", "Enter label for mark:", parent=self)
        if label is None or label.strip() == "":
            return
        try:
            # Mimic CLI behavior: send 'm' + Enter, then label + Enter
            self.process.stdin.write("m\n")
            self.process.stdin.write(label.strip() + "\n")
            self.process.stdin.flush()
            self._append_log(f"[GUI] Mark sent: {label.strip()}\n")
        except Exception as e:
            messagebox.showerror("Error", f"Failed to send mark: {e}")

    def _poll_stdout(self) -> None:
        try:
            while True:
                line = self.stdout_q.get_nowait()
                if line == "<<process-exit>>":
                    self.status_var.set("Stopped")
                    self.mark_btn.configure(state="disabled")
                    break
                self._append_log(line)
        except queue.Empty:
            pass
        finally:
            self.after(100, self._poll_stdout)

    def _append_log(self, text: str) -> None:
        self.log_view.configure(state="normal")
        self.log_view.insert(tk.END, text)
        self.log_view.see(tk.END)
        self.log_view.configure(state="disabled")


def main() -> int:
    app = BridgeGUI()
    app.mainloop()
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
157
bridges/raw_capture.py
Normal file
@@ -0,0 +1,157 @@
#!/usr/bin/env python3
"""
raw_capture.py — minimal serial logger for raw byte collection.

Opens a single COM port, streams all bytes to a timestamped binary file,
and does no parsing or forwarding. Useful when you just need the raw
wire data without DLE framing or Blastware bridging.

Record format (little-endian):
    [ts_us:8][len:4][payload:len]
Exactly one record type is used, so there is no type byte.
"""

from __future__ import annotations

import argparse
import datetime as _dt
import os
import signal
import sys
import time
from typing import Optional

import serial


def now_ts() -> str:
    t = _dt.datetime.now()
    return t.strftime("%H:%M:%S.") + f"{int(t.microsecond/1000):03d}"


def pack_u32_le(n: int) -> bytes:
    return bytes((n & 0xFF, (n >> 8) & 0xFF, (n >> 16) & 0xFF, (n >> 24) & 0xFF))


def pack_u64_le(n: int) -> bytes:
    out = []
    for i in range(8):
        out.append((n >> (8 * i)) & 0xFF)
    return bytes(out)


def open_serial(port: str, baud: int, timeout: float) -> serial.Serial:
    return serial.Serial(
        port=port,
        baudrate=baud,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=timeout,
        write_timeout=timeout,
    )


class RawWriter:
    def __init__(self, path: str):
        self.path = path
        self._fh = open(path, "ab", buffering=0)

    def write(self, payload: bytes, ts_us: Optional[int] = None) -> None:
        if ts_us is None:
            ts_us = int(time.time() * 1_000_000)
        header = pack_u64_le(ts_us) + pack_u32_le(len(payload))
        self._fh.write(header)
        if payload:
            self._fh.write(payload)

    def close(self) -> None:
        try:
            self._fh.flush()
        finally:
            self._fh.close()


def capture_loop(port: serial.Serial, writer: RawWriter, stop_flag: "StopFlag", status_every_s: float) -> None:
    last_status = time.monotonic()
    bytes_written = 0

    while not stop_flag.is_set():
        try:
            n = port.in_waiting
            chunk = port.read(n if n and n < 4096 else (4096 if n else 1))
        except serial.SerialException as e:
            print(f"[{now_ts()}] [ERROR] serial exception: {e!r}", file=sys.stderr)
            break

        if chunk:
            writer.write(chunk)
            bytes_written += len(chunk)

        if status_every_s > 0:
            now = time.monotonic()
            if now - last_status >= status_every_s:
                print(f"[{now_ts()}] captured {bytes_written} bytes", flush=True)
                last_status = now

        if not chunk:
            time.sleep(0.002)


class StopFlag:
    def __init__(self):
        self._set = False

    def set(self):
        self._set = True

    def is_set(self) -> bool:
        return self._set


def main() -> int:
    ap = argparse.ArgumentParser(description="Raw serial capture to timestamped binary file (no forwarding).")
    ap.add_argument("--port", default="COM5", help="Serial port to capture (default: COM5)")
    ap.add_argument("--baud", type=int, default=38400, help="Baud rate (default: 38400)")
    ap.add_argument("--timeout", type=float, default=0.05, help="Serial read timeout in seconds (default: 0.05)")
    ap.add_argument("--logdir", default=".", help="Directory to write captures (default: .)")
    ap.add_argument("--status-every", type=float, default=5.0, help="Seconds between progress lines (0 disables)")
    args = ap.parse_args()

    os.makedirs(args.logdir, exist_ok=True)
    ts = _dt.datetime.now().strftime("%Y%m%d_%H%M%S")
    bin_path = os.path.join(args.logdir, f"raw_capture_{ts}.bin")

    print(f"[INFO] Opening {args.port} @ {args.baud}...")
    try:
        ser = open_serial(args.port, args.baud, args.timeout)
    except Exception as e:
        print(f"[ERROR] failed to open port: {e!r}", file=sys.stderr)
        return 2

    writer = RawWriter(bin_path)
    print(f"[INFO] Writing raw bytes to {bin_path}")
    print("[INFO] Press Ctrl+C to stop.")

    stop = StopFlag()

    def handle_sigint(sig, frame):
        stop.set()

    signal.signal(signal.SIGINT, handle_sigint)

    try:
        capture_loop(ser, writer, stop, args.status_every)
    finally:
        writer.close()
        try:
            ser.close()
        except Exception:
            pass
        print(f"[INFO] Capture stopped. Total bytes written: {os.path.getsize(bin_path)}")

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
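A capture written in this `[ts_us:8][len:4][payload:len]` container can be read back with a few lines of struct parsing. A sketch — `iter_records` is an illustrative helper name, not part of the repo:

```python
# Sketch: iterate records from a raw_capture.py .bin file.
# iter_records is an illustrative helper, not part of the repo.
import struct
from typing import Iterator, Tuple

def iter_records(data: bytes) -> Iterator[Tuple[int, bytes]]:
    """Yield (ts_us, payload) pairs from the [ts_us:8][len:4][payload:len] stream."""
    off = 0
    while off + 12 <= len(data):
        ts_us, length = struct.unpack_from("<QI", data, off)  # u64 + u32, little-endian
        off += 12
        payload = data[off:off + length]
        if len(payload) < length:
            break  # truncated trailing record (capture interrupted mid-write)
        yield ts_us, payload
        off += length
```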
@@ -1,7 +1,7 @@
 #!/usr/bin/env python3
 """
 s3_bridge.py — S3 <-> Blastware serial bridge with raw binary capture + DLE-aware text framing
-Version: v0.5.0
+Version: v0.5.1

 What’s new vs v0.4.0:
 - .bin is now a TRUE raw capture stream with direction + timestamps (record container format).
@@ -10,6 +10,8 @@ What’s new vs v0.4.0:
   - frame end   = 0x10 0x03 (DLE ETX)
   (No longer splits on bare 0x03.)
 - Marks/Info are stored as proper record types in .bin (no unsafe sentinel bytes).
+- Optional raw taps: use --raw-bw / --raw-s3 to also dump byte-for-byte traffic per direction
+  with no headers (for tools that just need a flat stream).

 BIN record format (little-endian):
   [type:1][ts_us:8][len:4][payload:len]
@@ -33,7 +35,7 @@ from typing import Optional

 import serial

-VERSION = "v0.5.0"
+VERSION = "v0.5.1"

 DLE = 0x10
 STX = 0x02
@@ -84,12 +86,15 @@ def pack_u64_le(n: int) -> bytes:


 class SessionLogger:
-    def __init__(self, path: str, bin_path: str):
+    def __init__(self, path: str, bin_path: str, raw_bw_path: Optional[str] = None, raw_s3_path: Optional[str] = None):
         self.path = path
         self.bin_path = bin_path
         self._fh = open(path, "a", buffering=1, encoding="utf-8", errors="replace")
         self._bin_fh = open(bin_path, "ab", buffering=0)
         self._lock = threading.Lock()
+        # Optional pure-byte taps (no headers). BW=Blastware tx, S3=device tx.
+        self._raw_bw = open(raw_bw_path, "ab", buffering=0) if raw_bw_path else None
+        self._raw_s3 = open(raw_s3_path, "ab", buffering=0) if raw_s3_path else None

     def log_line(self, line: str) -> None:
         with self._lock:
@@ -103,6 +108,11 @@ class SessionLogger:
         self._bin_fh.write(header)
         if payload:
             self._bin_fh.write(payload)
+        # Raw taps: write only the payload bytes (no headers)
+        if rec_type == REC_BW and self._raw_bw:
+            self._raw_bw.write(payload)
+        if rec_type == REC_S3 and self._raw_s3:
+            self._raw_s3.write(payload)

     def log_mark(self, label: str) -> None:
         ts = now_ts()
@@ -122,6 +132,10 @@ class SessionLogger:
         finally:
             self._fh.close()
             self._bin_fh.close()
+            if self._raw_bw:
+                self._raw_bw.close()
+            if self._raw_s3:
+                self._raw_s3.close()


 class DLEFrameSniffer:
@@ -307,10 +321,12 @@ def annotation_loop(logger: SessionLogger, stop: threading.Event) -> None:

 def main() -> int:
     ap = argparse.ArgumentParser()
-    ap.add_argument("--bw", default="COM5", help="Blastware-side COM port (default: COM5)")
-    ap.add_argument("--s3", default="COM4", help="S3-side COM port (default: COM4)")
+    ap.add_argument("--bw", default="COM4", help="Blastware-side COM port (default: COM4)")
+    ap.add_argument("--s3", default="COM5", help="S3-side COM port (default: COM5)")
     ap.add_argument("--baud", type=int, default=38400, help="Baud rate (default: 38400)")
     ap.add_argument("--logdir", default=".", help="Directory to write session logs into (default: .)")
+    ap.add_argument("--raw-bw", default=None, help="Optional file to append raw bytes sent from BW->S3 (no headers)")
+    ap.add_argument("--raw-s3", default=None, help="Optional file to append raw bytes sent from S3->BW (no headers)")
     ap.add_argument("--quiet", action="store_true", help="No console heartbeat output")
     ap.add_argument("--status-every", type=float, default=0.0, help="Seconds between console heartbeat lines (default: 0 = off)")
     args = ap.parse_args()
@@ -329,10 +345,25 @@ def main() -> int:
     ts = _dt.datetime.now().strftime("%Y%m%d_%H%M%S")
     log_path = os.path.join(args.logdir, f"s3_session_{ts}.log")
     bin_path = os.path.join(args.logdir, f"s3_session_{ts}.bin")
-    logger = SessionLogger(log_path, bin_path)

+    # If raw tap flags were passed without a path (bare --raw-bw / --raw-s3),
+    # or if the sentinel value "auto" is used, generate a timestamped name.
+    # If a specific path was provided, use it as-is (caller's responsibility).
+    raw_bw_path = args.raw_bw
+    raw_s3_path = args.raw_s3
+    if raw_bw_path in (None, "", "auto"):
+        raw_bw_path = os.path.join(args.logdir, f"raw_bw_{ts}.bin") if args.raw_bw is not None else None
+    if raw_s3_path in (None, "", "auto"):
+        raw_s3_path = os.path.join(args.logdir, f"raw_s3_{ts}.bin") if args.raw_s3 is not None else None
+
+    logger = SessionLogger(log_path, bin_path, raw_bw_path=raw_bw_path, raw_s3_path=raw_s3_path)
+
     print(f"[LOG] Writing hex log to {log_path}")
     print(f"[LOG] Writing binary log to {bin_path}")
+    if raw_bw_path:
+        print(f"[LOG] Raw tap BW->S3 -> {raw_bw_path}")
+    if raw_s3_path:
+        print(f"[LOG] Raw tap S3->BW -> {raw_s3_path}")

     logger.log_info(f"s3_bridge {VERSION} start")
     logger.log_info(f"BW={args.bw} S3={args.s3} baud={args.baud}")
205
bridges/tcp_serial_bridge.py
Normal file
@@ -0,0 +1,205 @@
|
||||
"""
|
||||
tcp_serial_bridge.py — Local TCP-to-serial bridge for bench testing TcpTransport.
|
||||
|
||||
Listens on a TCP port and, when a client connects, opens a serial port and
|
||||
bridges bytes bidirectionally. This lets you test the SFM server's TCP
|
||||
endpoint (?host=127.0.0.1&tcp_port=12345) against a locally-attached MiniMate
|
||||
Plus without needing a field modem.
|
||||
|
||||
The bridge simulates an RV55 cellular modem in transparent TCP passthrough mode:
|
||||
- No handshake bytes on connect
|
||||
- Raw bytes forwarded in both directions
|
||||
- One connection at a time (new connection closes any existing serial session)
|
||||
|
||||
Usage:
|
||||
python bridges/tcp_serial_bridge.py --serial COM5 --tcp-port 12345
|
||||
|
||||
Then in another window:
|
||||
python -m uvicorn sfm.server:app --port 8200
|
||||
curl "http://localhost:8200/device/info?host=127.0.0.1&tcp_port=12345"
|
||||
|
||||
Or just hit http://localhost:8200/device/info?host=127.0.0.1&tcp_port=12345
|
||||
in a browser.
|
||||
|
||||
Requirements:
|
||||
pip install pyserial
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import select
|
||||
import socket
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
|
||||
try:
|
||||
import serial # type: ignore
|
||||
except ImportError:
|
||||
print("pyserial required: pip install pyserial", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s %(levelname)-7s %(message)s",
|
||||
datefmt="%H:%M:%S",
|
||||
)
|
||||
log = logging.getLogger("tcp_serial_bridge")
|
||||
|
||||
# ── Constants ─────────────────────────────────────────────────────────────────
|
||||
|
||||
DEFAULT_BAUD = 38_400
|
||||
DEFAULT_TCP_PORT = 12345
|
||||
CHUNK = 256 # bytes per read call
|
||||
SERIAL_TIMEOUT = 0.02 # serial read timeout (s) — non-blocking in practice
|
||||
TCP_TIMEOUT = 0.02 # socket recv timeout (s)
|
||||
BOOT_DELAY = 8.0 # seconds to wait after opening serial port before
|
||||
# forwarding data — unit cold-boot (beep + OS init)
|
||||
# takes 5-10s from first RS-232 line assertion.
|
||||
# Set to 0 if unit was already running before connect.
|
||||
|
||||
|
||||
# ── Bridge session ─────────────────────────────────────────────────────────────
|
||||
|
||||
def _pipe_tcp_to_serial(sock: socket.socket, ser: serial.Serial, stop: threading.Event) -> None:
|
||||
"""Forward bytes from TCP socket → serial port."""
|
||||
sock.settimeout(TCP_TIMEOUT)
|
||||
while not stop.is_set():
|
||||
try:
|
||||
data = sock.recv(CHUNK)
|
||||
if not data:
|
||||
log.info("TCP peer closed connection")
|
||||
stop.set()
|
||||
break
|
||||
log.debug("TCP→SER %d bytes: %s", len(data), data.hex())
|
||||
ser.write(data)
|
||||
except socket.timeout:
|
||||
pass
|
||||
except OSError as exc:
|
||||
if not stop.is_set():
|
||||
log.warning("TCP read error: %s", exc)
|
||||
stop.set()
|
||||
break
|
||||
|
||||
|
||||
def _pipe_serial_to_tcp(sock: socket.socket, ser: serial.Serial, stop: threading.Event) -> None:
|
||||
"""Forward bytes from serial port → TCP socket."""
|
||||
while not stop.is_set():
|
||||
try:
|
||||
data = ser.read(CHUNK)
|
||||
if data:
|
||||
log.debug("SER→TCP %d bytes: %s", len(data), data.hex())
|
||||
try:
|
||||
sock.sendall(data)
|
||||
except OSError as exc:
|
||||
if not stop.is_set():
|
||||
log.warning("TCP send error: %s", exc)
|
||||
stop.set()
|
||||
break
|
||||
except serial.SerialException as exc:
|
||||
if not stop.is_set():
|
||||
log.warning("Serial read error: %s", exc)
|
||||
stop.set()
|
||||
break
|
||||
|
||||
|
||||
def _run_session(conn: socket.socket, addr: tuple, serial_port: str, baud: int, boot_delay: float) -> None:
|
||||
"""Handle one TCP client connection."""
|
||||
peer = f"{addr[0]}:{addr[1]}"
|
||||
log.info("Connection from %s", peer)
|
||||
|
||||
try:
|
||||
ser = serial.Serial(
|
||||
port = serial_port,
|
||||
baudrate = baud,
|
||||
bytesize = 8,
|
||||
parity = "N",
|
||||
stopbits = 1,
|
||||
timeout = SERIAL_TIMEOUT,
|
||||
)
|
||||
except serial.SerialException as exc:
|
||||
log.error("Cannot open serial port %s: %s", serial_port, exc)
|
||||
conn.close()
|
||||
return
|
||||
|
||||
log.info("Opened %s at %d baud — waiting %.1fs for unit boot", serial_port, baud, boot_delay)
|
||||
ser.reset_input_buffer()
|
||||
ser.reset_output_buffer()
|
||||
|
||||
if boot_delay > 0:
|
||||
time.sleep(boot_delay)
|
||||
ser.reset_input_buffer() # discard any boot noise
|
||||
|
||||
log.info("Bridge active: TCP %s ↔ %s", peer, serial_port)
|
||||
|
||||
stop = threading.Event()
|
||||
t_tcp_to_ser = threading.Thread(
|
||||
target=_pipe_tcp_to_serial, args=(conn, ser, stop), daemon=True
|
||||
)
|
||||
t_ser_to_tcp = threading.Thread(
|
||||
target=_pipe_serial_to_tcp, args=(conn, ser, stop), daemon=True
|
||||
)
|
||||
t_tcp_to_ser.start()
|
||||
t_ser_to_tcp.start()
|
||||
|
||||
stop.wait() # block until either thread sets the stop flag
|
||||
|
||||
log.info("Session ended, cleaning up")
|
||||
try:
|
||||
conn.close()
|
||||
except OSError:
|
||||
pass
|
||||
try:
|
||||
ser.close()
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
t_tcp_to_ser.join(timeout=2.0)
|
||||
t_ser_to_tcp.join(timeout=2.0)
|
||||
log.info("Session with %s closed", peer)
|
||||
|
||||
|
||||
# ── Server ────────────────────────────────────────────────────────────────────
|
||||
|
||||
def run_bridge(serial_port: str, baud: int, tcp_port: int, boot_delay: float) -> None:
|
||||
"""Accept TCP connections forever and bridge each one to the serial port."""
|
||||
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
srv.bind(("0.0.0.0", tcp_port))
|
||||
srv.listen(1)
|
||||
log.info(
|
||||
"Listening on TCP :%d — will bridge to %s at %d baud",
|
||||
tcp_port, serial_port, baud,
|
||||
)
|
||||
log.info("Send test: curl 'http://localhost:8200/device/info?host=127.0.0.1&tcp_port=%d'", tcp_port)
|
||||
|
||||
try:
|
||||
while True:
|
||||
conn, addr = srv.accept()
|
||||
# Handle one session at a time (synchronous) — matches modem behaviour
|
||||
_run_session(conn, addr, serial_port, baud, boot_delay)
|
||||
except KeyboardInterrupt:
|
||||
log.info("Shutting down")
|
||||
finally:
|
||||
srv.close()
|
||||
|
||||
|
||||
# ── Entry point ────────────────────────────────────────────────────────────────
|
||||
|
||||
if __name__ == "__main__":
|
||||
ap = argparse.ArgumentParser(description="TCP-to-serial bridge for bench testing TcpTransport")
|
||||
ap.add_argument("--serial", default="COM5", help="Serial port (default: COM5)")
|
||||
ap.add_argument("--baud", type=int, default=DEFAULT_BAUD, help="Baud rate (default: 38400)")
|
||||
ap.add_argument("--tcp-port", type=int, default=DEFAULT_TCP_PORT, help="TCP listen port (default: 12345)")
|
||||
ap.add_argument("--boot-delay", type=float, default=BOOT_DELAY,
|
||||
help="Seconds to wait after opening serial before forwarding (default: 2.0). "
|
||||
"Set to 0 if unit is already powered on.")
|
||||
ap.add_argument("--debug", action="store_true", help="Show individual byte transfers")
|
||||
args = ap.parse_args()
|
||||
|
||||
if args.debug:
|
||||
logging.getLogger().setLevel(logging.DEBUG)
|
||||
|
||||
run_bridge(args.serial, args.baud, args.tcp_port, args.boot_delay)
|
||||
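The session logic above pairs two forwarding threads with a shared stop event so that either direction dying tears down the whole session. A self-contained sketch of that pattern, using a `socketpair` in place of the serial port so it runs without pyserial or hardware (the `pipe` helper and byte values here are illustrative, not part of the bridge):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket, stop: threading.Event) -> None:
    """Forward bytes src → dst until EOF/error, then flag the session stop."""
    while not stop.is_set():
        try:
            data = src.recv(256)
        except OSError:
            break
        if not data:          # peer closed
            break
        dst.sendall(data)
    stop.set()                # either direction dying ends the session

tcp_a, tcp_b = socket.socketpair()   # stands in for the TCP client ↔ bridge
ser_a, ser_b = socket.socketpair()   # stands in for the bridge ↔ serial port
stop = threading.Event()
t = threading.Thread(target=pipe, args=(tcp_b, ser_a, stop), daemon=True)
t.start()

ser_b.settimeout(2.0)
tcp_a.sendall(b"\x41\x02")           # arbitrary bytes heading for the "device"
assert ser_b.recv(2) == b"\x41\x02"

tcp_a.close()                        # client hangs up → thread sets stop
t.join(timeout=2.0)
assert stop.is_set()
```

`stop.wait()` in `_run_session` then unblocks as soon as any one thread flips the event, which is why the cleanup code can close both endpoints unconditionally.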
File diff suppressed because it is too large

27  minimateplus/__init__.py  Normal file

@@ -0,0 +1,27 @@
"""
minimateplus — Instantel MiniMate Plus protocol library.

Provides a clean Python API for communicating with MiniMate Plus seismographs
over RS-232 serial (direct cable) or TCP (modem / ACH Auto Call Home).

Typical usage (serial):
    from minimateplus import MiniMateClient

    with MiniMateClient("COM5") as device:
        info = device.connect()
        events = device.get_events()

Typical usage (TCP / modem):
    from minimateplus import MiniMateClient
    from minimateplus.transport import TcpTransport

    with MiniMateClient(transport=TcpTransport("203.0.113.5", 12345)) as device:
        info = device.connect()
"""

from .client import MiniMateClient
from .models import DeviceInfo, Event
from .transport import SerialTransport, TcpTransport

__version__ = "0.1.0"
__all__ = ["MiniMateClient", "DeviceInfo", "Event", "SerialTransport", "TcpTransport"]
1242  minimateplus/client.py  Normal file

File diff suppressed because it is too large

485  minimateplus/framing.py  Normal file

@@ -0,0 +1,485 @@
"""
framing.py — DLE frame codec for the Instantel MiniMate Plus RS-232 protocol.

Wire format:
    BW→S3 (our requests):   [ACK=0x41] [STX=0x02] [stuffed payload+chk] [ETX=0x03]
    S3→BW (device replies): [DLE=0x10] [STX=0x02] [stuffed payload+chk] [ETX=0x03]
    (The terminating ETX is a bare byte, not preceded by a DLE — see the
    streaming parser below; DLE+ETX inside a payload is an inner-frame
    terminator, not the frame end.)

The ACK 0x41 byte often precedes S3 frames too — it is silently discarded
by the streaming parser.

De-stuffed payload layout:
    BW→S3 request frame:
        [0]    CMD     0x10 (BW request marker)
        [1]    flags   0x00
        [2]    SUB     command sub-byte
        [3]    0x00    always zero in captured frames
        [4]    0x00    always zero in captured frames
        [5]    OFFSET  two-step offset: 0x00 = length-probe, DATA_LEN = data-request
        [6-15] zero padding (total de-stuffed payload = 16 bytes)

    S3→BW response frame:
        [0]    CMD      0x00 (S3 response marker)
        [1]    flags    0x10
        [2]    SUB      response sub-byte (= 0xFF - request SUB)
        [3]    PAGE_HI  high byte of page address (always 0x00 in observed frames)
        [4]    PAGE_LO  low byte (always 0x00 in observed frames)
        [5+]   data     payload data section (composite inner frames for large responses)

DLE stuffing rule: any 0x10 byte in the payload is doubled on the wire (0x10 → 0x10 0x10).
This applies to the checksum byte too.

Confirmed from live captures (s3_parser.py validation + raw_bw.bin / raw_s3.bin).
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import Optional

# ── Protocol byte constants ───────────────────────────────────────────────────

DLE = 0x10  # Data Link Escape
STX = 0x02  # Start of text
ETX = 0x03  # End of text
ACK = 0x41  # Acknowledgement / frame-start marker (BW side)

BW_CMD = 0x10    # CMD byte value in BW→S3 frames
S3_CMD = 0x00    # CMD byte value in S3→BW frames
S3_FLAGS = 0x10  # flags byte value in S3→BW frames

# BW read-command payload size: 6 header bytes + 10 params bytes = 16 total.
# Confirmed from captured raw_bw.bin: all read-command frames carry exactly 16
# de-stuffed bytes (excluding the appended checksum).
_BW_PAYLOAD_SIZE = 16
# ── DLE stuffing / de-stuffing ────────────────────────────────────────────────

def dle_stuff(data: bytes) -> bytes:
    """Escape literal 0x10 bytes: 0x10 → 0x10 0x10."""
    out = bytearray()
    for b in data:
        if b == DLE:
            out.append(DLE)
        out.append(b)
    return bytes(out)


def dle_unstuff(data: bytes) -> bytes:
    """Remove DLE stuffing: 0x10 0x10 → 0x10."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == DLE and i + 1 < len(data) and data[i + 1] == DLE:
            out.append(DLE)
            i += 2
        else:
            out.append(b)
            i += 1
    return bytes(out)
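The stuffing rule round-trips exactly: every literal 0x10 is doubled on the wire and de-stuffing collapses each pair back. A standalone sketch of the same rule (re-implemented here so it runs outside the library; the payload bytes are made up):

```python
DLE = 0x10  # Data Link Escape

def dle_stuff(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b == DLE:
            out.append(DLE)  # double the escape byte
        out.append(b)
    return bytes(out)

def dle_unstuff(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == DLE and i + 1 < len(data) and data[i + 1] == DLE:
            out.append(DLE)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

payload = bytes([0x10, 0x00, 0x5B, 0x10, 0x10])
wire = dle_stuff(payload)
# three 0x10 bytes each become 0x10 0x10: 5 payload bytes → 8 wire bytes
assert wire.hex() == "1010005b10101010"
assert dle_unstuff(wire) == payload
```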
# ── Checksum ─────────────────────────────────────────────────────────────────

def checksum(payload: bytes) -> int:
    """SUM8: sum of all de-stuffed payload bytes, mod 256."""
    return sum(payload) & 0xFF
# ── BW→S3 frame builder ───────────────────────────────────────────────────────

# SUB byte for 5A — used by build_5a_frame below (protocol.py has the full
# constant set; defined here to avoid a circular import).
SUB_5A = 0x5A


def build_5a_frame(offset_word: int, raw_params: bytes) -> bytes:
    """
    Build a BW→S3 frame for SUB 5A (BULK_WAVEFORM_STREAM) that exactly
    matches Blastware's wire output.

    SUB 5A uses a DIFFERENT frame format from all other read commands:
      1. The offset field (bytes [4:6]) is written RAW — the 0x10 in
         offset_hi=0x10 is NOT DLE-stuffed, unlike build_bw_frame().
      2. The checksum uses a DLE-aware sum: for each 0x10 XX pair in the
         stuffed section, only XX contributes; lone bytes contribute normally.
         This differs from the standard SUM8 checksum on the unstuffed payload.

    Both differences are confirmed from the 1-2-26 BW TX capture (all 10 frames
    verified against this algorithm on 2026-04-02).

    Args:
        offset_word: 16-bit offset (0x1004 for probe/chunks, 0x005A for term).
        raw_params:  10 or 11 param bytes (from bulk_waveform_params or
                     bulk_waveform_term_params). 0x10 bytes in params ARE
                     DLE-stuffed (BW confirmed this for counter=0x1000 and
                     counter=0x1004 in the capture).

    Returns:
        Complete frame bytes: [ACK][STX][stuffed_section][chk][ETX]
    """
    if len(raw_params) not in (10, 11):
        raise ValueError(f"raw_params must be 10 or 11 bytes, got {len(raw_params)}")

    # Build stuffed section between STX and checksum
    s = bytearray()
    s += b"\x10\x10"      # DLE-stuffed BW_CMD
    s += b"\x00"          # flags
    s += bytes([SUB_5A])  # sub = 0x5A
    s += b"\x00"          # field3
    s += bytes([(offset_word >> 8) & 0xFF,  # offset_hi — raw, NOT stuffed
                offset_word & 0xFF])        # offset_lo
    for b in raw_params:  # params — DLE-stuffed
        if b == DLE:
            s.append(DLE)
        s.append(b)

    # DLE-aware checksum: for 0x10 XX pairs count XX; for lone bytes count them
    chk, i = 0, 0
    while i < len(s):
        if s[i] == DLE and i + 1 < len(s):
            chk = (chk + s[i + 1]) & 0xFF
            i += 2
        else:
            chk = (chk + s[i]) & 0xFF
            i += 1

    return bytes([ACK, STX]) + bytes(s) + bytes([chk, ETX])
def build_bw_frame(sub: int, offset: int = 0, params: bytes = bytes(10)) -> bytes:
    """
    Build a BW→S3 read-command frame.

    The payload is always 16 de-stuffed bytes:
        [BW_CMD, 0x00, sub, 0x00, 0x00, offset] + params(10 bytes)

    Confirmed from BW capture analysis: payload[3] and payload[4] are always
    0x00 across all observed read commands. The two-step offset lives at
    payload[5]: 0x00 for the length-probe step, DATA_LEN for the data-fetch step.

    The 10 params bytes (payload[6..15]) are zero for standard reads. For
    keyed reads (SUBs 0A, 0C) the 4-byte waveform key lives at params[4..7]
    (= payload[10..13]). For token-based reads (SUBs 1E, 1F) a single token
    byte lives at params[7] (= payload[13]). Use waveform_key_params() and
    token_params() helpers to build these safely.

    Wire output: [ACK] [STX] dle_stuff(payload + checksum) [ETX]

    Args:
        sub:    SUB command byte (e.g. 0x01 = FULL_CONFIG_READ)
        offset: Value placed at payload[5].
                Pass 0 for the probe step; pass DATA_LENGTHS[sub] for the data step.
        params: 10 bytes placed at payload[6..15]. Default: all zeros.

    Returns:
        Complete frame bytes ready to write to the serial port / socket.
    """
    if len(params) != 10:
        raise ValueError(f"params must be exactly 10 bytes, got {len(params)}")
    if offset > 0xFFFF:
        raise ValueError(f"offset must fit in uint16, got {offset:#06x}")
    # offset is a uint16 split across bytes [4] (high) and [5] (low).
    # For all standard reads (offset ≤ 0xFF), byte[4] = 0x00 — consistent with
    # every captured BW frame. For large payloads (e.g. SUB 1A / E5 at 0x082A),
    # byte[4] carries the high byte. 🔶 INFERRED — confirm once E5 is captured.
    offset_hi = (offset >> 8) & 0xFF
    offset_lo = offset & 0xFF
    payload = bytes([BW_CMD, 0x00, sub, 0x00, offset_hi, offset_lo]) + params
    chk = checksum(payload)
    wire = bytes([ACK, STX]) + dle_stuff(payload + bytes([chk])) + bytes([ETX])
    return wire
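Working through the layout above by hand for one concrete frame helps sanity-check it. A minimal standalone re-derivation of the POLL (SUB 0x5B) length-probe frame, following the documented layout (constants mirror those in this file; this is a sketch, not the library function):

```python
# Re-derive the POLL (SUB 0x5B) length-probe frame from the documented layout.
ACK, STX, ETX, DLE = 0x41, 0x02, 0x03, 0x10

def stuff(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b == DLE:
            out.append(DLE)
        out.append(b)
    return bytes(out)

sub, offset = 0x5B, 0x00
payload = bytes([0x10, 0x00, sub, 0x00, 0x00, offset]) + bytes(10)
chk = sum(payload) & 0xFF        # SUM8 over the de-stuffed payload → 0x6B
wire = bytes([ACK, STX]) + stuff(payload + bytes([chk])) + bytes([ETX])

# BW_CMD 0x10 is the only byte needing stuffing here, so the 17 payload+chk
# bytes become 18 on the wire: 21 bytes total including ACK/STX/ETX.
assert len(wire) == 21
assert wire.hex() == "41021010005b" + "00" * 13 + "6b03"
```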
def waveform_key_params(key4: bytes) -> bytes:
    """
    Build the 10-byte params block that carries a 4-byte waveform key.

    Used for SUBs 0A (WAVEFORM_HEADER) and 0C (WAVEFORM_RECORD).
    The key goes at params[4..7], which maps to payload[10..13].

    Confirmed from 3-31-26 capture: 0A and 0C request frames carry the
    4-byte record address at payload[10..13]. Probe and data-fetch steps
    carry the same key in both frames.

    Args:
        key4: exactly 4 bytes — the opaque waveform record address returned
              by the EVENT_HEADER (1E) or EVENT_ADVANCE (1F) response.

    Returns:
        10-byte params block with key embedded at positions [4..7].
    """
    if len(key4) != 4:
        raise ValueError(f"waveform key must be 4 bytes, got {len(key4)}")
    p = bytearray(10)
    p[4:8] = key4
    return bytes(p)


def token_params(token: int = 0) -> bytes:
    """
    Build the 10-byte params block that carries a single token byte.

    Used for SUBs 1E (EVENT_HEADER) and 1F (EVENT_ADVANCE).
    The token goes at params[7], which maps to payload[13].

    Confirmed from BOTH 3-31-26 and 4-3-26 BW TX captures:
        raw params bytes: 00 00 00 00 00 00 00 fe 00 00
        token is at index 7 (not 6 — that was wrong).

    - token=0x00: first-event read / browse mode (no download marking)
    - token=0xfe: download mode (causes 1F to skip partial bins and
      advance to the next full record)

    The device echoes the token at data[8] of the S3 response (payload[13]),
    distinct from the next-event key at data[11:15] (payload[16:20]).

    Args:
        token: single byte to place at params[7] / payload[13].

    Returns:
        10-byte params block with token at position [7].
    """
    p = bytearray(10)
    p[7] = token
    return bytes(p)
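The two params layouts are easy to confuse, so here is a standalone sketch showing the resulting byte patterns side by side (the helpers are copied in miniature so this runs on its own; the key bytes are hypothetical placeholders):

```python
# Sketch of the two params layouts: 4-byte waveform key at params[4..7]
# (SUBs 0A/0C) vs a single token byte at params[7] (SUBs 1E/1F).

def waveform_key_params(key4: bytes) -> bytes:
    p = bytearray(10)
    p[4:8] = key4
    return bytes(p)

def token_params(token: int = 0) -> bytes:
    p = bytearray(10)
    p[7] = token
    return bytes(p)

key = bytes([0x12, 0x34, 0x56, 0x78])  # hypothetical record address
assert waveform_key_params(key).hex() == "00000000123456780000"

# Download-mode token 0xFE reproduces the captured params bytes:
#   00 00 00 00 00 00 00 fe 00 00
assert token_params(0xFE).hex() == "00000000000000fe0000"
```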
def bulk_waveform_params(key4: bytes, counter: int, *, is_probe: bool = False) -> bytes:
    """
    Build the 11-byte params block for SUB 5A (BULK_WAVEFORM_STREAM) requests.

    Confirmed 2026-04-02 from 1-2-26 BW TX capture analysis:

    Probe / first request (is_probe=True, counter=0):
        params[0]   = 0x00
        params[1:5] = key4 (all 4 key bytes; counter overlaps key4[2:4] = 0x0000)
        params[5:]  = zeros

    Regular chunk requests (is_probe=False):
        params[0]   = 0x00
        params[1:3] = key4[0:2]           (first 2 key bytes as session handle)
        params[3:5] = counter (BE uint16) (chunk position, increments by 0x0400)
        params[5:]  = zeros

    Termination request: DO NOT use this helper — see bulk_waveform_term_params().

    Args:
        key4:     4-byte waveform key from EVENT_HEADER (1E) response.
        counter:  Chunk position counter (uint16 BE). Pass 0 for probe.
        is_probe: If True, embed full key4 (probe step only).

    Returns:
        11-byte params block. (BW confirmed: chunk frames carry 11 params bytes,
        not 10; the extra trailing 0x00 was confirmed from 1-2-26 wire capture
        on 2026-04-02.)
    """
    if len(key4) != 4:
        raise ValueError(f"waveform key must be 4 bytes, got {len(key4)}")
    p = bytearray(11)  # 11 bytes confirmed from BW wire capture
    p[0] = 0x00
    p[1] = key4[0]
    p[2] = key4[1]
    if is_probe:
        # Full key4; counter=0 is implied (overlaps with key4[2:4] which must be 0x0000)
        p[3] = key4[2]
        p[4] = key4[3]
    else:
        p[3] = (counter >> 8) & 0xFF
        p[4] = counter & 0xFF
    return bytes(p)
def bulk_waveform_term_params(key4: bytes, counter: int) -> bytes:
    """
    Build the 10-byte params block for the SUB 5A termination request.

    The termination request uses offset=0x005A and a DIFFERENT params layout —
    the leading 0x00 byte is dropped, key4[0:2] shifts to params[0:2], and the
    counter high byte is at params[2]:

        params[0]  = key4[0]
        params[1]  = key4[1]
        params[2]  = (counter >> 8) & 0xFF
        params[3:] = zeros

    Counter for the termination request = last_regular_counter + 0x0400.

    Confirmed from 1-2-26 BW TX capture: final request (frame 83) uses
    offset=0x005A, params[0:3] = key4[0:2] + term_counter_hi.

    Args:
        key4:    4-byte waveform key.
        counter: Termination counter (= last regular counter + 0x0400).

    Returns:
        10-byte params block.
    """
    if len(key4) != 4:
        raise ValueError(f"waveform key must be 4 bytes, got {len(key4)}")
    p = bytearray(10)
    p[0] = key4[0]
    p[1] = key4[1]
    p[2] = (counter >> 8) & 0xFF
    return bytes(p)
# ── Pre-built POLL frames ─────────────────────────────────────────────────────
#
# POLL (SUB 0x5B) uses the same two-step pattern as all other reads — the
# hardcoded length 0x30 lives at payload[5], exactly as in build_bw_frame().

POLL_PROBE = build_bw_frame(0x5B, 0x00)  # length-probe POLL (offset = 0)
POLL_DATA = build_bw_frame(0x5B, 0x30)   # data-request POLL (offset = 0x30)
# ── S3 response dataclass ─────────────────────────────────────────────────────

@dataclass
class S3Frame:
    """A fully parsed and de-stuffed S3→BW response frame."""
    sub: int       # response SUB byte (e.g. 0xA4 = POLL_RESPONSE)
    page_hi: int   # PAGE_HI from header (= data length on step-2 length response)
    page_lo: int   # PAGE_LO from header
    data: bytes    # payload data section (payload[5:], checksum already stripped)
    checksum_valid: bool

    @property
    def page_key(self) -> int:
        """Combined 16-bit page address / length: (page_hi << 8) | page_lo."""
        return (self.page_hi << 8) | self.page_lo
# ── Streaming S3 frame parser ─────────────────────────────────────────────────

class S3FrameParser:
    """
    Incremental byte-stream parser for S3→BW response frames.

    Feed incoming bytes with feed(). Complete, valid frames are returned
    immediately and also accumulated in self.frames.

    State machine:
        IDLE         — scanning for DLE (0x10)
        SEEN_DLE     — saw DLE, waiting for STX (0x02) to start a frame
        IN_FRAME     — collecting de-stuffed payload bytes; bare ETX ends frame
        IN_FRAME_DLE — inside frame, saw DLE; DLE continues stuffing;
                       DLE+ETX is treated as literal data (NOT a frame end),
                       which lets inner-frame terminators pass through intact

    Wire format confirmed from captures:
        [DLE=0x10] [STX=0x02] [stuffed payload+chk] [bare ETX=0x03]
    The ETX is NOT preceded by a DLE on the wire. DLE+ETX sequences that
    appear inside the payload are inner-frame terminators and must be
    treated as literal data.

    ACK (0x41) bytes and arbitrary non-DLE bytes in IDLE state are silently
    discarded (covers device boot string "Operating System" and keepalive ACKs).
    """

    _IDLE = 0
    _SEEN_DLE = 1
    _IN_FRAME = 2
    _IN_FRAME_DLE = 3

    def __init__(self) -> None:
        self._state = self._IDLE
        self._body = bytearray()  # accumulates de-stuffed frame bytes
        self.frames: list[S3Frame] = []

    def reset(self) -> None:
        self._state = self._IDLE
        self._body.clear()

    def feed(self, data: bytes) -> list[S3Frame]:
        """
        Process a chunk of incoming bytes.

        Returns a list of S3Frame objects completed during this call.
        All completed frames are also appended to self.frames.
        """
        completed: list[S3Frame] = []
        for b in data:
            frame = self._step(b)
            if frame is not None:
                completed.append(frame)
                self.frames.append(frame)
        return completed

    def _step(self, b: int) -> Optional[S3Frame]:
        """Process one byte. Returns a completed S3Frame or None."""

        if self._state == self._IDLE:
            if b == DLE:
                self._state = self._SEEN_DLE
            # ACK, boot strings, garbage — silently ignored

        elif self._state == self._SEEN_DLE:
            if b == STX:
                self._body.clear()
                self._state = self._IN_FRAME
            else:
                # Stray DLE not followed by STX — back to idle
                self._state = self._IDLE

        elif self._state == self._IN_FRAME:
            if b == DLE:
                self._state = self._IN_FRAME_DLE
            elif b == ETX:
                # Bare ETX = real frame terminator (confirmed from captures)
                frame = self._finalise()
                self._state = self._IDLE
                return frame
            else:
                self._body.append(b)

        elif self._state == self._IN_FRAME_DLE:
            if b == DLE:
                # DLE DLE → literal 0x10 in payload
                self._body.append(DLE)
                self._state = self._IN_FRAME
            elif b == ETX:
                # DLE+ETX inside a frame is an inner-frame terminator, NOT
                # the outer frame end. Treat as literal data and continue.
                self._body.append(DLE)
                self._body.append(ETX)
                self._state = self._IN_FRAME
            else:
                # Unexpected DLE + byte — treat both as literal data and continue
                self._body.append(DLE)
                self._body.append(b)
                self._state = self._IN_FRAME

        return None

    def _finalise(self) -> Optional[S3Frame]:
        """
        Called when a bare ETX frame terminator is seen. Validates the
        checksum and builds an S3Frame.
        Returns None if the frame is too short or structurally invalid.
        """
        body = bytes(self._body)

        # Minimum valid frame: 5-byte header + at least 1 checksum byte = 6
        if len(body) < 6:
            return None

        raw_payload = body[:-1]  # everything except the trailing checksum byte
        chk_received = body[-1]
        chk_computed = checksum(raw_payload)

        if len(raw_payload) < 5:
            return None

        # Validate CMD byte — we only accept S3→BW response frames here
        if raw_payload[0] != S3_CMD:
            return None

        return S3Frame(
            sub=raw_payload[2],
            page_hi=raw_payload[3],
            page_lo=raw_payload[4],
            data=raw_payload[5:],
            checksum_valid=(chk_received == chk_computed),
        )
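A miniature, standalone walk of the state machine above: it de-stuffs one synthetic S3→BW frame and verifies its SUM8 checksum. The frame bytes are invented for illustration (sub 0xA4 mimics a POLL_RESPONSE) and the parser here is a compressed sketch, not the class above:

```python
DLE, STX, ETX = 0x10, 0x02, 0x03

def parse_one(wire: bytes):
    """Return (de-stuffed payload, checksum_ok) for the first frame in wire."""
    state, body = "IDLE", bytearray()
    for b in wire:
        if state == "IDLE":
            state = "SEEN_DLE" if b == DLE else "IDLE"
        elif state == "SEEN_DLE":
            if b == STX:
                body.clear()
                state = "IN_FRAME"
            else:
                state = "IDLE"
        elif state == "IN_FRAME":
            if b == DLE:
                state = "IN_FRAME_DLE"
            elif b == ETX:                    # bare ETX terminates the frame
                payload, chk = bytes(body[:-1]), body[-1]
                return payload, chk == sum(body[:-1]) & 0xFF
            else:
                body.append(b)
        else:  # IN_FRAME_DLE: DLE DLE → literal 0x10; DLE+other stays literal
            body.append(DLE)
            if b != DLE:
                body.append(b)
            state = "IN_FRAME"
    return None

# Synthetic frame: payload 00 10 A4 00 02 AA BB, checksum 0x1B; the 0x10
# flags byte is DLE-stuffed on the wire, the trailing ETX is bare.
wire = bytes.fromhex("1002" "001010a40002aabb1b" "03")
payload, ok = parse_one(wire)
assert ok and payload[2] == 0xA4 and payload[5:] == b"\xaa\xbb"
```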
419  minimateplus/models.py  Normal file

@@ -0,0 +1,419 @@
"""
|
||||
models.py — Plain-Python data models for the MiniMate Plus protocol library.
|
||||
|
||||
All models are intentionally simple dataclasses with no protocol logic.
|
||||
They represent *decoded* device data — the client layer translates raw frame
|
||||
bytes into these objects, and the SFM API layer serialises them to JSON.
|
||||
|
||||
Notes on certainty:
|
||||
Fields marked ✅ are confirmed from captured data.
|
||||
Fields marked 🔶 are strongly inferred but not formally proven.
|
||||
Fields marked ❓ are present in the captured payload but not yet decoded.
|
||||
See docs/instantel_protocol_reference.md for full derivation details.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import struct
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Optional
|
||||
|
||||
|
||||
# ── Timestamp ─────────────────────────────────────────────────────────────────
|
||||
|
||||
@dataclass
|
||||
class Timestamp:
|
||||
"""
|
||||
Event timestamp decoded from the MiniMate Plus wire format.
|
||||
|
||||
Two source formats exist:
|
||||
|
||||
1. 6-byte format (from event index / 1E header — not yet decoded in client):
|
||||
[flag:1] [year:2 BE] [unknown:1] [month:1] [day:1]
|
||||
Use Timestamp.from_bytes().
|
||||
|
||||
2. 9-byte format (from Full Waveform Record / 0C, bytes 0–8) ✅ CONFIRMED:
|
||||
[day:1] [sub_code:1] [month:1] [year:2 BE] [unknown:1] [hour:1] [min:1] [sec:1]
|
||||
Use Timestamp.from_waveform_record().
|
||||
|
||||
Confirmed 2026-04-01 against Blastware event report (BE11529 thump event):
|
||||
raw bytes: 01 10 04 07 ea 00 00 1c 0c
|
||||
→ day=1, sub_code=0x10 (Waveform mode), month=4, year=2026,
|
||||
hour=0, minute=28, second=12 ← matches Blastware "00:28:12 April 1, 2026"
|
||||
|
||||
The sub_code at byte[1] is the record-mode indicator:
|
||||
0x10 → Waveform (continuous / single-shot) ✅
|
||||
other → Histogram (code not yet captured ❓)
|
||||
|
||||
The year 1995 is the device's factory-default RTC date — it appears
|
||||
whenever the battery has been disconnected. Treat 1995 as "clock not set".
|
||||
"""
|
||||
raw: bytes # raw bytes for round-tripping
|
||||
flag: int # byte 0 of 6-byte format, or sub_code from 9-byte format
|
||||
year: int # ✅
|
||||
unknown_byte: int # separator byte (purpose unclear ❓)
|
||||
month: int # ✅
|
||||
day: int # ✅
|
||||
|
||||
# Time fields — populated only from the 9-byte waveform-record format
|
||||
hour: Optional[int] = None # ✅ (waveform record format)
|
||||
minute: Optional[int] = None # ✅ (waveform record format)
|
||||
second: Optional[int] = None # ✅ (waveform record format)
|
||||
|
||||
@classmethod
|
||||
def from_bytes(cls, data: bytes) -> "Timestamp":
|
||||
"""
|
||||
Decode a 6-byte timestamp (6-byte event-index format).
|
||||
|
||||
Args:
|
||||
data: exactly 6 bytes from the device payload.
|
||||
|
||||
Returns:
|
||||
Decoded Timestamp (no time fields).
|
||||
|
||||
Raises:
|
||||
ValueError: if data is not exactly 6 bytes.
|
||||
"""
|
||||
if len(data) != 6:
|
||||
raise ValueError(f"Timestamp requires exactly 6 bytes, got {len(data)}")
|
||||
flag = data[0]
|
||||
year = struct.unpack_from(">H", data, 1)[0]
|
||||
unknown_byte = data[3]
|
||||
month = data[4]
|
||||
day = data[5]
|
||||
return cls(
|
||||
raw=bytes(data),
|
||||
flag=flag,
|
||||
year=year,
|
||||
unknown_byte=unknown_byte,
|
||||
month=month,
|
||||
day=day,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_waveform_record(cls, data: bytes) -> "Timestamp":
|
||||
"""
|
||||
Decode a 9-byte timestamp from the first bytes of a 210-byte waveform
|
||||
record (SUB 0C / Full Waveform Record response).
|
||||
|
||||
Wire layout (✅ CONFIRMED 2026-04-01 against Blastware event report):
|
||||
byte[0]: day (uint8)
|
||||
byte[1]: sub_code / mode flag (0x10 = Waveform single-shot)
|
||||
byte[2]: month (uint8)
|
||||
bytes[3–4]: year (big-endian uint16)
|
||||
byte[5]: unknown (0x00 in all observed samples)
|
||||
byte[6]: hour (uint8)
|
||||
byte[7]: minute (uint8)
|
||||
byte[8]: second (uint8)
|
||||
|
||||
Used for sub_code=0x10 records only. For sub_code=0x03 (continuous
|
||||
mode) use from_continuous_record() — the layout is shifted by 1 byte.
|
||||
|
||||
Args:
|
||||
data: at least 9 bytes; only the first 9 are consumed.
|
||||
|
||||
Returns:
|
||||
Decoded Timestamp with hour/minute/second populated.
|
||||
|
||||
Raises:
|
||||
ValueError: if data is fewer than 9 bytes.
|
||||
"""
|
||||
if len(data) < 9:
|
||||
raise ValueError(
|
||||
f"Waveform record timestamp requires at least 9 bytes, got {len(data)}"
|
||||
)
|
||||
day = data[0]
|
||||
sub_code = data[1] # 0x10 = Waveform single-shot
|
||||
month = data[2]
|
||||
year = struct.unpack_from(">H", data, 3)[0]
|
||||
unknown_byte = data[5]
|
||||
hour = data[6]
|
||||
minute = data[7]
|
||||
second = data[8]
|
||||
return cls(
|
||||
raw=bytes(data[:9]),
|
||||
flag=sub_code,
|
||||
year=year,
|
||||
unknown_byte=unknown_byte,
|
||||
month=month,
|
||||
day=day,
|
||||
hour=hour,
|
||||
minute=minute,
|
||||
second=second,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_continuous_record(cls, data: bytes) -> "Timestamp":
|
||||
"""
|
||||
Decode a 10-byte timestamp from the first bytes of a sub_code=0x03
|
||||
(Waveform Continuous) 210-byte record.
|
||||
|
||||
Wire layout (✅ CONFIRMED 2026-04-03 against Blastware event report,
|
||||
event recorded at 15:20:17 April 3 2026, raw: 10 03 10 04 07 ea 00 0f 14 11):
|
||||
byte[0]: unknown_a (0x10 observed — meaning TBD)
|
||||
byte[1]: day (uint8)
|
||||
byte[2]: unknown_b (0x10 observed — meaning TBD)
|
||||
bytes[3]: month (uint8)
|
||||
bytes[4–5]: year (big-endian uint16)
|
||||
byte[6]: unknown (0x00 in all observed samples)
|
||||
byte[7]: hour (uint8)
|
||||
byte[8]: minute (uint8)
|
||||
byte[9]: second (uint8)
|
||||
|
||||
This is the sub_code=0x10 layout shifted forward by 1 byte, with two
|
||||
extra unknown bytes at [0] and [2]. The sub_code (0x03) itself is at
|
||||
byte[1] in the raw record, which also encodes the day — but the day
|
||||
value (3 = April 3rd) happens to differ from the sub_code (0x03) only
|
||||
in semantics; the byte is shared.
|
||||
|
||||
Args:
|
||||
data: at least 10 bytes; only the first 10 are consumed.
|
||||
|
||||
Returns:
|
||||
Decoded Timestamp with hour/minute/second populated.
|
||||
|
||||
Raises:
|
||||
ValueError: if data is fewer than 10 bytes.
|
||||
"""
|
||||
if len(data) < 10:
|
||||
raise ValueError(
|
||||
f"Continuous record timestamp requires at least 10 bytes, got {len(data)}"
|
||||
)
|
||||
unknown_a = data[0] # 0x10 observed; meaning unknown
|
||||
day = data[1] # doubles as the sub_code byte (0x03) — day=3 on Apr 3
|
||||
unknown_b = data[2] # 0x10 observed; meaning unknown
|
||||
month = data[3]
|
||||
year = struct.unpack_from(">H", data, 4)[0]
|
||||
unknown_byte = data[6]
|
||||
hour = data[7]
|
||||
minute = data[8]
|
||||
second = data[9]
|
||||
return cls(
|
||||
raw=bytes(data[:10]),
|
||||
flag=unknown_a,
|
||||
year=year,
|
||||
unknown_byte=unknown_byte,
|
||||
month=month,
|
||||
day=day,
|
||||
hour=hour,
|
||||
minute=minute,
|
||||
second=second,
|
||||
)
|
||||
|
||||
@property
|
||||
def clock_set(self) -> bool:
|
||||
"""False when year == 1995 (factory default / battery-lost state)."""
|
||||
return self.year != 1995
|
||||
|
||||
def __str__(self) -> str:
|
||||
if not self.clock_set:
|
||||
return f"CLOCK_NOT_SET ({self.year}-{self.month:02d}-{self.day:02d})"
|
||||
date_str = f"{self.year}-{self.month:02d}-{self.day:02d}"
|
||||
if self.hour is not None:
|
||||
return f"{date_str} {self.hour:02d}:{self.minute:02d}:{self.second:02d}"
|
||||
return date_str
|
||||
|
||||
|
||||
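The 9-byte waveform-record layout can be checked directly against the capture bytes quoted in the docstring. A standalone decode of those confirmed bytes:

```python
# Decode the confirmed 9-byte waveform-record timestamp layout using the
# capture bytes quoted above (BE11529 thump event).
import struct

raw = bytes.fromhex("01100407ea00001c0c")
day, sub_code, month = raw[0], raw[1], raw[2]
(year,) = struct.unpack_from(">H", raw, 3)   # bytes[3-4], big-endian
hour, minute, second = raw[6], raw[7], raw[8]

assert (year, month, day) == (2026, 4, 1)
assert sub_code == 0x10                       # Waveform mode
assert (hour, minute, second) == (0, 28, 12)  # "00:28:12 April 1, 2026"
```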
# ── Device identity ───────────────────────────────────────────────────────────

@dataclass
class DeviceInfo:
    """
    Combined device identity information gathered during the startup sequence.

    Populated from three response SUBs:
        - SUB EA (SERIAL_NUMBER_RESPONSE): serial, firmware_minor
        - SUB FE (FULL_CONFIG_RESPONSE):   serial (repeat), firmware_version,
                                           dsp_version, manufacturer, model
        - SUB A4 (POLL_RESPONSE):          manufacturer (repeat), model (repeat)

    All string fields are stripped of null padding before storage.
    """

    # ── From SUB EA (SERIAL_NUMBER_RESPONSE) ─────────────────────────────────
    serial: str          # e.g. "BE18189" ✅
    firmware_minor: int  # 0x11 = 17 for S337.17 ✅
    serial_trail_0: Optional[int] = None  # unit-specific byte — purpose unknown ❓

    # ── From SUB FE (FULL_CONFIG_RESPONSE) ────────────────────────────────────
    firmware_version: Optional[str] = None  # e.g. "S337.17" ✅
    dsp_version: Optional[str] = None       # e.g. "10.72" ✅
    manufacturer: Optional[str] = None      # e.g. "Instantel" ✅
    model: Optional[str] = None             # e.g. "MiniMate Plus" ✅

    # ── From SUB 1A (COMPLIANCE_CONFIG_RESPONSE) ──────────────────────────────
    compliance_config: Optional["ComplianceConfig"] = None  # E5 response, read in connect()

    # ── From SUB 08 (EVENT_INDEX_RESPONSE) ────────────────────────────────────
    event_count: Optional[int] = None  # stored event count from F7 response 🔶

    def __str__(self) -> str:
        fw = self.firmware_version or f"?.{self.firmware_minor}"
        mdl = self.model or "MiniMate Plus"
        return f"{mdl} S/N:{self.serial} FW:{fw}"
# ── Channel threshold / scaling ───────────────────────────────────────────────


@dataclass
class ChannelConfig:
    """
    Per-channel threshold and scaling values from SUB E5 / SUB 71.

    Floats are stored in the device in imperial units (in/s for geo channels,
    psi for MicL). Unit strings embedded in the payload confirm this.

    Certainty: ✅ CONFIRMED for trigger_level, alarm_level, unit strings.
    """

    label: str            # e.g. "Tran", "Vert", "Long", "MicL" ✅
    trigger_level: float  # in/s (geo) or psi (MicL) ✅
    alarm_level: float    # in/s (geo) or psi (MicL) ✅
    max_range: float      # full-scale calibration constant (e.g. 6.206) 🔶
    unit_label: str       # e.g. "in./s" or "psi" ✅


# ── Peak values for one event ─────────────────────────────────────────────────


@dataclass
class PeakValues:
    """
    Per-channel peak particle velocity / pressure for a single event, plus the
    scalar Peak Vector Sum.

    Extracted from the Full Waveform Record (request SUB 0C → response SUB F3),
    stored as IEEE 754 big-endian floats in the device's native units
    (in/s / psi).

    Per-channel PPV location (✅ CONFIRMED 2026-04-01):
        Found by searching for the 4-byte channel label string ("Tran", "Vert",
        "Long", "MicL") and reading the float at label_offset + 6.

    Peak Vector Sum (✅ CONFIRMED 2026-04-01):
        Fixed offset 87 in the 210-byte record.
        = √(Tran² + Vert² + Long²) at the sample instant of maximum combined
        geo motion. NOT the vector sum of the three per-channel peak values
        (those may occur at different times).
        Matches Blastware's "Peak Vector Sum" display exactly.
    """

    tran: Optional[float] = None             # Transverse PPV (in/s) ✅
    vert: Optional[float] = None             # Vertical PPV (in/s) ✅
    long: Optional[float] = None             # Longitudinal PPV (in/s) ✅
    micl: Optional[float] = None             # Air overpressure (psi) 🔶 (units uncertain)
    peak_vector_sum: Optional[float] = None  # Scalar geo PVS (in/s) ✅

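The label-search rule documented above lends itself to a short sketch. This is a minimal illustration against a synthetic record, assuming only what the docstring states (4-byte label, big-endian float at label_offset + 6); the helper name `ppv_from_record` and the fake record are hypothetical:

```python
import struct
from typing import Optional


def ppv_from_record(record: bytes, label: bytes) -> Optional[float]:
    """Read the big-endian IEEE 754 float 6 bytes past a channel label,
    per the confirmed layout. Returns None if the label is absent.
    (Sketch only; error handling is not part of the device protocol.)"""
    idx = record.find(label)
    if idx < 0 or idx + 10 > len(record):
        return None
    return struct.unpack_from(">f", record, idx + 6)[0]


# Synthetic 210-byte record: "Tran" label, 2 pad bytes, then 0.5 as >f.
fake = bytearray(210)
fake[40:44] = b"Tran"
fake[46:50] = struct.pack(">f", 0.5)
print(ppv_from_record(bytes(fake), b"Tran"))  # → 0.5
```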
# ── Project / operator metadata ───────────────────────────────────────────────


@dataclass
class ProjectInfo:
    """
    Operator-supplied project and location strings from the Full Waveform
    Record (SUB F3) and compliance config block (SUB E5 / SUB 71).

    All fields are optional — they may be blank if the operator did not fill
    them in through Blastware.
    """

    setup_name: Optional[str] = None       # "Standard Recording Setup"
    project: Optional[str] = None          # project description
    client: Optional[str] = None           # client name ✅ confirmed offset
    operator: Optional[str] = None         # operator / user name
    sensor_location: Optional[str] = None  # sensor location string
    notes: Optional[str] = None            # extended notes


# ── Compliance Config ─────────────────────────────────────────────────────────


@dataclass
class ComplianceConfig:
    """
    Device compliance and recording configuration from SUB 1A (response E5).

    Contains device-wide settings like record time, trigger/alarm thresholds,
    and operator-supplied strings. This is read once during connect() and
    cached in DeviceInfo.

    All fields are optional — some may not be decoded yet or may be absent
    from the device configuration.
    """

    raw: Optional[bytes] = None  # full 2090-byte payload (for debugging)

    # Recording parameters (✅ CONFIRMED from §7.6)
    record_time: Optional[float] = None  # seconds (7.0, 10.0, 13.0, etc.)
    sample_rate: Optional[int] = None    # sps (1024, 2048, 4096, etc.) — NOT YET FOUND ❓

    # Trigger/alarm levels (✅ CONFIRMED per-channel at §7.6)
    # For now we store the first geo channel (Transverse) as representative
    # values; full per-channel data would require structured Channel objects.
    trigger_level_geo: Optional[float] = None  # in/s (first geo channel)
    alarm_level_geo: Optional[float] = None    # in/s (first geo channel)
    max_range_geo: Optional[float] = None      # in/s full-scale range

    # Project/setup strings (sourced from E5 / SUB 71 write payload).
    # These are the FULL project metadata from the compliance config,
    # complementing the sparse ProjectInfo found in the waveform record (SUB 0C).
    setup_name: Optional[str] = None       # "Standard Recording Setup"
    project: Optional[str] = None          # project description
    client: Optional[str] = None           # client name
    operator: Optional[str] = None         # operator / user name
    sensor_location: Optional[str] = None  # sensor location string
    notes: Optional[str] = None            # extended notes / additional info


# ── Event ─────────────────────────────────────────────────────────────────────


@dataclass
class Event:
    """
    A single seismic event record downloaded from the device.

    Populated progressively across several request/response pairs:
      1. SUB 1E (EVENT_HEADER)         → index, timestamp, sample_rate
      2. SUB 0C (FULL_WAVEFORM_RECORD) → peak_values, project_info, record_type
      3. SUB 5A (BULK_WAVEFORM_STREAM) → raw_samples (downloaded on demand)

    Fields not yet retrieved are None.
    """

    # ── Identity ──────────────────────────────────────────────────────────────
    index: int  # 0-based event number on device

    # ── From EVENT_HEADER (SUB 1E) ────────────────────────────────────────────
    timestamp: Optional[Timestamp] = None  # 6-byte timestamp ✅
    sample_rate: Optional[int] = None      # samples/sec (e.g. 1024) 🔶

    # ── From FULL_WAVEFORM_RECORD (SUB F3) ───────────────────────────────────
    peak_values: Optional[PeakValues] = None
    project_info: Optional[ProjectInfo] = None
    record_type: Optional[str] = None  # e.g. "Histogram", "Waveform" 🔶

    # ── From BULK_WAVEFORM_STREAM (SUB 5A) ───────────────────────────────────
    # Raw ADC samples keyed by channel label. Not fetched unless explicitly
    # requested (large data transfer — up to several MB per event).
    raw_samples: Optional[dict] = None     # {"Tran": [...], "Vert": [...], ...}
    total_samples: Optional[int] = None    # from STRT record: expected total sample-sets
    pretrig_samples: Optional[int] = None  # from STRT record: pre-trigger sample count
    rectime_seconds: Optional[int] = None  # from STRT record: record duration (seconds)

    # ── Debug / introspection ─────────────────────────────────────────────────
    # Raw 210-byte waveform record bytes, set when debug mode is active.
    # Exposed by the SFM server via ?debug=true so field layouts can be verified.
    _raw_record: Optional[bytes] = field(default=None, repr=False)

    # 4-byte waveform key used to request this event via SUB 5A.
    # Set by get_events(); required by download_waveform().
    _waveform_key: Optional[bytes] = field(default=None, repr=False)

    def __str__(self) -> str:
        ts = str(self.timestamp) if self.timestamp else "no timestamp"
        ppv = ""
        if self.peak_values:
            pv = self.peak_values
            parts = []
            if pv.tran is not None:
                parts.append(f"T={pv.tran:.4f}")
            if pv.vert is not None:
                parts.append(f"V={pv.vert:.4f}")
            if pv.long is not None:
                parts.append(f"L={pv.long:.4f}")
            if pv.micl is not None:
                parts.append(f"M={pv.micl:.6f}")
            ppv = " [" + ", ".join(parts) + " in/s]"
        return f"Event#{self.index} {ts}{ppv}"
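As the PeakValues docstring stresses, the stored Peak Vector Sum is the maximum of √(T² + V² + L²) over time-aligned samples, not the norm of the three independent channel peaks. That distinction can be made concrete with synthetic samples (`peak_vector_sum` here is an illustrative helper, not device code):

```python
import math


def peak_vector_sum(tran, vert, long_):
    """Max of sqrt(T^2 + V^2 + L^2) over time-aligned samples — the
    quantity the device stores at offset 87 of the waveform record."""
    return max(math.sqrt(t * t + v * v + l * l)
               for t, v, l in zip(tran, vert, long_))


# Per-channel peaks occur at different instants here, so the PVS is
# smaller than the norm of the three individual peaks.
tran = [0.1, 0.9, 0.2]
vert = [0.8, 0.1, 0.1]
long_ = [0.1, 0.2, 0.7]
pvs = peak_vector_sum(tran, vert, long_)
norm_of_peaks = math.sqrt(0.9**2 + 0.8**2 + 0.7**2)
print(round(pvs, 4), round(norm_of_peaks, 4))  # → 0.9274 1.3928
```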
794
minimateplus/protocol.py
Normal file
@@ -0,0 +1,794 @@
"""
|
||||
protocol.py — High-level MiniMate Plus request/response protocol.
|
||||
|
||||
Implements the request/response patterns documented in
|
||||
docs/instantel_protocol_reference.md on top of:
|
||||
- minimateplus.framing — DLE codec, frame builder, S3 streaming parser
|
||||
- minimateplus.transport — byte I/O (SerialTransport / future TcpTransport)
|
||||
|
||||
This module knows nothing about pyserial or TCP — it only calls
|
||||
transport.write() and transport.read_until_idle().
|
||||
|
||||
Key patterns implemented:
|
||||
- POLL startup handshake (two-step, special payload[5] format)
|
||||
- Generic two-step paged read (probe → get length → fetch data)
|
||||
- Response timeout + checksum validation
|
||||
- Boot-string drain (device sends "Operating System" ASCII before framing)
|
||||
|
||||
All public methods raise ProtocolError on timeout, bad checksum, or
|
||||
unexpected response SUB.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import time
|
||||
from typing import Optional
|
||||
|
||||
from .framing import (
|
||||
S3Frame,
|
||||
S3FrameParser,
|
||||
build_bw_frame,
|
||||
build_5a_frame,
|
||||
waveform_key_params,
|
||||
token_params,
|
||||
bulk_waveform_params,
|
||||
bulk_waveform_term_params,
|
||||
POLL_PROBE,
|
||||
POLL_DATA,
|
||||
)
|
||||
from .transport import BaseTransport
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# ── Constants ─────────────────────────────────────────────────────────────────

# Response SUB = 0xFF - Request SUB (confirmed pattern, no known exceptions
# among read commands; one write-path exception documented for SUB 1C→6E).
def _expected_rsp_sub(req_sub: int) -> int:
    return (0xFF - req_sub) & 0xFF


# SUB byte constants (request side) — see protocol reference §5.1
SUB_POLL = 0x5B
SUB_SERIAL_NUMBER = 0x15
SUB_FULL_CONFIG = 0x01
SUB_EVENT_INDEX = 0x08
SUB_CHANNEL_CONFIG = 0x06
SUB_TRIGGER_CONFIG = 0x1C
SUB_EVENT_HEADER = 0x1E
SUB_EVENT_ADVANCE = 0x1F
SUB_WAVEFORM_HEADER = 0x0A
SUB_WAVEFORM_RECORD = 0x0C
SUB_BULK_WAVEFORM = 0x5A
SUB_COMPLIANCE = 0x1A
SUB_UNKNOWN_2E = 0x2E

# Hardcoded data lengths for the two-step read protocol.
#
# The S3 probe response page_key is always 0x0000 — it does NOT carry the
# data length back to us. Instead, each SUB has a fixed known payload size
# confirmed from BW capture analysis (offset at payload[5] of the data-request
# frame).
#
# Key: request SUB byte. Value: offset/length byte sent in the data-request.
# Entries marked 🔶 are inferred from captured frames and may need adjustment.
DATA_LENGTHS: dict[int, int] = {
    SUB_POLL: 0x30,             # POLL startup data block ✅
    SUB_SERIAL_NUMBER: 0x0A,    # 10-byte serial number block ✅
    SUB_FULL_CONFIG: 0x98,      # 152-byte full config block ✅
    SUB_EVENT_INDEX: 0x58,      # 88-byte event index ✅
    SUB_TRIGGER_CONFIG: 0x2C,   # 44-byte trigger config 🔶
    SUB_EVENT_HEADER: 0x08,     # 8-byte event header (waveform key + event data) ✅
    SUB_EVENT_ADVANCE: 0x08,    # 8-byte next-key response ✅
    # SUB_WAVEFORM_HEADER (0x0A) is VARIABLE — length read from probe response
    # data[4]. Do NOT add it here; use read_waveform_header() instead. ✅
    SUB_WAVEFORM_RECORD: 0xD2,  # 210-byte waveform/histogram record ✅
    SUB_UNKNOWN_2E: 0x1A,       # 26 bytes, purpose TBD 🔶
    0x09: 0xCA,                 # 202 bytes, purpose TBD 🔶
    # SUB_COMPLIANCE (0x1A) uses a multi-step sequence with a 2090-byte total;
    # NOT handled here — requires specialised read logic.
}

# SUB 5A (BULK_WAVEFORM_STREAM) protocol constants.
# Confirmed from 1-2-26 BW TX capture analysis (2026-04-02).
_BULK_CHUNK_OFFSET = 0x1004  # offset field for probe + all regular chunk requests ✅
_BULK_TERM_OFFSET = 0x005A   # offset field for termination request ✅
_BULK_COUNTER_STEP = 0x0400  # chunk counter increment per request ✅
# Note: BW's second chunk used counter=0x1004 rather than the expected 0x0400.
# This appears to be a waveform-specific pre-trigger byte offset unique to BW's
# implementation. All subsequent chunks incremented by 0x0400 as expected.
# 🔶 INFERRED: device echoes the counter back but may not validate it.
# Confirm empirically on first live test.

# Default timeout values (seconds).
# MiniMate Plus is a slow device — keep these generous.
DEFAULT_RECV_TIMEOUT = 10.0
POLL_RECV_TIMEOUT = 10.0

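The complement rule and a few request/response pairings noted throughout the reference can be checked standalone; `expected_rsp_sub` here simply mirrors the module's `_expected_rsp_sub`:

```python
def expected_rsp_sub(req_sub: int) -> int:
    """Response SUB is the one's complement of the request SUB within a byte."""
    return (0xFF - req_sub) & 0xFF


# Pairings documented in this module's comments and docstrings.
print(hex(expected_rsp_sub(0x5A)))  # bulk waveform  → 0xa5
print(hex(expected_rsp_sub(0x0C)))  # waveform record → 0xf3
print(hex(expected_rsp_sub(0x15)))  # serial number   → 0xea
print(hex(expected_rsp_sub(0x1A)))  # compliance      → 0xe5
```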
# ── Exception ─────────────────────────────────────────────────────────────────

class ProtocolError(Exception):
    """Raised when no response is received within the allowed time... no:
    Raised when the device violates the expected protocol."""


class TimeoutError(ProtocolError):
    """Raised when no response is received within the allowed time.

    NOTE: intentionally shadows the builtin TimeoutError inside this module
    so that `except ProtocolError` also catches receive timeouts.
    """


class ChecksumError(ProtocolError):
    """Raised when a received frame has a bad checksum."""


class UnexpectedResponse(ProtocolError):
    """Raised when the response SUB doesn't match what we requested."""

# ── MiniMateProtocol ──────────────────────────────────────────────────────────

class MiniMateProtocol:
    """
    Protocol state machine for one open connection to a MiniMate Plus device.

    Does not own the transport — transport lifetime is managed by MiniMateClient.

    Typical usage (via MiniMateClient — not directly):
        proto = MiniMateProtocol(transport)
        proto.startup()  # POLL handshake, drain boot string
        data = proto.read(SUB_FULL_CONFIG)
        sn_data = proto.read(SUB_SERIAL_NUMBER)
    """

    def __init__(
        self,
        transport: BaseTransport,
        recv_timeout: float = DEFAULT_RECV_TIMEOUT,
    ) -> None:
        self._transport = transport
        self._recv_timeout = recv_timeout
        self._parser = S3FrameParser()
        # Extra frames buffered by _recv_one that arrived alongside the target
        # frame. Used when reset_parser=False so we don't discard
        # already-parsed frames.
        self._pending_frames: list[S3Frame] = []

    # ── Public API ────────────────────────────────────────────────────────────

    def startup(self) -> S3Frame:
        """
        Perform the POLL startup handshake and return the POLL data frame.

        Steps (matching §6 Session Startup Sequence):
          1. Drain any boot-string bytes ("Operating System" ASCII)
          2. Send POLL_PROBE (SUB 5B, offset=0x00)
          3. Receive probe ack (page_key is 0x0000; data length 0x30 is hardcoded)
          4. Send POLL_DATA (SUB 5B, offset=0x30)
          5. Receive data frame with "Instantel" + "MiniMate Plus" strings

        Returns:
            The data-phase POLL response S3Frame.

        Raises:
            ProtocolError: if either POLL step fails.
        """
        log.debug("startup: draining boot string")
        self._drain_boot_string()

        log.debug("startup: POLL probe")
        self._send(POLL_PROBE)
        probe_rsp = self._recv_one(
            expected_sub=_expected_rsp_sub(SUB_POLL),
            timeout=self._recv_timeout,
        )
        log.debug(
            "startup: POLL probe response page_key=0x%04X", probe_rsp.page_key
        )

        log.debug("startup: POLL data request")
        self._send(POLL_DATA)
        data_rsp = self._recv_one(
            expected_sub=_expected_rsp_sub(SUB_POLL),
            timeout=self._recv_timeout,
        )
        log.debug("startup: POLL data received, %d bytes", len(data_rsp.data))
        return data_rsp

    def read(self, sub: int) -> bytes:
        """
        Execute a two-step paged read and return the data payload bytes.

        Step 1: send probe frame (offset=0x00) → device sends a short ack
        Step 2: send data-request (offset=DATA_LEN) → device sends the data block

        The S3 probe response does NOT carry the data length — page_key is always
        0x0000 in observed frames. DATA_LENGTHS holds the known fixed lengths
        derived from BW capture analysis.

        Args:
            sub: Request SUB byte (e.g. SUB_FULL_CONFIG = 0x01).

        Returns:
            De-stuffed data payload bytes (payload[5:] of the response frame,
            with the checksum already stripped by the parser).

        Raises:
            ProtocolError: on timeout, bad checksum, wrong response SUB, or
                if sub has no entry in DATA_LENGTHS (add it there).
        """
        rsp_sub = _expected_rsp_sub(sub)

        # Step 1 — probe (offset = 0)
        log.debug("read SUB=0x%02X: probe", sub)
        self._send(build_bw_frame(sub, 0))
        _probe = self._recv_one(expected_sub=rsp_sub)  # ack; page_key always 0

        # Look up the hardcoded data length for this SUB
        if sub not in DATA_LENGTHS:
            raise ProtocolError(
                f"No known data length for SUB=0x{sub:02X}. "
                "Add it to DATA_LENGTHS in protocol.py."
            )
        length = DATA_LENGTHS[sub]
        log.debug("read SUB=0x%02X: data request offset=0x%02X", sub, length)

        if length == 0:
            log.warning("read SUB=0x%02X: DATA_LENGTHS entry is zero", sub)
            return b""

        # Step 2 — data-request (offset = length)
        self._send(build_bw_frame(sub, length))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        log.debug("read SUB=0x%02X: received %d data bytes", sub, len(data_rsp.data))
        return data_rsp.data

    def send_keepalive(self) -> None:
        """
        Send a single POLL_PROBE keepalive without waiting for a response.

        Blastware sends these every ~80 ms during idle. Useful if you need to
        hold the session open between real requests.
        """
        self._send(POLL_PROBE)

    # ── Event download API ────────────────────────────────────────────────────

    def read_event_index(self) -> bytes:
        """
        Send the SUB 08 (EVENT_INDEX) two-step read and return the raw 88-byte
        (0x58) index block.

        The index block contains:
          +0x00 (3 bytes): total index size or record count — purpose partially
                           decoded; byte [3] may be a high byte of event count.
          +0x03 (4 bytes): stored event count as uint32 BE ❓ (inferred from
                           captures; see §7.4 in protocol reference)
          +0x07 onwards:   6-byte event timestamps (see §8), one per event

        Caller is responsible for parsing the returned bytes.

        Returns:
            Raw 88-byte data section (data[11:11+0x58]).

        Raises:
            ProtocolError: on timeout, bad checksum, or wrong response SUB.
        """
        rsp_sub = _expected_rsp_sub(SUB_EVENT_INDEX)
        length = DATA_LENGTHS[SUB_EVENT_INDEX]  # 0x58

        log.debug("read_event_index: 08 probe")
        self._send(build_bw_frame(SUB_EVENT_INDEX, 0))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("read_event_index: 08 data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_EVENT_INDEX, length))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        raw = data_rsp.data[11 : 11 + length]
        log.debug("read_event_index: got %d bytes", len(raw))
        return raw

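The index layout sketched in the read_event_index docstring, still marked ❓ inferred, can be prototyped against a synthetic block. Both `parse_event_index` and the timestamp bytes below are hypothetical, so treat any parse result with the same uncertainty as the layout itself:

```python
import struct


def parse_event_index(raw: bytes) -> tuple[int, list[bytes]]:
    """Split an 88-byte index block per the inferred layout:
    uint32 BE count at +0x03, then 6-byte timestamps from +0x07."""
    count = struct.unpack_from(">I", raw, 0x03)[0]
    stamps = [raw[0x07 + 6 * i : 0x07 + 6 * (i + 1)] for i in range(count)]
    return count, stamps


# Synthetic 88-byte block with 2 events for illustration.
raw = bytearray(0x58)
struct.pack_into(">I", raw, 0x03, 2)
raw[0x07:0x0D] = bytes.fromhex("1a04011e0c00")  # invented 6-byte stamp
raw[0x0D:0x13] = bytes.fromhex("1a04020a0000")
count, stamps = parse_event_index(bytes(raw))
print(count, len(stamps))  # → 2 2
```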
    def read_event_first(self) -> tuple[bytes, bytes]:
        """
        Send the SUB 1E (EVENT_HEADER) two-step read and return the first
        waveform key and accompanying 8-byte event data block.

        This always uses all-zero params — the device returns the first stored
        event's waveform key unconditionally.

        Returns:
            (key4, event_data8) where:
              key4        — 4-byte opaque waveform record address (data[11:15])
              event_data8 — full 8-byte data section (data[11:19])

        Raises:
            ProtocolError: on timeout, bad checksum, or wrong response SUB.

        Confirmed from 3-31-26 capture: 1E request uses all-zero params;
        response data section layout is:
            [LENGTH_ECHO:1][00×4][KEY_ECHO:4][00×2][KEY4:4][EXTRA:4] …
        Actual data starts at data[11]; first 4 bytes are the waveform key.
        """
        rsp_sub = _expected_rsp_sub(SUB_EVENT_HEADER)
        length = DATA_LENGTHS[SUB_EVENT_HEADER]  # 0x08

        log.debug("read_event_first: 1E probe")
        self._send(build_bw_frame(SUB_EVENT_HEADER, 0))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("read_event_first: 1E data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_EVENT_HEADER, length))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        event_data8 = data_rsp.data[11:19]
        key4 = data_rsp.data[11:15]
        log.debug("read_event_first: key=%s", key4.hex())
        return key4, event_data8

    def read_waveform_header(self, key4: bytes) -> tuple[bytes, int]:
        """
        Send the SUB 0A (WAVEFORM_HEADER) two-step read for *key4*.

        The data length for 0A is VARIABLE and must be read from the probe
        response at data[4]. Two known values:
            0x30 — full histogram bin (has a waveform record to follow)
            0x26 — partial histogram bin (no waveform record)

        Args:
            key4: 4-byte waveform record address from 1E or 1F.

        Returns:
            (header_bytes, record_length) where:
              header_bytes  — raw data section starting at data[11]
              record_length — DATA_LENGTH read from probe (0x30 or 0x26)

        Raises:
            ProtocolError: on timeout, bad checksum, or wrong response SUB.

        Confirmed from 3-31-26 capture: 0A probe response data[4] carries
        the variable length; data-request uses that length as the offset byte.
        """
        rsp_sub = _expected_rsp_sub(SUB_WAVEFORM_HEADER)
        params = waveform_key_params(key4)

        log.debug("read_waveform_header: 0A probe key=%s", key4.hex())
        self._send(build_bw_frame(SUB_WAVEFORM_HEADER, 0, params))
        probe_rsp = self._recv_one(expected_sub=rsp_sub)

        # Variable length — read from probe response data[4]
        length = probe_rsp.data[4] if len(probe_rsp.data) > 4 else 0x30
        log.debug("read_waveform_header: 0A data request offset=0x%02X", length)

        if length == 0:
            return b"", 0

        self._send(build_bw_frame(SUB_WAVEFORM_HEADER, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        header_bytes = data_rsp.data[11 : 11 + length]
        log.debug(
            "read_waveform_header: key=%s length=0x%02X is_full=%s",
            key4.hex(), length, length == 0x30,
        )
        return header_bytes, length

    def read_waveform_record(self, key4: bytes) -> bytes:
        """
        Send the SUB 0C (WAVEFORM_RECORD / FULL_WAVEFORM_RECORD) two-step read.

        Returns the 210-byte waveform/histogram record containing:
          - Record type string ("Histogram" or "Waveform") at a variable offset
          - Per-channel labels ("Tran", "Vert", "Long", "MicL") with PPV floats
            at label_offset + 6

        Args:
            key4: 4-byte waveform record address.

        Returns:
            210-byte record bytes (data[11:11+0xD2]).

        Raises:
            ProtocolError: on timeout, bad checksum, or wrong response SUB.

        Confirmed from 3-31-26 capture: 0C always uses offset=0xD2 (210 bytes).
        """
        rsp_sub = _expected_rsp_sub(SUB_WAVEFORM_RECORD)
        length = DATA_LENGTHS[SUB_WAVEFORM_RECORD]  # 0xD2
        params = waveform_key_params(key4)

        log.debug("read_waveform_record: 0C probe key=%s", key4.hex())
        self._send(build_bw_frame(SUB_WAVEFORM_RECORD, 0, params))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("read_waveform_record: 0C data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_WAVEFORM_RECORD, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        record = data_rsp.data[11 : 11 + length]
        log.debug("read_waveform_record: received %d record bytes", len(record))
        return record

    def read_bulk_waveform_stream(
        self,
        key4: bytes,
        *,
        stop_after_metadata: bool = True,
        max_chunks: int = 32,
    ) -> list[bytes]:
        """
        Download the SUB 5A (BULK_WAVEFORM_STREAM) A5 frames for one event.

        The bulk waveform stream carries both raw ADC samples (large) and
        event-time metadata strings ("Project:", "Client:", "User Name:",
        "Seis Loc:", "Extended Notes") embedded in one of the middle frames
        (confirmed: A5[7] of 9 for 1-2-26 capture).

        Protocol is request-per-chunk, NOT a continuous stream:
          1. Probe (offset=_BULK_CHUNK_OFFSET, is_probe=True, counter=0x0000)
          2. Chunks (offset=_BULK_CHUNK_OFFSET, is_probe=False, counter+=0x0400)
          3. Loop until metadata found (stop_after_metadata=True) or max_chunks
          4. Termination (offset=_BULK_TERM_OFFSET, counter=last+_BULK_COUNTER_STEP)
             Device responds with a final A5 frame (page_key=0x0000).

        The termination frame (page_key=0x0000) is NOT included in the returned list.

        Args:
            key4: 4-byte waveform key from EVENT_HEADER (1E).
            stop_after_metadata: If True (default), send termination as soon as
                b"Project:" is found in a frame's data — avoids downloading the
                full ADC waveform payload (several hundred KB). Set False to
                download everything.
            max_chunks: Safety cap on the number of chunk requests sent
                (default 32; a typical event uses 9 large frames).

        Returns:
            List of raw data bytes from each A5 response frame (not including
            the terminator frame). Frame indices match the request sequence:
            index 0 = probe response, index 1 = first chunk, etc.

        Raises:
            ProtocolError: on timeout, bad checksum, or unexpected SUB.

        Confirmed from 1-2-26 BW TX/RX captures (2026-04-02):
          - probe + 8 regular chunks + 1 termination = 10 TX frames
          - 9 large A5 responses + 1 terminator A5 = 10 RX frames
          - page_key=0x0010 on large frames; page_key=0x0000 on terminator ✅
          - "Project:" metadata at A5[7].data[626] ✅
        """
        if len(key4) != 4:
            raise ValueError(f"waveform key must be 4 bytes, got {len(key4)}")

        rsp_sub = _expected_rsp_sub(SUB_BULK_WAVEFORM)  # 0xFF - 0x5A = 0xA5
        frames_data: list[bytes] = []
        counter = 0

        # ── Step 1: probe ────────────────────────────────────────────────────
        log.debug("5A probe key=%s", key4.hex())
        params = bulk_waveform_params(key4, 0, is_probe=True)
        self._send(build_5a_frame(_BULK_CHUNK_OFFSET, params))
        rsp = self._recv_one(expected_sub=rsp_sub)
        frames_data.append(rsp.data)
        log.debug("5A A5[0] page_key=0x%04X %d bytes", rsp.page_key, len(rsp.data))

        # ── Step 2: chunk loop ───────────────────────────────────────────────
        for chunk_num in range(1, max_chunks + 1):
            counter = chunk_num * _BULK_COUNTER_STEP
            params = bulk_waveform_params(key4, counter)
            log.debug("5A chunk %d counter=0x%04X", chunk_num, counter)
            self._send(build_5a_frame(_BULK_CHUNK_OFFSET, params))
            rsp = self._recv_one(expected_sub=rsp_sub)

            if rsp.page_key == 0x0000:
                # Device unexpectedly terminated mid-stream (no termination needed).
                log.debug("5A A5[%d] page_key=0x0000 — device terminated early", chunk_num)
                return frames_data

            frames_data.append(rsp.data)
            log.debug(
                "5A A5[%d] page_key=0x%04X %d bytes",
                chunk_num, rsp.page_key, len(rsp.data),
            )

            if stop_after_metadata and b"Project:" in rsp.data:
                log.debug("5A A5[%d] metadata found — stopping early", chunk_num)
                break
        else:
            log.warning(
                "5A reached max_chunks=%d without end-of-stream; sending termination",
                max_chunks,
            )

        # ── Step 3: termination ──────────────────────────────────────────────
        term_counter = counter + _BULK_COUNTER_STEP
        term_params = bulk_waveform_term_params(key4, term_counter)
        log.debug(
            "5A termination term_counter=0x%04X offset=0x%04X",
            term_counter, _BULK_TERM_OFFSET,
        )
        self._send(build_5a_frame(_BULK_TERM_OFFSET, term_params))
        try:
            term_rsp = self._recv_one(expected_sub=rsp_sub)
            log.debug(
                "5A termination response page_key=0x%04X %d bytes",
                term_rsp.page_key, len(term_rsp.data),
            )
        except TimeoutError:
            log.debug("5A no termination response — device may have already closed")

        return frames_data

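Locating the metadata frame in the returned list is straightforward. A sketch under the docstring's assumption that b"Project:" marks the metadata frame; the helper name and sample bytes are invented:

```python
def find_metadata_frame(frames: list[bytes]) -> int:
    """Return the index of the first A5 frame carrying the event-time
    metadata strings, or -1 if none was downloaded."""
    for i, data in enumerate(frames):
        if b"Project:" in data:
            return i
    return -1


frames = [
    b"\x00" * 64,                                   # probe response
    b"raw adc samples...",                          # chunk 1
    b"...Project: Demo\x00Client: ACME\x00...",     # metadata frame
]
print(find_metadata_frame(frames))  # → 2
```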
    def advance_event(self, browse: bool = False) -> tuple[bytes, bytes]:
        """
        Send the SUB 1F (EVENT_ADVANCE) two-step read and return the next
        waveform key and the full 8-byte event data block.

        browse=False (default, download mode): sends token=0xFE at params[7].
            Used by get_events() — the token causes the device to skip partial
            histogram bins and return the key of the next FULL record.

        browse=True: sends all-zero params (no token). Matches Blastware's
            confirmed browse-mode sequence: 0A → 1F(zeros) → 0A → 1F(zeros).
            Used by count_events() where no 0C/5A download occurs.

        IMPORTANT: A preceding 0A (read_waveform_header) call is REQUIRED in
        both modes to establish device waveform context. Without it, 1F
        returns the null sentinel regardless of how many events are stored.

        Returns:
            (key4, event_data8) where:
              key4        — 4-byte opaque waveform record address (data[11:15]).
              event_data8 — full 8-byte block (data[11:19]).

        End-of-events sentinel: event_data8[4:8] == b'\\x00\\x00\\x00\\x00'.
        DO NOT use key4 == b'\\x00\\x00\\x00\\x00' as the sentinel — key4 is
        all-zeros for event 0 (the very first stored event) and will cause the
        loop to terminate prematurely.

        Raises:
            ProtocolError: on timeout, bad checksum, or wrong response SUB.
        """
        rsp_sub = _expected_rsp_sub(SUB_EVENT_ADVANCE)
        length = DATA_LENGTHS[SUB_EVENT_ADVANCE]  # 0x08
        params = token_params(0) if browse else token_params(0xFE)

        mode = "browse" if browse else "download"
        log.debug("advance_event: 1F probe mode=%s params=%s", mode, params.hex())
        self._send(build_bw_frame(SUB_EVENT_ADVANCE, 0, params))
        self._recv_one(expected_sub=rsp_sub)

        log.debug("advance_event: 1F data request offset=0x%02X", length)
        self._send(build_bw_frame(SUB_EVENT_ADVANCE, length, params))
        data_rsp = self._recv_one(expected_sub=rsp_sub)

        event_data8 = data_rsp.data[11:19]
        key4 = data_rsp.data[11:15]
        is_done = event_data8[4:8] == b"\x00\x00\x00\x00"
        log.debug(
            "advance_event: next key=%s data8=%s done=%s",
            key4.hex(), event_data8.hex(), is_done,
        )
        return key4, event_data8

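The sentinel rule in advance_event is easy to get wrong, so a tiny standalone check helps. `is_end_of_events` is a hypothetical helper and the non-sentinel key bytes are invented; the point is that it tests the SECOND 4-byte word, since key4 itself is all zeros for event 0:

```python
def is_end_of_events(event_data8: bytes) -> bool:
    """End-of-events sentinel: bytes 4..7 of the 8-byte block are all
    zeros. Checking the first 4 bytes (key4) instead would falsely stop
    at event 0, whose key is all zeros."""
    return event_data8[4:8] == b"\x00\x00\x00\x00"


print(is_end_of_events(bytes.fromhex("0000000000000000")))  # → True  (done)
print(is_end_of_events(bytes.fromhex("0000000012af0004")))  # → False (event 0)
```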
    def read_compliance_config(self) -> bytes:
        """
        Send the SUB 1A (COMPLIANCE_CONFIG) multi-step read and accumulate
        all E5 response frames into a single config byte string.

        BE18189 sends the full config in one large E5 frame (~4245 cfg bytes).
        BE11529 appears to chunk the response — each E5 frame carries ~44 bytes
        of cfg data. This method loops until the expected 0x082A (2090) bytes
        are accumulated or the inter-frame gap exceeds _INTER_FRAME_TIMEOUT.

        Frame structure (confirmed from raw BW captures 3-11-26):
            Probe (Frame A):    byte[5]=0x00, params[7]=0x64
            Data req (Frame D): byte[5]=0x2A, params[2]=0x08, params[7]=0x64

        0x082A split: byte[5]=0x2A (offset low), params[2]=0x08 (length high).
        params[7]=0x64 is required in both the probe and the data request.

        Returns:
            Accumulated compliance config bytes. First frame: data[11:] (skips
            the 11-byte echo header). Subsequent frames: structure logged and
            accumulated from data[11:] as well — adjust the offset if the
            structure differs.

        Raises:
            ProtocolError: if the very first E5 frame is not received (hard timeout).
        """
        rsp_sub = _expected_rsp_sub(SUB_COMPLIANCE)

        # Probe — params[7]=0x64 required (confirmed from BW capture)
        _PROBE_PARAMS = bytes([0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x64, 0x00, 0x00])
        log.debug("read_compliance_config: 1A probe")
        self._send(build_bw_frame(SUB_COMPLIANCE, 0, _PROBE_PARAMS))
        self._recv_one(expected_sub=rsp_sub)

        # Frame D params — offset=0x002A, params[2]=0x08, params[7]=0x64
        _DATA_PARAMS = bytes([0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x64, 0x00, 0x00])

        # ── Multi-request accumulation ────────────────────────────────────────
        #
        # Full BW sequence (confirmed from raw_bw captures 3-11-26):
        #
        #   Frame B: offset=0x0400 params[2]=0x00 → requests cfg bytes 0..1023
        #   Frame C: offset=0x0400 params[2]=0x04 → requests cfg bytes 1024..2047
        #   Frame D: offset=0x002A params[2]=0x08 → requests cfg bytes 2048..2089
        #
        # Total: 0x0400 + 0x0400 + 0x002A = 0x082A = 2090 bytes.
        #
        # The "offset" field in B and C encodes the chunk length (0x0400 = 1024),
        # not a byte offset into the config. params[2] tracks cumulative pages
        # (0x00 → 0x04 → 0x08; each page = 256 bytes → 0x04 pages = 1024 bytes).
        #
        # Each request gets its own E5 response with an 11-byte echo header.
        # Devices that send the full block in a single frame (BE18189) may return
        # the entire config from the last request alone — we handle both cases by
        # trying each step and concatenating whatever arrives.

        _DATA_PARAMS_B = bytes([0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x64, 0x00, 0x00])
        _DATA_PARAMS_C = bytes([0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x64, 0x00, 0x00])
        # _DATA_PARAMS_D already built above as _DATA_PARAMS

        _STEPS = [
            ("B", 0x0400, _DATA_PARAMS_B),
            ("C", 0x0400, _DATA_PARAMS_C),
            ("D", 0x002A, _DATA_PARAMS),  # _DATA_PARAMS built above
        ]

        config = bytearray()

        for step_name, step_offset, step_params in _STEPS:
            log.debug(
                "read_compliance_config: sending frame %s offset=0x%04X params=%s",
                step_name, step_offset, step_params.hex(),
            )
            self._send(build_bw_frame(SUB_COMPLIANCE, step_offset, step_params))

            try:
                data_rsp = self._recv_one(expected_sub=rsp_sub)
            except TimeoutError:
                log.warning(
                    "read_compliance_config: frame %s — no E5 response (timeout)",
                    step_name,
                )
                continue

            chunk = data_rsp.data[11:]
            log.warning(
                "read_compliance_config: frame %s page=0x%04X data=%d cfg_chunk=%d running_total=%d",
                step_name, data_rsp.page_key, len(data_rsp.data),
                len(chunk), len(config) + len(chunk),
            )
            config.extend(chunk)

        # Safety drain: catch any extra frame the device may buffer on slow links.
        try:
            tail_rsp = self._recv_one(expected_sub=rsp_sub, timeout=2.0)
            tail_chunk = tail_rsp.data[11:]
            log.warning(
                "read_compliance_config: unexpected tail frame page=0x%04X "
                "cfg_chunk=%d running_total=%d",
                tail_rsp.page_key, len(tail_chunk), len(config) + len(tail_chunk),
            )
            config.extend(tail_chunk)
        except TimeoutError:
            pass

        log.warning(
            "read_compliance_config: done — %d cfg bytes total",
            len(config),
        )

        # Hex dump first 128 bytes for field mapping
        for row in range(0, min(len(config), 128), 16):
            row_bytes = bytes(config[row:row + 16])
            hex_part = ' '.join(f'{b:02x}' for b in row_bytes)
            asc_part = ''.join(chr(b) if 32 <= b < 127 else '.' for b in row_bytes)
            log.warning("    cfg[%04x]: %-48s %s", row, hex_part, asc_part)

        return bytes(config)

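The B/C/D chunk accounting described in the comments above can be sanity-checked in a few lines. This is a hypothetical helper, not part of the protocol module: chunk lengths go in the frame "offset" field and params[2] carries the cumulative page count (1 page = 256 bytes).

```python
# Sanity check of the compliance-config chunk plan: 0x0400 + 0x0400 + 0x002A
# = 0x082A = 2090 bytes, with params[2] tracking pages already requested.

CFG_TOTAL = 0x082A   # 2090 bytes
CHUNK_MAX = 0x0400   # 1024 bytes per request


def build_chunk_plan(total: int, chunk_max: int = CHUNK_MAX):
    """Return (chunk_len, cumulative_pages) pairs covering *total* bytes."""
    plan = []
    sent = 0
    while sent < total:
        chunk_len = min(chunk_max, total - sent)
        plan.append((chunk_len, sent // 256))  # params[2] = pages already sent
        sent += chunk_len
    return plan


if __name__ == "__main__":
    print(build_chunk_plan(CFG_TOTAL))
    # [(1024, 0), (1024, 4), (42, 8)] — matches frames B, C, D
```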
    # ── Internal helpers ──────────────────────────────────────────────────────

    def _send(self, frame: bytes) -> None:
        """Write a pre-built frame to the transport."""
        log.debug("TX %d bytes: %s", len(frame), frame.hex())
        self._transport.write(frame)

    def _recv_one(
        self,
        expected_sub: Optional[int] = None,
        timeout: Optional[float] = None,
        reset_parser: bool = True,
    ) -> S3Frame:
        """
        Read bytes from the transport until one complete S3 frame is parsed.

        Feeds bytes through the streaming S3FrameParser. Keeps reading until
        a frame arrives or the deadline expires.

        Args:
            expected_sub: If provided, raise UnexpectedResponse if the
                received frame's SUB doesn't match.
            timeout: Seconds to wait. Defaults to self._recv_timeout.
            reset_parser: If True (default), reset the parser before reading.
                Pass False when accumulating multiple frames from a
                single device response (e.g. chunked E5 replies) so
                that bytes already buffered between frames are not lost.

        Returns:
            The first complete S3Frame received.

        Raises:
            TimeoutError: if no frame arrives within the timeout.
            UnexpectedResponse: if expected_sub is set and doesn't match.

        Note: invalid checksums are logged and ignored (see _validate_frame);
        no exception is raised for them.
        """
        deadline = time.monotonic() + (timeout or self._recv_timeout)
        if reset_parser:
            self._parser.reset()
            self._pending_frames.clear()

        # If a prior read() parsed more frames than it returned (e.g. two frames
        # arrived in one TCP chunk), return the buffered one immediately.
        if self._pending_frames:
            frame = self._pending_frames.pop(0)
            self._validate_frame(frame, expected_sub)
            return frame

        while time.monotonic() < deadline:
            chunk = self._transport.read(256)
            if chunk:
                log.debug("RX %d bytes: %s", len(chunk), chunk.hex())
                frames = self._parser.feed(chunk)
                if frames:
                    # Stash any extras so subsequent calls with reset_parser=False see them
                    self._pending_frames.extend(frames[1:])
                    frame = frames[0]
                    self._validate_frame(frame, expected_sub)
                    return frame
            else:
                time.sleep(0.005)

        raise TimeoutError(
            f"No S3 frame received within {timeout or self._recv_timeout:.1f}s"
            + (f" (expected SUB 0x{expected_sub:02X})" if expected_sub is not None else "")
        )

    @staticmethod
    def _validate_frame(frame: S3Frame, expected_sub: Optional[int]) -> None:
        """Validate SUB; log but do not raise on bad checksum.

        S3 response checksums frequently fail SUM8 validation due to inner-frame
        delimiter bytes being captured as the checksum byte. The original
        s3_parser.py deliberately never validates S3 checksums for exactly this
        reason. We log the mismatch and continue.
        """
        if not frame.checksum_valid:
            # S3 checksums frequently fail SUM8 due to inner-frame delimiter bytes
            # landing in the checksum position. Treat as informational only.
            log.debug("S3 frame SUB=0x%02X: checksum mismatch (ignoring)", frame.sub)
        if expected_sub is not None and frame.sub != expected_sub:
            raise UnexpectedResponse(
                f"Expected SUB=0x{expected_sub:02X}, got 0x{frame.sub:02X}"
            )

    def _drain_boot_string(self, drain_ms: int = 200) -> None:
        """
        Read and discard any boot-string bytes ("Operating System") the device
        may send before entering binary protocol mode.

        We simply read with a short timeout and throw the bytes away. The
        S3FrameParser's IDLE state already handles non-frame bytes gracefully,
        but it's cleaner to drain them explicitly before the first real frame.
        """
        deadline = time.monotonic() + (drain_ms / 1000)
        discarded = 0
        while time.monotonic() < deadline:
            chunk = self._transport.read(256)
            if chunk:
                discarded += len(chunk)
            else:
                time.sleep(0.005)
        if discarded:
            log.debug("drain_boot_string: discarded %d bytes", discarded)
420 minimateplus/transport.py Normal file
@@ -0,0 +1,420 @@
"""
|
||||
transport.py — Serial and TCP transport layer for the MiniMate Plus protocol.
|
||||
|
||||
Provides a thin I/O abstraction so that protocol.py never imports pyserial or
|
||||
socket directly. Two concrete implementations:
|
||||
|
||||
SerialTransport — direct RS-232 cable connection (pyserial)
|
||||
TcpTransport — TCP socket to a modem or ACH relay (stdlib socket)
|
||||
|
||||
The MiniMate Plus protocol bytes are identical over both transports. TCP is used
|
||||
when field units call home via the ACH (Auto Call Home) server, or when SFM
|
||||
"calls up" a unit by connecting to the modem's IP address directly.
|
||||
|
||||
Field hardware: Sierra Wireless RV55 / RX55 (4G LTE) cellular modem, replacing
|
||||
the older 3G-only Raven X (now decommissioned). All run ALEOS firmware with an
|
||||
ACEmanager web UI. Serial port must be configured 38400,8N1, no flow control,
|
||||
Data Forwarding Timeout = 1 s.
|
||||
|
||||
Typical usage:
|
||||
from minimateplus.transport import SerialTransport, TcpTransport
|
||||
|
||||
# Direct serial connection
|
||||
with SerialTransport("COM5") as t:
|
||||
t.write(frame_bytes)
|
||||
|
||||
# Modem / ACH TCP connection (Blastware port 12345)
|
||||
with TcpTransport("192.168.1.50", 12345) as t:
|
||||
t.write(frame_bytes)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import socket
|
||||
import time
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Optional
|
||||
|
||||
# pyserial is the only non-stdlib dependency in this project.
|
||||
# Import lazily so unit-tests that mock the transport can run without it.
|
||||
try:
|
||||
import serial # type: ignore
|
||||
except ImportError: # pragma: no cover
|
||||
serial = None # type: ignore
|
||||
|
||||
|
||||
# ── Abstract base ─────────────────────────────────────────────────────────────
|
||||
|
||||
class BaseTransport(ABC):
|
||||
"""Common interface for all transport implementations."""
|
||||
|
||||
@abstractmethod
|
||||
def connect(self) -> None:
|
||||
"""Open the underlying connection."""
|
||||
|
||||
@abstractmethod
|
||||
def disconnect(self) -> None:
|
||||
"""Close the underlying connection."""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def is_connected(self) -> bool:
|
||||
"""True while the connection is open."""
|
||||
|
||||
@abstractmethod
|
||||
def write(self, data: bytes) -> None:
|
||||
"""Write *data* bytes to the wire."""
|
||||
|
||||
@abstractmethod
|
||||
def read(self, n: int) -> bytes:
|
||||
"""
|
||||
Read up to *n* bytes. Returns immediately with whatever is available
|
||||
(may return fewer than *n* bytes, or b"" if nothing is ready).
|
||||
"""
|
||||
|
||||
# ── Context manager ───────────────────────────────────────────────────────
|
||||
|
||||
def __enter__(self) -> "BaseTransport":
|
||||
self.connect()
|
||||
return self
|
||||
|
||||
def __exit__(self, *_) -> None:
|
||||
self.disconnect()
|
||||
|
||||
# ── Higher-level read helpers ─────────────────────────────────────────────
|
||||
|
||||
def read_until_idle(
|
||||
self,
|
||||
timeout: float = 2.0,
|
||||
idle_gap: float = 0.05,
|
||||
chunk: int = 256,
|
||||
) -> bytes:
|
||||
"""
|
||||
Read bytes until the line goes quiet.
|
||||
|
||||
Keeps reading in *chunk*-sized bursts. Returns when either:
|
||||
- *timeout* seconds have elapsed since the first byte arrived, or
|
||||
- *idle_gap* seconds pass with no new bytes (line went quiet).
|
||||
|
||||
This mirrors how Blastware behaves: it waits for the seismograph to
|
||||
stop transmitting rather than counting bytes.
|
||||
|
||||
Args:
|
||||
timeout: Hard deadline (seconds) from the moment read starts.
|
||||
idle_gap: How long to wait after the last byte before declaring done.
|
||||
chunk: How many bytes to request per low-level read() call.
|
||||
|
||||
Returns:
|
||||
All bytes received as a single bytes object (may be b"" if nothing
|
||||
arrived within *timeout*).
|
||||
"""
|
||||
buf = bytearray()
|
||||
deadline = time.monotonic() + timeout
|
||||
last_rx = None
|
||||
|
||||
while time.monotonic() < deadline:
|
||||
got = self.read(chunk)
|
||||
if got:
|
||||
buf.extend(got)
|
||||
last_rx = time.monotonic()
|
||||
else:
|
||||
# Nothing ready — check idle gap
|
||||
if last_rx is not None and (time.monotonic() - last_rx) >= idle_gap:
|
||||
break
|
||||
time.sleep(0.005)
|
||||
|
||||
return bytes(buf)
|
||||
|
||||
def read_exact(self, n: int, timeout: float = 2.0) -> bytes:
|
||||
"""
|
||||
Read exactly *n* bytes or raise TimeoutError.
|
||||
|
||||
Useful when the caller already knows the expected response length
|
||||
(e.g. fixed-size ACK packets).
|
||||
"""
|
||||
buf = bytearray()
|
||||
deadline = time.monotonic() + timeout
|
||||
while len(buf) < n:
|
||||
if time.monotonic() >= deadline:
|
||||
raise TimeoutError(
|
||||
f"read_exact: wanted {n} bytes, got {len(buf)} "
|
||||
f"after {timeout:.1f}s"
|
||||
)
|
||||
got = self.read(n - len(buf))
|
||||
if got:
|
||||
buf.extend(got)
|
||||
else:
|
||||
time.sleep(0.005)
|
||||
return bytes(buf)
|
||||
|
||||
|
||||
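For unit tests, the read helpers above can be exercised without a serial port or socket by feeding them an in-memory transport. `FakeTransport` below is hypothetical test scaffolding, not part of the module; its `read_exact` repeats the same deadline loop as the BaseTransport helper so the sketch is self-contained.

```python
# A minimal in-memory transport for unit-testing deadline-based read helpers.
import time


class FakeTransport:
    def __init__(self, canned: bytes) -> None:
        self._buf = bytearray(canned)

    def read(self, n: int) -> bytes:
        """Pop up to n bytes from the canned buffer; b'' when drained."""
        out = bytes(self._buf[:n])
        del self._buf[:n]
        return out

    def read_exact(self, n: int, timeout: float = 0.1) -> bytes:
        """Same deadline loop as BaseTransport.read_exact."""
        buf = bytearray()
        deadline = time.monotonic() + timeout
        while len(buf) < n:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"wanted {n} bytes, got {len(buf)}")
            got = self.read(n - len(buf))
            if got:
                buf.extend(got)
            else:
                time.sleep(0.005)
        return bytes(buf)


if __name__ == "__main__":
    t = FakeTransport(b"\x10\x02ACK\x10\x03")
    print(t.read_exact(7))  # b'\x10\x02ACK\x10\x03'
```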
# ── Serial transport ──────────────────────────────────────────────────────────

# Default baud rate confirmed from Blastware / MiniMate Plus documentation.
DEFAULT_BAUD = 38_400

# pyserial serial port config matching the MiniMate Plus RS-232 spec:
# 8 data bits, no parity, 1 stop bit (8N1).
_SERIAL_BYTESIZE = 8    # serial.EIGHTBITS
_SERIAL_PARITY = "N"    # serial.PARITY_NONE
_SERIAL_STOPBITS = 1    # serial.STOPBITS_ONE


class SerialTransport(BaseTransport):
    """
    pyserial-backed transport for a direct RS-232 cable connection.

    The port is opened with a very short read timeout (10 ms) so that
    read() returns quickly and the caller can implement its own framing /
    timeout logic without blocking the whole process.

    Args:
        port: COM port name (e.g. "COM5" on Windows, "/dev/ttyUSB0" on Linux).
        baud: Baud rate (default 38400).
        rts_cts: Enable RTS/CTS hardware flow control (default False — the
            MiniMate typically uses no flow control).
    """

    # Internal read timeout (seconds). Short so read() is non-blocking in practice.
    _READ_TIMEOUT = 0.01

    def __init__(
        self,
        port: str,
        baud: int = DEFAULT_BAUD,
        rts_cts: bool = False,
    ) -> None:
        if serial is None:
            raise ImportError(
                "pyserial is required for SerialTransport. "
                "Install it with: pip install pyserial"
            )
        self.port = port
        self.baud = baud
        self.rts_cts = rts_cts
        self._ser: Optional[serial.Serial] = None

    # ── BaseTransport interface ───────────────────────────────────────────────

    def connect(self) -> None:
        """Open the serial port. Raises serial.SerialException on failure."""
        if self._ser and self._ser.is_open:
            return  # Already open — idempotent
        self._ser = serial.Serial(
            port=self.port,
            baudrate=self.baud,
            bytesize=_SERIAL_BYTESIZE,
            parity=_SERIAL_PARITY,
            stopbits=_SERIAL_STOPBITS,
            timeout=self._READ_TIMEOUT,
            rtscts=self.rts_cts,
            xonxoff=False,
            dsrdtr=False,
        )
        # Flush any stale bytes left in device / OS buffers from a previous session
        self._ser.reset_input_buffer()
        self._ser.reset_output_buffer()

    def disconnect(self) -> None:
        """Close the serial port. Safe to call even if already closed."""
        if self._ser:
            try:
                self._ser.close()
            except Exception:
                pass
            self._ser = None

    @property
    def is_connected(self) -> bool:
        return bool(self._ser and self._ser.is_open)

    def write(self, data: bytes) -> None:
        """
        Write *data* to the serial port.

        Raises:
            RuntimeError: if not connected.
            serial.SerialException: on I/O error.
        """
        if not self.is_connected:
            raise RuntimeError("SerialTransport.write: not connected")
        self._ser.write(data)  # type: ignore[union-attr]
        self._ser.flush()  # type: ignore[union-attr]

    def read(self, n: int) -> bytes:
        """
        Read up to *n* bytes from the serial port.

        Returns b"" immediately if no data is available (non-blocking in
        practice thanks to the 10 ms read timeout).

        Raises:
            RuntimeError: if not connected.
        """
        if not self.is_connected:
            raise RuntimeError("SerialTransport.read: not connected")
        return self._ser.read(n)  # type: ignore[union-attr]

    # ── Extras ────────────────────────────────────────────────────────────────

    def flush_input(self) -> None:
        """Discard any unread bytes in the OS receive buffer."""
        if self.is_connected:
            self._ser.reset_input_buffer()  # type: ignore[union-attr]

    def __repr__(self) -> str:
        state = "open" if self.is_connected else "closed"
        return f"SerialTransport({self.port!r}, baud={self.baud}, {state})"


# ── TCP transport ─────────────────────────────────────────────────────────────

# Default TCP port for Blastware modem communications / ACH relay.
# Confirmed from field setup: Blastware → Communication Setup → TCP/IP uses 12345.
DEFAULT_TCP_PORT = 12345


class TcpTransport(BaseTransport):
    """
    TCP socket transport for MiniMate Plus units in the field.

    The protocol bytes over TCP are identical to RS-232 — TCP is simply a
    different physical layer. The modem (Sierra Wireless RV55 / RX55, or the
    older Raven X) bridges the unit's RS-232 serial port to a TCP socket
    transparently. No application-layer handshake or framing is added.

    Two usage scenarios:

        "Call up" (outbound): SFM connects to the unit's modem IP directly.
            TcpTransport(host="203.0.113.5", port=12345)

        "Call home" / ACH relay: The unit has already dialled in to the office
            ACH server, which bridged the modem to a TCP socket. In this case
            the host/port identifies the relay's listening socket, not the modem.
            (ACH inbound mode is handled by a separate AchServer — not this class.)

    IMPORTANT — modem data forwarding delay:
        Sierra Wireless (and Raven) modems buffer RS-232 bytes for up to 1 second
        before forwarding them as a TCP segment ("Data Forwarding Timeout" in
        ACEmanager). read_until_idle() is overridden to use idle_gap=1.5 s rather
        than the serial default of 0.05 s — without this, the parser would declare
        a frame complete mid-stream during the modem's buffering pause.

    Args:
        host: IP address or hostname of the modem / ACH relay.
        port: TCP port number (default 12345).
        connect_timeout: Seconds to wait for the TCP handshake (default 10.0).
    """

    # Internal recv timeout — short so read() returns promptly if no data.
    _RECV_TIMEOUT = 0.01

    def __init__(
        self,
        host: str,
        port: int = DEFAULT_TCP_PORT,
        connect_timeout: float = 10.0,
    ) -> None:
        self.host = host
        self.port = port
        self.connect_timeout = connect_timeout
        self._sock: Optional[socket.socket] = None

    # ── BaseTransport interface ───────────────────────────────────────────────

    def connect(self) -> None:
        """
        Open a TCP connection to host:port.

        Idempotent — does nothing if already connected.

        Raises:
            OSError / socket.timeout: if the connection cannot be established.
        """
        if self._sock is not None:
            return  # Already connected — idempotent
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(self.connect_timeout)
        sock.connect((self.host, self.port))
        # Switch to a short timeout so read() is non-blocking in practice
        sock.settimeout(self._RECV_TIMEOUT)
        self._sock = sock

    def disconnect(self) -> None:
        """Close the TCP socket. Safe to call even if already closed."""
        if self._sock:
            try:
                self._sock.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass
            try:
                self._sock.close()
            except OSError:
                pass
            self._sock = None

    @property
    def is_connected(self) -> bool:
        return self._sock is not None

    def write(self, data: bytes) -> None:
        """
        Send all bytes to the peer.

        Raises:
            RuntimeError: if not connected.
            OSError: on network I/O error.
        """
        if not self.is_connected:
            raise RuntimeError("TcpTransport.write: not connected")
        self._sock.sendall(data)  # type: ignore[union-attr]

    def read(self, n: int) -> bytes:
        """
        Read up to *n* bytes from the socket.

        Returns b"" immediately if no data is available (non-blocking in
        practice thanks to the short socket timeout).

        Raises:
            RuntimeError: if not connected.
        """
        if not self.is_connected:
            raise RuntimeError("TcpTransport.read: not connected")
        try:
            return self._sock.recv(n)  # type: ignore[union-attr]
        except socket.timeout:
            return b""

    def read_until_idle(
        self,
        timeout: float = 2.0,
        idle_gap: float = 1.5,
        chunk: int = 256,
    ) -> bytes:
        """
        TCP-aware version of read_until_idle.

        Overrides the BaseTransport default to use a much longer idle_gap (1.5 s
        vs 0.05 s for serial). This is necessary because the Raven modem (and
        similar cellular modems) buffer serial-port bytes for up to 1 second
        before forwarding them over TCP (the "Data Forwarding Timeout" setting).

        If read_until_idle returned after a 50 ms quiet period, it would trigger
        mid-frame while the modem is still accumulating bytes — causing frame
        parse failures on every call.

        Args:
            timeout:  Hard deadline from the first byte (default 2.0 s — callers
                typically pass a longer value for S3 frames).
            idle_gap: Quiet-line threshold (default 1.5 s to survive modem
                buffering). Pass a smaller value only if you are
                connecting directly to a unit's Ethernet port with no
                modem buffering in the path.
            chunk:    Bytes per low-level recv() call.
        """
        return super().read_until_idle(timeout=timeout, idle_gap=idle_gap, chunk=chunk)

    def __repr__(self) -> str:
        state = "connected" if self.is_connected else "disconnected"
        return f"TcpTransport({self.host!r}, port={self.port}, {state})"
125 parsers/README_s3_parser.md Normal file
@@ -0,0 +1,125 @@
# s3_parser.py

## Purpose

`s3_parser.py` extracts complete DLE-framed packets from raw serial
capture files produced by the `s3_bridge` logger.

It operates strictly at the **framing layer**. It does **not** decode
higher-level protocol structures.

This parser is designed specifically for Instantel / Series 3-style
serial traffic using:

- `DLE STX` (`0x10 0x02`) to start a frame
- `DLE ETX` (`0x10 0x03`) to end a frame
- DLE byte stuffing (`0x10 0x10` → literal `0x10`)

------------------------------------------------------------------------
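The framing rules above amount to a small state machine. The sketch below is an illustrative stand-alone extractor (not the actual `s3_parser.py` implementation) showing how the three rules compose; its handling of an unexpected escape byte is an assumption.

```python
# Minimal DLE STX ... DLE ETX extractor with DLE-stuffing removal.

def extract_frames(data: bytes):
    """Return unstuffed payloads of all complete frames in *data*."""
    frames = []
    payload = bytearray()
    in_frame = False
    i = 0
    while i < len(data):
        b = data[i]
        if not in_frame:
            # Hunt for DLE STX; everything else is inter-frame noise.
            if b == 0x10 and i + 1 < len(data) and data[i + 1] == 0x02:
                in_frame = True
                payload = bytearray()
                i += 2
            else:
                i += 1
        elif b == 0x10 and i + 1 < len(data):
            nxt = data[i + 1]
            if nxt == 0x10:      # DLE DLE -> literal 0x10
                payload.append(0x10)
            elif nxt == 0x03:    # DLE ETX -> frame complete
                frames.append(bytes(payload))
                in_frame = False
            else:                # unexpected escape: keep the byte (assumption)
                payload.append(nxt)
            i += 2
        else:
            payload.append(b)
            i += 1
    # An open frame at EOF is simply discarded, matching the README.
    return frames
```

A frame such as `10 02 41 42 10 10 43 10 03` yields the payload `41 42 10 43`; leading garbage before `DLE STX` and a truncated trailing frame are both ignored.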
## Design Philosophy

This parser:

- Uses a deterministic state machine (no regex, no global scanning).
- Assumes raw wire framing is preserved (`DLE+ETX` is present).
- Does **not** attempt auto-detection of framing style.
- Extracts only complete `STX → ETX` frame pairs.
- Safely ignores incomplete trailing frames at EOF.

Separation of concerns is intentional:

- **Parser = framing extraction**
- **Decoder = protocol interpretation (future layer)**

Do not add message-level logic here.

------------------------------------------------------------------------

## Input

Raw binary `.bin` files captured from:

- `--raw-bw` tap (Blastware → S3)
- `--raw-s3` tap (S3 → Blastware)

These must preserve raw serial bytes.

------------------------------------------------------------------------

## Usage

Basic frame extraction:

``` bash
python s3_parser.py raw_s3.bin --trailer-len 2
```

Options:

- `--trailer-len N`
  - Number of bytes to capture after `DLE ETX`
  - Often `2` (CRC16)
- `--crc`
  - Attempts CRC16 validation against the first 2 trailer bytes
  - Tries several common CRC16 variants
- `--crc-endian {little|big}`
  - Endianness for interpreting trailer bytes (default: little)
- `--out frames.jsonl`
  - Writes full JSONL output instead of printing a summary

------------------------------------------------------------------------

## Output Format

Each extracted frame produces:

``` json
{
  "index": 0,
  "start_offset": 20,
  "end_offset": 4033,
  "payload_len": 3922,
  "payload_hex": "...",
  "trailer_hex": "000f",
  "crc_match": null
}
```

Where:

- `payload_hex` = unescaped payload bytes (DLE stuffing removed)
- `trailer_hex` = bytes immediately following `DLE ETX`
- `crc_match` = matched CRC algorithm (if `--crc` enabled)

------------------------------------------------------------------------
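Downstream tools can consume the JSONL output one record per line. A brief sketch (the two inline records are illustrative, not real captures):

```python
# Load JSONL frame records and recover the raw payload bytes.
import json

records = """\
{"index": 0, "payload_len": 3, "payload_hex": "100073", "trailer_hex": "", "crc_match": null}
{"index": 1, "payload_len": 2, "payload_hex": "1000", "trailer_hex": "000f", "crc_match": null}
""".splitlines()

frames = [json.loads(line) for line in records]
payloads = [bytes.fromhex(f["payload_hex"]) for f in frames]
print([len(p) for p in payloads])  # [3, 2]
```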
## Known Behavior

- Frames that start but never receive a matching `DLE ETX` before EOF
  are discarded.
- Embedded `0x10 0x02` inside a payload does not trigger a new frame
  (correct behavior).
- Embedded `0x10 0x10` is correctly unescaped to a single `0x10`.

------------------------------------------------------------------------

## What This Parser Does NOT Do

- It does not decode Instantel message structure.
- It does not interpret block IDs or message types.
- It does not validate protocol-level fields.
- It does not reconstruct multi-frame logical responses.

That is the responsibility of a higher-level decoder.

------------------------------------------------------------------------

## Status

Framing layer verified against:

- raw_bw.bin (command/control direction)
- raw_s3.bin (device response direction)

State machine validated via start/end instrumentation.
98 parsers/bw_frames.jsonl Normal file
@@ -0,0 +1,98 @@
{"index": 0, "start_offset": 0, "end_offset": 21, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 1, "start_offset": 21, "end_offset": 42, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 2, "start_offset": 42, "end_offset": 63, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 3, "start_offset": 63, "end_offset": 84, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 4, "start_offset": 84, "end_offset": 105, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 5, "start_offset": 105, "end_offset": 126, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 6, "start_offset": 126, "end_offset": 147, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 7, "start_offset": 147, "end_offset": 168, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 8, "start_offset": 168, "end_offset": 189, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 9, "start_offset": 189, "end_offset": 210, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 10, "start_offset": 210, "end_offset": 231, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 11, "start_offset": 231, "end_offset": 252, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 12, "start_offset": 252, "end_offset": 273, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 13, "start_offset": 273, "end_offset": 294, "payload_len": 17, "payload_hex": "1000150000000000000000000000000025", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 14, "start_offset": 294, "end_offset": 315, "payload_len": 17, "payload_hex": "10001500000a000000000000000000002f", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 15, "start_offset": 315, "end_offset": 427, "payload_len": 108, "payload_hex": "10006800005a00000000000000000000005809000000010107cb00061e00010107cb00140000000000173b00000000000000000000000000000100000000000100000000000000010001000000000000000000000000000000000064000000000000001effdc0000100200c8", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 16, "start_offset": 427, "end_offset": 448, "payload_len": 17, "payload_hex": "1000730000000000000000000000000083", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 17, "start_offset": 448, "end_offset": 1497, "payload_len": 1045, "payload_hex": "1000710010040000000000000000000000082a6400001004100400003c0000be800000000040400000001003000f000000073dbb457a3db956e1000100015374616e64617264205265636f7264696e672053657475702e7365740000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000050726f6a6563743a0000000000000000000000000000544553542000000000000000000000000000000000000000000000000000000000000000000000000000436c69656e743a000000000000000000000000000000436c6175646520746573743200000000000000000000000000000000000000000000000000000000000055736572204e616d653a00000000000000000000000054657272612d4d656368616e69637320496e632e202d20422e204861727269736f6e000000000000000053656973204c6f633a000000000000000000000000004c6f636174696f6e202331202d20427269616e7320486f75736500000000000000000000000000000000457874656e646564204e6f74657300000000000000000a0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000007", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 18, "start_offset": 1497, "end_offset": 2574, "payload_len": 1073, "payload_hex": "1000710010040000001004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000015472616e000000010050000f0028001510021003011004001003000040c697fd00003f19999a696e2e00400000002f730000000156657274000000010050000f0028001510021003011004001003000040c697fd00003f19999a696e2e00400000002f73000000014c6f6e67000000010050000f0028001510021003011004001003000040c697fd00003f19999a696e2e00400000002f73000000004d69634c000000100200c80032000a000a1002d501db000500003d38560800003c1374bc707369003cac0831284c29000010025472616e320000010050000f0028001510021003011004001003000040c697fd00003f000000696e2e0040000
0002f73000000100256657274320000010050000f0028001510021003011004001003000040c697fd00003f000000696e2e00400000002f7300000010024c6f6e67320000010050000f0028001510021003011004001003000040c697fd00003f000000696e2e00400000002f73000000004d69634c1002", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 19, "start_offset": 2574, "end_offset": 2641, "payload_len": 63, "payload_hex": "10007100002c00000800000000000000320000100200c80032000a000a1002d501db000500003d38560800003c23d70a707369003cac0831284c29007cea32", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 20, "start_offset": 2641, "end_offset": 2662, "payload_len": 17, "payload_hex": "1000720000000000000000000000000082", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 21, "start_offset": 2662, "end_offset": 2711, "payload_len": 45, "payload_hex": "10008200001c00000000000000000000001ad5000001080affffffffffffffffffffffffffffffffffff00009e", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 22, "start_offset": 2711, "end_offset": 2732, "payload_len": 17, "payload_hex": "1000830000000000000000000000000093", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 23, "start_offset": 2732, "end_offset": 2957, "payload_len": 221, "payload_hex": "1000690000ca0000000000000000000000c8080000010001000100010001000100010010020001001e0010020001000a000a4576656e742053756d6d617279205265706f7274000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002580000801018c76af", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 24, "start_offset": 2957, "end_offset": 2978, "payload_len": 17, "payload_hex": "1000740000000000000000000000000084", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 25, "start_offset": 2978, "end_offset": 2999, "payload_len": 17, "payload_hex": "1000720000000000000000000000000082", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 26, "start_offset": 2999, "end_offset": 3020, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 27, "start_offset": 3020, "end_offset": 3041, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 28, "start_offset": 3041, "end_offset": 3062, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 29, "start_offset": 3062, "end_offset": 3083, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 30, "start_offset": 3083, "end_offset": 3104, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 31, "start_offset": 3104, "end_offset": 3125, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 32, "start_offset": 3125, "end_offset": 3146, "payload_len": 17, "payload_hex": "1000150000000000000000000000000025", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 33, "start_offset": 3146, "end_offset": 3167, "payload_len": 17, "payload_hex": "10001500000a000000000000000000002f", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 34, "start_offset": 3167, "end_offset": 3188, "payload_len": 17, "payload_hex": "1000010000000000000000000000000011", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 35, "start_offset": 3188, "end_offset": 3209, "payload_len": 17, "payload_hex": "10000100009800000000000000000000a9", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 36, "start_offset": 3209, "end_offset": 3230, "payload_len": 17, "payload_hex": "1000080000000000000000000000000018", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 37, "start_offset": 3230, "end_offset": 3251, "payload_len": 17, "payload_hex": "1000080000580000000000000000000070", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 38, "start_offset": 3251, "end_offset": 3272, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 39, "start_offset": 3272, "end_offset": 3293, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 40, "start_offset": 3293, "end_offset": 3314, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 41, "start_offset": 3314, "end_offset": 3335, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 42, "start_offset": 3335, "end_offset": 3356, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 43, "start_offset": 3356, "end_offset": 3377, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 44, "start_offset": 3377, "end_offset": 3398, "payload_len": 17, "payload_hex": "1000010000000000000000000000000011", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 45, "start_offset": 3398, "end_offset": 3419, "payload_len": 17, "payload_hex": "10000100009800000000000000000000a9", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 46, "start_offset": 3419, "end_offset": 3440, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 47, "start_offset": 3440, "end_offset": 3461, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 48, "start_offset": 3461, "end_offset": 3482, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 49, "start_offset": 3482, "end_offset": 3503, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 50, "start_offset": 3503, "end_offset": 3524, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 51, "start_offset": 3524, "end_offset": 3545, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 52, "start_offset": 3545, "end_offset": 3566, "payload_len": 17, "payload_hex": "1000150000000000000000000000000025", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 53, "start_offset": 3566, "end_offset": 3587, "payload_len": 17, "payload_hex": "10001500000a000000000000000000002f", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 54, "start_offset": 3587, "end_offset": 3608, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 55, "start_offset": 3608, "end_offset": 3629, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 56, "start_offset": 3629, "end_offset": 3650, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 57, "start_offset": 3650, "end_offset": 3671, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 58, "start_offset": 3671, "end_offset": 3692, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 59, "start_offset": 3692, "end_offset": 3713, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 60, "start_offset": 3713, "end_offset": 3734, "payload_len": 17, "payload_hex": "1000150000000000000000000000000025", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 61, "start_offset": 3734, "end_offset": 3755, "payload_len": 17, "payload_hex": "10001500000a000000000000000000002f", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 62, "start_offset": 3755, "end_offset": 3776, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 63, "start_offset": 3776, "end_offset": 3797, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 64, "start_offset": 3797, "end_offset": 3818, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 65, "start_offset": 3818, "end_offset": 3839, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 66, "start_offset": 3839, "end_offset": 3860, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 67, "start_offset": 3860, "end_offset": 3881, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 68, "start_offset": 3881, "end_offset": 3902, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 69, "start_offset": 3902, "end_offset": 3923, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 70, "start_offset": 3923, "end_offset": 3944, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 71, "start_offset": 3944, "end_offset": 3965, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 72, "start_offset": 3965, "end_offset": 3986, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 73, "start_offset": 3986, "end_offset": 4007, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 74, "start_offset": 4007, "end_offset": 4028, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 75, "start_offset": 4028, "end_offset": 4049, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 76, "start_offset": 4049, "end_offset": 4070, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 77, "start_offset": 4070, "end_offset": 4091, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 78, "start_offset": 4091, "end_offset": 4112, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 79, "start_offset": 4112, "end_offset": 4133, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 80, "start_offset": 4133, "end_offset": 4154, "payload_len": 17, "payload_hex": "1000010000000000000000000000000011", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 81, "start_offset": 4154, "end_offset": 4175, "payload_len": 17, "payload_hex": "10000100009800000000000000000000a9", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 82, "start_offset": 4175, "end_offset": 4196, "payload_len": 17, "payload_hex": "10002e000000000000000000000000003e", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 83, "start_offset": 4196, "end_offset": 4217, "payload_len": 17, "payload_hex": "10002e00001a0000000000000000000058", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 84, "start_offset": 4217, "end_offset": 4238, "payload_len": 17, "payload_hex": "1000010000000000000000000000000011", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 85, "start_offset": 4238, "end_offset": 4259, "payload_len": 17, "payload_hex": "10000100009800000000000000000000a9", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 86, "start_offset": 4259, "end_offset": 4280, "payload_len": 17, "payload_hex": "10001a000000000000000000006400008e", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 87, "start_offset": 4280, "end_offset": 4302, "payload_len": 18, "payload_hex": "10001a001004000000000000000064000092", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 88, "start_offset": 4302, "end_offset": 4325, "payload_len": 19, "payload_hex": "10001a00100400000010040000000064000096", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 89, "start_offset": 4325, "end_offset": 4346, "payload_len": 17, "payload_hex": "10001a00002a00000800000000640000c0", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 90, "start_offset": 4346, "end_offset": 4367, "payload_len": 17, "payload_hex": "1000090000000000000000000000000019", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 91, "start_offset": 4367, "end_offset": 4388, "payload_len": 17, "payload_hex": "1000090000ca00000000000000000000e3", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 92, "start_offset": 4388, "end_offset": 4409, "payload_len": 17, "payload_hex": "1000080000000000000000000000000018", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 93, "start_offset": 4409, "end_offset": 4430, "payload_len": 17, "payload_hex": "1000080000580000000000000000000070", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 94, "start_offset": 4430, "end_offset": 4451, "payload_len": 17, "payload_hex": "1000010000000000000000000000000011", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 95, "start_offset": 4451, "end_offset": 4472, "payload_len": 17, "payload_hex": "10000100009800000000000000000000a9", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 96, "start_offset": 4472, "end_offset": 4493, "payload_len": 17, "payload_hex": "1000080000000000000000000000000018", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
{"index": 97, "start_offset": 4493, "end_offset": 4514, "payload_len": 17, "payload_hex": "1000080000580000000000000000000070", "trailer_hex": "", "checksum_valid": null, "checksum_type": null, "checksum_hex": null}
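The frame records above are JSON Lines, one object per parsed frame. As a minimal sketch of working with them, the snippet below loads a few records copied verbatim from the dump and tallies frames by their third payload byte; treating byte 2 (after the 0x10 lead-in) as the sub/command code is an assumption inferred from the repeated short frames, not something the capture format documents.

```python
import json
from collections import Counter

# Three records copied from the dump above, trimmed to the fields used here.
records = [
    '{"index": 25, "payload_len": 17, "payload_hex": "1000720000000000000000000000000082"}',
    '{"index": 26, "payload_len": 17, "payload_hex": "10005b000000000000000000000000006b"}',
    '{"index": 27, "payload_len": 17, "payload_hex": "10005b000030000000000000000000009b"}',
]

counts = Counter()
for line in records:
    rec = json.loads(line)
    payload = bytes.fromhex(rec["payload_hex"])
    # Assumption: byte 0 is the 0x10 lead-in, byte 2 the sub/command code.
    counts[payload[2]] += 1

print({f"0x{k:02x}": v for k, v in counts.items()})  # {'0x72': 1, '0x5b': 2}
```

Run against the full dump file (one `json.loads` per line), the same loop gives a quick histogram of which short polls dominate a capture.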
337
parsers/frame_db.py
Normal file
337
parsers/frame_db.py
Normal file
@@ -0,0 +1,337 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
frame_db.py — SQLite frame database for Instantel protocol captures.
|
||||
|
||||
Schema:
|
||||
captures — one row per ingested capture pair (deduped by SHA256)
|
||||
frames — one row per parsed frame
|
||||
byte_values — one row per (frame, offset, value) for fast indexed queries
|
||||
|
||||
Usage:
|
||||
db = FrameDB() # opens default DB at ~/.seismo_lab/frames.db
|
||||
db = FrameDB(path) # custom path
|
||||
cap_id = db.ingest(sessions, s3_path, bw_path)
|
||||
rows = db.query_frames(sub=0xF7, direction="S3")
|
||||
rows = db.query_by_byte(offset=85, value=0x0A)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import hashlib
|
||||
import os
|
||||
import sqlite3
|
||||
import struct
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# DB location
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
DEFAULT_DB_DIR = Path.home() / ".seismo_lab"
|
||||
DEFAULT_DB_PATH = DEFAULT_DB_DIR / "frames.db"
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# Schema
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
_DDL = """
|
||||
PRAGMA journal_mode=WAL;
|
||||
PRAGMA foreign_keys=ON;
|
||||
|
||||
CREATE TABLE IF NOT EXISTS captures (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
timestamp TEXT NOT NULL, -- ISO-8601 ingest time
|
||||
s3_path TEXT,
|
||||
bw_path TEXT,
|
||||
capture_hash TEXT NOT NULL UNIQUE, -- SHA256 of s3_blob+bw_blob
|
||||
notes TEXT DEFAULT ''
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS frames (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
capture_id INTEGER NOT NULL REFERENCES captures(id) ON DELETE CASCADE,
|
||||
session_idx INTEGER NOT NULL,
|
||||
direction TEXT NOT NULL, -- 'BW' or 'S3'
|
||||
sub INTEGER, -- NULL if malformed
|
||||
page_key INTEGER,
|
||||
sub_name TEXT,
|
||||
payload BLOB NOT NULL,
|
||||
payload_len INTEGER NOT NULL,
|
||||
checksum_ok INTEGER -- 1/0/NULL
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_frames_capture ON frames(capture_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_frames_sub ON frames(sub);
|
||||
CREATE INDEX IF NOT EXISTS idx_frames_page_key ON frames(page_key);
|
||||
CREATE INDEX IF NOT EXISTS idx_frames_dir ON frames(direction);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS byte_values (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
frame_id INTEGER NOT NULL REFERENCES frames(id) ON DELETE CASCADE,
|
||||
offset INTEGER NOT NULL,
|
||||
value INTEGER NOT NULL
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_bv_frame ON byte_values(frame_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_bv_offset ON byte_values(offset);
|
||||
CREATE INDEX IF NOT EXISTS idx_bv_value ON byte_values(value);
|
||||
CREATE INDEX IF NOT EXISTS idx_bv_off_val ON byte_values(offset, value);
|
||||
"""
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# Helpers
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
def _sha256_blobs(s3_blob: bytes, bw_blob: bytes) -> str:
|
||||
h = hashlib.sha256()
|
||||
h.update(s3_blob)
|
||||
h.update(bw_blob)
|
||||
return h.hexdigest()
|
||||
|
||||
|
||||
def _interp_bytes(data: bytes, offset: int) -> dict:
|
||||
"""
|
||||
Return multi-interpretation dict for 1–4 bytes starting at offset.
|
||||
Used in the GUI's byte interpretation panel.
|
||||
"""
|
||||
result: dict = {}
|
||||
remaining = len(data) - offset
|
||||
if remaining <= 0:
|
||||
return result
|
||||
|
||||
b1 = data[offset]
|
||||
result["uint8"] = b1
|
||||
result["int8"] = b1 if b1 < 128 else b1 - 256
|
||||
|
||||
if remaining >= 2:
|
||||
u16be = struct.unpack_from(">H", data, offset)[0]
|
||||
u16le = struct.unpack_from("<H", data, offset)[0]
|
||||
result["uint16_be"] = u16be
|
||||
result["uint16_le"] = u16le
|
||||
|
||||
if remaining >= 4:
|
||||
f32be = struct.unpack_from(">f", data, offset)[0]
|
||||
f32le = struct.unpack_from("<f", data, offset)[0]
|
||||
u32be = struct.unpack_from(">I", data, offset)[0]
|
||||
u32le = struct.unpack_from("<I", data, offset)[0]
|
||||
result["float32_be"] = round(f32be, 6)
|
||||
result["float32_le"] = round(f32le, 6)
|
||||
result["uint32_be"] = u32be
|
||||
result["uint32_le"] = u32le
|
||||
|
||||
return result
|
||||
|
||||
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
# FrameDB class
|
||||
# ─────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
class FrameDB:
|
||||
def __init__(self, path: Optional[Path] = None) -> None:
|
||||
if path is None:
|
||||
path = DEFAULT_DB_PATH
|
||||
path = Path(path)
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
self.path = path
|
||||
self._con = sqlite3.connect(str(path), check_same_thread=False)
|
||||
self._con.row_factory = sqlite3.Row
|
||||
self._init_schema()
|
||||
|
||||
def _init_schema(self) -> None:
|
||||
self._con.executescript(_DDL)
|
||||
self._con.commit()
|
||||
|
||||
def close(self) -> None:
|
||||
self._con.close()
|
||||
|
||||
# ── Ingest ────────────────────────────────────────────────────────────
|
||||
|
||||
def ingest(
|
||||
self,
|
||||
sessions: list, # list[Session] from s3_analyzer
|
||||
s3_path: Optional[Path],
|
||||
bw_path: Optional[Path],
|
||||
notes: str = "",
|
||||
) -> Optional[int]:
|
||||
"""
|
||||
Ingest a list of sessions into the DB.
|
||||
Returns capture_id, or None if already ingested (duplicate hash).
|
||||
"""
|
||||
import datetime
|
||||
|
||||
s3_blob = s3_path.read_bytes() if s3_path and s3_path.exists() else b""
|
||||
bw_blob = bw_path.read_bytes() if bw_path and bw_path.exists() else b""
|
||||
cap_hash = _sha256_blobs(s3_blob, bw_blob)
|
||||
|
||||
# Dedup check
|
||||
row = self._con.execute(
|
||||
"SELECT id FROM captures WHERE capture_hash=?", (cap_hash,)
|
||||
).fetchone()
|
||||
if row:
|
||||
return None # already in DB
|
||||
|
||||
ts = datetime.datetime.now().isoformat(timespec="seconds")
|
||||
cur = self._con.execute(
|
||||
"INSERT INTO captures (timestamp, s3_path, bw_path, capture_hash, notes) "
|
||||
"VALUES (?, ?, ?, ?, ?)",
|
||||
(ts, str(s3_path) if s3_path else None,
|
||||
str(bw_path) if bw_path else None,
|
||||
cap_hash, notes)
|
||||
)
|
||||
cap_id = cur.lastrowid
|
||||
|
||||
for sess in sessions:
|
||||
for af in sess.all_frames:
|
||||
frame_id = self._insert_frame(cap_id, af)
|
||||
self._insert_byte_values(frame_id, af.frame.payload)
|
||||
|
||||
self._con.commit()
|
||||
return cap_id
|
||||
|
||||
def _insert_frame(self, cap_id: int, af) -> int:
|
||||
"""Insert one AnnotatedFrame; return its rowid."""
|
||||
sub = af.header.sub if af.header else None
|
||||
page_key = af.header.page_key if af.header else None
|
||||
chk_ok = None
|
||||
if af.frame.checksum_valid is True:
|
||||
chk_ok = 1
|
||||
elif af.frame.checksum_valid is False:
|
||||
chk_ok = 0
|
||||
|
||||
cur = self._con.execute(
|
||||
"INSERT INTO frames "
|
||||
"(capture_id, session_idx, direction, sub, page_key, sub_name, payload, payload_len, checksum_ok) "
|
||||
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
|
||||
(cap_id, af.session_idx, af.source,
|
||||
sub, page_key, af.sub_name,
|
||||
af.frame.payload, len(af.frame.payload), chk_ok)
|
||||
)
|
||||
return cur.lastrowid
|
||||
|
||||
def _insert_byte_values(self, frame_id: int, payload: bytes) -> None:
|
||||
"""Insert one row per byte in payload into byte_values."""
|
||||
rows = [(frame_id, i, b) for i, b in enumerate(payload)]
|
||||
self._con.executemany(
|
||||
"INSERT INTO byte_values (frame_id, offset, value) VALUES (?, ?, ?)",
|
||||
rows
|
||||
)
|
||||
|
||||
# ── Queries ───────────────────────────────────────────────────────────
|
||||
|
||||
def list_captures(self) -> list[sqlite3.Row]:
|
||||
return self._con.execute(
|
||||
"SELECT id, timestamp, s3_path, bw_path, notes, "
|
||||
" (SELECT COUNT(*) FROM frames WHERE capture_id=captures.id) AS frame_count "
|
||||
"FROM captures ORDER BY id DESC"
|
||||
).fetchall()
|
||||
|
||||
def query_frames(
|
||||
self,
|
||||
capture_id: Optional[int] = None,
|
||||
direction: Optional[str] = None, # "BW" or "S3"
|
||||
sub: Optional[int] = None,
|
||||
page_key: Optional[int] = None,
|
||||
limit: int = 500,
|
||||
) -> list[sqlite3.Row]:
|
||||
"""
|
||||
Query frames table with optional filters.
|
||||
Returns rows with: id, capture_id, session_idx, direction, sub, page_key,
|
||||
sub_name, payload, payload_len, checksum_ok
|
||||
"""
|
||||
clauses = []
|
||||
params = []
|
||||
|
||||
if capture_id is not None:
|
||||
clauses.append("capture_id=?"); params.append(capture_id)
|
||||
if direction is not None:
|
||||
clauses.append("direction=?"); params.append(direction)
|
||||
if sub is not None:
|
||||
clauses.append("sub=?"); params.append(sub)
|
||||
        if page_key is not None:
            clauses.append("page_key=?"); params.append(page_key)

        where = ("WHERE " + " AND ".join(clauses)) if clauses else ""
        sql = f"SELECT * FROM frames {where} ORDER BY id LIMIT ?"
        params.append(limit)

        return self._con.execute(sql, params).fetchall()

    def query_by_byte(
        self,
        offset: int,
        value: Optional[int] = None,
        capture_id: Optional[int] = None,
        direction: Optional[str] = None,
        sub: Optional[int] = None,
        limit: int = 500,
    ) -> list[sqlite3.Row]:
        """
        Return frames that have a specific byte at a specific offset.
        Joins byte_values -> frames for indexed lookup.
        """
        clauses = ["bv.offset=?"]
        params = [offset]

        if value is not None:
            clauses.append("bv.value=?"); params.append(value)
        if capture_id is not None:
            clauses.append("f.capture_id=?"); params.append(capture_id)
        if direction is not None:
            clauses.append("f.direction=?"); params.append(direction)
        if sub is not None:
            clauses.append("f.sub=?"); params.append(sub)

        where = "WHERE " + " AND ".join(clauses)
        sql = (
            f"SELECT f.*, bv.offset AS q_offset, bv.value AS q_value "
            f"FROM byte_values bv "
            f"JOIN frames f ON f.id=bv.frame_id "
            f"{where} "
            f"ORDER BY f.id LIMIT ?"
        )
        params.append(limit)
        return self._con.execute(sql, params).fetchall()

    def get_frame_payload(self, frame_id: int) -> Optional[bytes]:
        row = self._con.execute(
            "SELECT payload FROM frames WHERE id=?", (frame_id,)
        ).fetchone()
        return bytes(row["payload"]) if row else None

    def get_distinct_subs(self, capture_id: Optional[int] = None) -> list[int]:
        if capture_id is not None:
            rows = self._con.execute(
                "SELECT DISTINCT sub FROM frames WHERE capture_id=? AND sub IS NOT NULL ORDER BY sub",
                (capture_id,)
            ).fetchall()
        else:
            rows = self._con.execute(
                "SELECT DISTINCT sub FROM frames WHERE sub IS NOT NULL ORDER BY sub"
            ).fetchall()
        return [r[0] for r in rows]

    def get_distinct_offsets(self, capture_id: Optional[int] = None) -> list[int]:
        if capture_id is not None:
            rows = self._con.execute(
                "SELECT DISTINCT bv.offset FROM byte_values bv "
                "JOIN frames f ON f.id=bv.frame_id WHERE f.capture_id=? ORDER BY bv.offset",
                (capture_id,)
            ).fetchall()
        else:
            rows = self._con.execute(
                "SELECT DISTINCT offset FROM byte_values ORDER BY offset"
            ).fetchall()
        return [r[0] for r in rows]

    def interpret_offset(self, payload: bytes, offset: int) -> dict:
        """Return multi-format interpretation of bytes starting at offset."""
        return _interp_bytes(payload, offset)

    def get_stats(self) -> dict:
        captures = self._con.execute("SELECT COUNT(*) FROM captures").fetchone()[0]
        frames = self._con.execute("SELECT COUNT(*) FROM frames").fetchone()[0]
        bv_rows = self._con.execute("SELECT COUNT(*) FROM byte_values").fetchone()[0]
        return {"captures": captures, "frames": frames, "byte_value_rows": bv_rows}
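The indexed `byte_values -> frames` JOIN that `query_by_byte` builds can be exercised end-to-end against a throwaway in-memory database. The sketch below uses a simplified stand-in schema (only the columns the JOIN touches; the real FrameDB schema has more) and issues a query of the same shape:

```python
import sqlite3

# Stand-in schema mirroring just what query_by_byte's JOIN touches
# (the real frames/byte_values tables carry more columns).
con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
con.executescript("""
    CREATE TABLE frames (id INTEGER PRIMARY KEY, direction TEXT, sub INTEGER);
    CREATE TABLE byte_values (frame_id INTEGER, offset INTEGER, value INTEGER);
""")
con.execute("INSERT INTO frames VALUES (1, 'BW', 0x74)")
con.execute("INSERT INTO frames VALUES (2, 'S3', 0x10)")
con.executemany("INSERT INTO byte_values VALUES (?, ?, ?)",
                [(1, 5, 0xAB), (2, 5, 0xCD)])

# Same shape as query_by_byte: frames with byte 0xAB at offset 5
rows = con.execute(
    "SELECT f.*, bv.value AS q_value FROM byte_values bv "
    "JOIN frames f ON f.id = bv.frame_id "
    "WHERE bv.offset = ? AND bv.value = ? ORDER BY f.id LIMIT ?",
    (5, 0xAB, 500),
).fetchall()
print([(r["id"], r["direction"], r["q_value"]) for r in rows])
```

With `sqlite3.Row` as the row factory, results support the `row["column"]` access the GUI code relies on.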
940 parsers/gui_analyzer.py Normal file
@@ -0,0 +1,940 @@
#!/usr/bin/env python3
"""
gui_analyzer.py — Tkinter GUI for s3_analyzer.

Layout:
┌─────────────────────────────────────────────────────────┐
│ [S3 file: ___________ Browse] [BW file: ___ Browse]     │
│ [Analyze] [Live mode toggle]             Status: Idle   │
├──────────────────┬──────────────────────────────────────┤
│ Session list     │ Detail panel (tabs)                  │
│ ─ Session 0      │ Inventory | Hex Dump | Diff          │
│   └ POLL (BW)    │                                      │
│   └ POLL_RESP    │ (content of selected tab)            │
│ ─ Session 1      │                                      │
│   └ ...          │                                      │
├──────────────────┴──────────────────────────────────────┤
│ Status bar                                              │
└─────────────────────────────────────────────────────────┘
"""

from __future__ import annotations

import queue
import sys
import threading
import time
import tkinter as tk
from pathlib import Path
from tkinter import filedialog, font, messagebox, ttk
from typing import Optional

sys.path.insert(0, str(Path(__file__).parent))
from s3_analyzer import (  # noqa: E402
    AnnotatedFrame,
    FrameDiff,
    Session,
    annotate_frames,
    diff_sessions,
    format_hex_dump,
    parse_bw,
    parse_s3,
    render_session_report,
    split_into_sessions,
    write_claude_export,
)
from frame_db import FrameDB, DEFAULT_DB_PATH  # noqa: E402


# ──────────────────────────────────────────────────────────────────────────────
# Colour palette (dark-ish terminal feel)
# ──────────────────────────────────────────────────────────────────────────────
BG = "#1e1e1e"
BG2 = "#252526"
BG3 = "#2d2d30"
FG = "#d4d4d4"
FG_DIM = "#6a6a6a"
ACCENT = "#569cd6"
ACCENT2 = "#4ec9b0"
RED = "#f44747"
YELLOW = "#dcdcaa"
GREEN = "#4caf50"
ORANGE = "#ce9178"

COL_BW = "#9cdcfe"    # BW frames
COL_S3 = "#4ec9b0"    # S3 frames
COL_DIFF = "#f44747"  # Changed bytes
COL_KNOW = "#4caf50"  # Known-field annotations
COL_HEAD = "#569cd6"  # Section headers

MONO = ("Consolas", 9)
MONO_SM = ("Consolas", 8)

# ──────────────────────────────────────────────────────────────────────────────
# State container
# ──────────────────────────────────────────────────────────────────────────────

class AnalyzerState:
    def __init__(self) -> None:
        self.sessions: list[Session] = []
        self.diffs: list[Optional[list[FrameDiff]]] = []  # diffs[i] = diff of session i vs i-1
        self.s3_path: Optional[Path] = None
        self.bw_path: Optional[Path] = None
        self.last_capture_id: Optional[int] = None


# ──────────────────────────────────────────────────────────────────────────────
# Main GUI
# ──────────────────────────────────────────────────────────────────────────────

class AnalyzerGUI(tk.Tk):
    def __init__(self) -> None:
        super().__init__()
        self.title("S3 Protocol Analyzer")
        self.configure(bg=BG)
        self.minsize(1050, 600)

        self.state = AnalyzerState()
        self._live_thread: Optional[threading.Thread] = None
        self._live_stop = threading.Event()
        self._live_q: queue.Queue[str] = queue.Queue()
        self._db = FrameDB()

        self._build_widgets()
        self._poll_live_queue()

    # ── widget construction ────────────────────────────────────────────────

    def _build_widgets(self) -> None:
        self._build_toolbar()
        self._build_panes()
        self._build_statusbar()

    def _build_toolbar(self) -> None:
        bar = tk.Frame(self, bg=BG2, pady=4)
        bar.pack(side=tk.TOP, fill=tk.X)

        pad = {"padx": 5, "pady": 2}

        # S3 file
        tk.Label(bar, text="S3 raw:", bg=BG2, fg=FG, font=MONO).pack(side=tk.LEFT, **pad)
        self.s3_var = tk.StringVar()
        tk.Entry(bar, textvariable=self.s3_var, width=28, bg=BG3, fg=FG,
                 insertbackground=FG, relief="flat", font=MONO).pack(side=tk.LEFT, **pad)
        tk.Button(bar, text="Browse", bg=BG3, fg=FG, relief="flat",
                  activebackground=ACCENT, cursor="hand2",
                  command=lambda: self._browse_file(self.s3_var, "raw_s3.bin")
                  ).pack(side=tk.LEFT, **pad)

        tk.Label(bar, text=" BW raw:", bg=BG2, fg=FG, font=MONO).pack(side=tk.LEFT, **pad)
        self.bw_var = tk.StringVar()
        tk.Entry(bar, textvariable=self.bw_var, width=28, bg=BG3, fg=FG,
                 insertbackground=FG, relief="flat", font=MONO).pack(side=tk.LEFT, **pad)
        tk.Button(bar, text="Browse", bg=BG3, fg=FG, relief="flat",
                  activebackground=ACCENT, cursor="hand2",
                  command=lambda: self._browse_file(self.bw_var, "raw_bw.bin")
                  ).pack(side=tk.LEFT, **pad)

        # Buttons
        tk.Frame(bar, bg=BG2, width=10).pack(side=tk.LEFT)
        self.analyze_btn = tk.Button(bar, text="Analyze", bg=ACCENT, fg="#ffffff",
                                     relief="flat", padx=10, cursor="hand2",
                                     font=("Consolas", 9, "bold"),
                                     command=self._run_analyze)
        self.analyze_btn.pack(side=tk.LEFT, **pad)

        self.live_btn = tk.Button(bar, text="Live: OFF", bg=BG3, fg=FG,
                                  relief="flat", padx=10, cursor="hand2",
                                  font=MONO, command=self._toggle_live)
        self.live_btn.pack(side=tk.LEFT, **pad)

        self.export_btn = tk.Button(bar, text="Export for Claude", bg=ORANGE, fg="#000000",
                                    relief="flat", padx=10, cursor="hand2",
                                    font=("Consolas", 9, "bold"),
                                    command=self._run_export, state="disabled")
        self.export_btn.pack(side=tk.LEFT, **pad)

        self.status_var = tk.StringVar(value="Idle")
        tk.Label(bar, textvariable=self.status_var, bg=BG2, fg=FG_DIM,
                 font=MONO, anchor="w").pack(side=tk.LEFT, padx=10)

    def _build_panes(self) -> None:
        pane = tk.PanedWindow(self, orient=tk.HORIZONTAL, bg=BG,
                              sashwidth=4, sashrelief="flat")
        pane.pack(fill=tk.BOTH, expand=True, padx=0, pady=0)

        # ── Left: session/frame tree ──────────────────────────────────────
        left = tk.Frame(pane, bg=BG2, width=260)
        pane.add(left, minsize=200)

        tk.Label(left, text="Sessions", bg=BG2, fg=ACCENT,
                 font=("Consolas", 9, "bold"), anchor="w", padx=6).pack(fill=tk.X)

        tree_frame = tk.Frame(left, bg=BG2)
        tree_frame.pack(fill=tk.BOTH, expand=True)

        style = ttk.Style()
        style.theme_use("clam")
        style.configure("Treeview",
                        background=BG2, foreground=FG, fieldbackground=BG2,
                        font=MONO_SM, rowheight=18, borderwidth=0)
        style.configure("Treeview.Heading",
                        background=BG3, foreground=ACCENT, font=MONO_SM)
        style.map("Treeview", background=[("selected", BG3)],
                  foreground=[("selected", "#ffffff")])

        self.tree = ttk.Treeview(tree_frame, columns=("info",), show="tree headings",
                                 selectmode="browse")
        self.tree.heading("#0", text="Frame")
        self.tree.heading("info", text="Info")
        self.tree.column("#0", width=160, stretch=True)
        self.tree.column("info", width=80, stretch=False)

        vsb = ttk.Scrollbar(tree_frame, orient="vertical", command=self.tree.yview)
        self.tree.configure(yscrollcommand=vsb.set)
        vsb.pack(side=tk.RIGHT, fill=tk.Y)
        self.tree.pack(fill=tk.BOTH, expand=True)

        self.tree.tag_configure("session", foreground=ACCENT, font=("Consolas", 9, "bold"))
        self.tree.tag_configure("bw_frame", foreground=COL_BW)
        self.tree.tag_configure("s3_frame", foreground=COL_S3)
        self.tree.tag_configure("bad_chk", foreground=RED)
        self.tree.tag_configure("malformed", foreground=RED)

        self.tree.bind("<<TreeviewSelect>>", self._on_tree_select)

        # ── Right: detail notebook ────────────────────────────────────────
        right = tk.Frame(pane, bg=BG)
        pane.add(right, minsize=600)

        style.configure("TNotebook", background=BG2, borderwidth=0)
        style.configure("TNotebook.Tab", background=BG3, foreground=FG,
                        font=MONO, padding=[8, 2])
        style.map("TNotebook.Tab", background=[("selected", BG)],
                  foreground=[("selected", ACCENT)])

        self.nb = ttk.Notebook(right)
        self.nb.pack(fill=tk.BOTH, expand=True)

        # Tab: Inventory
        self.inv_text = self._make_text_tab("Inventory")
        # Tab: Hex Dump
        self.hex_text = self._make_text_tab("Hex Dump")
        # Tab: Diff
        self.diff_text = self._make_text_tab("Diff")
        # Tab: Full Report (raw text)
        self.report_text = self._make_text_tab("Full Report")
        # Tab: Query (DB)
        self._build_query_tab()

        # Tag colours for rich text in all tabs
        for w in (self.inv_text, self.hex_text, self.diff_text, self.report_text):
            w.tag_configure("head", foreground=COL_HEAD, font=("Consolas", 9, "bold"))
            w.tag_configure("bw", foreground=COL_BW)
            w.tag_configure("s3", foreground=COL_S3)
            w.tag_configure("changed", foreground=COL_DIFF)
            w.tag_configure("known", foreground=COL_KNOW)
            w.tag_configure("dim", foreground=FG_DIM)
            w.tag_configure("normal", foreground=FG)
            w.tag_configure("warn", foreground=YELLOW)
            w.tag_configure("addr", foreground=ORANGE)

    def _make_text_tab(self, title: str) -> tk.Text:
        frame = tk.Frame(self.nb, bg=BG)
        self.nb.add(frame, text=title)
        w = tk.Text(frame, bg=BG, fg=FG, font=MONO, state="disabled",
                    relief="flat", wrap="none", insertbackground=FG,
                    selectbackground=BG3, selectforeground="#ffffff")
        vsb = ttk.Scrollbar(frame, orient="vertical", command=w.yview)
        hsb = ttk.Scrollbar(frame, orient="horizontal", command=w.xview)
        w.configure(yscrollcommand=vsb.set, xscrollcommand=hsb.set)
        vsb.pack(side=tk.RIGHT, fill=tk.Y)
        hsb.pack(side=tk.BOTTOM, fill=tk.X)
        w.pack(fill=tk.BOTH, expand=True)
        return w

    def _build_query_tab(self) -> None:
        """Build the Query tab: filter controls + results table + interpretation panel."""
        frame = tk.Frame(self.nb, bg=BG)
        self.nb.add(frame, text="Query DB")

        # ── Filter row ────────────────────────────────────────────────────
        filt = tk.Frame(frame, bg=BG2, pady=4)
        filt.pack(side=tk.TOP, fill=tk.X)

        pad = {"padx": 4, "pady": 2}

        # Capture filter
        tk.Label(filt, text="Capture:", bg=BG2, fg=FG, font=MONO_SM).grid(row=0, column=0, sticky="e", **pad)
        self._q_capture_var = tk.StringVar(value="All")
        self._q_capture_cb = ttk.Combobox(filt, textvariable=self._q_capture_var,
                                          width=18, font=MONO_SM, state="readonly")
        self._q_capture_cb.grid(row=0, column=1, sticky="w", **pad)

        # Direction filter
        tk.Label(filt, text="Dir:", bg=BG2, fg=FG, font=MONO_SM).grid(row=0, column=2, sticky="e", **pad)
        self._q_dir_var = tk.StringVar(value="All")
        self._q_dir_cb = ttk.Combobox(filt, textvariable=self._q_dir_var,
                                      values=["All", "BW", "S3"],
                                      width=6, font=MONO_SM, state="readonly")
        self._q_dir_cb.grid(row=0, column=3, sticky="w", **pad)

        # SUB filter
        tk.Label(filt, text="SUB:", bg=BG2, fg=FG, font=MONO_SM).grid(row=0, column=4, sticky="e", **pad)
        self._q_sub_var = tk.StringVar(value="All")
        self._q_sub_cb = ttk.Combobox(filt, textvariable=self._q_sub_var,
                                      width=12, font=MONO_SM, state="readonly")
        self._q_sub_cb.grid(row=0, column=5, sticky="w", **pad)

        # Byte offset filter
        tk.Label(filt, text="Offset:", bg=BG2, fg=FG, font=MONO_SM).grid(row=0, column=6, sticky="e", **pad)
        self._q_offset_var = tk.StringVar(value="")
        tk.Entry(filt, textvariable=self._q_offset_var, width=8, bg=BG3, fg=FG,
                 font=MONO_SM, insertbackground=FG, relief="flat").grid(row=0, column=7, sticky="w", **pad)

        # Value filter
        tk.Label(filt, text="Value:", bg=BG2, fg=FG, font=MONO_SM).grid(row=0, column=8, sticky="e", **pad)
        self._q_value_var = tk.StringVar(value="")
        tk.Entry(filt, textvariable=self._q_value_var, width=8, bg=BG3, fg=FG,
                 font=MONO_SM, insertbackground=FG, relief="flat").grid(row=0, column=9, sticky="w", **pad)

        # Run / Refresh buttons
        tk.Button(filt, text="Run Query", bg=ACCENT, fg="#ffffff", relief="flat",
                  padx=8, cursor="hand2", font=("Consolas", 8, "bold"),
                  command=self._run_db_query).grid(row=0, column=10, padx=8)
        tk.Button(filt, text="Refresh dropdowns", bg=BG3, fg=FG, relief="flat",
                  padx=6, cursor="hand2", font=MONO_SM,
                  command=self._refresh_query_dropdowns).grid(row=0, column=11, padx=4)

        # DB stats label
        self._q_stats_var = tk.StringVar(value="DB: —")
        tk.Label(filt, textvariable=self._q_stats_var, bg=BG2, fg=FG_DIM,
                 font=MONO_SM).grid(row=0, column=12, padx=12, sticky="w")

        # ── Results table ─────────────────────────────────────────────────
        res_frame = tk.Frame(frame, bg=BG)
        res_frame.pack(side=tk.TOP, fill=tk.BOTH, expand=True)

        # Results treeview
        cols = ("cap", "sess", "dir", "sub", "sub_name", "page", "len", "chk")
        self._q_tree = ttk.Treeview(res_frame, columns=cols,
                                    show="headings", selectmode="browse")
        col_cfg = [
            ("cap", "Cap", 40),
            ("sess", "Sess", 40),
            ("dir", "Dir", 40),
            ("sub", "SUB", 50),
            ("sub_name", "Name", 160),
            ("page", "Page", 60),
            ("len", "Len", 50),
            ("chk", "Chk", 50),
        ]
        for cid, heading, width in col_cfg:
            self._q_tree.heading(cid, text=heading, anchor="w")
            self._q_tree.column(cid, width=width, stretch=(cid == "sub_name"))

        q_vsb = ttk.Scrollbar(res_frame, orient="vertical", command=self._q_tree.yview)
        q_hsb = ttk.Scrollbar(res_frame, orient="horizontal", command=self._q_tree.xview)
        self._q_tree.configure(yscrollcommand=q_vsb.set, xscrollcommand=q_hsb.set)
        q_vsb.pack(side=tk.RIGHT, fill=tk.Y)
        q_hsb.pack(side=tk.BOTTOM, fill=tk.X)
        self._q_tree.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

        self._q_tree.tag_configure("bw_row", foreground=COL_BW)
        self._q_tree.tag_configure("s3_row", foreground=COL_S3)
        self._q_tree.tag_configure("bad_row", foreground=RED)

        # ── Interpretation panel (below results) ──────────────────────────
        interp_frame = tk.Frame(frame, bg=BG2, height=120)
        interp_frame.pack(side=tk.BOTTOM, fill=tk.X)
        interp_frame.pack_propagate(False)

        tk.Label(interp_frame, text="Byte interpretation (click a row, enter offset):",
                 bg=BG2, fg=ACCENT, font=MONO_SM, anchor="w", padx=6).pack(fill=tk.X)

        interp_inner = tk.Frame(interp_frame, bg=BG2)
        interp_inner.pack(fill=tk.X, padx=6, pady=2)

        tk.Label(interp_inner, text="Offset:", bg=BG2, fg=FG, font=MONO_SM).pack(side=tk.LEFT)
        self._interp_offset_var = tk.StringVar(value="5")
        tk.Entry(interp_inner, textvariable=self._interp_offset_var,
                 width=6, bg=BG3, fg=FG, font=MONO_SM,
                 insertbackground=FG, relief="flat").pack(side=tk.LEFT, padx=4)
        tk.Button(interp_inner, text="Interpret", bg=BG3, fg=FG, relief="flat",
                  cursor="hand2", font=MONO_SM,
                  command=self._run_interpret).pack(side=tk.LEFT, padx=4)

        self._interp_text = tk.Text(interp_frame, bg=BG2, fg=FG, font=MONO_SM,
                                    height=4, relief="flat", state="disabled",
                                    insertbackground=FG)
        self._interp_text.pack(fill=tk.X, padx=6, pady=2)
        self._interp_text.tag_configure("label", foreground=FG_DIM)
        self._interp_text.tag_configure("value", foreground=YELLOW)

        # Store frame rows by tree iid -> db row
        self._q_rows: dict[str, object] = {}
        self._q_capture_rows: list = [None]
        self._q_sub_values: list = [None]
        self._q_tree.bind("<<TreeviewSelect>>", self._on_q_select)

        # Init dropdowns
        self._refresh_query_dropdowns()

    def _refresh_query_dropdowns(self) -> None:
        """Reload capture and SUB dropdowns from the DB."""
        try:
            captures = self._db.list_captures()
            cap_labels = ["All"] + [
                f"#{r['id']} {r['timestamp'][:16]} ({r['frame_count']} frames)"
                for r in captures
            ]
            self._q_capture_cb["values"] = cap_labels
            self._q_capture_rows = [None] + [r["id"] for r in captures]

            subs = self._db.get_distinct_subs()
            sub_labels = ["All"] + [f"0x{s:02X}" for s in subs]
            self._q_sub_cb["values"] = sub_labels
            self._q_sub_values = [None] + subs

            stats = self._db.get_stats()
            self._q_stats_var.set(
                f"DB: {stats['captures']} captures | {stats['frames']} frames"
            )
        except Exception as exc:
            self._q_stats_var.set(f"DB error: {exc}")

    def _parse_hex_or_int(self, s: str) -> Optional[int]:
        """Parse '0x1F', '31', or '' into int or None."""
        s = s.strip()
        if not s:
            return None
        try:
            return int(s, 0)
        except ValueError:
            return None

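The helper above leans on `int(s, 0)`, which applies Python literal rules: a `0x` prefix parses as hex, `0b` as binary, `0o` as octal, and bare digits as decimal. A standalone copy (a sketch; in the GUI it is a method on the class) shows the behaviour the Offset/Value entry fields depend on:

```python
from typing import Optional

def parse_hex_or_int(s: str) -> Optional[int]:
    """Parse '0x1F', '31', or '' into int or None, as the GUI filter fields do."""
    s = s.strip()
    if not s:
        return None          # empty field means "no filter"
    try:
        return int(s, 0)     # base 0: honour 0x/0o/0b prefixes, else decimal
    except ValueError:
        return None          # garbage input silently disables the filter

print(parse_hex_or_int("0x1F"), parse_hex_or_int("31"),
      parse_hex_or_int(""), parse_hex_or_int("zz"))
```

Note that base-0 parsing rejects decimal strings with leading zeros (e.g. `"031"`), which is acceptable here since an unparsable entry simply means no filter is applied.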
    def _run_db_query(self) -> None:
        """Execute query with current filter values and populate results tree."""
        # Resolve capture_id
        cap_idx = self._q_capture_cb.current()
        cap_id = self._q_capture_rows[cap_idx] if cap_idx > 0 else None

        # Direction
        dir_val = self._q_dir_var.get()
        direction = dir_val if dir_val != "All" else None

        # SUB
        sub_idx = self._q_sub_cb.current()
        sub = self._q_sub_values[sub_idx] if sub_idx > 0 else None

        # Offset / value
        offset = self._parse_hex_or_int(self._q_offset_var.get())
        value = self._parse_hex_or_int(self._q_value_var.get())

        try:
            if offset is not None:
                rows = self._db.query_by_byte(
                    offset=offset, value=value,
                    capture_id=cap_id, direction=direction, sub=sub
                )
            else:
                rows = self._db.query_frames(
                    capture_id=cap_id, direction=direction, sub=sub
                )
        except Exception as exc:
            messagebox.showerror("Query error", str(exc))
            return

        # Populate tree
        self._q_tree.delete(*self._q_tree.get_children())
        self._q_rows.clear()

        for row in rows:
            sub_hex = f"0x{row['sub']:02X}" if row["sub"] is not None else "—"
            page_hex = f"0x{row['page_key']:04X}" if row["page_key"] is not None else "—"
            chk_str = {1: "OK", 0: "BAD", None: "—"}.get(row["checksum_ok"], "—")
            tag = "bw_row" if row["direction"] == "BW" else "s3_row"
            if row["checksum_ok"] == 0:
                tag = "bad_row"

            iid = str(row["id"])
            self._q_tree.insert("", tk.END, iid=iid, tags=(tag,), values=(
                row["capture_id"],
                row["session_idx"],
                row["direction"],
                sub_hex,
                row["sub_name"] or "",
                page_hex,
                row["payload_len"],
                chk_str,
            ))
            self._q_rows[iid] = row

        self.sb_var.set(f"Query returned {len(rows)} rows")

    def _on_q_select(self, _event: tk.Event) -> None:
        """When a DB result row is selected, auto-run interpret at current offset."""
        self._run_interpret()

    def _run_interpret(self) -> None:
        """Show multi-format byte interpretation for the selected row + offset."""
        sel = self._q_tree.selection()
        if not sel:
            return
        iid = sel[0]
        row = self._q_rows.get(iid)
        if row is None:
            return

        offset = self._parse_hex_or_int(self._interp_offset_var.get())
        if offset is None:
            return

        payload = bytes(row["payload"])
        interp = self._db.interpret_offset(payload, offset)

        w = self._interp_text
        w.configure(state="normal")
        w.delete("1.0", tk.END)

        sub_hex = f"0x{row['sub']:02X}" if row["sub"] is not None else "??"
        w.insert(tk.END, f"Frame #{row['id']} [{row['direction']}] SUB={sub_hex} "
                         f"offset={offset} (0x{offset:04X})\n", "label")

        label_order = [
            ("uint8", "uint8 "),
            ("int8", "int8 "),
            ("uint16_be", "uint16 BE "),
            ("uint16_le", "uint16 LE "),
            ("uint32_be", "uint32 BE "),
            ("uint32_le", "uint32 LE "),
            ("float32_be", "float32 BE "),
            ("float32_le", "float32 LE "),
        ]
        line = ""
        for key, label in label_order:
            if key in interp:
                val = interp[key]
                if isinstance(val, float):
                    val_str = f"{val:.6g}"
                else:
                    val_str = str(val)
                if key.startswith("uint") or key.startswith("int"):
                    val_str += f" (0x{int(val) & 0xFFFFFFFF:X})"
                chunk = f"{label}: {val_str}"
                line += f" {chunk:<30}"
                if len(line) > 80:
                    w.insert(tk.END, line + "\n", "value")
                    line = ""
        if line:
            w.insert(tk.END, line + "\n", "value")

        w.configure(state="disabled")

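`_run_interpret` relies on `FrameDB.interpret_offset`, whose backing `_interp_bytes` lives in frame_db.py and is not shown in this diff. A hedged sketch of what such a multi-format interpreter plausibly does (an assumption, not the repository's implementation): decode the bytes at the offset as each integer/float width in both endiannesses, skipping formats that would run past the payload, and return them keyed by the same names `label_order` uses:

```python
import struct

def interp_bytes(payload: bytes, offset: int) -> dict:
    """Sketch of a multi-format byte interpreter (assumed, not the real _interp_bytes)."""
    fmts = [
        ("uint8", "B"), ("int8", "b"),
        ("uint16_be", ">H"), ("uint16_le", "<H"),
        ("uint32_be", ">I"), ("uint32_le", "<I"),
        ("float32_be", ">f"), ("float32_le", "<f"),
    ]
    out = {}
    for name, fmt in fmts:
        size = struct.calcsize(fmt)
        if offset + size <= len(payload):   # skip widths that overrun the payload
            out[name] = struct.unpack_from(fmt, payload, offset)[0]
    return out

vals = interp_bytes(bytes([0x01, 0x02, 0x03, 0x04]), 0)
print(vals["uint16_be"], vals["uint16_le"], vals["uint32_be"])
```

Shorter payloads simply omit the wider keys, which matches the `if key in interp` guard in the display loop above.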
    def _build_statusbar(self) -> None:
        bar = tk.Frame(self, bg=BG3, height=20)
        bar.pack(side=tk.BOTTOM, fill=tk.X)
        self.sb_var = tk.StringVar(value="Ready")
        tk.Label(bar, textvariable=self.sb_var, bg=BG3, fg=FG_DIM,
                 font=MONO_SM, anchor="w", padx=6).pack(fill=tk.X)

    # ── file picking ───────────────────────────────────────────────────────

    def _browse_file(self, var: tk.StringVar, default_name: str) -> None:
        path = filedialog.askopenfilename(
            title=f"Select {default_name}",
            filetypes=[("Binary files", "*.bin"), ("All files", "*.*")],
            initialfile=default_name,
        )
        if path:
            var.set(path)

    # ── analysis ──────────────────────────────────────────────────────────

    def _run_analyze(self) -> None:
        s3_path = Path(self.s3_var.get().strip()) if self.s3_var.get().strip() else None
        bw_path = Path(self.bw_var.get().strip()) if self.bw_var.get().strip() else None

        if not s3_path or not bw_path:
            messagebox.showerror("Missing files", "Please select both S3 and BW raw files.")
            return
        if not s3_path.exists():
            messagebox.showerror("File not found", f"S3 file not found:\n{s3_path}")
            return
        if not bw_path.exists():
            messagebox.showerror("File not found", f"BW file not found:\n{bw_path}")
            return

        self.state.s3_path = s3_path
        self.state.bw_path = bw_path
        self._do_analyze(s3_path, bw_path)

    def _run_export(self) -> None:
        if not self.state.sessions:
            messagebox.showinfo("Export", "Run Analyze first.")
            return

        outdir = self.state.s3_path.parent if self.state.s3_path else Path(".")
        out_path = write_claude_export(
            self.state.sessions,
            self.state.diffs,
            outdir,
            self.state.s3_path,
            self.state.bw_path,
        )

        self.sb_var.set(f"Exported: {out_path.name}")
        if messagebox.askyesno(
            "Export complete",
            f"Saved to:\n{out_path}\n\nOpen the folder?",
        ):
            import subprocess
            subprocess.Popen(["explorer", str(out_path.parent)])

    def _do_analyze(self, s3_path: Path, bw_path: Path) -> None:
        self.status_var.set("Parsing...")
        self.update_idletasks()

        s3_blob = s3_path.read_bytes()
        bw_blob = bw_path.read_bytes()

        s3_frames = annotate_frames(parse_s3(s3_blob, trailer_len=0), "S3")
        bw_frames = annotate_frames(parse_bw(bw_blob, trailer_len=0, validate_checksum=True), "BW")

        sessions = split_into_sessions(bw_frames, s3_frames)

        diffs: list[Optional[list[FrameDiff]]] = [None]
        for i in range(1, len(sessions)):
            diffs.append(diff_sessions(sessions[i - 1], sessions[i]))

        self.state.sessions = sessions
        self.state.diffs = diffs

        n_s3 = sum(len(s.s3_frames) for s in sessions)
        n_bw = sum(len(s.bw_frames) for s in sessions)
        self.status_var.set(
            f"{len(sessions)} sessions | BW: {n_bw} frames  S3: {n_s3} frames"
        )
        self.sb_var.set(f"Loaded: {s3_path.name} + {bw_path.name}")

        self.export_btn.configure(state="normal")
        self._rebuild_tree()

        # Auto-ingest into DB (deduped by SHA256 — fast no-op on re-analyze)
        try:
            cap_id = self._db.ingest(sessions, s3_path, bw_path)
            if cap_id is not None:
                self.state.last_capture_id = cap_id
                self._refresh_query_dropdowns()
                # Pre-select this capture in the Query tab
                cap_labels = list(self._q_capture_cb["values"])
                # Find label that starts with #<cap_id>
                for i, lbl in enumerate(cap_labels):
                    if lbl.startswith(f"#{cap_id} "):
                        self._q_capture_cb.current(i)
                        break
            # else: already ingested — no change to dropdown selection
        except Exception as exc:
            self.sb_var.set(f"DB ingest error: {exc}")

    # ── tree building ──────────────────────────────────────────────────────

    def _rebuild_tree(self) -> None:
        self.tree.delete(*self.tree.get_children())

        for sess in self.state.sessions:
            is_complete = any(
                af.header is not None and af.header.sub == 0x74
                for af in sess.bw_frames
            )
            label = f"Session {sess.index}"
            if not is_complete:
                label += " [partial]"
            n_diff = len(self.state.diffs[sess.index] or [])
            diff_info = f"{n_diff} changes" if n_diff > 0 else ""
            sess_id = self.tree.insert("", tk.END, text=label,
                                       values=(diff_info,), tags=("session",))

            for af in sess.all_frames:
                src_tag = "bw_frame" if af.source == "BW" else "s3_frame"
                sub_hex = f"{af.header.sub:02X}" if af.header else "??"
                label_text = f"[{af.source}] {sub_hex} {af.sub_name}"
                extra = ""
                tags = (src_tag,)
                if af.frame.checksum_valid is False:
                    extra = "BAD CHK"
                    tags = ("bad_chk",)
                elif af.header is None:
                    tags = ("malformed",)
                    label_text = f"[{af.source}] MALFORMED"
                self.tree.insert(sess_id, tk.END, text=label_text,
                                 values=(extra,), tags=tags,
                                 iid=f"frame_{sess.index}_{af.frame.index}_{af.source}")

        # Expand all sessions
        for item in self.tree.get_children():
            self.tree.item(item, open=True)

    # ── tree selection → detail panel ─────────────────────────────────────

    def _on_tree_select(self, _event: tk.Event) -> None:
        sel = self.tree.selection()
        if not sel:
            return
        iid = sel[0]

        # Determine if it's a session node or a frame node
        if iid.startswith("frame_"):
            # frame_<sessidx>_<frameidx>_<source>
            parts = iid.split("_")
            sess_idx = int(parts[1])
            frame_idx = int(parts[2])
            source = parts[3]
            self._show_frame_detail(sess_idx, frame_idx, source)
        else:
            # Session node — show session summary
            # Find session index from text
            text = self.tree.item(iid, "text")
            try:
                idx = int(text.split()[1])
                self._show_session_detail(idx)
            except (IndexError, ValueError):
                pass

    def _find_frame(self, sess_idx: int, frame_idx: int, source: str) -> Optional[AnnotatedFrame]:
        if sess_idx >= len(self.state.sessions):
            return None
        sess = self.state.sessions[sess_idx]
        pool = sess.bw_frames if source == "BW" else sess.s3_frames
        for af in pool:
            if af.frame.index == frame_idx:
                return af
        return None

    # ── detail renderers ──────────────────────────────────────────────────

    def _clear_all_tabs(self) -> None:
        for w in (self.inv_text, self.hex_text, self.diff_text, self.report_text):
            self._text_clear(w)

    def _show_session_detail(self, sess_idx: int) -> None:
        if sess_idx >= len(self.state.sessions):
            return
        sess = self.state.sessions[sess_idx]
        diffs = self.state.diffs[sess_idx]

        self._clear_all_tabs()

        # ── Inventory tab ────────────────────────────────────────────────
        w = self.inv_text
        self._text_clear(w)
        self._tw(w, f"SESSION {sess.index}", "head"); self._tn(w)
        n_bw, n_s3 = len(sess.bw_frames), len(sess.s3_frames)
        self._tw(w, f"Frames: {n_bw + n_s3} (BW: {n_bw}, S3: {n_s3})\n", "normal")
        if n_bw != n_s3:
            self._tw(w, "  WARNING: BW/S3 count mismatch\n", "warn")
        self._tn(w)

        for seq_i, af in enumerate(sess.all_frames):
            src_tag = "bw" if af.source == "BW" else "s3"
            sub_hex = f"{af.header.sub:02X}" if af.header else "??"
            page_str = f" (page {af.header.page_key:04X})" if af.header and af.header.page_key != 0 else ""
            chk = ""
            if af.frame.checksum_valid is False:
                chk = " [BAD CHECKSUM]"
            elif af.frame.checksum_valid is True:
                chk = f" [{af.frame.checksum_type}]"
            self._tw(w, f"  [{af.source}] #{seq_i:<3} ", src_tag)
            self._tw(w, f"SUB={sub_hex} ", "addr")
            self._tw(w, f"{af.sub_name:<30}", src_tag)
            self._tw(w, f"{page_str} len={len(af.frame.payload)}", "dim")
            if chk:
                self._tw(w, chk, "warn" if af.frame.checksum_valid is False else "dim")
            self._tn(w)

        # ── Diff tab ─────────────────────────────────────────────────────
        w = self.diff_text
        self._text_clear(w)
        if diffs is None:
            self._tw(w, "(No previous session to diff against)\n", "dim")
        elif not diffs:
            self._tw(w, f"DIFF vs SESSION {sess_idx - 1}\n", "head"); self._tn(w)
            self._tw(w, "  No changes detected.\n", "dim")
        else:
            self._tw(w, f"DIFF vs SESSION {sess_idx - 1}\n", "head"); self._tn(w)
            for fd in diffs:
                page_str = f" (page {fd.page_key:04X})" if fd.page_key != 0 else ""
                self._tw(w, f"\n  SUB {fd.sub:02X} ({fd.sub_name}){page_str}:\n", "addr")
                for bd in fd.diffs:
                    before_s = f"{bd.before:02x}" if bd.before >= 0 else "--"
                    after_s = f"{bd.after:02x}" if bd.after >= 0 else "--"
                    self._tw(w, f"    [{bd.payload_offset:3d}] 0x{bd.payload_offset:04X}: ", "dim")
                    self._tw(w, f"{before_s} -> {after_s}", "changed")
                    if bd.field_name:
                        self._tw(w, f" [{bd.field_name}]", "known")
                    self._tn(w)

        # ── Full Report tab ───────────────────────────────────────────────
        report_text = render_session_report(sess, diffs, sess_idx - 1 if sess_idx > 0 else None)
        w = self.report_text
        self._text_clear(w)
        self._tw(w, report_text, "normal")

        # Switch to Inventory tab
        self.nb.select(0)

def _show_frame_detail(self, sess_idx: int, frame_idx: int, source: str) -> None:
|
||||
af = self._find_frame(sess_idx, frame_idx, source)
|
||||
if af is None:
|
||||
return
|
||||
|
||||
self._clear_all_tabs()
|
||||
src_tag = "bw" if source == "BW" else "s3"
|
||||
sub_hex = f"{af.header.sub:02X}" if af.header else "??"
|
||||
|
||||
# ── Inventory tab — single frame summary ─────────────────────────
|
||||
w = self.inv_text
|
||||
self._tw(w, f"[{af.source}] Frame #{af.frame.index}\n", src_tag)
|
||||
self._tw(w, f"Session {sess_idx} | ", "dim")
|
||||
self._tw(w, f"SUB={sub_hex} {af.sub_name}\n", "addr")
|
||||
if af.header:
|
||||
self._tw(w, f" OFFSET: {af.header.page_key:04X} ", "dim")
|
||||
self._tw(w, f"CMD={af.header.cmd:02X} FLAGS={af.header.flags:02X}\n", "dim")
|
||||
self._tn(w)
|
||||
self._tw(w, f"Payload bytes: {len(af.frame.payload)}\n", "dim")
|
||||
if af.frame.checksum_valid is False:
|
||||
self._tw(w, " BAD CHECKSUM\n", "warn")
|
||||
elif af.frame.checksum_valid is True:
|
||||
self._tw(w, f" Checksum: {af.frame.checksum_type} {af.frame.checksum_hex}\n", "dim")
|
||||
self._tn(w)
|
||||
|
||||
# Protocol header breakdown
|
||||
p = af.frame.payload
|
||||
if len(p) >= 5:
|
||||
self._tw(w, "Header breakdown:\n", "head")
|
||||
self._tw(w, f" [0] CMD = {p[0]:02x}\n", "dim")
|
||||
self._tw(w, f" [1] ? = {p[1]:02x}\n", "dim")
|
||||
self._tw(w, f" [2] SUB = {p[2]:02x} ({af.sub_name})\n", src_tag)
|
||||
self._tw(w, f" [3] OFFSET_HI = {p[3]:02x}\n", "dim")
|
||||
self._tw(w, f" [4] OFFSET_LO = {p[4]:02x}\n", "dim")
|
||||
if len(p) > 5:
|
||||
self._tw(w, f" [5..] data = {len(p) - 5} bytes\n", "dim")
|
||||
|
||||
# ── Hex Dump tab ─────────────────────────────────────────────────
|
||||
w = self.hex_text
|
||||
self._tw(w, f"[{af.source}] SUB={sub_hex} {af.sub_name}\n", src_tag)
|
||||
self._tw(w, f"Payload ({len(af.frame.payload)} bytes):\n", "dim")
|
||||
self._tn(w)
|
||||
dump_lines = format_hex_dump(af.frame.payload, indent=" ")
|
||||
self._tw(w, "\n".join(dump_lines) + "\n", "normal")
|
||||
|
||||
# Annotate known field offsets within this frame
|
||||
diffs_for_sess = self.state.diffs[sess_idx] if sess_idx < len(self.state.diffs) else None
|
||||
if diffs_for_sess and af.header:
|
||||
page_key = af.header.page_key
|
||||
matching = [fd for fd in diffs_for_sess
|
||||
if fd.sub == af.header.sub and fd.page_key == page_key]
|
||||
if matching:
|
||||
self._tn(w)
|
||||
self._tw(w, "Changed bytes in this frame (vs prev session):\n", "head")
|
||||
for bd in matching[0].diffs:
|
||||
before_s = f"{bd.before:02x}" if bd.before >= 0 else "--"
|
||||
after_s = f"{bd.after:02x}" if bd.after >= 0 else "--"
|
||||
self._tw(w, f" [{bd.payload_offset:3d}] 0x{bd.payload_offset:04X}: ", "dim")
|
||||
self._tw(w, f"{before_s} -> {after_s}", "changed")
|
||||
if bd.field_name:
|
||||
self._tw(w, f" [{bd.field_name}]", "known")
|
||||
self._tn(w)
|
||||
|
||||
# Switch to Hex Dump tab for frame selection
|
||||
self.nb.select(1)
|
||||
|
||||
# ── live mode ─────────────────────────────────────────────────────────
|
||||
|
||||
def _toggle_live(self) -> None:
|
||||
if self._live_thread and self._live_thread.is_alive():
|
||||
self._live_stop.set()
|
||||
self.live_btn.configure(text="Live: OFF", bg=BG3, fg=FG)
|
||||
self.status_var.set("Live stopped")
|
||||
else:
|
||||
s3_path = Path(self.s3_var.get().strip()) if self.s3_var.get().strip() else None
|
||||
bw_path = Path(self.bw_var.get().strip()) if self.bw_var.get().strip() else None
|
||||
if not s3_path or not bw_path:
|
||||
messagebox.showerror("Missing files", "Select both raw files before starting live mode.")
|
||||
return
|
||||
self.state.s3_path = s3_path
|
||||
self.state.bw_path = bw_path
|
||||
self._live_stop.clear()
|
||||
self._live_thread = threading.Thread(
|
||||
target=self._live_worker, args=(s3_path, bw_path), daemon=True)
|
||||
self._live_thread.start()
|
||||
self.live_btn.configure(text="Live: ON", bg=GREEN, fg="#000000")
|
||||
self.status_var.set("Live mode running...")
|
||||
|
||||
def _live_worker(self, s3_path: Path, bw_path: Path) -> None:
|
||||
s3_buf = bytearray()
|
||||
bw_buf = bytearray()
|
||||
s3_pos = bw_pos = 0
|
||||
|
||||
while not self._live_stop.is_set():
|
||||
changed = False
|
||||
if s3_path.exists():
|
||||
with s3_path.open("rb") as fh:
|
||||
fh.seek(s3_pos)
|
||||
nb = fh.read()
|
||||
if nb:
|
||||
s3_buf.extend(nb); s3_pos += len(nb); changed = True
|
||||
if bw_path.exists():
|
||||
with bw_path.open("rb") as fh:
|
||||
fh.seek(bw_pos)
|
||||
nb = fh.read()
|
||||
if nb:
|
||||
bw_buf.extend(nb); bw_pos += len(nb); changed = True
|
||||
|
||||
if changed:
|
||||
self._live_q.put("refresh")
|
||||
|
||||
time.sleep(0.1)
|
||||
|
||||
def _poll_live_queue(self) -> None:
|
||||
try:
|
||||
while True:
|
||||
msg = self._live_q.get_nowait()
|
||||
if msg == "refresh" and self.state.s3_path and self.state.bw_path:
|
||||
self._do_analyze(self.state.s3_path, self.state.bw_path)
|
||||
except queue.Empty:
|
||||
pass
|
||||
finally:
|
||||
self.after(150, self._poll_live_queue)
|
||||
|
||||
# ── text helpers ──────────────────────────────────────────────────────
|
||||
|
||||
def _text_clear(self, w: tk.Text) -> None:
|
||||
w.configure(state="normal")
|
||||
w.delete("1.0", tk.END)
|
||||
# leave enabled for further inserts
|
||||
|
||||
def _tw(self, w: tk.Text, text: str, tag: str = "normal") -> None:
|
||||
"""Insert text with a colour tag."""
|
||||
w.configure(state="normal")
|
||||
w.insert(tk.END, text, tag)
|
||||
|
||||
def _tn(self, w: tk.Text) -> None:
|
||||
"""Insert newline."""
|
||||
w.configure(state="normal")
|
||||
w.insert(tk.END, "\n")
|
||||
w.configure(state="disabled")
|
||||
|
||||
|
||||
# ──────────────────────────────────────────────────────────────────────────────
|
||||
# Entry point
|
||||
# ──────────────────────────────────────────────────────────────────────────────
|
||||
|
||||
def main() -> None:
|
||||
app = AnalyzerGUI()
|
||||
app.mainloop()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
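The `_live_worker` thread above tails both capture files by remembering a byte offset, then reopening, seeking, and reading only what was appended since the last poll. A minimal standalone sketch of that tail-follow pattern (the `read_appended` helper and file names are illustrative, not part of the tool):

```python
import tempfile
from pathlib import Path

def read_appended(path: Path, pos: int) -> tuple[bytes, int]:
    """Return (new_bytes, new_pos): everything appended since offset pos."""
    with path.open("rb") as fh:
        fh.seek(pos)
        nb = fh.read()
    return nb, pos + len(nb)

# Demo: simulate a capture file growing between two polls.
with tempfile.TemporaryDirectory() as d:
    cap = Path(d) / "raw.bin"
    cap.write_bytes(b"\x41\x02\x01")
    chunk1, pos = read_appended(cap, 0)       # first poll sees the initial bytes
    with cap.open("ab") as fh:
        fh.write(b"\x02\x03")                 # capture grows
    chunk2, pos = read_appended(cap, pos)     # second poll sees only the new bytes
    assert chunk1 == b"\x41\x02\x01"
    assert chunk2 == b"\x02\x03"
```

Because only the offset is persisted between polls, a poll that finds no new bytes is cheap, which is what makes the 0.1 s polling loop viable.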
BIN  parsers/raw_bw.bin  (new file; binary file not shown)
BIN  parsers/raw_s3.bin  (new file; binary file not shown)
1204  parsers/s3_analyzer.py  (new file; diff suppressed because it is too large)
413  parsers/s3_parser.py  (new file)
@@ -0,0 +1,413 @@
#!/usr/bin/env python3
"""
s3_parser.py — Unified Instantel frame parser (S3 + BW).

Modes:
  - s3: DLE STX (10 02) ... DLE ETX (10 03)
  - bw: ACK+STX (41 02) ... ETX (03)

Stuffing:
  - A literal 0x10 in the payload is stuffed as 10 10 in both directions.

Checksums:
  - BW frames appear to use more than one checksum style depending on message type.
    Small frames often validate with a 1-byte SUM8.
    Large config/write frames appear to use a 2-byte CRC16 variant.

In BW mode we therefore validate candidate ETX positions using AUTO checksum matching:
  - SUM8 (1 byte)
  - CRC16 variants (2 bytes), both little- and big-endian
If any match, we accept the ETX as a real frame terminator.
"""

from __future__ import annotations

import argparse
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Dict, List, Optional, Tuple

DLE = 0x10
STX = 0x02
ETX = 0x03
ACK = 0x41

__version__ = "0.2.2"


@dataclass
class Frame:
    index: int
    start_offset: int
    end_offset: int
    payload_raw: bytes  # de-stuffed bytes between STX..ETX (includes checksum bytes at end)
    payload: bytes      # payload without checksum bytes
    trailer: bytes
    checksum_valid: Optional[bool]
    checksum_type: Optional[str]
    checksum_hex: Optional[str]


# ------------------------
# Checksum / CRC helpers
# ------------------------

def checksum8_sum(data: bytes) -> int:
    """SUM8: sum(payload) & 0xFF"""
    return sum(data) & 0xFF


def crc16_ibm(data: bytes) -> int:
    # CRC-16/IBM (aka ARC): poly=0xA001 (reflected 0x8005), init=0x0000, refin/refout true
    crc = 0x0000
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if (crc & 1) else (crc >> 1)
    return crc & 0xFFFF


def crc16_ccitt_false(data: bytes) -> int:
    # CRC-16/CCITT-FALSE: poly=0x1021, init=0xFFFF, refin/refout false
    crc = 0xFFFF
    for b in data:
        crc ^= (b << 8)
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if (crc & 0x8000) else (crc << 1) & 0xFFFF
    return crc


def crc16_x25(data: bytes) -> int:
    # CRC-16/X-25: poly=0x8408 (reflected 0x1021), init=0xFFFF, xorout=0xFFFF
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if (crc & 1) else (crc >> 1)
    return (crc ^ 0xFFFF) & 0xFFFF


CRC16_FUNCS: Dict[str, Callable[[bytes], int]] = {
    "CRC16_IBM": crc16_ibm,
    "CRC16_CCITT_FALSE": crc16_ccitt_false,
    "CRC16_X25": crc16_x25,
}
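These CRC variants can be sanity-checked against the published "check" value, the CRC of the ASCII string "123456789". A self-contained sketch mirroring the loop style above (function bodies duplicated here so the snippet runs on its own):

```python
def crc16_arc(data: bytes) -> int:
    # CRC-16/ARC (a.k.a. CRC-16/IBM): poly=0xA001 reflected, init=0x0000
    crc = 0x0000
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if (crc & 1) else (crc >> 1)
    return crc & 0xFFFF

def crc16_x25(data: bytes) -> int:
    # CRC-16/X-25: poly=0x8408 reflected, init=0xFFFF, xorout=0xFFFF
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if (crc & 1) else (crc >> 1)
    return (crc ^ 0xFFFF) & 0xFFFF

check = b"123456789"
assert crc16_arc(check) == 0xBB3D   # published CRC-16/ARC check value
assert crc16_x25(check) == 0x906E   # published CRC-16/X-25 check value
```

Running these assertions is a quick way to confirm a reimplementation before trusting it to classify frame boundaries.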


def _try_validate_sum8(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    body = payload + chk8
    Returns (payload, chk_bytes, type) if valid, else None.
    """
    if len(body) < 1:
        return None
    payload = body[:-1]
    chk = body[-1]
    if checksum8_sum(payload) == chk:
        return payload, bytes([chk]), "SUM8"
    return None


def _try_validate_sum8_large(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    Large BW->S3 write-frame checksum (SUBs 68, 69, 71, 82, 1A with data).

    Formula: (sum(b for b in body[2:-1] if b != 0x10) + 0x10) & 0xFF
      - Starts from byte [2], skipping CMD (0x10) and DLE (0x10) at [0][1]
      - Skips all 0x10 bytes in the covered range
      - Adds 0x10 as a constant offset
      - body[-1] is the checksum byte

    Confirmed across 20 frames from two independent captures (2026-03-12).
    """
    if len(body) < 3:
        return None
    payload = body[:-1]
    chk = body[-1]
    calc = (sum(b for b in payload[2:] if b != 0x10) + 0x10) & 0xFF
    if calc == chk:
        return payload, bytes([chk]), "SUM8_LARGE"
    return None


def _try_validate_crc16(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    body = payload + crc16 (2 bytes).
    Try multiple CRC16 types and both endian interpretations.
    Returns (payload, chk_bytes, type) if valid, else None.
    """
    if len(body) < 2:
        return None
    payload = body[:-2]
    chk_bytes = body[-2:]

    given_le = int.from_bytes(chk_bytes, "little", signed=False)
    given_be = int.from_bytes(chk_bytes, "big", signed=False)

    for name, fn in CRC16_FUNCS.items():
        calc = fn(payload)
        if calc == given_le:
            return payload, chk_bytes, f"{name}_LE"
        if calc == given_be:
            return payload, chk_bytes, f"{name}_BE"
    return None


def validate_bw_body_auto(body: bytes) -> Optional[Tuple[bytes, bytes, str]]:
    """
    Try to interpret the tail of body as a checksum in several ways.
    Return (payload, checksum_bytes, checksum_type) if any match; else None.
    """
    # Prefer plain SUM8 first (small frames: POLL, read commands)
    hit = _try_validate_sum8(body)
    if hit:
        return hit

    # Large BW->S3 write frames (SUBs 68, 69, 71, 82, 1A with data)
    hit = _try_validate_sum8_large(body)
    if hit:
        return hit

    # Then CRC16 variants
    hit = _try_validate_crc16(body)
    if hit:
        return hit

    return None


# ------------------------
# S3 MODE (DLE framed)
# ------------------------

def parse_s3(blob: bytes, trailer_len: int) -> List[Frame]:
    frames: List[Frame] = []

    IDLE = 0
    IN_FRAME = 1
    AFTER_DLE = 2

    state = IDLE
    body = bytearray()
    start_offset = 0
    idx = 0

    i = 0
    n = len(blob)

    while i < n:
        b = blob[i]

        if state == IDLE:
            if b == DLE and i + 1 < n and blob[i + 1] == STX:
                start_offset = i
                body.clear()
                state = IN_FRAME
                i += 2
                continue

        elif state == IN_FRAME:
            if b == DLE:
                state = AFTER_DLE
                i += 1
                continue
            body.append(b)

        else:  # AFTER_DLE
            if b == DLE:
                body.append(DLE)
                state = IN_FRAME
                i += 1
                continue

            if b == ETX:
                end_offset = i + 1
                trailer_start = i + 1
                trailer_end = trailer_start + trailer_len
                trailer = blob[trailer_start:trailer_end]

                # For S3 mode we don't assume checksum type here yet.
                frames.append(Frame(
                    index=idx,
                    start_offset=start_offset,
                    end_offset=end_offset,
                    payload_raw=bytes(body),
                    payload=bytes(body),
                    trailer=trailer,
                    checksum_valid=None,
                    checksum_type=None,
                    checksum_hex=None,
                ))

                idx += 1
                state = IDLE
                i = trailer_end
                continue

            # Unexpected DLE + byte -> treat as literal data
            body.append(DLE)
            body.append(b)
            state = IN_FRAME
            i += 1
            continue

        i += 1

    return frames
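The DLE state machine above undoes the stuffing rule from the module docstring: a literal 0x10 travels as 10 10, and DLE ETX terminates the frame. A minimal round-trip sketch of that framing (helper names are illustrative, and the checksum/trailer handling is omitted for brevity):

```python
DLE, STX, ETX = 0x10, 0x02, 0x03

def s3_frame(payload: bytes) -> bytes:
    # Stuff each literal 0x10 as 10 10, then wrap in DLE STX ... DLE ETX.
    stuffed = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + stuffed + bytes([DLE, ETX])

def s3_unframe(frame: bytes) -> bytes:
    # Inverse: strip the framing and collapse 10 10 back to a single 0x10.
    assert frame[:2] == bytes([DLE, STX]) and frame[-2:] == bytes([DLE, ETX])
    body, i = bytearray(), 2
    while i < len(frame) - 2:
        if frame[i] == DLE and frame[i + 1] == DLE:   # stuffed literal 0x10
            body.append(DLE); i += 2
        else:
            body.append(frame[i]); i += 1
    return bytes(body)

payload = bytes([0x10, 0x05, 0x10, 0x10, 0xAA])
assert s3_unframe(s3_frame(payload)) == payload
```

The round trip is the property that matters: without stuffing, any 0x10 in the payload would be misread as the start of a control sequence.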


# ------------------------
# BW MODE (ACK+STX framed, bare ETX)
# ------------------------

def parse_bw(blob: bytes, trailer_len: int, validate_checksum: bool) -> List[Frame]:
    frames: List[Frame] = []

    IDLE = 0
    IN_FRAME = 1
    AFTER_DLE = 2

    state = IDLE
    body = bytearray()
    start_offset = 0
    idx = 0

    i = 0
    n = len(blob)

    while i < n:
        b = blob[i]

        if state == IDLE:
            # Frame start signature: ACK + STX
            if b == ACK and i + 1 < n and blob[i + 1] == STX:
                start_offset = i
                body.clear()
                state = IN_FRAME
                i += 2
                continue
            i += 1
            continue

        if state == IN_FRAME:
            if b == DLE:
                state = AFTER_DLE
                i += 1
                continue

            if b == ETX:
                # Candidate end-of-frame.
                # Accept ETX if the next bytes look like a real next-frame start (ACK+STX),
                # or we're at EOF. This prevents chopping on in-payload 0x03.
                next_is_start = (i + 2 < n and blob[i + 1] == ACK and blob[i + 2] == STX)
                at_eof = (i == n - 1)

                if not (next_is_start or at_eof):
                    # Not a real boundary -> payload byte
                    body.append(ETX)
                    i += 1
                    continue

                trailer_start = i + 1
                trailer_end = trailer_start + trailer_len
                trailer = blob[trailer_start:trailer_end]

                chk_valid = None
                chk_type = None
                chk_hex = None
                payload = bytes(body)

                if validate_checksum:
                    hit = validate_bw_body_auto(payload)
                    if hit:
                        payload, chk_bytes, chk_type = hit
                        chk_valid = True
                        chk_hex = chk_bytes.hex()
                    else:
                        chk_valid = False

                frames.append(Frame(
                    index=idx,
                    start_offset=start_offset,
                    end_offset=i + 1,
                    payload_raw=bytes(body),
                    payload=payload,
                    trailer=trailer,
                    checksum_valid=chk_valid,
                    checksum_type=chk_type,
                    checksum_hex=chk_hex,
                ))
                idx += 1
                state = IDLE
                i = trailer_end
                continue

            # Normal byte
            body.append(b)
            i += 1
            continue

        # AFTER_DLE: DLE XX => literal XX for any XX (full DLE stuffing)
        body.append(b)
        state = IN_FRAME
        i += 1

    return frames


# ------------------------
# CLI
# ------------------------

def main() -> None:
    ap = argparse.ArgumentParser(description="Parse Instantel S3/BW binary captures.")
    ap.add_argument("binfile", type=Path)
    ap.add_argument("--mode", choices=["s3", "bw"], default="s3")
    ap.add_argument("--trailer-len", type=int, default=0)
    ap.add_argument("--no-checksum", action="store_true")
    ap.add_argument("--out", type=Path, default=None)

    args = ap.parse_args()

    print(f"s3_parser v{__version__}")

    blob = args.binfile.read_bytes()

    if args.mode == "s3":
        frames = parse_s3(blob, args.trailer_len)
    else:
        frames = parse_bw(blob, args.trailer_len, validate_checksum=not args.no_checksum)

    print("Frames found:", len(frames))

    def to_hex(b: bytes) -> str:
        return b.hex()

    lines = []
    for f in frames:
        obj = {
            "index": f.index,
            "start_offset": f.start_offset,
            "end_offset": f.end_offset,
            "payload_len": len(f.payload),
            "payload_hex": to_hex(f.payload),
            "trailer_hex": to_hex(f.trailer),
            "checksum_valid": f.checksum_valid,
            "checksum_type": f.checksum_type,
            "checksum_hex": f.checksum_hex,
        }
        lines.append(json.dumps(obj))

    if args.out:
        args.out.write_text("\n".join(lines) + "\n", encoding="utf-8")
        print(f"Wrote: {args.out}")
    else:
        for line in lines[:10]:
            print(line)
        if len(lines) > 10:
            print(f"... ({len(lines) - 10} more)")


if __name__ == "__main__":
    main()
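To make the AUTO-checksum acceptance rule concrete: a small BW-style frame carries its 1-byte SUM8 immediately before the bare ETX, so the de-stuffed body between ACK+STX and ETX is payload plus one checksum byte. A sketch that builds such a frame and checks it the way `validate_bw_body_auto` would accept SUM8 (DLE stuffing omitted for brevity; the payload bytes are illustrative, not a real command):

```python
ACK, STX, ETX = 0x41, 0x02, 0x03

def bw_frame_sum8(payload: bytes) -> bytes:
    # Append the SUM8 checksum, then wrap: ACK STX <payload> <chk> ETX.
    chk = sum(payload) & 0xFF
    return bytes([ACK, STX]) + payload + bytes([chk, ETX])

frame = bw_frame_sum8(bytes([0x05, 0x00, 0x1E]))
body = frame[2:-1]                        # bytes between ACK+STX and ETX
assert sum(body[:-1]) & 0xFF == body[-1]  # SUM8 over payload matches the trailing byte
```

If no checksum interpretation matches, the parser treats the candidate ETX as payload rather than a terminator, which is exactly why the AUTO matching step exists.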
1538  seismo_lab.py  (new file; diff suppressed because it is too large)
0  sfm/__init__.py  (new file, empty)
496  sfm/server.py  (new file)
@@ -0,0 +1,496 @@
"""
sfm/server.py — Seismograph Field Module REST API

Wraps the minimateplus library in a small FastAPI service.
Terra-view proxies /api/sfm/* to this service (same pattern as SLMM at :8100).

Default port: 8200

Endpoints
---------
GET  /health              Service heartbeat — no device I/O
GET  /device/info         POLL + serial number + full config read
GET  /device/events       Download all stored events (headers + peak values)
POST /device/connect      Explicit connect/identify (same as /device/info)
GET  /device/event/{idx}  Single event by index (header + waveform record)

Transport query params (supply one set):
  Serial (direct RS-232 cable):
    port     — serial port name (e.g. COM5, /dev/ttyUSB0)
    baud     — baud rate (default 38400)

  TCP (modem / ACH Auto Call Home):
    host     — IP address or hostname of the modem or ACH relay
    tcp_port — TCP port number (default 12345, Blastware default)

Each call opens the connection, does its work, then closes it.
(Stateless / reconnect-per-call, matching Blastware's observed behaviour.)

Run with:
    python -m uvicorn sfm.server:app --host 0.0.0.0 --port 8200 --reload
or:
    python sfm/server.py
"""

from __future__ import annotations

import logging
import sys
from typing import Optional

# FastAPI / Pydantic
try:
    from fastapi import FastAPI, HTTPException, Query
    from fastapi.middleware.cors import CORSMiddleware
    from fastapi.responses import JSONResponse
    import uvicorn
except ImportError:
    print(
        "fastapi and uvicorn are required for the SFM server.\n"
        "Install them with: pip install fastapi uvicorn",
        file=sys.stderr,
    )
    sys.exit(1)

from minimateplus import MiniMateClient
from minimateplus.protocol import ProtocolError
from minimateplus.models import ComplianceConfig, DeviceInfo, Event, PeakValues, ProjectInfo, Timestamp
from minimateplus.transport import TcpTransport, DEFAULT_TCP_PORT

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)-7s %(name)s %(message)s",
    datefmt="%H:%M:%S",
)
log = logging.getLogger("sfm.server")
# ── FastAPI app ────────────────────────────────────────────────────────────────

app = FastAPI(
    title="Seismograph Field Module (SFM)",
    description=(
        "REST API for Instantel MiniMate Plus seismographs.\n"
        "Implements the minimateplus RS-232 protocol library.\n"
        "Proxied by terra-view at /api/sfm/*."
    ),
    version="0.1.0",
)

# Allow requests from the waveform viewer opened as a local file (file://)
# and from any dev server or terra-view proxy.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)


# ── Serialisers ────────────────────────────────────────────────────────────────
# Plain dict helpers — avoids a Pydantic dependency in the library layer.

def _serialise_timestamp(ts: Optional[Timestamp]) -> Optional[dict]:
    if ts is None:
        return None
    return {
        "year": ts.year,
        "month": ts.month,
        "day": ts.day,
        "hour": ts.hour,
        "minute": ts.minute,
        "second": ts.second,
        "clock_set": ts.clock_set,
        "display": str(ts),
    }


def _serialise_peak_values(pv: Optional[PeakValues]) -> Optional[dict]:
    if pv is None:
        return None
    return {
        "tran_in_s": pv.tran,
        "vert_in_s": pv.vert,
        "long_in_s": pv.long,
        "micl_psi": pv.micl,
        "peak_vector_sum": pv.peak_vector_sum,
    }


def _serialise_project_info(pi: Optional[ProjectInfo]) -> Optional[dict]:
    if pi is None:
        return None
    return {
        "setup_name": pi.setup_name,
        "project": pi.project,
        "client": pi.client,
        "operator": pi.operator,
        "sensor_location": pi.sensor_location,
        "notes": pi.notes,
    }


def _serialise_compliance_config(cc: Optional[ComplianceConfig]) -> Optional[dict]:
    if cc is None:
        return None
    return {
        "record_time": cc.record_time,
        "sample_rate": cc.sample_rate,
        "trigger_level_geo": cc.trigger_level_geo,
        "alarm_level_geo": cc.alarm_level_geo,
        "max_range_geo": cc.max_range_geo,
        "setup_name": cc.setup_name,
        "project": cc.project,
        "client": cc.client,
        "operator": cc.operator,
        "sensor_location": cc.sensor_location,
        "notes": cc.notes,
    }


def _serialise_device_info(info: DeviceInfo) -> dict:
    return {
        "serial": info.serial,
        "firmware_version": info.firmware_version,
        "firmware_minor": info.firmware_minor,
        "dsp_version": info.dsp_version,
        "manufacturer": info.manufacturer,
        "model": info.model,
        "event_count": info.event_count,
        "compliance_config": _serialise_compliance_config(info.compliance_config),
    }


def _serialise_event(ev: Event, debug: bool = False) -> dict:
    d: dict = {
        "index": ev.index,
        "timestamp": _serialise_timestamp(ev.timestamp),
        "sample_rate": ev.sample_rate,
        "record_type": ev.record_type,
        "peak_values": _serialise_peak_values(ev.peak_values),
        "project_info": _serialise_project_info(ev.project_info),
    }
    if debug:
        raw = getattr(ev, "_raw_record", None)
        d["raw_record_hex"] = raw.hex() if raw else None
        d["raw_record_len"] = len(raw) if raw else 0
    return d
# ── Transport factory ─────────────────────────────────────────────────────────

def _build_client(
    port: Optional[str],
    baud: int,
    host: Optional[str],
    tcp_port: int,
    timeout: float = 30.0,
) -> MiniMateClient:
    """
    Return a MiniMateClient configured for either serial or TCP transport.

    TCP takes priority if *host* is supplied; otherwise *port* (serial) is used.
    Raises HTTPException(422) if neither is provided.

    Use timeout=120.0 (or higher) for endpoints that perform a full 5A waveform
    download — a 70-second event at 1024 sps takes 2-3 minutes to transfer over
    cellular, and each individual recv must complete within the timeout window.
    """
    if host:
        transport = TcpTransport(host, port=tcp_port)
        log.debug("TCP transport: %s:%d timeout=%.0fs", host, tcp_port, timeout)
        return MiniMateClient(transport=transport, timeout=timeout)
    elif port:
        log.debug("Serial transport: %s baud=%d", port, baud)
        return MiniMateClient(port, baud)
    else:
        raise HTTPException(
            status_code=422,
            detail=(
                "Specify either 'port' (serial, e.g. ?port=COM5) "
                "or 'host' (TCP, e.g. ?host=192.168.1.50&tcp_port=12345)"
            ),
        )


def _is_tcp(host: Optional[str]) -> bool:
    return bool(host)


def _run_with_retry(fn, *, is_tcp: bool):
    """
    Call fn() and, for TCP connections only, retry once on ProtocolError.

    Rationale: when a MiniMate Plus is cold (just had its serial lines asserted
    by the modem or a local bridge), it takes 5-10 seconds to boot before it
    will respond to POLL_PROBE. The first request may time out during that boot
    window; a single automatic retry is enough to recover once the unit is up.

    Serial connections are NOT retried — a timeout there usually means a real
    problem (wrong port, wrong baud, cable unplugged).
    """
    try:
        return fn()
    except ProtocolError:
        if not is_tcp:
            raise
        log.info("TCP poll timed out (unit may have been cold) — retrying once")
        return fn()  # let any second failure propagate normally
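The retry-once policy in `_run_with_retry` can be exercised in isolation: the first call fails while the "unit boots", the automatic second call succeeds, and a second failure (or any failure with retry disabled) propagates. A self-contained sketch (the `ProtocolError` stand-in and `flaky` callable are illustrative, not part of the library):

```python
class ProtocolError(Exception):
    """Stand-in for the library's ProtocolError (illustrative)."""

def run_with_retry(fn, *, retry: bool):
    # Retry fn() exactly once on ProtocolError when retry is enabled;
    # any second failure, or a failure with retry disabled, propagates.
    try:
        return fn()
    except ProtocolError:
        if not retry:
            raise
        return fn()

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ProtocolError("cold unit")  # first poll times out while booting
    return "ok"

assert run_with_retry(flaky, retry=True) == "ok"
assert calls["n"] == 2
```

Capping the retries at one keeps a genuinely dead link from hanging a request for several multiples of the transport timeout.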
|
||||
# ── Endpoints ──────────────────────────────────────────────────────────────────
|
||||
|
||||
@app.get("/health")
|
||||
def health() -> dict:
|
||||
"""Service heartbeat. No device I/O."""
|
||||
return {"status": "ok", "service": "sfm", "version": "0.1.0"}
|
||||
|
||||
|
||||
@app.get("/device/info")
|
||||
def device_info(
|
||||
port: Optional[str] = Query(None, description="Serial port (e.g. COM5, /dev/ttyUSB0)"),
|
||||
baud: int = Query(38400, description="Serial baud rate (default 38400)"),
|
||||
host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay (e.g. 203.0.113.5)"),
|
||||
tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
|
||||
) -> dict:
|
||||
"""
|
||||
Connect to the device, perform the POLL startup handshake, and return
|
||||
identity information (serial number, firmware version, model).
|
||||
|
||||
Supply either *port* (serial) or *host* (TCP/modem).
|
||||
Equivalent to POST /device/connect — provided as GET for convenience.
|
||||
"""
|
||||
log.info("GET /device/info port=%s host=%s tcp_port=%d", port, host, tcp_port)
|
||||
|
||||
try:
|
||||
def _do():
|
||||
with _build_client(port, baud, host, tcp_port) as client:
|
||||
info = client.connect()
|
||||
# SUB 08 event_count is unreliable (always returns 1 regardless of
|
||||
# actual storage). Count via 1E/1F chain instead.
|
||||
info.event_count = client.count_events()
|
||||
return info
|
||||
info = _run_with_retry(_do, is_tcp=_is_tcp(host))
|
||||
except HTTPException:
|
||||
raise
|
||||
except ProtocolError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
|
||||
except OSError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
|
||||
except Exception as exc:
|
||||
raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc
|
||||
|
||||
return _serialise_device_info(info)
|
||||
|
||||
|
||||
@app.post("/device/connect")
|
||||
def device_connect(
|
||||
port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
|
||||
baud: int = Query(38400, description="Serial baud rate"),
|
||||
host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
|
||||
tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
|
||||
) -> dict:
|
||||
"""
|
||||
Connect to the device and return identity. POST variant for terra-view
|
||||
compatibility with the SLMM proxy pattern.
|
||||
"""
|
||||
return device_info(port=port, baud=baud, host=host, tcp_port=tcp_port)
|
||||
|
||||
|
||||
@app.get("/device/events")
|
||||
def device_events(
|
||||
port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
|
||||
baud: int = Query(38400, description="Serial baud rate"),
|
||||
host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
|
||||
tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
|
||||
debug: bool = Query(False, description="Include raw record hex for field-layout inspection"),
|
||||
) -> dict:
|
||||
"""
|
||||
Connect to the device, read the event index, and download all stored
|
||||
events (event headers + full waveform records with peak values).
|
||||
|
||||
Supply either *port* (serial) or *host* (TCP/modem).
|
||||
|
||||
Pass debug=true to include raw_record_hex in each event — useful for
|
||||
verifying field offsets against the protocol reference.
|
||||
|
||||
This does NOT download raw ADC waveform samples — those are large and
|
||||
fetched separately via GET /device/event/{idx}/waveform (future endpoint).
|
||||
"""
|
||||
log.info("GET /device/events port=%s host=%s debug=%s", port, host, debug)
|
||||
|
||||
try:
|
||||
def _do():
|
||||
with _build_client(port, baud, host, tcp_port) as client:
|
||||
return client.connect(), client.get_events(debug=debug)
|
||||
info, events = _run_with_retry(_do, is_tcp=_is_tcp(host))
|
||||
except HTTPException:
|
||||
raise
|
||||
except ProtocolError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
|
||||
except OSError as exc:
|
||||
raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
|
||||
except Exception as exc:
|
||||
raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc
|
||||
|
||||
    # Fill sample_rate from compliance config where the event record doesn't supply it.
    # sample_rate is a device-level setting, not stored per-event in the waveform record.
    if info.compliance_config and info.compliance_config.sample_rate:
        for ev in events:
            if ev.sample_rate is None:
                ev.sample_rate = info.compliance_config.sample_rate

    # Backfill event.project_info fields that the 210-byte waveform record doesn't carry.
    # The waveform record only stores "Project:" — client/operator/sensor_location/notes
    # live in the SUB 1A compliance config, not in the per-event record.
    if info.compliance_config:
        cc = info.compliance_config
        for ev in events:
            if ev.project_info is None:
                ev.project_info = ProjectInfo()
            pi = ev.project_info
            if pi.client is None:
                pi.client = cc.client
            if pi.operator is None:
                pi.operator = cc.operator
            if pi.sensor_location is None:
                pi.sensor_location = cc.sensor_location
            if pi.notes is None:
                pi.notes = cc.notes

    return {
        "device": _serialise_device_info(info),
        "event_count": len(events),
        "events": [_serialise_event(ev, debug=debug) for ev in events],
    }


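The two backfill passes above follow one pattern: any per-event field left as None is filled from the device-level compliance config, and values the event record did supply are never overwritten. A standalone sketch of that merge, using hypothetical stand-in classes rather than the module's real models:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Config:          # stands in for the device-level compliance config
    client: Optional[str] = None
    operator: Optional[str] = None

@dataclass
class Event:           # stands in for a decoded event record
    client: Optional[str] = None
    operator: Optional[str] = None

def backfill(ev: Event, cc: Config) -> Event:
    # Only fill fields the event record did not carry; never overwrite.
    for field in ("client", "operator"):
        if getattr(ev, field) is None:
            setattr(ev, field, getattr(cc, field))
    return ev

ev = backfill(Event(client="Acme"), Config(client="Other", operator="JS"))
# ev.client stays "Acme"; ev.operator is filled with "JS"
```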
@app.get("/device/event/{index}")
def device_event(
    index: int,
    port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
    baud: int = Query(38400, description="Serial baud rate"),
    host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
    tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
) -> dict:
    """
    Download a single event by index (0-based).

    Supply either *port* (serial) or *host* (TCP/modem).
    Performs: POLL startup → event index → event header → waveform record.
    """
    log.info("GET /device/event/%d port=%s host=%s", index, port, host)

    try:
        def _do():
            with _build_client(port, baud, host, tcp_port) as client:
                client.connect()
                return client.get_events()
        events = _run_with_retry(_do, is_tcp=_is_tcp(host))
    except HTTPException:
        raise
    except ProtocolError as exc:
        raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
    except OSError as exc:
        raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc

    matching = [ev for ev in events if ev.index == index]
    if not matching:
        raise HTTPException(
            status_code=404,
            detail=f"Event index {index} not found on device",
        )

    return _serialise_event(matching[0])


@app.get("/device/event/{index}/waveform")
def device_event_waveform(
    index: int,
    port: Optional[str] = Query(None, description="Serial port (e.g. COM5)"),
    baud: int = Query(38400, description="Serial baud rate"),
    host: Optional[str] = Query(None, description="TCP host — modem IP or ACH relay"),
    tcp_port: int = Query(DEFAULT_TCP_PORT, description=f"TCP port (default {DEFAULT_TCP_PORT})"),
) -> dict:
    """
    Download the full raw ADC waveform for a single event (0-based index).

    Supply either *port* (serial) or *host* (TCP/modem).

    Performs: POLL startup → get_events() (to locate the 4-byte waveform key) →
    download_waveform() (full SUB 5A stream, stop_after_metadata=False).

    Response includes:
    - **total_samples**: expected sample-sets from the STRT record
    - **pretrig_samples**: pre-trigger sample count
    - **rectime_seconds**: record duration
    - **samples_decoded**: actual sample-sets decoded (may be fewer than total_samples
      if the device is not storing all frames yet, or the capture was partial)
    - **sample_rate**: samples per second (from compliance config)
    - **channels**: dict of channel name → list of signed int16 ADC counts
      (keys: "Tran", "Vert", "Long", "Mic")
    """
    log.info("GET /device/event/%d/waveform port=%s host=%s", index, port, host)

    try:
        def _do():
            with _build_client(port, baud, host, tcp_port, timeout=120.0) as client:
                info = client.connect()
                # full_waveform=True fetches the complete 5A stream inside the
                # 1E→0A→0C→5A→1F loop. Issuing a second 5A after 1F times out.
                events = client.get_events(full_waveform=True)
                matching = [ev for ev in events if ev.index == index]
                return (matching[0] if matching else None), info
        ev, info = _run_with_retry(_do, is_tcp=_is_tcp(host))
    except HTTPException:
        raise
    except ProtocolError as exc:
        raise HTTPException(status_code=502, detail=f"Protocol error: {exc}") from exc
    except OSError as exc:
        raise HTTPException(status_code=502, detail=f"Connection error: {exc}") from exc
    except Exception as exc:
        raise HTTPException(status_code=500, detail=f"Device error: {exc}") from exc

    if ev is None:
        raise HTTPException(
            status_code=404,
            detail=f"Event index {index} not found on device",
        )

    raw = getattr(ev, "raw_samples", None) or {}
    samples_decoded = len(raw.get("Tran", []))

    # Resolve sample_rate from compliance config if not on the event itself
    sample_rate = ev.sample_rate
    if sample_rate is None and info.compliance_config:
        sample_rate = info.compliance_config.sample_rate

    return {
        "index": ev.index,
        "record_type": ev.record_type,
        "timestamp": _serialise_timestamp(ev.timestamp),
        "total_samples": ev.total_samples,
        "pretrig_samples": ev.pretrig_samples,
        "rectime_seconds": ev.rectime_seconds,
        "samples_decoded": samples_decoded,
        "sample_rate": sample_rate,
        "peak_values": _serialise_peak_values(ev.peak_values),
        "channels": raw,
    }
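Consumers of this response typically rebuild the time axis from `pretrig_samples` and `sample_rate`, placing t = 0 at the trigger sample so pre-trigger samples get negative times. A small sketch of that mapping (the helper name is illustrative):

```python
def time_axis_ms(n_samples: int, pretrig: int, sample_rate: float) -> list[float]:
    # Sample i maps to (i - pretrig) / sample_rate seconds; t = 0 is the
    # trigger sample, so the first pretrig entries are negative.
    return [(i - pretrig) / sample_rate * 1000.0 for i in range(n_samples)]

# e.g. 4 samples at 1000 sps with 2 pre-trigger samples:
# time_axis_ms(4, 2, 1000.0) -> [-2.0, -1.0, 0.0, 1.0]
```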


# ── Entry point ────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import argparse

    ap = argparse.ArgumentParser(description="SFM — Seismograph Field Module API server")
    ap.add_argument("--host", default="0.0.0.0", help="Bind address (default: 0.0.0.0)")
    ap.add_argument("--port", type=int, default=8200, help="Port (default: 8200)")
    ap.add_argument("--reload", action="store_true", help="Enable auto-reload (dev mode)")
    args = ap.parse_args()

    log.info("Starting SFM server on %s:%d", args.host, args.port)
    uvicorn.run(
        "sfm.server:app",
        host=args.host,
        port=args.port,
        reload=args.reload,
    )
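The companion viewer (sfm/waveform_viewer.html, added below) keeps charts responsive by stride-decimating traces longer than 4000 points before plotting. The same decimation expressed in Python, as an illustrative sketch:

```python
def decimate(samples: list[int], max_points: int = 4000) -> list[int]:
    # Keep every step-th sample (step = ceil(len / max_points)) so the
    # result has at most max_points entries; short traces pass through.
    if len(samples) <= max_points:
        return samples
    step = -(-len(samples) // max_points)  # ceiling division
    return samples[::step]
```

This mirrors the viewer's `i % step === 0` filter: both keep indices 0, step, 2*step, and so on.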

538  sfm/waveform_viewer.html  (Normal file)
@@ -0,0 +1,538 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>SFM Waveform Viewer</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/4.4.1/chart.umd.min.js"></script>
  <style>
    * { box-sizing: border-box; margin: 0; padding: 0; }

    body {
      background: #0d1117;
      color: #c9d1d9;
      font-family: 'Segoe UI', system-ui, sans-serif;
      font-size: 13px;
    }

    header {
      background: #161b22;
      border-bottom: 1px solid #30363d;
      padding: 12px 20px;
      display: flex;
      align-items: center;
      gap: 16px;
      flex-wrap: wrap;
    }

    header h1 {
      font-size: 15px;
      font-weight: 600;
      color: #f0f6fc;
      white-space: nowrap;
      margin-right: 8px;
    }

    .conn-group {
      display: flex;
      align-items: center;
      gap: 6px;
      flex-wrap: wrap;
    }

    label { color: #8b949e; font-size: 12px; }

    input[type="text"], input[type="number"] {
      background: #0d1117;
      border: 1px solid #30363d;
      border-radius: 6px;
      color: #c9d1d9;
      padding: 5px 8px;
      font-size: 13px;
      width: 100px;
    }
    input[type="number"] { width: 70px; }
    input:focus { outline: none; border-color: #388bfd; }

    button {
      background: #1f6feb;
      border: none;
      border-radius: 6px;
      color: #fff;
      cursor: pointer;
      font-size: 13px;
      font-weight: 500;
      padding: 5px 14px;
      transition: background 0.15s;
    }
    button:hover { background: #388bfd; }
    button:active { background: #1158c7; }
    button:disabled { background: #21262d; color: #484f58; cursor: not-allowed; }

    #status-bar {
      background: #161b22;
      border-bottom: 1px solid #21262d;
      padding: 5px 20px;
      font-size: 12px;
      color: #8b949e;
      min-height: 26px;
      display: flex;
      align-items: center;
      gap: 20px;
    }
    #status-bar.error { color: #f85149; }
    #status-bar.ok { color: #3fb950; }
    #status-bar.loading { color: #d29922; }

    .meta-pill {
      background: #21262d;
      border-radius: 4px;
      padding: 2px 8px;
      color: #c9d1d9;
      font-family: monospace;
    }

    #charts {
      padding: 12px 16px;
      display: flex;
      flex-direction: column;
      gap: 10px;
    }

    .chart-wrap {
      background: #161b22;
      border: 1px solid #21262d;
      border-radius: 8px;
      padding: 10px 12px 8px;
    }

    .chart-label {
      font-size: 11px;
      font-weight: 600;
      letter-spacing: 0.06em;
      text-transform: uppercase;
      margin-bottom: 4px;
    }

    .chart-canvas-wrap { position: relative; height: 130px; }

    #empty-state {
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      height: 60vh;
      color: #484f58;
      gap: 8px;
    }
    #empty-state svg { opacity: 0.3; }
    #empty-state p { font-size: 14px; }

    .ch-tran { color: #58a6ff; }
    .ch-vert { color: #3fb950; }
    .ch-long { color: #d29922; }
    .ch-mic { color: #bc8cff; }

    #unit-bar {
      background: #0d1117;
      border-bottom: 1px solid #21262d;
      padding: 8px 20px;
      display: flex;
      align-items: center;
      gap: 16px;
      flex-wrap: wrap;
      font-size: 12px;
    }

    .unit-field { display: flex; flex-direction: column; gap: 1px; }
    .unit-field .uf-label { color: #484f58; font-size: 10px; text-transform: uppercase; letter-spacing: 0.05em; }
    .unit-field .uf-value { color: #c9d1d9; font-family: monospace; font-size: 13px; }
    .unit-field .uf-value.highlight { color: #58a6ff; font-weight: 600; }

    .event-chips {
      display: flex;
      gap: 5px;
      flex-wrap: wrap;
      margin-left: 8px;
    }

    .event-chip {
      background: #21262d;
      border: 1px solid #30363d;
      border-radius: 5px;
      color: #8b949e;
      cursor: pointer;
      font-size: 12px;
      padding: 3px 10px;
      transition: all 0.12s;
    }
    .event-chip:hover { background: #1f6feb; border-color: #1f6feb; color: #fff; }
    .event-chip.active { background: #1f6feb; border-color: #388bfd; color: #fff; font-weight: 600; }

    #connect-btn {
      background: #238636;
      margin-left: auto;
    }
    #connect-btn:hover { background: #2ea043; }
    #connect-btn:disabled { background: #21262d; color: #484f58; }
  </style>
</head>
<body>

  <header>
    <h1>SFM Waveform Viewer</h1>
    <div class="conn-group">
      <label>API</label>
      <input type="text" id="api-base" value="http://localhost:8200" style="width:180px" />
    </div>
    <div class="conn-group">
      <label>Device host</label>
      <input type="text" id="dev-host" value="" placeholder="e.g. 10.0.0.5" />
      <label>TCP port</label>
      <input type="number" id="dev-tcp-port" value="9034" />
    </div>
    <button id="connect-btn" onclick="connectUnit()">Connect</button>
    <button id="load-btn" onclick="loadWaveform()" disabled>Load Waveform</button>
    <button id="prev-btn" onclick="stepEvent(-1)" disabled>◀ Prev</button>
    <button id="next-btn" onclick="stepEvent(+1)" disabled>Next ▶</button>
  </header>

  <!-- Unit info bar — hidden until connected -->
  <div id="unit-bar" style="display:none">
    <div class="unit-field">
      <span class="uf-label">Serial</span>
      <span class="uf-value" id="u-serial">—</span>
    </div>
    <div class="unit-field">
      <span class="uf-label">Firmware</span>
      <span class="uf-value" id="u-fw">—</span>
    </div>
    <div class="unit-field">
      <span class="uf-label">Sample rate</span>
      <span class="uf-value" id="u-sr">—</span>
    </div>
    <div class="unit-field">
      <span class="uf-label">Events</span>
      <span class="uf-value highlight" id="u-count">—</span>
    </div>
    <div class="event-chips" id="event-chips"></div>
  </div>

  <div id="status-bar">Ready — enter device host and click Connect.</div>

  <div id="empty-state">
    <svg width="48" height="48" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5">
      <polyline points="22 12 18 12 15 21 9 3 6 12 2 12"/>
    </svg>
    <p>No waveform loaded</p>
  </div>

  <div id="charts" style="display:none"></div>

  <script>
    const CHANNEL_COLORS = {
      Tran: '#58a6ff',
      Vert: '#3fb950',
      Long: '#d29922',
      Mic: '#bc8cff',
    };

    let charts = {};
    let lastData = null;
    let unitInfo = null;
    let currentEventIndex = 0;

    function setStatus(msg, cls = '') {
      const bar = document.getElementById('status-bar');
      bar.textContent = msg;
      bar.className = cls;
    }

    function appendMeta(label, value) {
      const bar = document.getElementById('status-bar');
      const pill = document.createElement('span');
      pill.className = 'meta-pill';
      pill.textContent = `${label}: ${value}`;
      bar.appendChild(pill);
    }

    async function connectUnit() {
      const apiBase = document.getElementById('api-base').value.replace(/\/$/, '');
      const devHost = document.getElementById('dev-host').value.trim();
      const tcpPort = document.getElementById('dev-tcp-port').value;

      if (!devHost) { setStatus('Enter a device host first.', 'error'); return; }

      const btn = document.getElementById('connect-btn');
      btn.disabled = true;
      btn.textContent = 'Connecting…';
      setStatus('Connecting to unit…', 'loading');

      const url = `${apiBase}/device/info?host=${encodeURIComponent(devHost)}&tcp_port=${tcpPort}`;
      try {
        const resp = await fetch(url);
        if (!resp.ok) {
          const err = await resp.json().catch(() => ({ detail: resp.statusText }));
          throw new Error(err.detail || resp.statusText);
        }
        unitInfo = await resp.json();
      } catch (e) {
        setStatus(`Error: ${e.message}`, 'error');
        btn.disabled = false;
        btn.textContent = 'Connect';
        return;
      }

      // Populate unit bar
      document.getElementById('u-serial').textContent = unitInfo.serial || '—';
      document.getElementById('u-fw').textContent = unitInfo.firmware_version || '—';
      const sr = unitInfo.compliance_config?.sample_rate;
      document.getElementById('u-sr').textContent = sr ? `${sr} sps` : '—';
      const count = unitInfo.event_count ?? 0;
      document.getElementById('u-count').textContent = count;

      // Build event chips
      const chipsEl = document.getElementById('event-chips');
      chipsEl.innerHTML = '';
      for (let i = 0; i < count; i++) {
        const chip = document.createElement('button');
        chip.className = 'event-chip' + (i === 0 ? ' active' : '');
        chip.textContent = `Event ${i}`;
        chip.onclick = () => selectEvent(i);
        chipsEl.appendChild(chip);
      }

      document.getElementById('unit-bar').style.display = 'flex';
      document.getElementById('load-btn').disabled = count === 0;
      document.getElementById('prev-btn').disabled = true;
      document.getElementById('next-btn').disabled = count <= 1;

      btn.disabled = false;
      btn.textContent = 'Reconnect';

      if (count === 0) {
        setStatus('Connected — no events stored on device.', 'ok');
      } else {
        setStatus(`Connected — ${count} event${count !== 1 ? 's' : ''} stored. Select an event or click Load Waveform.`, 'ok');
      }
    }

    function selectEvent(idx) {
      currentEventIndex = idx;
      // Update chip highlight
      document.querySelectorAll('.event-chip').forEach((c, i) => {
        c.classList.toggle('active', i === idx);
      });
      document.getElementById('prev-btn').disabled = idx <= 0;
      const count = unitInfo?.event_count ?? 0;
      document.getElementById('next-btn').disabled = idx >= count - 1;
      loadWaveform();
    }

    async function loadWaveform() {
      const apiBase = document.getElementById('api-base').value.replace(/\/$/, '');
      const devHost = document.getElementById('dev-host').value.trim();
      const tcpPort = document.getElementById('dev-tcp-port').value;
      const evIndex = currentEventIndex;

      if (!devHost) { setStatus('Enter a device host first.', 'error'); return; }

      const btn = document.getElementById('load-btn');
      btn.disabled = true;
      setStatus('Fetching waveform…', 'loading');

      const url = `${apiBase}/device/event/${evIndex}/waveform?host=${encodeURIComponent(devHost)}&tcp_port=${tcpPort}`;

      let data;
      try {
        const resp = await fetch(url);
        if (!resp.ok) {
          const err = await resp.json().catch(() => ({ detail: resp.statusText }));
          throw new Error(err.detail || resp.statusText);
        }
        data = await resp.json();
      } catch (e) {
        setStatus(`Error: ${e.message}`, 'error');
        btn.disabled = false;
        return;
      }

      lastData = data;
      renderWaveform(data);
      btn.disabled = false;
    }

    function stepEvent(delta) {
      const count = unitInfo?.event_count ?? 0;
      const next = Math.max(0, Math.min(count - 1, currentEventIndex + delta));
      selectEvent(next);
    }

    function renderWaveform(data) {
      const sr = data.sample_rate || 1024;
      const pretrig = data.pretrig_samples || 0;
      const decoded = data.samples_decoded || 0;
      const total = data.total_samples || decoded;
      const channels = data.channels || {};
      const recType = data.record_type || 'Unknown';

      // Status bar
      const bar = document.getElementById('status-bar');
      bar.innerHTML = '';
      bar.className = 'ok';
      const ts = data.timestamp;
      if (ts) {
        bar.textContent = `Event #${data.index} — ${ts.display} `;
      } else {
        bar.textContent = `Event #${data.index} `;
      }
      appendMeta('type', recType);
      appendMeta('sr', `${sr} sps`);
      appendMeta('samples', `${decoded.toLocaleString()} / ${total.toLocaleString()}`);
      appendMeta('pretrig', pretrig);
      appendMeta('rectime', `${data.rectime_seconds ?? '?'}s`);

      // No waveform data — show a clear reason instead of empty charts
      if (decoded === 0) {
        document.getElementById('empty-state').style.display = 'flex';
        document.getElementById('empty-state').querySelector('p').textContent =
          recType === 'Waveform'
            ? 'Waveform decode returned no samples — check server logs'
            : `Record type "${recType}" — waveform decode not yet supported for this mode`;
        document.getElementById('charts').style.display = 'none';
        Object.values(charts).forEach(c => c.destroy());
        charts = {};
        return;
      }

      // Build time axis (ms)
      const times = Array.from({ length: decoded }, (_, i) =>
        ((i - pretrig) / sr * 1000).toFixed(2)
      );

      // Show charts area
      document.getElementById('empty-state').style.display = 'none';
      const chartsDiv = document.getElementById('charts');
      chartsDiv.style.display = 'flex';
      chartsDiv.innerHTML = '';

      // Destroy old Chart instances
      Object.values(charts).forEach(c => c.destroy());
      charts = {};

      for (const [ch, color] of Object.entries(CHANNEL_COLORS)) {
        const samples = channels[ch];
        if (!samples || samples.length === 0) continue;

        const wrap = document.createElement('div');
        wrap.className = 'chart-wrap';

        const lbl = document.createElement('div');
        lbl.className = `chart-label ch-${ch.toLowerCase()}`;

        // Compute peak for label
        const peak = Math.max(...samples.map(Math.abs));
        lbl.textContent = `${ch} — peak ${peak.toLocaleString()} counts`;
        wrap.appendChild(lbl);

        const canvasWrap = document.createElement('div');
        canvasWrap.className = 'chart-canvas-wrap';
        const canvas = document.createElement('canvas');
        canvasWrap.appendChild(canvas);
        wrap.appendChild(canvasWrap);
        chartsDiv.appendChild(wrap);

        // Downsample for rendering if very long (keep chart responsive)
        const MAX_POINTS = 4000;
        let renderTimes = times;
        let renderData = samples;
        if (samples.length > MAX_POINTS) {
          const step = Math.ceil(samples.length / MAX_POINTS);
          renderTimes = times.filter((_, i) => i % step === 0);
          renderData = samples.filter((_, i) => i % step === 0);
        }

        const chart = new Chart(canvas, {
          type: 'line',
          data: {
            labels: renderTimes,
            datasets: [{
              data: renderData,
              borderColor: color,
              borderWidth: 1,
              pointRadius: 0,
              tension: 0,
            }],
          },
          options: {
            animation: false,
            responsive: true,
            maintainAspectRatio: false,
            plugins: {
              legend: { display: false },
              tooltip: {
                mode: 'index',
                intersect: false,
                callbacks: {
                  title: items => `t = ${items[0].label} ms`,
                  label: item => `${ch}: ${item.raw.toLocaleString()} counts`,
                },
              },
              // Trigger line annotation (drawn manually via afterDraw)
            },
            scales: {
              x: {
                type: 'category',
                ticks: {
                  color: '#484f58',
                  maxTicksLimit: 10,
                  maxRotation: 0,
                  callback: (val, i) => renderTimes[i] + ' ms',
                },
                grid: { color: '#21262d' },
              },
              y: {
                ticks: { color: '#484f58', maxTicksLimit: 5 },
                grid: { color: '#21262d' },
              },
            },
          },
          plugins: [{
            // Draw trigger line at t=0
            id: 'triggerLine',
            afterDraw(chart) {
              const ctx = chart.ctx;
              const xAxis = chart.scales.x;
              const yAxis = chart.scales.y;

              // Find index of t=0
              const zeroIdx = renderTimes.findIndex(t => parseFloat(t) >= 0);
              if (zeroIdx < 0) return;

              const x = xAxis.getPixelForValue(zeroIdx);
              ctx.save();
              ctx.beginPath();
              ctx.moveTo(x, yAxis.top);
              ctx.lineTo(x, yAxis.bottom);
              ctx.strokeStyle = 'rgba(248, 81, 73, 0.7)';
              ctx.lineWidth = 1.5;
              ctx.setLineDash([4, 3]);
              ctx.stroke();
              ctx.restore();
            },
          }],
        });

        charts[ch] = chart;
      }
    }

    // Allow Enter key on connection inputs to trigger connect
    ['api-base', 'dev-host', 'dev-tcp-port'].forEach(id => {
      document.getElementById(id).addEventListener('keydown', e => {
        if (e.key === 'Enter') connectUnit();
      });
    });
  </script>
</body>
</html>