diff --git a/changelog.md b/changelog.md index d9bf8e4..e85163f 100644 --- a/changelog.md +++ b/changelog.md @@ -1,5 +1,12 @@ # Changelog +## 2026-03-18 - 4.9.1 - fix(readme) +document QoS tiers, heartbeat frames, and adaptive flow control in the protocol overview + +- Adds PING, PONG, WINDOW_UPDATE, and WINDOW_UPDATE_BACK frame types to the protocol documentation +- Describes the 3-tier priority queues for control, normal data, and sustained traffic +- Explains sustained stream classification and adaptive per-stream window sizing + ## 2026-03-18 - 4.9.0 - feat(protocol) add sustained-stream tunnel scheduling to isolate high-throughput traffic diff --git a/readme.md b/readme.md index e975a21..706a5ae 100644 --- a/readme.md +++ b/readme.md @@ -17,7 +17,7 @@ pnpm install @serve.zone/remoteingress `@serve.zone/remoteingress` uses a **Hub/Edge** topology with a high-performance Rust core and a TypeScript API surface: ``` -┌─────────────────────┐ TLS Tunnel ┌─────────────────────┐ +┌─────────────────────┐ TLS Tunnel ┌─────────────────────┐ │ Network Edge │ ◄══════════════════════════► │ Private Cluster │ │ │ (multiplexed frames + │ │ │ RemoteIngressEdge │ shared-secret auth) │ RemoteIngressHub │ @@ -48,6 +48,8 @@ pnpm install @serve.zone/remoteingress - 🎛️ **Dynamic port configuration** — the hub assigns listen ports per edge and can hot-reload them at runtime via `FRAME_CONFIG` frames - 📣 **Event-driven** — both Hub and Edge extend `EventEmitter` for real-time monitoring - ⚡ **Rust core** — all frame encoding, TLS, and TCP proxying happen in native code for maximum throughput +- 🎚️ **3-tier QoS** — control frames, normal data, and sustained (elephant flow) traffic each get their own priority queue +- 📊 **Adaptive flow control** — per-stream windows scale with active stream count to prevent memory overuse ## 🚀 Usage @@ -280,6 +282,10 @@ The tunnel uses a custom binary frame protocol over TLS: | `DATA_BACK` | `0x04` | Hub → Edge | Response data flowing downstream | | 
`CLOSE_BACK` | `0x05` | Hub → Edge | Upstream (SmartProxy) closed the connection |
| `CONFIG` | `0x06` | Hub → Edge | Runtime configuration update (e.g. port changes); payload is JSON |
+| `PING` | `0x07` | Hub → Edge | Heartbeat probe (sent every 15s) |
+| `PONG` | `0x08` | Edge → Hub | Heartbeat response |
+| `WINDOW_UPDATE` | `0x09` | Edge → Hub | Per-stream flow control: edge consumed N bytes, hub can send more |
+| `WINDOW_UPDATE_BACK` | `0x0A` | Hub → Edge | Per-stream flow control: hub consumed N bytes, edge can send more |

Max payload size per frame: **16 MB**. Stream IDs are 32-bit unsigned integers.

@@ -292,6 +298,42 @@ Max payload size per frame: **16 MB**. Stream IDs are 32-bit unsigned integers.
5. Frame protocol begins — `OPEN`/`DATA`/`CLOSE` frames flow in both directions
6. Hub can push `CONFIG` frames at any time to update the edge's listen ports
+## 🎚️ QoS & Flow Control
+
+The tunnel multiplexer uses a **3-tier priority system** and **per-stream flow control** to ensure fair bandwidth sharing across thousands of concurrent streams.
+
+### Priority Tiers
+
+All outbound frames are queued into one of three priority levels:
+
+| Tier | Queue | Frames | Behavior |
+|------|-------|--------|----------|
+| 🔴 **Control** (highest) | `ctrl_queue` | PING, PONG, WINDOW_UPDATE, OPEN, CLOSE, CONFIG | Always drained first. Never delayed. |
+| 🟡 **Data** (normal) | `data_queue` | DATA, DATA_BACK from normal streams | Drained when ctrl is empty. Gated at 64 buffered items for backpressure. |
+| 🟢 **Sustained** (lowest) | `sustained_queue` | DATA, DATA_BACK from elephant flows | Drained freely when ctrl+data are empty. Otherwise guaranteed **1 MB/s** via forced drain every second. |
+
+This prevents large bulk transfers (e.g. git clones, file downloads) from starving interactive traffic, and it keeps `WINDOW_UPDATE` frames from ever being delayed, since a stalled window update would deadlock flow control.
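As a rough illustration of the drain order described above, here is a minimal TypeScript sketch. The class, queue, and method names are assumptions made for this example, not the library's actual internals; the real scheduler lives in the Rust core.

```typescript
// Minimal sketch of the 3-tier drain order. All names here are illustrative.
type Frame = { kind: string; streamId: number };

class TieredScheduler {
  private readonly ctrlQueue: Frame[] = [];
  private readonly dataQueue: Frame[] = [];
  private readonly sustainedQueue: Frame[] = [];
  // Normal data is gated at 64 buffered frames for backpressure.
  static readonly DATA_GATE = 64;

  // Returns false when the data tier is full and the caller should back off.
  enqueue(frame: Frame, tier: 'ctrl' | 'data' | 'sustained'): boolean {
    if (tier === 'ctrl') {
      this.ctrlQueue.push(frame);
    } else if (tier === 'data') {
      if (this.dataQueue.length >= TieredScheduler.DATA_GATE) return false;
      this.dataQueue.push(frame);
    } else {
      this.sustainedQueue.push(frame);
    }
    return true;
  }

  // Drain order: control first, then normal data, then sustained traffic.
  // (The documented scheduler additionally force-drains the sustained
  // queue at a guaranteed 1 MB/s even when the other tiers stay busy.)
  next(): Frame | undefined {
    return (
      this.ctrlQueue.shift() ??
      this.dataQueue.shift() ??
      this.sustainedQueue.shift()
    );
  }
}
```

The point of the ordering: even when a sustained stream has megabytes of `DATA` frames queued, a later `PING` or `WINDOW_UPDATE` still goes out first.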
+
+### Sustained Stream Classification
+
+A stream is automatically classified as **sustained** (elephant flow) when:
+- It has been active for **>10 seconds**, AND
+- Its average throughput exceeds **20 Mbit/s** (2.5 MB/s)
+
+Once classified, the stream's flow control window is locked to the **1 MB floor** and its data frames move to the lowest-priority queue. Classification is one-way — a stream never gets promoted back to normal.
+
+### Adaptive Per-Stream Windows
+
+Each stream has a send window that limits bytes-in-flight. The window size adapts to the number of active streams using a shared **200 MB memory budget**:
+
+| Active Streams | Window per Stream |
+|---|---|
+| 1–50 | 4 MB (maximum) |
+| 51–100 | Scales down (4 MB → 2 MB) |
+| 101–199 | Scales down (2 MB → 1 MB) |
+| 200+ | 1 MB (floor) |
+
+The consumer sends `WINDOW_UPDATE` frames after processing data, allowing the producer to send more. This prevents any single stream from consuming unbounded memory and provides natural backpressure.
+
## 💡 Example Scenarios

### 1. Expose a Private Kubernetes Cluster to the Internet
diff --git a/ts/00_commitinfo_data.ts b/ts/00_commitinfo_data.ts
index 7aecf6a..67bfdc7 100644
--- a/ts/00_commitinfo_data.ts
+++ b/ts/00_commitinfo_data.ts
@@ -3,6 +3,6 @@
 */
export const commitinfo = {
  name: '@serve.zone/remoteingress',
-  version: '4.9.0',
+  version: '4.9.1',
  description: 'Edge ingress tunnel for DcRouter - accepts incoming TCP connections at network edge and tunnels them to DcRouter SmartProxy preserving client IP via PROXY protocol v1.'
}
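The sustained-stream thresholds and adaptive window arithmetic documented in the readme hunk above can be sketched in TypeScript. The constants mirror the documented values (200 MB budget, 4 MB ceiling, 1 MB floor, 10 s / 20 Mbit/s classification); the function names themselves are hypothetical, not part of the package's API.

```typescript
// Constants taken from the documentation above.
const MEMORY_BUDGET = 200 * 1024 * 1024; // shared across all streams
const WINDOW_MAX = 4 * 1024 * 1024;      // 4 MB ceiling
const WINDOW_FLOOR = 1 * 1024 * 1024;    // 1 MB floor (sustained streams lock here)
const SUSTAINED_MIN_AGE_MS = 10_000;     // >10 s active
const SUSTAINED_MIN_BYTES_PER_SEC = 2.5e6; // 20 Mbit/s = 2.5 MB/s

// Hypothetical helper: would a stream with this age and transfer total
// be classified as a sustained (elephant) flow?
function isSustained(ageMs: number, bytesTransferred: number): boolean {
  if (ageMs <= SUSTAINED_MIN_AGE_MS) return false;
  return bytesTransferred / (ageMs / 1000) > SUSTAINED_MIN_BYTES_PER_SEC;
}

// Hypothetical helper: per-stream send window for a given number of active
// streams — an even share of the budget, clamped to the floor and ceiling.
function windowPerStream(activeStreams: number): number {
  const share = Math.floor(MEMORY_BUDGET / Math.max(1, activeStreams));
  return Math.min(WINDOW_MAX, Math.max(WINDOW_FLOOR, share));
}
```

Under these assumptions, 50 or fewer streams each get the full 4 MB window, 100 streams get 2 MB each, and from 200 streams onward every stream sits at the 1 MB floor, capping tunnel buffer memory at roughly the 200 MB budget.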