# SmartProxy Development Hints
## Byte Tracking and Metrics
### Throughput Drift Issue (Fixed)
**Problem**: Throughput numbers were gradually increasing over time for long-lived connections.
**Root Cause**: The `byRoute()` and `byIP()` methods were dividing cumulative total bytes (since connection start) by the window duration, causing rates to appear higher as connections aged:
- Hour 1: 1GB total / 60s = 17 MB/s ✓
- Hour 2: 2GB total / 60s = 34 MB/s ✗ (appears doubled!)
- Hour 3: 3GB total / 60s = 50 MB/s ✗ (keeps rising!)
**Solution**: Implemented dedicated ThroughputTracker instances for each route and IP address:
- Each route and IP gets its own throughput tracker with per-second sampling
- Samples are taken every second and stored in a circular buffer
- Rate calculations use actual samples within the requested window
- Default window is now 1 second for real-time accuracy
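A minimal sketch of this approach, assuming a circular buffer of per-second samples (the class shape and method names here are illustrative, not the actual SmartProxy API):

```typescript
interface ThroughputSample {
  timestamp: number; // ms since epoch
  bytesIn: number;   // bytes accumulated since the previous sample
  bytesOut: number;
}

class ThroughputTracker {
  private samples: ThroughputSample[] = []; // circular buffer of per-second samples
  private maxSamples = 3600;                // ~1 hour of retention at 1 Hz
  private pendingIn = 0;
  private pendingOut = 0;

  recordBytes(bytesIn: number, bytesOut: number): void {
    this.pendingIn += bytesIn;
    this.pendingOut += bytesOut;
  }

  // Called once per second by a shared timer.
  sample(): void {
    this.samples.push({ timestamp: Date.now(), bytesIn: this.pendingIn, bytesOut: this.pendingOut });
    this.pendingIn = 0;
    this.pendingOut = 0;
    if (this.samples.length > this.maxSamples) this.samples.shift();
  }

  // Rate over the last `windowSeconds`, using only samples inside the window.
  // No cumulative totals are involved, which is what eliminates the drift.
  getRate(windowSeconds = 1): { in: number; out: number } {
    const cutoff = Date.now() - windowSeconds * 1000;
    const recent = this.samples.filter((s) => s.timestamp >= cutoff);
    const sum = (f: (s: ThroughputSample) => number) => recent.reduce((a, s) => a + f(s), 0);
    return { in: sum((s) => s.bytesIn) / windowSeconds, out: sum((s) => s.bytesOut) / windowSeconds };
  }
}
```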
### What Gets Counted (Network Interface Throughput)
The byte tracking is designed to match network interface throughput (what UniFi and similar network monitoring tools show):
**Counted bytes include:**
- All application data
- TLS handshakes and protocol overhead
- TLS record headers and encryption padding
- HTTP headers and protocol data
- WebSocket frames and protocol overhead
- TLS alerts sent to clients
**NOT counted:**
- PROXY protocol headers (sent to backend, not client)
- TCP/IP headers (handled by OS, not visible at application layer)
**Byte direction:**
- `bytesReceived`: All bytes received FROM the client on the incoming connection
- `bytesSent`: All bytes sent TO the client on the incoming connection
- Backend connections are separate and not mixed with client metrics
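Expressed as a type, the convention looks like this (the field names match the records described above; the interface itself is only illustrative):

```typescript
// Direction is always relative to the incoming client connection.
interface ConnectionByteCounts {
  bytesReceived: number; // client -> proxy: app data, TLS handshake/records, HTTP headers, ...
  bytesSent: number;     // proxy -> client: responses, TLS records, TLS alerts, ...
  // Backend-bound traffic (including PROXY protocol headers) is tracked
  // separately and never mixed into these client-side counters.
}
```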
### Double Counting Issue (Fixed)
**Problem**: Initial data chunks were being counted twice in the byte tracking:
1. Once when stored in `pendingData` in `setupDirectConnection()`
2. Again when the data flowed through bidirectional forwarding
**Solution**: Removed the byte counting when storing initial chunks. Bytes are now only counted when they actually flow through the `setupBidirectionalForwarding()` callbacks.
### HttpProxy Metrics (Fixed)
**Problem**: HttpProxy forwarding was updating connection record byte counts but not calling `metricsCollector.recordBytes()`, resulting in missing throughput data.
**Solution**: Added `metricsCollector.recordBytes()` calls to the HttpProxy bidirectional forwarding callbacks.
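Both fixes converge on the same pattern: count once, at the point where data actually flows. A self-contained sketch of that pattern (the callback wiring and the `recordBytes` signature are assumptions; only the function names come from the notes above):

```typescript
import * as net from "net";

// Count bytes exactly once, inside the forwarding callbacks.
function forwardWithMetrics(
  client: net.Socket,
  backend: net.Socket,
  record: { bytesReceived: number; bytesSent: number },
  recordBytes: (received: number, sent: number) => void, // e.g. metricsCollector.recordBytes
): void {
  client.on("data", (chunk: Buffer) => {
    record.bytesReceived += chunk.length; // connection record
    recordBytes(chunk.length, 0);         // throughput metrics — the only counting point
    backend.write(chunk);
  });
  backend.on("data", (chunk: Buffer) => {
    record.bytesSent += chunk.length;
    recordBytes(0, chunk.length);
    client.write(chunk);
  });
}
```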
### Metrics Architecture
The metrics system has multiple layers:
1. **Connection Records** (`record.bytesReceived/bytesSent`): Track total bytes per connection
2. **Global ThroughputTracker**: Accumulates bytes between samples for overall rate calculations
3. **Per-Route ThroughputTrackers**: Dedicated tracker for each route with per-second sampling
4. **Per-IP ThroughputTrackers**: Dedicated tracker for each IP with per-second sampling
5. **connectionByteTrackers**: Track cumulative bytes and metadata for active connections
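Roughly, layers 2-5 could fit together as below, reusing the `ThroughputTracker` sketch above (field and method names are assumptions; layer 1, the per-connection record, lives on the connection object itself):

```typescript
class MetricsCollector {
  // Layer 2: overall rates.
  globalTracker = new ThroughputTracker();
  // Layers 3-4: one tracker per route / per IP, created on first use.
  routeTrackers = new Map<string, ThroughputTracker>();
  ipTrackers = new Map<string, ThroughputTracker>();
  // Layer 5: cumulative bytes plus metadata for each active connection.
  connectionByteTrackers = new Map<string, { bytesReceived: number; bytesSent: number; startTime: number }>();

  recordBytes(routeId: string, ip: string, received: number, sent: number): void {
    this.globalTracker.recordBytes(received, sent);
    this.getOrCreate(this.routeTrackers, routeId).recordBytes(received, sent);
    this.getOrCreate(this.ipTrackers, ip).recordBytes(received, sent);
  }

  private getOrCreate(map: Map<string, ThroughputTracker>, key: string): ThroughputTracker {
    let tracker = map.get(key);
    if (!tracker) {
      tracker = new ThroughputTracker();
      map.set(key, tracker);
    }
    return tracker;
  }
}
```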
Key features:
- All throughput trackers sample every second (1Hz)
- Each tracker maintains a circular buffer of samples (default: 1 hour retention)
- Rate calculations are accurate for any requested window (default: 1 second)
- All byte counting happens exactly once at the data flow point
- Unused route/IP trackers are automatically cleaned up when connections close
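The cleanup of unused trackers could look roughly like this; the reference-counting mechanism is an assumption, only the behavior (a tracker is dropped when its last connection closes) comes from the notes above:

```typescript
const routeConnectionCounts = new Map<string, number>();

// Returns the remaining count after decrementing; deletes the entry at zero.
function decrement(counts: Map<string, number>, key: string): number {
  const next = (counts.get(key) ?? 1) - 1;
  if (next <= 0) {
    counts.delete(key);
    return 0;
  }
  counts.set(key, next);
  return next;
}

function onConnectionClosed(routeId: string, routeTrackers: Map<string, unknown>): void {
  if (decrement(routeConnectionCounts, routeId) === 0) {
    routeTrackers.delete(routeId); // last connection gone: drop the tracker
  }
}
```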
### Understanding "High" Byte Counts
If byte counts seem high compared to actual application data, remember:
- TLS handshakes can be 1-5 KB depending on cipher suites and certificates
- Each TLS record has 5 bytes of header overhead
- TLS encryption adds 16-48 bytes of padding/MAC per record
- HTTP/2 has additional framing overhead
- WebSocket has frame headers (2-14 bytes per message)
This overhead is real network traffic and should be counted for accurate throughput metrics.
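As a rough worked example using the figures above (the specific numbers are illustrative midpoints, not measurements):

```typescript
// 10 KB of application data over a fresh TLS connection:
const handshake = 3_000;                     // one-time handshake, midpoint of 1-5 KB
const payload = 10_000;
const records = Math.ceil(payload / 16_384); // a TLS record carries at most 16 KB
const perRecordOverhead = 5 + 32;            // record header + padding/MAC midpoint
const onWire = handshake + payload + records * perRecordOverhead;
console.log(onWire);                         // ~13 KB on the wire for 10 KB of data
```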
### Byte Counting Paths
There are two mutually exclusive paths for connections:
1. **Direct forwarding** (route-connection-handler.ts):
- Used for TCP passthrough, TLS passthrough, and direct connections
- Bytes counted in `setupBidirectionalForwarding` callbacks
- Initial chunk NOT counted separately (flows through bidirectional forwarding)
2. **HttpProxy forwarding** (http-proxy-bridge.ts):
- Used for TLS termination (terminate, terminate-and-reencrypt)
- Initial chunk counted when written to proxy
- All subsequent bytes counted in `setupBidirectionalForwarding` callbacks
- This is the ONLY counting point for these connections
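A sketch of the dispatch between the two paths (the mode names come from the list above; the function shape is an assumption):

```typescript
type TlsMode = "passthrough" | "terminate" | "terminate-and-reencrypt";

// Exactly one path handles (and counts) any given connection.
function dispatch(mode: TlsMode, direct: () => void, viaHttpProxy: () => void): void {
  switch (mode) {
    case "passthrough":
      direct();        // path 1: counted in route-connection-handler.ts
      break;
    case "terminate":
    case "terminate-and-reencrypt":
      viaHttpProxy();  // path 2: counted in http-proxy-bridge.ts
      break;
  }
}
```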
### Byte Counting Audit (2025-01-06)
A comprehensive audit was performed to verify byte counting accuracy:
**Audit Results:**
- ✅ No double counting detected in any connection flow
- ✅ Each byte counted exactly once in each direction
- ✅ Connection records and metrics updated consistently
- ✅ PROXY protocol headers correctly excluded from client metrics
- ✅ NFTables forwarded connections correctly not counted (kernel handles)
**Key Implementation Points:**
- All byte counting happens in only 2 files: `route-connection-handler.ts` and `http-proxy-bridge.ts`
- Both use the same pattern: increment `record.bytesReceived/Sent` AND call `metricsCollector.recordBytes()`
- Initial chunks handled correctly: stored but not counted until forwarded
- TLS alerts counted as sent bytes (correct - they are sent to client)
For full audit details, see `readme.byte-counting-audit.md`.
## Connection Cleanup
### Zombie Connection Detection
The connection manager performs comprehensive zombie detection every 10 seconds:
- **Full zombies**: Both incoming and outgoing sockets destroyed but connection not cleaned up
- **Half zombies**: One socket destroyed, grace period expired (5 minutes for TLS, 30 seconds for non-TLS)
- **Stuck connections**: Data received but none sent back after threshold (5 minutes for TLS, 60 seconds for non-TLS)
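A sketch of the periodic check for the first two cases, under the thresholds listed above (the record fields and callback are assumptions; the stuck-connection check is omitted for brevity):

```typescript
import * as net from "net";

interface ConnRecord {
  incoming: net.Socket;
  outgoing: net.Socket;
  isTLS: boolean;
  halfClosedAt?: number; // set when the first of the two sockets is destroyed
}

// Intended to run every 10 seconds.
function detectZombies(records: Iterable<ConnRecord>, cleanup: (r: ConnRecord) => void): void {
  const now = Date.now();
  for (const r of records) {
    const graceMs = r.isTLS ? 5 * 60_000 : 30_000; // 5 min TLS, 30 s non-TLS
    const bothDead = r.incoming.destroyed && r.outgoing.destroyed;
    const oneDead = r.incoming.destroyed !== r.outgoing.destroyed;
    if (bothDead) {
      cleanup(r); // full zombie
    } else if (oneDead && r.halfClosedAt !== undefined && now - r.halfClosedAt > graceMs) {
      cleanup(r); // half zombie past its grace period
    }
  }
}
```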
### Cleanup Queue
Connections are cleaned up through a batched queue system:
- Batch size: 100 connections
- Processing triggered immediately when batch size reached
- Otherwise processed after 100ms delay
- Prevents overwhelming the system during mass disconnections
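A minimal sketch of such a queue (the constants match the values above; the class itself is illustrative):

```typescript
class CleanupQueue<T> {
  private queue: T[] = [];
  private timer?: NodeJS.Timeout;

  constructor(
    private process: (batch: T[]) => void,
    private batchSize = 100,
    private delayMs = 100,
  ) {}

  enqueue(item: T): void {
    this.queue.push(item);
    if (this.queue.length >= this.batchSize) {
      this.flush(); // full batch: process immediately
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.delayMs);
    }
  }

  private flush(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = undefined;
    }
    const batch = this.queue.splice(0, this.batchSize);
    if (batch.length > 0) this.process(batch);
    // Anything left over (mass disconnection) is drained on a later tick.
    if (this.queue.length > 0) this.timer = setTimeout(() => this.flush(), this.delayMs);
  }
}
```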
## Keep-Alive Handling
Keep-alive connections receive special treatment based on the `keepAliveTreatment` setting:
- **standard**: Normal timeout applies
- **extended**: Timeout multiplied by `keepAliveInactivityMultiplier` (default 6x)
- **immortal**: No timeout, connections persist indefinitely
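The treatment logic reduces to something like this (the option names and the default multiplier of 6 come from the list above; the function shape is an assumption):

```typescript
type KeepAliveTreatment = "standard" | "extended" | "immortal";

// Returns the inactivity timeout in ms, or null for no timeout at all.
function effectiveTimeoutMs(
  baseTimeoutMs: number,
  treatment: KeepAliveTreatment,
  keepAliveInactivityMultiplier = 6,
): number | null {
  switch (treatment) {
    case "standard": return baseTimeoutMs;
    case "extended": return baseTimeoutMs * keepAliveInactivityMultiplier;
    case "immortal": return null; // connection persists indefinitely
  }
}
```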
## PROXY Protocol
The system supports both receiving and sending PROXY protocol:
- **Receiving**: Automatically detected from trusted proxy IPs (configured in `proxyIPs`)
- **Sending**: Enabled per-route or globally via `sendProxyProtocol` setting
- Real client IP is preserved and used for all connection tracking and security checks
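A sketch of both sides; the header format is from the PROXY protocol v1 spec, while the trusted-IP wiring around it is illustrative:

```typescript
import * as net from "net";

// Sending side: prepend a v1 header describing the real client before any payload.
function buildProxyV1Header(s: net.Socket): string {
  const proto = net.isIPv6(s.remoteAddress ?? "") ? "TCP6" : "TCP4";
  return `PROXY ${proto} ${s.remoteAddress} ${s.localAddress} ${s.remotePort} ${s.localPort}\r\n`;
}

// Receiving side: only honor the header when the peer is a configured trusted proxy.
function parseRealClientIp(firstChunk: Buffer, remoteIp: string, trustedProxyIPs: string[]): string {
  if (!trustedProxyIPs.includes(remoteIp)) return remoteIp;
  const text = firstChunk.toString("ascii", 0, Math.min(firstChunk.length, 107)); // v1 max length
  const match = /^PROXY TCP[46] (\S+) \S+ \d+ \d+\r\n/.exec(text);
  return match ? match[1] : remoteIp; // fall back to the socket address
}
```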