
SmartProxy Development Hints

Byte Tracking and Metrics

What Gets Counted (Network Interface Throughput)

Byte tracking is designed to match network interface throughput (what UniFi and similar network monitoring tools show):

Counted bytes include:

  • All application data
  • TLS handshakes and protocol overhead
  • TLS record headers and encryption padding
  • HTTP headers and protocol data
  • WebSocket frames and protocol overhead
  • TLS alerts sent to clients

NOT counted:

  • PROXY protocol headers (sent to backend, not client)
  • TCP/IP headers (handled by OS, not visible at application layer)

Byte direction:

  • bytesReceived: All bytes received FROM the client on the incoming connection
  • bytesSent: All bytes sent TO the client on the incoming connection
  • Backend connections are separate and not mixed with client metrics
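The direction convention above can be sketched as follows (a minimal illustration; the interface and function names are hypothetical, not SmartProxy's actual types):

```typescript
// Per-connection byte counters that mirror what the network interface
// sees on the client-facing socket. Field names are illustrative.
interface ConnectionRecord {
  id: string;
  bytesReceived: number; // all bytes FROM the client on the incoming connection
  bytesSent: number;     // all bytes TO the client on the incoming connection
}

function recordIncoming(record: ConnectionRecord, chunk: Uint8Array): void {
  // Every byte read from the client socket counts, including TLS
  // handshake and record overhead -- it all crossed the wire.
  record.bytesReceived += chunk.length;
}

function recordOutgoing(record: ConnectionRecord, chunk: Uint8Array): void {
  record.bytesSent += chunk.length;
}
```

Backend-connection bytes would live on a separate record, keeping client metrics clean.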

Double Counting Issue (Fixed)

Problem: Initial data chunks were being counted twice in the byte tracking:

  1. Once when stored in pendingData in setupDirectConnection()
  2. Again when the data flowed through bidirectional forwarding

Solution: Removed the byte counting when storing initial chunks. Bytes are now only counted when they actually flow through the setupBidirectionalForwarding() callbacks.

HttpProxy Metrics (Fixed)

Problem: HttpProxy forwarding was updating connection record byte counts but not calling metricsCollector.recordBytes(), resulting in missing throughput data.

Solution: Added metricsCollector.recordBytes() calls to the HttpProxy bidirectional forwarding callbacks.

Metrics Architecture

The metrics system has three layers:

  1. Connection Records (record.bytesReceived/bytesSent): Track total bytes per connection
  2. ThroughputTracker: Accumulates bytes between samples for rate calculations (bytes/second)
  3. connectionByteTrackers: Track bytes per connection with timestamps for per-route/IP metrics

Total byte counts come from connection records only, preventing double counting.
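A minimal sketch of the rate-calculation layer (the class shape here is illustrative, not SmartProxy's actual ThroughputTracker): bytes accumulate between samples, and each sample drains the window so totals never come from this layer.

```typescript
// Accumulates bytes between samples for bytes/second rate calculation.
// Totals are NOT derived from this -- they come from connection records.
class ThroughputTracker {
  private accumulated = 0;

  add(bytes: number): void {
    this.accumulated += bytes;
  }

  // Returns the average rate over the elapsed window and resets it.
  sample(elapsedSeconds: number): number {
    const rate = this.accumulated / elapsedSeconds;
    this.accumulated = 0;
    return rate;
  }
}
```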

Understanding "High" Byte Counts

If byte counts seem high compared to actual application data, remember:

  • TLS handshakes can be 1-5 KB depending on cipher suites and certificates
  • Each TLS record has 5 bytes of header overhead
  • TLS encryption adds 16-48 bytes of padding/MAC per record
  • HTTP/2 has additional framing overhead
  • WebSocket has frame headers (2-14 bytes per message)

This overhead is real network traffic and should be counted for accurate throughput metrics.
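A rough back-of-the-envelope estimate of TLS record overhead, using the figures above (5-byte record header, 16-48 bytes of padding/MAC per record; the 34-byte per-record total below is an assumed midpoint, not a measured value):

```typescript
// Estimate wire bytes for a payload split into TLS records.
// Defaults: 16 KiB max record size, 5-byte header + ~29 bytes MAC/padding.
function tlsWireBytes(
  payloadBytes: number,
  recordSize = 16384,
  perRecordOverhead = 5 + 29,
): number {
  const records = Math.ceil(payloadBytes / recordSize);
  return payloadBytes + records * perRecordOverhead;
}
```

So a 16 KiB response already carries a few dozen extra bytes on the wire, before counting the handshake itself.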

Byte Counting Paths

There are two mutually exclusive paths for connections:

  1. Direct forwarding (route-connection-handler.ts):

    • Used for TCP passthrough, TLS passthrough, and direct connections
    • Bytes counted in setupBidirectionalForwarding callbacks
    • Initial chunk NOT counted separately (flows through bidirectional forwarding)
  2. HttpProxy forwarding (http-proxy-bridge.ts):

    • Used for TLS termination (terminate, terminate-and-reencrypt)
    • Initial chunk counted when written to proxy
    • All subsequent bytes counted in setupBidirectionalForwarding callbacks
    • This is the ONLY counting point for these connections
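The mutual exclusion between the two paths can be expressed as a simple dispatch on the TLS mode (the type and function names here are hypothetical shorthand for the routing logic, not actual SmartProxy identifiers):

```typescript
// Which file owns byte counting for a connection, based on TLS handling.
type TlsMode = "passthrough" | "terminate" | "terminate-and-reencrypt" | "none";

function countingPath(mode: TlsMode): "direct" | "http-proxy" {
  // TLS termination goes through http-proxy-bridge.ts; everything else
  // (TCP passthrough, TLS passthrough, direct) stays in
  // route-connection-handler.ts. A connection takes exactly one path.
  return mode === "terminate" || mode === "terminate-and-reencrypt"
    ? "http-proxy"
    : "direct";
}
```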

Byte Counting Audit (2025-01-06)

A comprehensive audit was performed to verify byte counting accuracy:

Audit Results:

  • No double counting detected in any connection flow
  • Each byte counted exactly once in each direction
  • Connection records and metrics updated consistently
  • PROXY protocol headers correctly excluded from client metrics
  • NFTables-forwarded connections correctly not counted (forwarding happens in the kernel, invisible to the application)

Key Implementation Points:

  • All byte counting happens in only 2 files: route-connection-handler.ts and http-proxy-bridge.ts
  • Both use the same pattern: increment record.bytesReceived/Sent AND call metricsCollector.recordBytes()
  • Initial chunks handled correctly: stored but not counted until forwarded
  • TLS alerts counted as sent bytes (correct - they are sent to client)

For full audit details, see readme.byte-counting-audit.md

Connection Cleanup

Zombie Connection Detection

The connection manager performs comprehensive zombie detection every 10 seconds:

  • Full zombies: Both incoming and outgoing sockets destroyed but connection not cleaned up
  • Half zombies: One socket destroyed, grace period expired (5 minutes for TLS, 30 seconds for non-TLS)
  • Stuck connections: Data received but none sent back after threshold (5 minutes for TLS, 60 seconds for non-TLS)
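The full- and half-zombie rules can be sketched as a pure classification function (field names are hypothetical; the stuck-connection check is omitted for brevity since it needs per-direction byte history):

```typescript
// Classify a connection using the zombie rules described above.
// Grace periods from the text: 5 minutes for TLS, 30 seconds otherwise.
interface Conn {
  incomingDestroyed: boolean;
  outgoingDestroyed: boolean;
  isTLS: boolean;
  lastActivityMs: number; // timestamp of last socket activity
}

function classify(
  conn: Conn,
  nowMs: number,
): "full-zombie" | "half-zombie" | "alive" {
  if (conn.incomingDestroyed && conn.outgoingDestroyed) {
    return "full-zombie"; // both sockets gone, record not cleaned up
  }
  if (conn.incomingDestroyed || conn.outgoingDestroyed) {
    const graceMs = conn.isTLS ? 5 * 60_000 : 30_000;
    if (nowMs - conn.lastActivityMs > graceMs) return "half-zombie";
  }
  return "alive";
}
```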

Cleanup Queue

Connections are cleaned up through a batched queue system:

  • Batch size: 100 connections
  • Processing triggered immediately when batch size reached
  • Otherwise processed after 100ms delay
  • Prevents overwhelming the system during mass disconnections

Keep-Alive Handling

Keep-alive connections receive special treatment based on the keepAliveTreatment setting:

  • standard: Normal timeout applies
  • extended: Timeout multiplied by keepAliveInactivityMultiplier (default 6x)
  • immortal: No timeout, connections persist indefinitely
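The timeout selection can be sketched as follows (the 6x default multiplier is from the text; returning null for "no timeout" is this sketch's convention, not necessarily the actual implementation's):

```typescript
// Resolve the effective inactivity timeout for a keep-alive connection.
type KeepAliveTreatment = "standard" | "extended" | "immortal";

function effectiveTimeoutMs(
  baseTimeoutMs: number,
  treatment: KeepAliveTreatment,
  keepAliveInactivityMultiplier = 6,
): number | null {
  switch (treatment) {
    case "standard":
      return baseTimeoutMs; // normal timeout applies
    case "extended":
      return baseTimeoutMs * keepAliveInactivityMultiplier;
    case "immortal":
      return null; // no timeout: connection persists indefinitely
  }
}
```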

PROXY Protocol

The system supports both receiving and sending PROXY protocol:

  • Receiving: Automatically detected from trusted proxy IPs (configured in proxyIPs)
  • Sending: Enabled per-route or globally via sendProxyProtocol setting
  • Real client IP is preserved and used for all connection tracking and security checks
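For reference, the text form of a PROXY protocol v1 header (the human-readable variant defined by the HAProxy spec) looks like this; since it is written to the backend connection, it never appears in the client-facing byte counts described above:

```typescript
// Build a PROXY protocol v1 header carrying the real client address.
function proxyV1Header(
  srcIp: string,
  dstIp: string,
  srcPort: number,
  dstPort: number,
  family: "TCP4" | "TCP6" = "TCP4",
): string {
  // v1 is a single CRLF-terminated ASCII line prepended to the stream.
  return `PROXY ${family} ${srcIp} ${dstIp} ${srcPort} ${dstPort}\r\n`;
}
```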