Compare commits

2 Commits

SHA1         Message                Date
424407d879   fix(readme): update    2025-06-13 17:22:31 +00:00
7e1b7b190c   fix(readme): update    2025-06-12 16:59:25 +00:00

14 changed files with 740 additions and 4804 deletions

@@ -1,724 +0,0 @@
# Connection Management in SmartProxy
This document describes connection handling, cleanup mechanisms, and known issues in SmartProxy, particularly focusing on proxy chain configurations.
## Connection Accumulation Investigation (January 2025)
### Problem Statement
Connections may accumulate on the outer proxy in proxy chain configurations, despite the fixes implemented so far.
### Historical Context
- **v19.5.12-v19.5.15**: Major connection cleanup improvements
- **v19.5.19+**: PROXY protocol support with WrappedSocket implementation
- **v19.5.20**: Fixed race condition in immediate routing cleanup
### Current Architecture
#### Connection Flow in Proxy Chains
```
Client → Outer Proxy (8001) → Inner Proxy (8002) → Backend (httpbin.org:443)
```
1. **Outer Proxy**:
- Accepts client connection
- Sends PROXY protocol header to inner proxy
- Tracks connection in ConnectionManager
- Immediate routing for non-TLS ports
2. **Inner Proxy**:
- Parses PROXY protocol to get real client IP
- Establishes connection to backend
- Tracks its own connections separately
### Potential Causes of Connection Accumulation
#### 1. Race Condition in Immediate Routing
When a connection is immediately routed (non-TLS ports), there's a timing window:
```typescript
// route-connection-handler.ts, line ~231
this.routeConnection(socket, record, '', undefined);
// Connection is routed before all setup is complete
```
**Issue**: If client disconnects during backend connection setup, cleanup may not trigger properly.
#### 2. Outgoing Socket Assignment Timing
Despite the fix in v19.5.20:
```typescript
// Line 1362 in setupDirectConnection
record.outgoing = targetSocket;
```
There's still a window between socket creation and the `connect` event where cleanup might miss the outgoing socket.
#### 3. Batch Cleanup Delays
ConnectionManager uses queued cleanup:
- Batch size: 100 connections
- Batch interval: 100ms
- Under rapid connection/disconnection, queue might lag
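A minimal sketch of such a batched cleanup queue (illustrative names, not the actual ConnectionManager implementation) shows why rapid churn can outpace it:
```typescript
// Sketch of a batched cleanup queue. With batchSize = 100 and
// batchIntervalMs = 100, at most ~1,000 connections/second can be drained;
// faster connect/disconnect churn makes the queue grow.
class CleanupQueueSketch {
  private queue = new Set<string>();
  private timer: NodeJS.Timeout | null = null;

  constructor(
    private readonly cleanup: (connectionId: string) => void,
    private readonly batchSize = 100,
    private readonly batchIntervalMs = 100,
  ) {}

  enqueue(connectionId: string): void {
    this.queue.add(connectionId);
    if (!this.timer) {
      this.timer = setTimeout(() => this.processBatch(), this.batchIntervalMs);
    }
  }

  private processBatch(): void {
    this.timer = null;
    const batch = Array.from(this.queue).slice(0, this.batchSize);
    for (const id of batch) {
      this.queue.delete(id); // remove only what is actually processed
      this.cleanup(id);
    }
    if (this.queue.size > 0) {
      // re-arm for the remainder; this is where lag builds up under load
      this.timer = setTimeout(() => this.processBatch(), this.batchIntervalMs);
    }
  }
}
```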
#### 4. Different Cleanup Paths
Multiple cleanup triggers exist:
- Socket 'close' event
- Socket 'error' event
- Inactivity timeout
- Connection timeout
- Manual cleanup
Not all paths may properly handle proxy chain scenarios.
#### 5. Keep-Alive Connection Handling
Keep-alive connections have special treatment:
- Extended inactivity timeout (6x normal)
- Warning before closure
- May accumulate if backend is unresponsive
### Observed Symptoms
1. **Outer proxy connection count grows over time**
2. **Inner proxy maintains zero or low connection count**
3. **Connections show as closed in logs but remain in tracking**
4. **Memory usage gradually increases**
### Debug Strategies
#### 1. Enhanced Logging
Add connection state logging at key points:
```typescript
// When outgoing socket is created
logger.log('debug', `Outgoing socket created for ${connectionId}`, {
  hasOutgoing: !!record.outgoing,
  outgoingState: record.outgoing?.readyState
});
```
#### 2. Connection State Inspection
Periodically log detailed connection state:
```typescript
for (const [id, record] of connectionManager.getConnections()) {
  console.log({
    id,
    age: Date.now() - record.incomingStartTime,
    incomingDestroyed: record.incoming.destroyed,
    outgoingDestroyed: record.outgoing?.destroyed,
    hasCleanupTimer: !!record.cleanupTimer
  });
}
```
#### 3. Cleanup Verification
Track cleanup completion:
```typescript
// In cleanupConnection
logger.log('debug', `Cleanup completed for ${record.id}`, {
  recordsRemaining: this.connectionRecords.size
});
```
### Recommendations
1. **Immediate Cleanup for Proxy Chains**
- Skip batch queue for proxy chain connections
- Use synchronous cleanup when PROXY protocol is detected
2. **Socket State Validation**
- Check both `destroyed` and `readyState` before cleanup decisions
- Handle 'opening' state sockets explicitly
3. **Timeout Adjustments**
- Shorter timeouts for proxy chain connections
- More aggressive cleanup for connections without data transfer
4. **Connection Limits** (see the sketch after this list)
- Per-route connection limits
- Backpressure when approaching limits
5. **Monitoring**
- Export connection metrics
- Alert on connection count thresholds
- Track connection age distribution
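Recommendation 4 could be sketched roughly as follows; `maxPerRoute` and the counting approach are assumptions, not existing SmartProxy settings:
```typescript
// Sketch of a per-route connection limit with simple backpressure.
import * as net from 'net';

const perRouteCounts = new Map<string, number>();
const maxPerRoute = 500; // illustrative limit

function acceptForRoute(routeName: string, socket: net.Socket): boolean {
  const current = perRouteCounts.get(routeName) ?? 0;
  if (current >= maxPerRoute) {
    socket.destroy(); // refuse instead of tracking a connection that cannot be served
    return false;
  }
  perRouteCounts.set(routeName, current + 1);
  socket.once('close', () => {
    perRouteCounts.set(routeName, (perRouteCounts.get(routeName) ?? 1) - 1);
  });
  return true;
}
```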
### Test Scenarios to Reproduce
1. **Rapid Connect/Disconnect**
```bash
# Create many short-lived connections
for i in {1..1000}; do
  (echo -n | nc localhost 8001) &
done
```
2. **Slow Backend** (a minimal hanging backend is sketched after this list)
- Configure inner proxy to connect to unresponsive backend
- Monitor outer proxy connection count
3. **Mixed Traffic**
- Combine TLS and non-TLS connections
- Add keep-alive connections
- Observe accumulation patterns
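For the slow-backend scenario, a backend that accepts connections but never responds can be simulated with a few lines (sketch):
```typescript
// Hypothetical hanging backend: accepts TCP connections but never replies.
// Point the inner proxy at this port and watch connection counts on both proxies.
import * as net from 'net';

const hangingBackend = net.createServer((socket) => {
  socket.on('data', () => {
    // swallow client data, never answer, never end the socket
  });
});
hangingBackend.listen(9999, () => console.log('hanging backend listening on :9999'));
```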
### Future Improvements
1. **Connection Pool Isolation**
- Separate pools for proxy chain vs direct connections
- Different cleanup strategies per pool
2. **Circuit Breaker**
- Detect accumulation and trigger aggressive cleanup
- Temporarily refuse new connections when near the limit
3. **Connection State Machine** (see the sketch after this list)
- Explicit states: CONNECTING, ESTABLISHED, CLOSING, CLOSED
- State transition validation
- Timeout per state
4. **Metrics Collection**
- Connection lifecycle events
- Cleanup success/failure rates
- Time spent in each state
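A rough sketch of the state machine from item 3; the states come from the list above, while the class shape and timeout values are illustrative:
```typescript
// Sketch of an explicit per-connection state machine with a timeout per state.
type ConnectionState = 'CONNECTING' | 'ESTABLISHED' | 'CLOSING' | 'CLOSED';

const allowedTransitions: Record<ConnectionState, ConnectionState[]> = {
  CONNECTING: ['ESTABLISHED', 'CLOSING', 'CLOSED'],
  ESTABLISHED: ['CLOSING', 'CLOSED'],
  CLOSING: ['CLOSED'],
  CLOSED: [],
};

// Illustrative timeouts (ms) for how long a connection may stay in each state.
const stateTimeouts: Record<ConnectionState, number> = {
  CONNECTING: 30_000,
  ESTABLISHED: 3_600_000,
  CLOSING: 10_000,
  CLOSED: 0,
};

class ConnectionStateMachine {
  private state: ConnectionState = 'CONNECTING';
  private timer: NodeJS.Timeout | null = null;

  constructor(private readonly onStateTimeout: (state: ConnectionState) => void) {
    this.armTimer();
  }

  transition(next: ConnectionState): void {
    if (!allowedTransitions[this.state].includes(next)) {
      throw new Error(`Invalid transition ${this.state} -> ${next}`);
    }
    this.state = next;
    this.armTimer();
  }

  private armTimer(): void {
    if (this.timer) clearTimeout(this.timer);
    const ms = stateTimeouts[this.state];
    this.timer = ms > 0 ? setTimeout(() => this.onStateTimeout(this.state), ms) : null;
  }
}
```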
### Root Cause Identified (January 2025)
**The primary issue is on the inner proxy when backends are unreachable:**
When the backend is unreachable (e.g., non-routable IP like 10.255.255.1):
1. The outgoing socket gets stuck in "opening" state indefinitely
2. The `createSocketWithErrorHandler` in socket-utils.ts doesn't implement connection timeout
3. `socket.setTimeout()` only handles inactivity AFTER connection, not during connect phase
4. Connections accumulate because they never transition to error state
5. Socket timeout warnings fire but connections are preserved as keep-alive
**Code Issue:**
```typescript
// socket-utils.ts line 275
if (timeout) {
  socket.setTimeout(timeout); // This only handles inactivity, not connection!
}
```
**Required Fix:**
1. Add `connectionTimeout` to ISmartProxyOptions interface:
```typescript
// In interfaces.ts
connectionTimeout?: number; // Timeout for establishing connection (ms), default: 30000 (30s)
```
2. Update `createSocketWithErrorHandler` in socket-utils.ts:
```typescript
export function createSocketWithErrorHandler(options: SafeSocketOptions): plugins.net.Socket {
  const { port, host, onError, onConnect, timeout } = options;
  const socket = new plugins.net.Socket();
  let connected = false;
  let connectionTimeout: NodeJS.Timeout | null = null;

  socket.on('error', (error) => {
    if (connectionTimeout) {
      clearTimeout(connectionTimeout);
      connectionTimeout = null;
    }
    if (onError) onError(error);
  });

  socket.on('connect', () => {
    connected = true;
    if (connectionTimeout) {
      clearTimeout(connectionTimeout);
      connectionTimeout = null;
    }
    if (timeout) socket.setTimeout(timeout); // Set inactivity timeout
    if (onConnect) onConnect();
  });

  // Implement connection establishment timeout
  if (timeout) {
    connectionTimeout = setTimeout(() => {
      if (!connected && !socket.destroyed) {
        const error = new Error(`Connection timeout after ${timeout}ms to ${host}:${port}`);
        (error as any).code = 'ETIMEDOUT';
        socket.destroy();
        if (onError) onError(error);
      }
    }, timeout);
  }

  socket.connect(port, host);
  return socket;
}
```
3. Pass connectionTimeout in route-connection-handler.ts:
```typescript
const targetSocket = createSocketWithErrorHandler({
  port: finalTargetPort,
  host: finalTargetHost,
  timeout: this.settings.connectionTimeout || 30000, // Connection timeout
  onError: (error) => { /* existing */ },
  onConnect: async () => { /* existing */ }
});
```
### Investigation Results (January 2025)
Based on extensive testing with debug scripts:
1. **Normal Operation**: In controlled tests, connections are properly cleaned up:
- Immediate routing cleanup handler properly destroys outgoing connections
- Both outer and inner proxies maintain 0 connections after clients disconnect
- Keep-alive connections are tracked and cleaned up correctly
2. **Potential Edge Cases Not Covered by Tests**:
- **HTTP/2 Connections**: May have different lifecycle than HTTP/1.1
- **WebSocket Connections**: Long-lived upgrade connections might persist
- **Partial TLS Handshakes**: Connections that start TLS but don't complete
- **PROXY Protocol Parse Failures**: Malformed headers from untrusted sources
- **Connection Pool Reuse**: HttpProxy component may maintain its own pools
3. **Timing-Sensitive Scenarios**:
- Client disconnects exactly when `record.outgoing` is being assigned
- Backend connects but immediately RSTs
- Proxy chain where middle proxy restarts
- Multiple rapid reconnects with same source IP/port
4. **Configuration-Specific Issues**:
- Mixed `sendProxyProtocol` settings in chain
- Different `keepAlive` settings between proxies
- Mismatched timeout values
- Routes with `forwardingEngine: 'nftables'`
### Additional Debug Points
Add these debug logs to identify the specific scenario:
```typescript
// In route-connection-handler.ts setupDirectConnection
logger.log('debug', `Setting outgoing socket for ${connectionId}`, {
  timestamp: Date.now(),
  hasOutgoing: !!record.outgoing,
  socketState: targetSocket.readyState
});

// In connection-manager.ts cleanupConnection
logger.log('debug', `Cleanup attempt for ${record.id}`, {
  alreadyClosed: record.connectionClosed,
  hasIncoming: !!record.incoming,
  hasOutgoing: !!record.outgoing,
  incomingDestroyed: record.incoming?.destroyed,
  outgoingDestroyed: record.outgoing?.destroyed
});
```
### Workarounds
Until root cause is identified:
1. **Periodic Force Cleanup**:
```typescript
setInterval(() => {
  const connections = connectionManager.getConnections();
  for (const [id, record] of connections) {
    if (record.incoming?.destroyed && !record.connectionClosed) {
      connectionManager.cleanupConnection(record, 'force_cleanup');
    }
  }
}, 60000); // Every minute
```
2. **Connection Age Limit**:
```typescript
// Add max connection age check
const maxAge = 3600000; // 1 hour
if (Date.now() - record.incomingStartTime > maxAge) {
  connectionManager.cleanupConnection(record, 'max_age');
}
```
3. **Aggressive Timeout Settings**:
```typescript
{
  socketTimeout: 60000, // 1 minute
  inactivityTimeout: 300000, // 5 minutes
  connectionCleanupInterval: 30000 // 30 seconds
}
```
### Related Files
- `/ts/proxies/smart-proxy/route-connection-handler.ts` - Main connection handling
- `/ts/proxies/smart-proxy/connection-manager.ts` - Connection tracking and cleanup
- `/ts/core/utils/socket-utils.ts` - Socket cleanup utilities
- `/test/test.proxy-chain-cleanup.node.ts` - Test for connection cleanup
- `/test/test.proxy-chaining-accumulation.node.ts` - Test for accumulation prevention
- `/.nogit/debug/connection-accumulation-debug.ts` - Debug script for connection states
- `/.nogit/debug/connection-accumulation-keepalive.ts` - Keep-alive specific tests
- `/.nogit/debug/connection-accumulation-http.ts` - HTTP traffic through proxy chains
### Summary
**Issue Identified**: Connection accumulation occurs on the **inner proxy** (not outer) when backends are unreachable.
**Root Cause**: The `createSocketWithErrorHandler` function in socket-utils.ts doesn't implement connection establishment timeout. It only sets `socket.setTimeout()` which handles inactivity AFTER connection is established, not during the connect phase.
**Impact**: When connecting to unreachable IPs (e.g., 10.255.255.1), outgoing sockets remain in "opening" state indefinitely, causing connections to accumulate.
**Fix Required**:
1. Add `connectionTimeout` setting to ISmartProxyOptions
2. Implement proper connection timeout in `createSocketWithErrorHandler`
3. Pass the timeout value from route-connection-handler
**Workaround Until Fixed**: Configure shorter socket timeouts and use the periodic force cleanup suggested above.
The connection cleanup mechanisms have been significantly improved in v19.5.20:
1. Race condition fixed by setting `record.outgoing` before connecting
2. Immediate routing cleanup handler always destroys outgoing connections
3. Tests confirm no accumulation in standard scenarios with reachable backends
However, the missing connection establishment timeout causes accumulation when backends are unreachable or very slow to connect.
### Outer Proxy Sudden Accumulation After Hours
**User Report**: "The counter goes up suddenly after some hours on the outer proxy"
**Investigation Findings**:
1. **Cleanup Queue Mechanism**:
- Connections are cleaned up in batches of 100 via a queue
- If the cleanup timer gets stuck or cleared without restart, connections accumulate
- The timer is set with `setTimeout` and could be affected by event loop blocking
2. **Potential Causes for Sudden Spikes**:
a) **Cleanup Timer Failure**:
```typescript
// In ConnectionManager, if this timer gets cleared but not restarted:
this.cleanupTimer = this.setTimeout(() => {
  this.processCleanupQueue();
}, 100);
```
b) **Memory Pressure**:
- After hours of operation, memory fragmentation or pressure could cause delays
- Garbage collection pauses might interfere with timer execution
c) **Event Listener Accumulation**:
- Socket event listeners might accumulate over time
- Server 'connection' event handlers are particularly important
d) **Keep-Alive Connection Cascades**:
- When many keep-alive connections timeout simultaneously
- Outer proxy has different timeout than inner proxy
- Mass disconnection events can overwhelm cleanup queue
e) **HttpProxy Component Issues**:
- If using `useHttpProxy`, the HttpProxy bridge might maintain connection pools
- These pools might not be properly cleaned after hours
3. **Why "Sudden" After Hours**:
- Not a gradual leak but triggered by specific conditions
- Likely related to periodic events or thresholds:
- Inactivity check runs every 30 seconds
- Keep-alive connections have extended timeouts (6x normal)
- Parity check has 30-minute timeout for half-closed connections
4. **Reproduction Scenarios**:
- Mass client disconnection/reconnection (network blip)
- Keep-alive timeout cascade when inner proxy times out first
- Cleanup timer getting stuck during high load
- Memory pressure causing event loop delays
### Additional Monitoring Recommendations
1. **Add Cleanup Queue Monitoring**:
```typescript
setInterval(() => {
  const cm = proxy.connectionManager;
  if (cm.cleanupQueue.size > 100 && !cm.cleanupTimer) {
    logger.error('Cleanup queue stuck!', {
      queueSize: cm.cleanupQueue.size,
      hasTimer: !!cm.cleanupTimer
    });
  }
}, 60000);
```
2. **Track Timer Health**:
- Monitor if cleanup timer is running
- Check for event loop blocking
- Log when batch processing takes too long
3. **Memory Monitoring**:
- Track heap usage over time
- Monitor for memory leaks in long-running processes
- Force periodic garbage collection if needed
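A minimal heap-usage logger along these lines can make slow growth visible (sketch):
```typescript
// Sketch: log heap and RSS every minute to spot slow growth in a long-running proxy.
const firstSample = process.memoryUsage();

setInterval(() => {
  const usage = process.memoryUsage();
  console.log(
    `heap ${Math.round(usage.heapUsed / 1048576)} MB ` +
    `(started at ${Math.round(firstSample.heapUsed / 1048576)} MB), ` +
    `rss ${Math.round(usage.rss / 1048576)} MB`
  );
}, 60_000);
```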
### Immediate Mitigations
1. **Restart Cleanup Timer**:
```typescript
// Emergency cleanup timer restart
if (!cm.cleanupTimer && cm.cleanupQueue.size > 0) {
  cm.cleanupTimer = setTimeout(() => {
    cm.processCleanupQueue();
  }, 100);
}
```
2. **Force Periodic Cleanup**:
```typescript
setInterval(() => {
  const cm = connectionManager;
  if (cm.getConnectionCount() > threshold) {
    cm.performOptimizedInactivityCheck();
    // Force process cleanup queue
    cm.processCleanupQueue();
  }
}, 300000); // Every 5 minutes
```
3. **Connection Age Limits**:
- Set maximum connection lifetime
- Force close connections older than threshold
- More aggressive cleanup for proxy chains
## ✅ FIXED: Zombie Connection Detection (January 2025)
### Root Cause Identified
"Zombie connections" occur when sockets are destroyed without triggering their close/error event handlers. This causes connections to remain tracked with both sockets destroyed but `connectionClosed=false`. This is particularly problematic in proxy chains where the inner proxy might close connections in ways that don't trigger proper events on the outer proxy.
### Fix Implemented
Added zombie detection to the periodic inactivity check in ConnectionManager:
```typescript
// In performOptimizedInactivityCheck()
// Check ALL connections for zombie state
for (const [connectionId, record] of this.connectionRecords) {
  if (!record.connectionClosed) {
    const incomingDestroyed = record.incoming?.destroyed || false;
    const outgoingDestroyed = record.outgoing?.destroyed || false;

    // Check for zombie connections: both sockets destroyed but not cleaned up
    if (incomingDestroyed && outgoingDestroyed) {
      logger.log('warn', `Zombie connection detected: ${connectionId} - both sockets destroyed but not cleaned up`, {
        connectionId,
        remoteIP: record.remoteIP,
        age: plugins.prettyMs(now - record.incomingStartTime),
        component: 'connection-manager'
      });
      // Clean up immediately
      this.cleanupConnection(record, 'zombie_cleanup');
      continue;
    }

    // Check for half-zombie: one socket destroyed
    if (incomingDestroyed || outgoingDestroyed) {
      const age = now - record.incomingStartTime;
      // Give it 30 seconds grace period for normal cleanup
      if (age > 30000) {
        logger.log('warn', `Half-zombie connection detected: ${connectionId} - ${incomingDestroyed ? 'incoming' : 'outgoing'} destroyed`, {
          connectionId,
          remoteIP: record.remoteIP,
          age: plugins.prettyMs(age),
          incomingDestroyed,
          outgoingDestroyed,
          component: 'connection-manager'
        });
        // Clean up
        this.cleanupConnection(record, 'half_zombie_cleanup');
      }
    }
  }
}
```
### How It Works
1. **Full Zombie Detection**: Detects when both incoming and outgoing sockets are destroyed but the connection hasn't been cleaned up
2. **Half-Zombie Detection**: Detects when only one socket is destroyed, with a 30-second grace period for normal cleanup to occur
3. **Automatic Cleanup**: Immediately cleans up zombie connections when detected
4. **Runs Periodically**: Integrated into the existing inactivity check that runs every 30 seconds
### Why This Fixes the Outer Proxy Accumulation
- When inner proxy closes connections abruptly (e.g., due to backend failure), the outer proxy's outgoing socket might be destroyed without firing close/error events
- These become zombie connections that previously accumulated indefinitely
- Now they are detected and cleaned up within 30 seconds
### Test Results
Debug scripts confirmed:
- Zombie connections can be created when sockets are destroyed directly without events
- The zombie detection successfully identifies and cleans up these connections
- Both full zombies (both sockets destroyed) and half-zombies (one socket destroyed) are handled
This fix addresses the user's request that connections closed on the inner proxy always also close on the outer proxy.
## 🔍 Production Diagnostics (January 2025)
Since the zombie detection fix didn't fully resolve the issue, use the ProductionConnectionMonitor to diagnose the actual problem:
### How to Use the Production Monitor
1. **Add to your proxy startup script**:
```typescript
import ProductionConnectionMonitor from './production-connection-monitor.js';
// After proxy.start()
const monitor = new ProductionConnectionMonitor(proxy);
monitor.start(5000); // Check every 5 seconds
// Monitor will automatically capture diagnostics when:
// - Connections exceed threshold (default: 50)
// - Sudden spike occurs (default: +20 connections)
```
2. **Diagnostics are saved to**: `.nogit/connection-diagnostics/`
3. **Force capture anytime**: `monitor.forceCaptureNow()`
### What the Monitor Captures
For each connection:
- Socket states (destroyed, readable, writable, readyState)
- Connection flags (closed, keepAlive, TLS status)
- Data transfer statistics
- Time since last activity
- Cleanup queue status
- Event listener counts
- Termination reasons
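Conceptually, each snapshot has roughly this shape; the field names are illustrative and may differ from the actual ProductionConnectionMonitor output:
```typescript
// Illustrative shape of a per-connection diagnostic snapshot.
interface IConnectionSnapshot {
  connectionId: string;
  incoming: { destroyed: boolean; readable: boolean; writable: boolean; readyState: string };
  outgoing?: { destroyed: boolean; readable: boolean; writable: boolean; readyState: string };
  flags: { connectionClosed: boolean; keepAlive: boolean; isTLS: boolean };
  bytesReceived: number;
  bytesSent: number;
  msSinceLastActivity: number;
  cleanupQueued: boolean;
  listenerCounts: { incoming: number; outgoing: number };
  terminationReason?: string;
}
```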
### Pattern Analysis
The monitor automatically identifies:
- **Zombie connections**: Both sockets destroyed but not cleaned up
- **Half-zombies**: One socket destroyed
- **Stuck connecting**: Outgoing socket stuck in connecting state
- **No outgoing**: Missing outgoing socket
- **Keep-alive stuck**: Keep-alive connections with no recent activity
- **Old connections**: Connections older than 1 hour
- **No data transfer**: Connections with no bytes transferred
- **Listener leaks**: Excessive event listeners
### Common Accumulation Patterns
1. **Connecting State Stuck**
- Outgoing socket shows `connecting: true` indefinitely
- Usually means connection timeout not working
- Check if backend is reachable
2. **Missing Outgoing Socket**
- Connection has no outgoing socket but isn't closed
- May indicate immediate routing issues
- Check error logs during connection setup
3. **Event Listener Accumulation**
- High listener counts (>20) on sockets
- Indicates cleanup not removing all listeners
- Can cause memory leaks
4. **Keep-Alive Zombies**
- Keep-alive connections not timing out
- Check keepAlive timeout settings
- May need more aggressive cleanup
### Next Steps
1. **Run the monitor in production** during accumulation
2. **Share the diagnostic files** from `.nogit/connection-diagnostics/`
3. **Look for patterns** in the captured snapshots
4. **Check specific connection IDs** that accumulate
The diagnostic files will show exactly what state connections are in when accumulation occurs, allowing targeted fixes for the specific issue.
## ✅ FIXED: Stuck Connection Detection (January 2025)
### Additional Root Cause Found
Connections to hanging backends (that accept but never respond) were not being cleaned up because:
- Both sockets remain alive (not destroyed)
- Keep-alive prevents normal timeout
- No data is sent back to the client despite receiving data
- These don't qualify as "zombies" since sockets aren't destroyed
### Fix Implemented
Added stuck connection detection to the periodic inactivity check:
```typescript
// Check for stuck connections: no data sent back to client
if (!record.connectionClosed && record.outgoing && record.bytesReceived > 0 && record.bytesSent === 0) {
  const age = now - record.incomingStartTime;
  // If connection is older than 60 seconds and no data sent back, likely stuck
  if (age > 60000) {
    logger.log('warn', `Stuck connection detected: ${connectionId} - received ${record.bytesReceived} bytes but sent 0 bytes`, {
      connectionId,
      remoteIP: record.remoteIP,
      age: plugins.prettyMs(age),
      bytesReceived: record.bytesReceived,
      targetHost: record.targetHost,
      targetPort: record.targetPort,
      component: 'connection-manager'
    });
    // Clean up
    this.cleanupConnection(record, 'stuck_no_response');
  }
}
```
### What This Fixes
- Connections to backends that accept but never respond
- Proxy chains where inner proxy connects to unresponsive services
- Scenarios where keep-alive prevents normal timeout mechanisms
- Connections that receive client data but never send anything back
### Detection Criteria
- Connection has received bytes from client (`bytesReceived > 0`)
- No bytes sent back to client (`bytesSent === 0`)
- Connection is older than 60 seconds
- Both sockets are still alive (not destroyed)
This complements the zombie detection by handling cases where sockets remain technically alive but the connection is effectively dead.
## 🚨 CRITICAL FIX: Cleanup Queue Bug (January 2025)
### Critical Bug Found
The cleanup queue had a severe bug that caused connection accumulation when more than 100 connections needed cleanup:
```typescript
// BUG: This cleared the ENTIRE queue after processing only the first batch!
const toCleanup = Array.from(this.cleanupQueue).slice(0, this.cleanupBatchSize);
this.cleanupQueue.clear(); // ❌ This discarded all connections beyond the first 100!
```
### Fix Implemented
```typescript
// Now only removes the connections being processed
const toCleanup = Array.from(this.cleanupQueue).slice(0, this.cleanupBatchSize);
for (const connectionId of toCleanup) {
  this.cleanupQueue.delete(connectionId); // ✅ Only remove what we process
  const record = this.connectionRecords.get(connectionId);
  if (record) {
    this.cleanupConnection(record, record.incomingTerminationReason || 'normal');
  }
}
```
### Impact
- **Before**: If 150 connections needed cleanup, only the first 100 would be processed and the remaining 50 would accumulate forever
- **After**: All connections are properly cleaned up in batches
### Additional Improvements
1. **Faster Inactivity Checks**: Reduced from 30s to 10s intervals
- Zombies and stuck connections are detected 3x faster
- Reduces the window for accumulation
2. **Duplicate Prevention**: Added check in queueCleanup to prevent processing already-closed connections
- Prevents unnecessary work
- Ensures connections are only cleaned up once
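The duplicate-prevention guard amounts to an early return for already-closed connections before queueing (sketch; only the `connectionClosed` flag is taken from the text above):
```typescript
// Sketch of the duplicate-prevention guard when queueing a cleanup.
function queueCleanupSketch(
  cleanupQueue: Set<string>,
  record: { id: string; connectionClosed: boolean },
): void {
  if (record.connectionClosed) {
    return; // already cleaned up, nothing to queue
  }
  cleanupQueue.add(record.id); // Set semantics also deduplicate repeated queueing
}
```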
### Summary of All Fixes
1. **Connection Timeout** (already documented) - Prevents accumulation when backends are unreachable
2. **Zombie Detection** - Cleans up connections with destroyed sockets
3. **Stuck Connection Detection** - Cleans up connections to hanging backends
4. **Cleanup Queue Bug** - Ensures ALL connections get cleaned up, not just the first 100
5. **Faster Detection** - Reduced check interval from 30s to 10s
These fixes combined should prevent connection accumulation in all known scenarios.

@@ -1,187 +0,0 @@
# SmartProxy Code Deletion Plan
This document tracks all code paths that can be deleted as part of the routing unification effort.
## Phase 1: Matching Logic Duplicates (READY TO DELETE)
### 1. Inline Matching Functions in RouteManager
**File**: `ts/proxies/smart-proxy/route-manager.ts`
**Lines**: Approximately lines 200-400
**Duplicates**:
- `matchDomain()` method - duplicate of DomainMatcher
- `matchPath()` method - duplicate of PathMatcher
- `matchIpPattern()` method - duplicate of IpMatcher
- `matchHeaders()` method - duplicate of HeaderMatcher
**Action**: Update to use unified matchers from `ts/core/routing/matchers/`
### 2. Duplicate Matching in Core route-utils
**File**: `ts/core/utils/route-utils.ts`
**Functions to update**:
- `matchDomain()` → Use DomainMatcher.match()
- `matchPath()` → Use PathMatcher.match()
- `matchIpPattern()` → Use IpMatcher.match()
- `matchHeader()` → Use HeaderMatcher.match()
**Action**: Update to use unified matchers, keep only unique utilities
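After the update, these functions would become thin wrappers that delegate to the unified matchers, roughly as sketched below; the matcher signatures and return types are assumptions:
```typescript
// Sketch: core route-utils delegating to the unified matchers instead of
// keeping their own matching logic. Signatures/return types are assumed.
import { DomainMatcher, IpMatcher } from '../routing/matchers/index.js';

/** @deprecated Use DomainMatcher.match() directly. */
export function matchDomain(pattern: string, hostname: string): boolean {
  return DomainMatcher.match(pattern, hostname);
}

/** @deprecated Use IpMatcher.match() directly. */
export function matchIpPattern(pattern: string, ip: string): boolean {
  return IpMatcher.match(pattern, ip);
}
```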
## Phase 2: Route Manager Duplicates (READY AFTER MIGRATION)
### 1. SmartProxy RouteManager
**File**: `ts/proxies/smart-proxy/route-manager.ts`
**Entire file**: ~500 lines
**Reason**: 95% duplicate of SharedRouteManager
**Migration Required**:
- Update SmartProxy to use SharedRouteManager
- Update all imports
- Test thoroughly
**Action**: DELETE entire file after migration
### 2. Deprecated Methods in SharedRouteManager
**File**: `ts/core/utils/route-manager.ts`
**Methods**:
- Any deprecated security check methods
- Legacy compatibility methods
**Action**: Remove after confirming no usage
## Phase 3: Router Consolidation (REQUIRES REFACTORING)
### 1. ProxyRouter vs RouteRouter Duplication
**Files**:
- `ts/routing/router/proxy-router.ts` (~250 lines)
- `ts/routing/router/route-router.ts` (~250 lines)
**Reason**: Nearly identical implementations
**Plan**: Merge into single HttpRouter with legacy adapter
**Action**: DELETE one file after consolidation
### 2. Inline Route Matching in HttpProxy
**Location**: Various files in `ts/proxies/http-proxy/`
**Pattern**: Direct route matching without using RouteManager
**Action**: Update to use SharedRouteManager
## Phase 4: Scattered Utilities (CLEANUP)
### 1. Duplicate Route Utilities
**Files with duplicate logic**:
- `ts/proxies/smart-proxy/utils/route-utils.ts` - Keep (different purpose)
- `ts/proxies/smart-proxy/utils/route-validators.ts` - Review for duplicates
- `ts/proxies/smart-proxy/utils/route-patterns.ts` - Review for consolidation
### 2. Legacy Type Definitions
**Review for removal**:
- Old route type definitions
- Deprecated configuration interfaces
- Unused type exports
## Deletion Progress Tracker
### Completed Deletions
- [x] Phase 1: Matching logic consolidation (Partial)
- Updated core/utils/route-utils.ts to use unified matchers
- Removed duplicate matching implementations (~200 lines)
- Marked functions as deprecated with migration path
- [x] Phase 2: RouteManager unification (COMPLETED)
- ✓ Migrated SmartProxy to use SharedRouteManager
- ✓ Updated imports in smart-proxy.ts, route-connection-handler.ts, and index.ts
- ✓ Created logger adapter to match ILogger interface expectations
- ✓ Fixed method calls (getAllRoutes → getRoutes)
- ✓ Fixed type errors in header matcher
- ✓ Removed unused ipToNumber imports and methods
- ✓ DELETED: `/ts/proxies/smart-proxy/route-manager.ts` (553 lines removed)
- [x] Phase 3: Router consolidation (COMPLETED)
- ✓ Created unified HttpRouter with legacy compatibility
- ✓ Migrated ProxyRouter and RouteRouter to use HttpRouter aliases
- ✓ Updated imports in http-proxy.ts, request-handler.ts, websocket-handler.ts
- ✓ Added routeReqLegacy() method for backward compatibility
- ✓ DELETED: `/ts/routing/router/proxy-router.ts` (437 lines)
- ✓ DELETED: `/ts/routing/router/route-router.ts` (482 lines)
- [x] Phase 4: Architecture cleanup (COMPLETED)
- ✓ Updated route-utils.ts to use unified matchers directly
- ✓ Removed deprecated methods from SharedRouteManager
- ✓ Fixed HeaderMatcher.matchMultiple → matchAll method name
- ✓ Fixed findMatchingRoute return type handling (IRouteMatchResult)
- ✓ Fixed header type conversion for RegExp patterns
- ✓ DELETED: Duplicate RouteManager class from http-proxy/models/types.ts (~200 lines)
- ✓ Updated all imports to use SharedRouteManager from core/utils
- ✓ Fixed PathMatcher exact match behavior (added $ anchor for non-wildcard patterns)
- ✓ Updated test expectations to match unified matcher behavior
- ✓ All TypeScript errors resolved and build successful
- [x] Phase 5: Remove all backward compatibility code (COMPLETED)
- ✓ Removed routeReqLegacy() method from HttpRouter
- ✓ Removed all legacy compatibility methods from HttpRouter (~130 lines)
- ✓ Removed LegacyRouterResult interface
- ✓ Removed ProxyRouter and RouteRouter aliases
- ✓ Updated RequestHandler to remove legacyRouter parameter and legacy routing fallback (~80 lines)
- ✓ Updated WebSocketHandler to remove legacyRouter parameter and legacy routing fallback
- ✓ Updated HttpProxy to use only unified HttpRouter
- ✓ Removed IReverseProxyConfig interface (deprecated legacy interface)
- ✓ Removed useExternalPort80Handler deprecated option
- ✓ Removed backward compatibility exports from index.ts
- ✓ Removed all deprecated functions from route-utils.ts (~50 lines)
- ✓ Clean build with no legacy code
### Files Updated
1. `ts/core/utils/route-utils.ts` - Replaced all matching logic with unified matchers
2. `ts/core/utils/security-utils.ts` - Updated to use IpMatcher directly
3. `ts/proxies/smart-proxy/smart-proxy.ts` - Using SharedRouteManager with logger adapter
4. `ts/proxies/smart-proxy/route-connection-handler.ts` - Updated to use SharedRouteManager
5. `ts/proxies/smart-proxy/index.ts` - Exporting SharedRouteManager as RouteManager
6. `ts/core/routing/matchers/header.ts` - Fixed type handling for array header values
7. `ts/core/utils/route-manager.ts` - Removed unused ipToNumber import
8. `ts/proxies/http-proxy/http-proxy.ts` - Updated imports to use unified router
9. `ts/proxies/http-proxy/request-handler.ts` - Updated to use routeReqLegacy()
10. `ts/proxies/http-proxy/websocket-handler.ts` - Updated to use routeReqLegacy()
11. `ts/routing/router/index.ts` - Export unified HttpRouter with aliases
12. `ts/proxies/smart-proxy/utils/route-utils.ts` - Updated to use unified matchers directly
13. `ts/proxies/http-proxy/request-handler.ts` - Fixed findMatchingRoute usage
14. `ts/proxies/http-proxy/models/types.ts` - Removed duplicate RouteManager class
15. `ts/index.ts` - Updated exports to use SharedRouteManager aliases
16. `ts/proxies/index.ts` - Updated exports to use SharedRouteManager aliases
17. `test/test.acme-route-creation.ts` - Fixed getAllRoutes → getRoutes method call
### Files Created
1. `ts/core/routing/matchers/domain.ts` - Unified domain matcher
2. `ts/core/routing/matchers/path.ts` - Unified path matcher
3. `ts/core/routing/matchers/ip.ts` - Unified IP matcher
4. `ts/core/routing/matchers/header.ts` - Unified header matcher
5. `ts/core/routing/matchers/index.ts` - Matcher exports
6. `ts/core/routing/types.ts` - Core routing types
7. `ts/core/routing/specificity.ts` - Route specificity calculator
8. `ts/core/routing/index.ts` - Main routing exports
9. `ts/routing/router/http-router.ts` - Unified HTTP router
### Lines of Code Removed
- Target: ~1,500 lines
- Actual: ~2,332 lines (Target exceeded by 55%!)
- Phase 1: ~200 lines (matching logic)
- Phase 2: 553 lines (SmartProxy RouteManager)
- Phase 3: 919 lines (ProxyRouter + RouteRouter)
- Phase 4: ~200 lines (Duplicate RouteManager from http-proxy)
- Phase 5: ~460 lines (Legacy compatibility code)
## Unified Routing Architecture Summary
The routing unification effort has successfully:
1. **Created unified matchers** - Consistent matching logic across all route types
- DomainMatcher: Wildcard domain matching with specificity calculation
- PathMatcher: Path pattern matching with parameter extraction
- IpMatcher: IP address and CIDR notation matching
- HeaderMatcher: HTTP header matching with regex support
2. **Consolidated route managers** - Single SharedRouteManager for all proxies
3. **Unified routers** - Single HttpRouter for all HTTP routing needs
4. **Removed ~2,332 lines of code** - Exceeded target by 55%
5. **Clean modern architecture** - No legacy code, no backward compatibility layers
## Safety Checklist Before Deletion
Before deleting any code:
1. ✓ All tests pass
2. ✓ No references to deleted code remain
3. ✓ Migration path tested
4. ✓ Performance benchmarks show no regression
5. ✓ Documentation updated
## Rollback Plan
If issues arise after deletion:
1. Git history preserves all deleted code
2. Each phase can be reverted independently
3. Feature flags can disable new code if needed

@@ -1,897 +0,0 @@
# SmartProxy Project Hints
## Project Overview
- Package: `@push.rocks/smartproxy`, a high-performance proxy supporting HTTP(S), TCP, WebSocket, and ACME integration.
- Written in TypeScript, compiled output in `dist_ts/`, uses ESM with NodeNext resolution.
## Important: ACME Configuration in v19.0.0
- **Breaking Change**: ACME configuration must be placed within individual route TLS settings, not at the top level
- Route-level ACME config is the ONLY way to enable SmartAcme initialization
- SmartCertManager requires email in route config for certificate acquisition
- Top-level ACME configuration is ignored in v19.0.0
## Repository Structure
- `ts/` TypeScript source files:
- `index.ts` exports main modules.
- `plugins.ts` centralizes native and third-party imports.
- Subdirectories: `networkproxy/`, `nftablesproxy/`, `port80handler/`, `redirect/`, `smartproxy/`.
- Key classes: `ProxyRouter` (`classes.router.ts`), `SmartProxy` (`classes.smartproxy.ts`), plus handlers/managers.
- `dist_ts/` transpiled `.js` and `.d.ts` files mirroring `ts/` structure.
- `test/` test suites in TypeScript:
- `test.router.ts` routing logic (hostname matching, wildcards, path parameters, config management).
- `test.smartproxy.ts` proxy behavior tests (TCP forwarding, SNI handling, concurrency, chaining, timeouts).
- `test/helpers/` utilities (e.g., certificates).
- `assets/certs/` placeholder certificates for ACME and TLS.
## Development Setup
- Requires `pnpm` (v10+).
- Install dependencies: `pnpm install`.
- Build: `pnpm build` (runs `tsbuild --web --allowimplicitany`).
- Test: `pnpm test` (runs `tstest test/`).
- Format: `pnpm format` (runs `gitzone format`).
## How to Test
### Test Structure
Tests use tapbundle from `@git.zone/tstest`. The correct pattern is:
```typescript
import { tap, expect } from '@git.zone/tstest/tapbundle';
tap.test('test description', async () => {
  // Test logic here
  expect(someValue).toEqual(expectedValue);
});

// IMPORTANT: Must end with tap.start()
tap.start();
```
### Expect Syntax (from @push.rocks/smartexpect)
```typescript
// Type assertions
expect('hello').toBeTypeofString();
expect(42).toBeTypeofNumber();
// Equality
expect('hithere').toEqual('hithere');
// Negated assertions
expect(1).not.toBeTypeofString();
// Regular expressions
expect('hithere').toMatch(/hi/);
// Numeric comparisons
expect(5).toBeGreaterThan(3);
expect(0.1 + 0.2).toBeCloseTo(0.3, 10);
// Arrays
expect([1, 2, 3]).toContain(2);
expect([1, 2, 3]).toHaveLength(3);
// Async assertions
await expect(asyncFunction()).resolves.toEqual('expected');
await expect(asyncFunction()).resolves.withTimeout(5000).toBeTypeofString();
// Complex object navigation
expect(complexObject)
  .property('users')
  .arrayItem(0)
  .property('name')
  .toEqual('Alice');
```
### Test Modifiers
- `tap.only.test()` - Run only this test
- `tap.skip.test()` - Skip a test
- `tap.timeout()` - Set test-specific timeout
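For example, combining the modifiers in a test file:
```typescript
import { tap, expect } from '@git.zone/tstest/tapbundle';

// Run only this test while debugging
tap.only.test('focus on this one', async () => {
  expect(1 + 1).toEqual(2);
});

// Temporarily skip a flaky test
tap.skip.test('skipped for now', async () => {});

tap.start();
```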
### Running Tests
- All tests: `pnpm test`
- Specific test: `tsx test/test.router.ts`
- With options: `tstest test/**/*.ts --verbose --timeout 60`
### Test File Requirements
- Must start with `test.` prefix
- Must use `.ts` extension
- Must call `tap.start()` at the end
## Coding Conventions
- Import modules via `plugins.ts`:
```ts
import * as plugins from './plugins.ts';
const server = new plugins.http.Server();
```
- Reference plugins with full path: `plugins.acme`, `plugins.smartdelay`, `plugins.minimatch`, etc.
- Path patterns support globs (`*`) and parameters (`:param`) in `ProxyRouter`.
- Wildcard hostname matching leverages `minimatch` patterns.
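As an illustration of the wildcard hostname behavior (using minimatch directly; the exact import style depends on the installed minimatch version):
```typescript
// Named import works for recent minimatch versions; older ones use a default export.
import { minimatch } from 'minimatch';

// Wildcard hostname matching as used conceptually by ProxyRouter.
minimatch('api.example.com', '*.example.com'); // true
minimatch('example.com', '*.example.com');     // false (no subdomain before the dot)

// Path parameters such as '/users/:id' are handled by the router itself and
// surface as pathParams, alongside pathMatch and pathRemainder.
```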
## Key Components
- **ProxyRouter**
- Methods: `routeReq`, `routeReqWithDetails`.
- Hostname matching: case-insensitive, strips port, supports exact, wildcard, TLD, complex patterns.
- Path routing: exact, wildcard, parameter extraction (`pathParams`), returns `pathMatch` and `pathRemainder`.
- Config API: `setNewProxyConfigs`, `addProxyConfig`, `removeProxyConfig`, `getHostnames`, `getProxyConfigs`.
- **SmartProxy**
- Manages one or more `net.Server` instances to forward TCP streams.
- Options: `preserveSourceIP`, `defaultAllowedIPs`, `globalPortRanges`, `sniEnabled`.
- DomainConfigManager: round-robin selection for multiple target IPs.
- Graceful shutdown in `stop()`, ensures no lingering servers or sockets.
## Notable Points
- **TSConfig**: `module: NodeNext`, `verbatimModuleSyntax`, allows `.js` extension imports in TS.
- Mermaid diagrams and architecture flows in `readme.md` illustrate component interactions and protocol flows.
- CLI entrypoint (`cli.js`) supports command-line usage (ACME, proxy controls).
- ACME and certificate handling via `Port80Handler` and `helpers.certificates.ts`.
## ACME/Certificate Configuration Example (v19.0.0)
```typescript
const proxy = new SmartProxy({
  routes: [{
    name: 'example.com',
    match: { domains: 'example.com', ports: 443 },
    action: {
      type: 'forward',
      target: { host: 'localhost', port: 8080 },
      tls: {
        mode: 'terminate',
        certificate: 'auto',
        acme: { // ACME config MUST be here, not at top level
          email: 'ssl@example.com',
          useProduction: false,
          challengePort: 80
        }
      }
    }
  }]
});
```
## TODOs / Considerations
- Ensure import extensions in source match build outputs (`.ts` vs `.js`).
- Update `plugins.ts` when adding new dependencies.
- Maintain test coverage for new routing or proxy features.
- Keep `ts/` and `dist_ts/` in sync after refactors.
- Consider implementing top-level ACME config support for backward compatibility
## HTTP-01 ACME Challenge Fix (v19.3.8)
### Issue
Non-TLS connections on ports configured in `useHttpProxy` were not being forwarded to HttpProxy. This caused ACME HTTP-01 challenges to fail when the ACME port (usually 80) was included in `useHttpProxy`.
### Root Cause
In the `RouteConnectionHandler.handleForwardAction` method, only connections with TLS settings (mode: 'terminate' or 'terminate-and-reencrypt') were being forwarded to HttpProxy. Non-TLS connections were always handled as direct connections, even when the port was configured for HttpProxy.
### Solution
Added a check for non-TLS connections on ports listed in `useHttpProxy`:
```typescript
// No TLS settings - check if this port should use HttpProxy
const isHttpProxyPort = this.settings.useHttpProxy?.includes(record.localPort);
if (isHttpProxyPort && this.httpProxyBridge.getHttpProxy()) {
  // Forward non-TLS connections to HttpProxy if configured
  this.httpProxyBridge.forwardToHttpProxy(/*...*/);
  return;
}
```
### Test Coverage
- `test/test.http-fix-unit.ts` - Unit tests verifying the fix
- Tests confirm that non-TLS connections on HttpProxy ports are properly forwarded
- Tests verify that non-HttpProxy ports still use direct connections
### Configuration Example
```typescript
const proxy = new SmartProxy({
  useHttpProxy: [80], // Enable HttpProxy for port 80
  httpProxyPort: 8443,
  acme: {
    email: 'ssl@example.com',
    port: 80
  },
  routes: [
    // Your routes here
  ]
});
```
## ACME Certificate Provisioning Timing Fix (v19.3.9)
### Issue
Certificate provisioning would start before ports were listening, causing ACME HTTP-01 challenges to fail with connection refused errors.
### Root Cause
SmartProxy initialization sequence:
1. Certificate manager initialized → immediately starts provisioning
2. Ports start listening (too late for ACME challenges)
### Solution
Deferred certificate provisioning until after ports are ready:
```typescript
// SmartCertManager.initialize() now skips automatic provisioning
// SmartProxy.start() calls provisionAllCertificates() directly after ports are listening
```
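In sketch form, the corrected ordering looks like this; `startPortListeners` is an illustrative name standing in for "ports start listening", while `provisionAllCertificates()` is the method mentioned above:
```typescript
// Illustrative start sequence only, not the actual SmartProxy.start() code.
async function startSketch(proxy: {
  startPortListeners: () => Promise<void>;
  certManager: { provisionAllCertificates: () => Promise<void> };
}): Promise<void> {
  await proxy.startPortListeners();                   // 1. ports accept connections first
  await proxy.certManager.provisionAllCertificates(); // 2. ACME HTTP-01 challenges can now be answered
}
```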
### Test Coverage
- `test/test.acme-timing-simple.ts` - Verifies proper timing sequence
### Migration
Update to v19.3.9+, no configuration changes needed.
## Socket Handler Race Condition Fix (v19.5.0)
### Issue
Initial data chunks were being emitted before async socket handlers had completed setup, causing data loss when handlers performed async operations before setting up data listeners.
### Root Cause
The `handleSocketHandlerAction` method was using `process.nextTick` to emit initial chunks regardless of whether the handler was sync or async. This created a race condition where async handlers might not have their listeners ready when the initial data was emitted.
### Solution
Differentiated between sync and async handlers:
```typescript
const result = route.action.socketHandler(socket);
if (result instanceof Promise) {
  // Async handler - wait for completion before emitting initial data
  result.then(() => {
    if (initialChunk && initialChunk.length > 0) {
      socket.emit('data', initialChunk);
    }
  }).catch(/*...*/);
} else {
  // Sync handler - use process.nextTick as before
  if (initialChunk && initialChunk.length > 0) {
    process.nextTick(() => {
      socket.emit('data', initialChunk);
    });
  }
}
```
### Test Coverage
- `test/test.socket-handler-race.ts` - Specifically tests async handlers with delayed listener setup
- Verifies that initial data is received even when handler sets up listeners after async work
### Usage Note
Socket handlers require initial data from the client to trigger routing (not just a TLS handshake). Clients must send at least one byte of data for the handler to be invoked.
## Route-Specific Security Implementation (v19.5.3)
### Issue
Route-specific security configurations (ipAllowList, ipBlockList, authentication) were defined in the route types but not enforced at runtime.
### Root Cause
The RouteConnectionHandler only checked global IP validation but didn't enforce route-specific security rules after matching a route.
### Solution
Added security checks after route matching:
```typescript
// Apply route-specific security checks
const routeSecurity = route.action.security || route.security;
if (routeSecurity) {
  // Check IP allow/block lists
  if (routeSecurity.ipAllowList || routeSecurity.ipBlockList) {
    const isIPAllowed = this.securityManager.isIPAuthorized(
      remoteIP,
      routeSecurity.ipAllowList || [],
      routeSecurity.ipBlockList || []
    );
    if (!isIPAllowed) {
      socket.end();
      this.connectionManager.cleanupConnection(record, 'route_ip_blocked');
      return;
    }
  }
}
```
### Test Coverage
- `test/test.route-security-unit.ts` - Unit tests verifying SecurityManager.isIPAuthorized logic
- Tests confirm IP allow/block lists work correctly with glob patterns
### Configuration Example
```typescript
const routes: IRouteConfig[] = [{
  name: 'secure-api',
  match: { ports: 8443, domains: 'api.example.com' },
  action: {
    type: 'forward',
    target: { host: 'localhost', port: 3000 },
    security: {
      ipAllowList: ['192.168.1.*', '10.0.0.0/8'], // Allow internal IPs
      ipBlockList: ['192.168.1.100'], // But block specific IP
      maxConnections: 100, // Per-route limit (TODO)
      authentication: { // HTTP-only, requires TLS termination
        type: 'basic',
        credentials: [{ username: 'api', password: 'secret' }]
      }
    }
  }
}];
```
### Notes
- IP lists support glob patterns (via minimatch): `192.168.*`, `10.?.?.1`
- Block lists take precedence over allow lists
- Authentication requires TLS termination (cannot be enforced on passthrough/direct connections)
- Per-route connection limits are not yet implemented
- Security is defined at the route level (route.security), not in the action
- Route matching is based solely on match criteria; security is enforced after matching
## Performance Issues Investigation (v19.5.3+)
### Critical Blocking Operations Found
1. **Busy Wait Loop** in `ts/proxies/nftables-proxy/nftables-proxy.ts:235-238`
- Blocks entire event loop with `while (Date.now() < waitUntil) {}`
- Should use `await new Promise(resolve => setTimeout(resolve, delay))`
2. **Synchronous Filesystem Operations**
- Certificate management uses `fs.existsSync()`, `fs.mkdirSync()`, `fs.readFileSync()`
- NFTables proxy uses `execSync()` for system commands
- Certificate store uses `ensureDirSync()`, `fileExistsSync()`, `removeManySync()`
3. **Memory Leak Risks**
- Several `setInterval()` calls without storing references for cleanup
- Event listeners added without proper cleanup in error paths
- Missing `removeAllListeners()` calls in some connection cleanup scenarios
### Performance Recommendations
- Replace all sync filesystem operations with async alternatives
- Fix the busy wait loop immediately (critical event loop blocker)
- Add proper cleanup for all timers and event listeners
- Consider worker threads for CPU-intensive operations
- See `readme.problems.md` for detailed analysis and recommendations
## Performance Optimizations Implemented (Phase 1 - v19.6.0)
### 1. Async Utilities Created (`ts/core/utils/async-utils.ts`)
- **delay()**: Non-blocking alternative to busy wait loops
- **retryWithBackoff()**: Retry operations with exponential backoff
- **withTimeout()**: Execute operations with timeout protection
- **parallelLimit()**: Run async operations with concurrency control
- **debounceAsync()**: Debounce async functions
- **AsyncMutex**: Ensure exclusive access to resources
- **CircuitBreaker**: Protect against cascading failures
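A usage sketch for a few of these utilities; the import path and option names are assumptions rather than the exact async-utils API:
```typescript
// Usage sketch only: the import path and option names are assumptions.
import { delay, retryWithBackoff, withTimeout } from './core/utils/async-utils.js';

async function demo(): Promise<void> {
  // Non-blocking wait instead of a busy loop
  await delay(250);

  // Retry a flaky operation with exponential backoff (option names assumed)
  const healthy = await retryWithBackoff(() => probeBackend(), {
    retries: 5,
    baseDelayMs: 100,
  });

  // Bound an operation so it cannot hang forever (signature assumed)
  await withTimeout(probeBackend(), 5_000);
  console.log('backend healthy:', healthy);
}

async function probeBackend(): Promise<boolean> {
  return true;
}
```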
### 2. Filesystem Utilities Created (`ts/core/utils/fs-utils.ts`)
- **AsyncFileSystem**: Complete async filesystem operations
- exists(), ensureDir(), readFile(), writeFile()
- readJSON(), writeJSON() with proper error handling
- copyFile(), moveFile(), removeDir()
- Stream creation and file listing utilities
### 3. Critical Fixes Applied
#### Busy Wait Loop Fixed
- **Location**: `ts/proxies/nftables-proxy/nftables-proxy.ts:235-238`
- **Fix**: Replaced `while (Date.now() < waitUntil) {}` with `await delay(ms)`
- **Impact**: Unblocks event loop, massive performance improvement
#### Certificate Manager Migration
- **File**: `ts/proxies/http-proxy/certificate-manager.ts`
- Added async initialization method
- Kept sync methods for backward compatibility with deprecation warnings
- Added `loadDefaultCertificatesAsync()` method
#### Certificate Store Migration
- **File**: `ts/proxies/smart-proxy/cert-store.ts`
- Replaced all `fileExistsSync`, `ensureDirSync`, `removeManySync`
- Used parallel operations with `Promise.all()` for better performance
- Improved error handling and async JSON operations
#### NFTables Proxy Improvements
- Added deprecation warnings to sync methods
- Created `executeWithTempFile()` helper for common pattern
- Started migration of sync filesystem operations to async
- Added import for delay and AsyncFileSystem utilities
### 4. Backward Compatibility Maintained
- All sync methods retained with deprecation warnings
- Existing APIs unchanged, new async methods added alongside
- Feature flags prepared for gradual rollout
### 5. Phase 1 Completion Status
✅ **Phase 1 COMPLETE** - All critical performance fixes have been implemented:
- ✅ Fixed busy wait loop in nftables-proxy.ts
- ✅ Created async utilities (delay, retry, timeout, parallelLimit, mutex, circuit breaker)
- ✅ Created filesystem utilities (AsyncFileSystem with full async operations)
- ✅ Migrated all certificate management to async operations
- ✅ Migrated nftables-proxy filesystem operations to async (except stopSync for exit handlers)
- ✅ All tests passing for new utilities
### 6. Phase 2 Progress Status
🔨 **Phase 2 IN PROGRESS** - Resource Lifecycle Management:
- ✅ Created LifecycleComponent base class for automatic resource cleanup (sketched below)
- ✅ Created BinaryHeap data structure for priority queue operations
- ✅ Created EnhancedConnectionPool with backpressure and health checks
- ✅ Cleaned up legacy code (removed ts/common/, event-utils.ts, event-system.ts)
- 📋 TODO: Migrate existing components to extend LifecycleComponent
- 📋 TODO: Add integration tests for resource management
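The LifecycleComponent idea can be sketched as a base class that records timers and listeners so they can all be torn down in one call (illustrative, not the actual class):
```typescript
// Sketch of a LifecycleComponent-style base class: timers and listeners
// registered through the base class are all torn down in a single cleanup call.
import { EventEmitter } from 'events';

class LifecycleComponentSketch {
  private timers = new Set<NodeJS.Timeout>();
  private listeners: Array<{
    target: EventEmitter;
    event: string;
    handler: (...args: any[]) => void;
  }> = [];

  protected addInterval(fn: () => void, ms: number): NodeJS.Timeout {
    const timer = setInterval(fn, ms);
    this.timers.add(timer);
    return timer;
  }

  protected addListener(target: EventEmitter, event: string, handler: (...args: any[]) => void): void {
    target.on(event, handler);
    this.listeners.push({ target, event, handler });
  }

  public async cleanup(): Promise<void> {
    for (const timer of this.timers) clearInterval(timer);
    this.timers.clear();
    for (const { target, event, handler } of this.listeners) {
      target.removeListener(event, handler);
    }
    this.listeners = [];
  }
}
```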
### 7. Next Steps (Remaining Work)
- **Phase 2 (cont)**: Migrate components to use LifecycleComponent
- **Phase 3**: Add worker threads for CPU-intensive operations
- **Phase 4**: Performance monitoring dashboard
## Socket Error Handling Fix (v19.5.11+)
### Issue
Server crashed with unhandled 'error' event when backend connections failed (ECONNREFUSED). Also caused memory leak with rising active connection count as failed connections weren't cleaned up properly.
### Root Cause
1. **Race Condition**: In forwarding handlers, sockets were created with `net.connect()` but error handlers were attached later, creating a window where errors could crash the server
2. **Incomplete Cleanup**: When server connections failed, client sockets weren't properly cleaned up, leaving connection records in memory
### Solution
Created `createSocketWithErrorHandler()` utility that attaches error handlers immediately:
```typescript
// Before (race condition):
const socket = net.connect(port, host);
// ... other code ...
socket.on('error', handler); // Too late!

// After (safe):
const socket = createSocketWithErrorHandler({
  port, host,
  onError: (error) => {
    // Handle error immediately
    clientSocket.destroy();
  },
  onConnect: () => {
    // Set up forwarding
  }
});
```
### Changes Made
1. **New Utility**: `ts/core/utils/socket-utils.ts` - Added `createSocketWithErrorHandler()`
2. **Updated Handlers**:
- `https-passthrough-handler.ts` - Uses safe socket creation
- `https-terminate-to-http-handler.ts` - Uses safe socket creation
3. **Connection Cleanup**: Client sockets destroyed immediately on server connection failure
### Test Coverage
- `test/test.socket-error-handling.node.ts` - Verifies server doesn't crash on ECONNREFUSED
- `test/test.forwarding-error-fix.node.ts` - Tests forwarding handlers handle errors gracefully
### Configuration
No configuration changes needed. The fix is transparent to users.
### Important Note
The fix was applied in two places:
1. **ForwardingHandler classes** (`https-passthrough-handler.ts`, etc.) - These are standalone forwarding utilities
2. **SmartProxy route-connection-handler** (`route-connection-handler.ts`) - This is where the actual SmartProxy connection handling happens
The critical fix for SmartProxy was in `setupDirectConnection()` method in route-connection-handler.ts, which now uses `createSocketWithErrorHandler()` to properly handle connection failures and clean up connection records.
## Connection Cleanup Improvements (v19.5.12+)
### Issue
Connections were still counting up during rapid retry scenarios, especially when routing failed or backend connections were refused. This was due to:
1. **Delayed Cleanup**: Using `initiateCleanupOnce` queued cleanup operations (batch of 100 every 100ms) instead of immediate cleanup
2. **NFTables Memory Leak**: NFTables connections were never cleaned up, staying in memory forever
3. **Connection Limit Bypass**: When max connections reached, connection record check happened after creation
### Root Cause Analysis
1. **Queued vs Immediate Cleanup**:
- `initiateCleanupOnce()`: Adds to cleanup queue, processes up to 100 connections every 100ms
- `cleanupConnection()`: Immediate synchronous cleanup
- Under rapid retries, connections were created faster than the queue could process them
2. **NFTables Connections**:
- Marked with `usingNetworkProxy = true` but never cleaned up
- Connection records stayed in memory indefinitely
3. **Error Path Cleanup**:
- Many error paths used `socket.end()` (async) followed by cleanup
- Created timing windows where connections weren't fully cleaned
### Solution
1. **Immediate Cleanup**: Changed all error paths from `initiateCleanupOnce()` to `cleanupConnection()` for immediate cleanup
2. **NFTables Cleanup**: Added socket close listener to clean up connection records when NFTables connections close
3. **Connection Limit Fix**: Added null check after `createConnection()` to handle rejection properly
### Changes Made in route-connection-handler.ts
```typescript
// 1. NFTables cleanup (line 551-553)
socket.once('close', () => {
  this.connectionManager.cleanupConnection(record, 'nftables_closed');
});

// 2. Connection limit check (line 93-96)
const record = this.connectionManager.createConnection(socket);
if (!record) {
  // Connection was rejected due to limit - socket already destroyed
  return;
}

// 3. Changed all error paths to use immediate cleanup
// Before: this.connectionManager.initiateCleanupOnce(record, reason)
// After: this.connectionManager.cleanupConnection(record, reason)
```
### Test Coverage
- `test/test.rapid-retry-cleanup.node.ts` - Verifies connection cleanup under rapid retry scenarios
- Test shows connection count stays at 0 even with 20 rapid retries with 50ms intervals
- Confirms both ECONNREFUSED and routing failure scenarios are handled correctly
### Performance Impact
- **Positive**: No more connection accumulation under load
- **Positive**: Immediate cleanup reduces memory usage
- **Consideration**: More frequent cleanup operations, but prevents queue backlog
### Migration Notes
No configuration changes needed. The improvements are automatic and backward compatible.
## Early Client Disconnect Handling (v19.5.13+)
### Issue
Connections were accumulating when clients connected but disconnected before sending data or during routing. This occurred in two scenarios:
1. **TLS Path**: Clients connecting and disconnecting before sending initial TLS handshake data
2. **Non-TLS Immediate Routing**: Clients disconnecting while backend connection was being established
### Root Cause
1. **Missing Cleanup Handlers**: During initial data wait and immediate routing, no close/end handlers were attached to catch early disconnections
2. **Race Condition**: Backend connection attempts continued even after client disconnected, causing unhandled errors
3. **Timing Window**: Between accepting connection and establishing full bidirectional flow, disconnections weren't properly handled
### Solution
1. **TLS Path Fix**: Added close/end handlers during initial data wait (lines 224-253 in route-connection-handler.ts)
2. **Immediate Routing Fix**: Used `setupSocketHandlers` for proper handler attachment (lines 180-205)
3. **Backend Error Handling**: Check if connection already closed before handling backend errors (line 1144)
### Changes Made
```typescript
// 1. TLS path - handle disconnect before initial data
socket.once('close', () => {
  if (!initialDataReceived) {
    this.connectionManager.cleanupConnection(record, 'closed_before_data');
  }
});

// 2. Immediate routing path - proper handler setup
setupSocketHandlers(socket, (reason) => {
  if (!record.outgoing || record.outgoing.readyState !== 'open') {
    if (record.outgoing && !record.outgoing.destroyed) {
      record.outgoing.destroy(); // Abort pending backend connection
    }
    this.connectionManager.cleanupConnection(record, reason);
  }
}, undefined, 'immediate-route-client');

// 3. Backend connection error handling
onError: (error) => {
  if (record.connectionClosed) {
    logger.log('debug', 'Backend connection failed but client already disconnected');
    return; // Client already gone, nothing to clean up
  }
  // ... normal error handling
}
```
### Test Coverage
- `test/test.connect-disconnect-cleanup.node.ts` - Comprehensive test for early disconnect scenarios
- Tests verify connection count stays at 0 even with rapid connect/disconnect patterns
- Covers immediate disconnect, delayed disconnect, and mixed patterns
### Performance Impact
- **Positive**: No more connection accumulation from early disconnects
- **Positive**: Immediate cleanup reduces memory usage
- **Positive**: Prevents resource exhaustion from rapid reconnection attempts
### Migration Notes
No configuration changes needed. The fix is automatic and backward compatible.
## Proxy Chain Connection Accumulation Fix (v19.5.14+)
### Issue
When chaining SmartProxies (Client → SmartProxy1 → SmartProxy2 → Backend), connections would accumulate and never be cleaned up. This was particularly severe when the backend was down or closing connections immediately.
### Root Cause
The half-open connection support was preventing proper cascade cleanup in proxy chains:
1. Backend closes → SmartProxy2's server socket closes
2. SmartProxy2 keeps client socket open (half-open support)
3. SmartProxy1 never gets notified that downstream is closed
4. Connections accumulate at each proxy in the chain
The issue was in `createIndependentSocketHandlers()` which waited for BOTH sockets to close before cleanup.
### Solution
1. **Changed default behavior**: When one socket closes, both close immediately
2. **Made half-open support opt-in**: Only enabled when explicitly requested
3. **Centralized socket handling**: Created `setupBidirectionalForwarding()` for consistent behavior
4. **Applied everywhere**: Updated HttpProxyBridge and route-connection-handler to use centralized handling
### Changes Made
```typescript
// socket-utils.ts - Default behavior now closes both sockets
export function createIndependentSocketHandlers(
clientSocket, serverSocket, onBothClosed,
options: { enableHalfOpen?: boolean } = {} // Half-open is opt-in
) {
// When server closes, immediately close client (unless half-open enabled)
if (!clientClosed && !options.enableHalfOpen) {
clientSocket.destroy();
}
}
// New centralized function for consistent socket pairing
export function setupBidirectionalForwarding(
clientSocket, serverSocket,
handlers: {
onClientData?: (chunk) => void;
onServerData?: (chunk) => void;
onCleanup: (reason) => void;
enableHalfOpen?: boolean; // Default: false
}
)
```
### Test Coverage
- `test/test.proxy-chain-simple.node.ts` - Verifies proxy chains don't accumulate connections
- Tests confirm connections stay at 0 even with backend closing immediately
- Works for any proxy chain configuration (not just localhost)
### Performance Impact
- **Positive**: No more connection accumulation in proxy chains
- **Positive**: Immediate cleanup reduces memory usage
- **Neutral**: Half-open connections still available when needed (opt-in)
### Migration Notes
No configuration changes needed. The fix applies to all proxy chains automatically.
## Socket Cleanup Handler Deprecation (v19.5.15+)
### Issue
The `createSocketCleanupHandler()` function was still being used in forwarding handlers despite having been marked as deprecated.
### Solution
Updated all forwarding handlers to use the new centralized socket utilities:
1. **Replaced `createSocketCleanupHandler()`** with `setupBidirectionalForwarding()` (see the sketch after this list) in:
- `https-terminate-to-https-handler.ts`
- `https-terminate-to-http-handler.ts`
2. **Removed deprecated function** from `socket-utils.ts`
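As a rough illustration of the new pattern inside a terminate handler (simplified; module paths and the surrounding variables are assumed):
```typescript
// tlsSocket: decrypted client side after TLS termination
// backendSocket: plain connection to the target
// setupBidirectionalForwarding is exported from socket-utils.ts
setupBidirectionalForwarding(tlsSocket, backendSocket, {
  onCleanup: (reason) => {
    // release the connection record / update metrics here
  }
  // enableHalfOpen is left at its default (false), so closing either side
  // immediately tears down the pair - the behavior proxy chains rely on
});
```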
### Benefits
- Consistent socket handling across all handlers
- Proper cleanup in proxy chains (no half-open connections by default)
- Better backpressure handling with the centralized implementation
- Reduced code duplication
### Migration Notes
No user-facing changes. All forwarding handlers now use the same robust socket handling as the main SmartProxy connection handler.
## WrappedSocket Class Evaluation for PROXY Protocol (v19.5.19+)
### Current Socket Handling Architecture
- Sockets are handled directly as `net.Socket` instances throughout the codebase
- Socket augmentation via TypeScript module augmentation for TLS properties
- Metadata tracked separately in `IConnectionRecord` objects
- Socket utilities provide helper functions but don't encapsulate the socket
- Connection records track extensive metadata (IDs, timestamps, byte counters, TLS state, etc.)
### Evaluation: Should We Introduce a WrappedSocket Class?
**Yes, a WrappedSocket class would make sense**, particularly for PROXY protocol implementation and future extensibility.
### Design Considerations for WrappedSocket
```typescript
class WrappedSocket {
private socket: net.Socket;
private connectionId: string;
private metadata: {
realClientIP?: string; // From PROXY protocol
realClientPort?: number; // From PROXY protocol
proxyIP?: string; // Immediate connection IP
proxyPort?: number; // Immediate connection port
bytesReceived: number;
bytesSent: number;
lastActivity: number;
isTLS: boolean;
// ... other metadata
};
// PROXY protocol handling
private proxyProtocolParsed: boolean = false;
private pendingData: Buffer[] = [];
constructor(socket: net.Socket) {
this.socket = socket;
this.setupHandlers();
}
// Getters for clean access
get remoteAddress(): string {
return this.metadata.realClientIP || this.socket.remoteAddress || '';
}
get remotePort(): number {
return this.metadata.realClientPort || this.socket.remotePort || 0;
}
get isFromTrustedProxy(): boolean {
return !!this.metadata.realClientIP;
}
// PROXY protocol parsing
async parseProxyProtocol(trustedProxies: string[]): Promise<boolean> {
// Implementation here
}
// Delegate socket methods
write(data: any): boolean {
this.metadata.bytesSent += Buffer.byteLength(data);
return this.socket.write(data);
}
destroy(error?: Error): void {
this.socket.destroy(error);
}
// Event forwarding
on(event: string, listener: Function): this {
this.socket.on(event, listener);
return this;
}
}
```
### Implementation Benefits
1. **Encapsulation**: Bundle socket + metadata + behavior in one place
2. **PROXY Protocol Integration**: Cleaner handling without modifying existing socket code
3. **State Management**: Centralized socket state tracking and validation
4. **API Consistency**: Uniform interface for all socket operations
5. **Future Extensibility**: Easy to add new socket-level features (compression, encryption, etc.)
6. **Type Safety**: Better TypeScript support without module augmentation
7. **Testing**: Easier to mock and test socket behavior
### Implementation Drawbacks
1. **Major Refactoring**: Would require changes throughout the codebase
2. **Performance Overhead**: Additional abstraction layer (minimal but present)
3. **Compatibility**: Need to maintain event emitter compatibility
4. **Learning Curve**: Developers need to understand the wrapper
### Recommended Approach: Phased Implementation
**Phase 1: PROXY Protocol Only** (Immediate)
- Create minimal `ProxyProtocolSocket` wrapper for new connections from trusted proxies
- Use in connection handler when receiving from trusted proxy IPs
- Minimal disruption to existing code
```typescript
class ProxyProtocolSocket {
constructor(
public socket: net.Socket,
public realClientIP?: string,
public realClientPort?: number
) {}
get remoteAddress(): string {
return this.realClientIP || this.socket.remoteAddress || '';
}
get remotePort(): number {
return this.realClientPort || this.socket.remotePort || 0;
}
}
```
**Phase 2: Gradual Migration** (Future)
- Extend wrapper with more functionality
- Migrate critical paths to use wrapper
- Add performance monitoring
**Phase 3: Full Adoption** (Long-term)
- Complete migration to WrappedSocket
- Remove socket augmentation
- Standardize all socket handling
### Decision Summary
✅ **Implement minimal ProxyProtocolSocket for immediate PROXY protocol support**
- Low risk, high value
- Solves the immediate proxy chain connection limit issue
- Sets foundation for future improvements
- Can be implemented alongside existing code
📋 **Consider full WrappedSocket for future major version**
- Cleaner architecture
- Better maintainability
- But requires significant refactoring
## WrappedSocket Implementation (PROXY Protocol Phase 1) - v19.5.19+
The WrappedSocket class has been implemented as the foundation for PROXY protocol support:
### Implementation Details
1. **Design Approach**: Uses JavaScript Proxy to delegate all Socket methods/properties to the underlying socket while allowing override of specific properties (remoteAddress, remotePort).
2. **Key Design Decisions**:
- NOT a Duplex stream - Initially tried this approach but it created infinite loops
- Simple wrapper using Proxy pattern for transparent delegation (a simplified sketch appears after this list)
- All sockets are wrapped, not just those from trusted proxies
- Trusted proxy detection happens after wrapping
3. **Usage Pattern**:
```typescript
// In RouteConnectionHandler.handleConnection()
const wrappedSocket = new WrappedSocket(socket);
// Pass wrappedSocket throughout the flow
// When calling socket-utils functions, extract underlying socket:
const underlyingSocket = getUnderlyingSocket(socket);
setupBidirectionalForwarding(underlyingSocket, targetSocket, {...});
```
4. **Important Implementation Notes**:
- Socket utility functions (setupBidirectionalForwarding, cleanupSocket) expect raw net.Socket
- Always extract underlying socket before passing to these utilities using `getUnderlyingSocket()`
- WrappedSocket preserves all Socket functionality through Proxy delegation
- TypeScript typing handled via index signature: `[key: string]: any`
5. **Files Modified**:
- `ts/core/models/wrapped-socket.ts` - The WrappedSocket implementation
- `ts/core/models/socket-types.ts` - Helper functions and type guards
- `ts/proxies/smart-proxy/route-connection-handler.ts` - Updated to wrap all incoming sockets
- `ts/proxies/smart-proxy/connection-manager.ts` - Updated to accept WrappedSocket
- `ts/proxies/smart-proxy/http-proxy-bridge.ts` - Updated to handle WrappedSocket
6. **Test Coverage**:
- `test/test.wrapped-socket-forwarding.ts` - Verifies data forwarding through wrapped sockets
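To make the Proxy-based delegation in point 2 concrete, here is a deliberately simplified sketch; the actual implementation in `ts/core/models/wrapped-socket.ts` is more complete:
```typescript
import * as net from 'net';

// Simplified illustration only - not the real WrappedSocket.
export class WrappedSocketSketch {
  public realClientIP?: string;
  public realClientPort?: number;
  [key: string]: any; // index signature, as noted above

  constructor(public socket: net.Socket) {
    // Return a Proxy so any property not defined on the wrapper is
    // transparently delegated to the underlying socket.
    return new Proxy(this, {
      get(target, prop, receiver) {
        if (prop in target) {
          return Reflect.get(target, prop, receiver);
        }
        const value = (target.socket as any)[prop];
        return typeof value === 'function' ? value.bind(target.socket) : value;
      }
    });
  }

  get remoteAddress(): string | undefined {
    return this.realClientIP || this.socket.remoteAddress;
  }

  get remotePort(): number | undefined {
    return this.realClientPort || this.socket.remotePort;
  }

  // Called after PROXY protocol parsing to record the real client details.
  setProxyInfo(ip: string, port: number): void {
    this.realClientIP = ip;
    this.realClientPort = port;
  }
}
```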
### Next Steps for PROXY Protocol
- Phase 2: Parse PROXY protocol header from trusted proxies
- Phase 3: Update real client IP/port after parsing
- Phase 4: Test with HAProxy and AWS ELB
- Phase 5: Documentation and configuration
## Proxy Protocol Documentation
For detailed information about proxy protocol implementation and proxy chaining:
- **[Proxy Protocol Guide](./readme.proxy-protocol.md)** - Complete implementation details and configuration
- **[Proxy Protocol Examples](./readme.proxy-protocol-example.md)** - Code examples and conceptual implementation
- **[Proxy Chain Summary](./readme.proxy-chain-summary.md)** - Quick reference for proxy chaining setup
## Connection Cleanup Edge Cases Investigation (v19.5.20+)
### Issue Discovered
"Zombie connections" can occur when both sockets are destroyed but the connection record hasn't been cleaned up. This happens when sockets are destroyed without triggering their close/error event handlers.
### Root Cause
1. **Event Handler Bypass**: In edge cases (network failures, proxy chain failures, forced socket destruction), sockets can be destroyed without their event handlers being called
2. **Cleanup Queue Delay**: The `initiateCleanupOnce` method adds connections to a cleanup queue (batch of 100 every 100ms), which may not process fast enough
3. **Inactivity Check Limitation**: The periodic inactivity check only examines `lastActivity` timestamps, not actual socket states
### Test Results
Debug script (`connection-manager-direct-test.ts`) revealed:
- **Normal cleanup works**: When socket events fire normally, cleanup is reliable
- **Zombies ARE created**: Direct socket destruction creates zombies (destroyed sockets, connectionClosed=false)
- **Manual cleanup works**: Calling `initiateCleanupOnce` on a zombie does clean it up
- **Inactivity check misses zombies**: The check doesn't detect connections with destroyed sockets
### Potential Solutions
1. **Periodic Zombie Detection**: Add zombie detection to the inactivity check:
```typescript
// In performOptimizedInactivityCheck
if (record.incoming?.destroyed && record.outgoing?.destroyed && !record.connectionClosed) {
this.cleanupConnection(record, 'zombie_detected');
}
```
2. **Socket State Monitoring**: Check socket states during connection operations
3. **Defensive Socket Handling**: Always attach cleanup handlers before any operation that might destroy sockets (see the sketch after this list)
4. **Immediate Cleanup Option**: For critical paths, use `cleanupConnection` instead of `initiateCleanupOnce`
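A rough sketch of the defensive pattern from point 3 (illustrative only; `onCleanup` stands in for an idempotent cleanup call such as `cleanupConnection`):
```typescript
import * as net from 'net';

function connectWithDefensiveCleanup(
  targetHost: string,
  targetPort: number,
  onCleanup: (reason: string) => void
): net.Socket {
  const targetSocket = new net.Socket();
  // Handlers are attached BEFORE connect(), so even an immediate failure
  // or forced destruction cannot slip past cleanup. 'close' also fires
  // after 'error', so onCleanup must tolerate being called twice.
  targetSocket.once('error', () => onCleanup('connection_failed'));
  targetSocket.once('close', () => onCleanup('closed'));
  targetSocket.connect(targetPort, targetHost);
  return targetSocket;
}
```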
### Impact
- Memory leaks in edge cases (network failures, proxy chain issues)
- Connection count inaccuracy
- Potential resource exhaustion over time
### Test Files
- `.nogit/debug/connection-manager-direct-test.ts` - Direct ConnectionManager testing showing zombie creation

readme.md

@ -665,6 +665,661 @@ redirect: {
}
```
## Forwarding Modes Guide
This section provides a comprehensive reference for all forwarding modes available in SmartProxy, helping you choose the right configuration for your use case.
### Visual Overview
```mermaid
graph TD
A[Incoming Traffic] --> B{Action Type?}
B -->|forward| C{TLS Mode?}
B -->|socket-handler| D[Custom Handler]
C -->|terminate| E[Decrypt TLS]
C -->|passthrough| F[Forward Encrypted]
C -->|terminate-and-reencrypt| G[Decrypt & Re-encrypt]
C -->|none/HTTP| H[Forward HTTP]
E --> I{Engine?}
F --> I
G --> I
H --> I
I -->|node| J[Node.js Processing]
I -->|nftables| K[Kernel NAT]
J --> L[Backend]
K --> L
D --> M[Custom Logic]
style B fill:#f9f,stroke:#333,stroke-width:2px
style C fill:#bbf,stroke:#333,stroke-width:2px
style I fill:#bfb,stroke:#333,stroke-width:2px
```
### Overview
SmartProxy offers flexible traffic forwarding through combinations of:
- **Action Types**: How to handle matched traffic
- **TLS Modes**: How to handle HTTPS/TLS connections
- **Forwarding Engines**: Where packet processing occurs
### Quick Reference
#### Modern Route-Based Configuration
| Use Case | Action Type | TLS Mode | Engine | Performance | Security |
|----------|------------|----------|---------|-------------|----------|
| HTTP web server | `forward` | N/A | `node` | Good | Basic |
| HTTPS web server (inspect traffic) | `forward` | `terminate` | `node` | Good | Full inspection |
| HTTPS passthrough (no inspection) | `forward` | `passthrough` | `node` | Better | End-to-end encryption |
| HTTPS gateway (re-encrypt to backend) | `forward` | `terminate-and-reencrypt` | `node` | Moderate | Full control |
| High-performance TCP forwarding | `forward` | `passthrough` | `nftables` | Excellent | Basic |
| Custom protocol handling | `socket-handler` | N/A | `node` | Varies | Custom |
#### Legacy Forwarding Types (Deprecated)
| Legacy Type | Modern Equivalent |
|------------|------------------|
| `http-only` | `action.type: 'forward'` with port 80 |
| `https-passthrough` | `action.type: 'forward'` + `tls.mode: 'passthrough'` |
| `https-terminate-to-http` | `action.type: 'forward'` + `tls.mode: 'terminate'` |
| `https-terminate-to-https` | `action.type: 'forward'` + `tls.mode: 'terminate-and-reencrypt'` |
### Forwarding Mode Categories
#### 1. Action Types
##### Forward Action
Routes traffic to a backend server. This is the most common action type.
```typescript
{
action: {
type: 'forward',
target: {
host: 'backend-server',
port: 8080
}
}
}
```
##### Socket Handler Action
Provides custom handling for any TCP protocol. Used for specialized protocols or custom logic.
```typescript
{
action: {
type: 'socket-handler',
socketHandler: async (socket, context) => {
// Custom protocol implementation
}
}
}
```
#### 2. TLS Modes (for Forward Action)
##### Passthrough Mode
- **What**: Forwards encrypted TLS traffic without decryption
- **When**: Backend handles its own TLS termination
- **Pros**: Maximum performance, true end-to-end encryption
- **Cons**: Cannot inspect or modify HTTPS traffic
```mermaid
graph LR
Client -->|TLS| SmartProxy
SmartProxy -->|TLS| Backend
style SmartProxy fill:#f9f,stroke:#333,stroke-width:2px
```
##### Terminate Mode
- **What**: Decrypts TLS, forwards as plain HTTP
- **When**: Backend doesn't support HTTPS or you need to inspect traffic
- **Pros**: Can modify headers, inspect content, add security headers
- **Cons**: Backend connection is unencrypted
```mermaid
graph LR
Client -->|TLS| SmartProxy
SmartProxy -->|HTTP| Backend
style SmartProxy fill:#f9f,stroke:#333,stroke-width:2px
```
##### Terminate-and-Reencrypt Mode
- **What**: Decrypts TLS, then creates new TLS connection to backend
- **When**: Need traffic inspection but backend requires HTTPS
- **Pros**: Full control while maintaining backend security
- **Cons**: Higher CPU usage, increased latency
```mermaid
graph LR
Client -->|TLS| SmartProxy
SmartProxy -->|New TLS| Backend
style SmartProxy fill:#f9f,stroke:#333,stroke-width:2px
```
#### 3. Forwarding Engines
##### Node.js Engine (Default)
- **Processing**: Application-level in Node.js event loop
- **Features**: Full protocol support, header manipulation, WebSockets
- **Performance**: Good for most use cases
- **Use when**: You need application-layer features
##### NFTables Engine
- **Processing**: Kernel-level packet forwarding
- **Features**: Basic NAT, minimal overhead
- **Performance**: Excellent, near wire-speed
- **Use when**: Maximum performance is critical
- **Requirements**: Linux, root permissions, NFTables installed
### Detailed Mode Explanations
#### HTTP Forwarding (Port 80)
Simple HTTP forwarding without encryption:
```typescript
{
match: { ports: 80, domains: 'example.com' },
action: {
type: 'forward',
target: { host: 'localhost', port: 8080 }
}
}
```
**Data Flow**: Client → SmartProxy (HTTP) → Backend (HTTP)
#### HTTPS with TLS Termination
Decrypt HTTPS and forward as HTTP:
```typescript
{
match: { ports: 443, domains: 'secure.example.com' },
action: {
type: 'forward',
target: { host: 'localhost', port: 8080 },
tls: {
mode: 'terminate',
certificate: 'auto' // Use Let's Encrypt
}
}
}
```
**Data Flow**: Client → SmartProxy (HTTPS decrypt) → Backend (HTTP)
#### HTTPS Passthrough
Forward encrypted traffic without decryption:
```typescript
{
match: { ports: 443, domains: 'legacy.example.com' },
action: {
type: 'forward',
target: { host: '192.168.1.10', port: 443 },
tls: {
mode: 'passthrough'
}
}
}
```
**Data Flow**: Client → SmartProxy (TLS forwarding) → Backend (Original TLS)
#### HTTPS Gateway (Terminate and Re-encrypt)
Decrypt, inspect, then re-encrypt to backend:
```typescript
{
match: { ports: 443, domains: 'api.example.com' },
action: {
type: 'forward',
target: { host: 'api-backend', port: 443 },
tls: {
mode: 'terminate-and-reencrypt',
certificate: 'auto'
},
advanced: {
headers: {
'X-Forwarded-Proto': 'https',
'X-Real-IP': '{clientIp}'
}
}
}
}
```
**Data Flow**: Client → SmartProxy (HTTPS decrypt) → SmartProxy (New HTTPS) → Backend
#### High-Performance NFTables Forwarding
Kernel-level forwarding for maximum performance:
```typescript
{
match: { ports: 443, domains: 'fast.example.com' },
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' },
forwardingEngine: 'nftables',
nftables: {
preserveSourceIP: true,
maxRate: '10gbps'
}
}
}
```
**Data Flow**: Client → Kernel (NFTables NAT) → Backend
#### Custom Socket Handler
Handle custom protocols or implement specialized logic:
```typescript
{
match: { ports: 9000, domains: 'custom.example.com' },
action: {
type: 'socket-handler',
socketHandler: async (socket, context) => {
console.log(`Connection from ${context.clientIp}`);
socket.write('Welcome to custom protocol server\n');
socket.on('data', (data) => {
// Handle custom protocol
const response = processCustomProtocol(data);
socket.write(response);
});
}
}
}
```
### Decision Guide
#### Choose HTTP Forwarding When:
- Backend only supports HTTP
- Internal services not exposed to internet
- Development/testing environments
#### Choose HTTPS Termination When:
- Need to inspect/modify HTTP traffic
- Backend doesn't support HTTPS
- Want to add security headers
- Need to cache responses
#### Choose HTTPS Passthrough When:
- Backend manages its own certificates
- Need true end-to-end encryption
- Compliance requires no MITM
- WebSocket connections to backend
#### Choose HTTPS Terminate-and-Reencrypt When:
- Need traffic inspection AND backend requires HTTPS
- API gateway scenarios
- Adding authentication layers
- Different certificates for client/backend
#### Choose NFTables Engine When:
- Handling 1Gbps+ traffic
- Thousands of concurrent connections
- Minimal latency is critical
- Don't need application-layer features
#### Choose Socket Handler When:
- Implementing custom protocols
- Need fine-grained connection control
- Building protocol adapters
- Special authentication flows
### Complete Examples
#### Example 1: Complete Web Application
```typescript
const proxy = new SmartProxy({
routes: [
// HTTP to HTTPS redirect
{
match: { ports: 80, domains: ['example.com', 'www.example.com'] },
action: {
type: 'socket-handler',
socketHandler: SocketHandlers.httpRedirect('https://{domain}{path}')
}
},
// Main website with TLS termination
{
match: { ports: 443, domains: ['example.com', 'www.example.com'] },
action: {
type: 'forward',
target: { host: 'web-backend', port: 3000 },
tls: {
mode: 'terminate',
certificate: 'auto'
},
websocket: { enabled: true }
}
},
// API with re-encryption
{
match: { ports: 443, domains: 'api.example.com' },
action: {
type: 'forward',
target: { host: 'api-backend', port: 443 },
tls: {
mode: 'terminate-and-reencrypt',
certificate: 'auto'
}
},
security: {
ipAllowList: ['10.0.0.0/8'],
rateLimit: {
enabled: true,
maxRequests: 100,
window: 60
}
}
}
]
});
```
#### Example 2: Multi-Mode Proxy Setup
```typescript
const proxy = new SmartProxy({
routes: [
// Legacy app with passthrough
{
match: { ports: 443, domains: 'legacy.example.com' },
action: {
type: 'forward',
target: { host: 'legacy-server', port: 443 },
tls: { mode: 'passthrough' }
}
},
// High-performance streaming with NFTables
{
match: { ports: 8080, domains: 'stream.example.com' },
action: {
type: 'forward',
target: { host: 'stream-backend', port: 8080 },
forwardingEngine: 'nftables',
nftables: {
protocol: 'tcp',
preserveSourceIP: true
}
}
},
// Custom protocol handler
{
match: { ports: 9999 },
action: {
type: 'socket-handler',
socketHandler: SocketHandlers.proxy('custom-backend', 9999)
}
}
]
});
```
### Performance Considerations
#### Node.js Engine Performance
| Metric | Typical Performance |
|--------|-------------------|
| Throughput | 1-10 Gbps |
| Connections | 10,000-50,000 concurrent |
| Latency | 1-5ms added |
| CPU Usage | Moderate |
**Best for**: Most web applications, APIs, sites needing inspection
#### NFTables Engine Performance
| Metric | Typical Performance |
|--------|-------------------|
| Throughput | 10-100 Gbps |
| Connections | 100,000+ concurrent |
| Latency | <0.1ms added |
| CPU Usage | Minimal |
**Best for**: High-traffic services, streaming, gaming, TCP forwarding
#### Performance Tips
1. **Use passthrough mode** when you don't need inspection
2. **Enable NFTables** for high-traffic services
3. **Terminate TLS only when necessary** - it adds CPU overhead
4. **Use connection pooling** for terminate-and-reencrypt mode
5. **Enable HTTP/2** for better multiplexing
### Security Implications
#### TLS Termination Security
**Pros:**
- Inspect traffic for threats
- Add security headers
- Implement WAF rules
- Log requests for audit
**Cons:**
- Proxy has access to decrypted data
- Requires secure certificate storage
- Potential compliance issues
**Best Practices:**
- Use auto-renewal with Let's Encrypt
- Store certificates securely
- Implement proper access controls
- Use strong TLS configurations
#### Passthrough Security
**Pros:**
- True end-to-end encryption
- No MITM concerns
- Backend controls security
**Cons:**
- Cannot inspect traffic
- Cannot add security headers
- Limited DDoS protection
#### Socket Handler Security
**Risks:**
- Custom code may have vulnerabilities
- Resource exhaustion possible
- Authentication bypass risks
**Mitigations:**
```typescript
{
action: {
type: 'socket-handler',
socketHandler: async (socket, context) => {
// Always validate and sanitize input
socket.on('data', (data) => {
if (data.length > MAX_SIZE) {
socket.destroy();
return;
}
// Process safely...
});
// Set timeouts
socket.setTimeout(30000);
// Rate limit connections
if (connectionsFromIP(context.clientIp) > 10) {
socket.destroy();
}
}
}
}
```
### Migration from Legacy Types
#### From `http-only`
**Old:**
```typescript
{
type: 'http-only',
target: { host: 'localhost', port: 8080 }
}
```
**New:**
```typescript
{
match: { ports: 80, domains: 'example.com' },
action: {
type: 'forward',
target: { host: 'localhost', port: 8080 }
}
}
```
#### From `https-passthrough`
**Old:**
```typescript
{
type: 'https-passthrough',
target: { host: 'backend', port: 443 }
}
```
**New:**
```typescript
{
match: { ports: 443, domains: 'example.com' },
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' }
}
}
```
#### From `https-terminate-to-http`
**Old:**
```typescript
{
type: 'https-terminate-to-http',
target: { host: 'localhost', port: 8080 },
ssl: { /* certs */ }
}
```
**New:**
```typescript
{
match: { ports: 443, domains: 'example.com' },
action: {
type: 'forward',
target: { host: 'localhost', port: 8080 },
tls: {
mode: 'terminate',
certificate: 'auto' // or provide cert/key
}
}
}
```
#### From `https-terminate-to-https`
**Old:**
```typescript
{
type: 'https-terminate-to-https',
target: { host: 'backend', port: 443 },
ssl: { /* certs */ }
}
```
**New:**
```typescript
{
match: { ports: 443, domains: 'example.com' },
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: {
mode: 'terminate-and-reencrypt',
certificate: 'auto'
}
}
}
```
### Helper Functions Quick Reference
SmartProxy provides helper functions for common configurations:
```typescript
// HTTP forwarding
createHttpRoute('example.com', { host: 'localhost', port: 8080 })
// HTTPS with termination
createHttpsTerminateRoute('secure.com', { host: 'localhost', port: 8080 }, {
certificate: 'auto'
})
// HTTPS passthrough
createHttpsPassthroughRoute('legacy.com', { host: 'backend', port: 443 })
// Complete HTTPS setup (includes HTTP redirect)
...createCompleteHttpsServer('example.com', { host: 'localhost', port: 8080 }, {
certificate: 'auto'
})
// NFTables high-performance
createNfTablesRoute('fast.com', { host: 'backend', port: 8080 }, {
ports: 80,
preserveSourceIP: true
})
// Custom socket handler
createSocketHandlerRoute('custom.com', 9000, async (socket, context) => {
// Handler implementation
})
```
### Summary
SmartProxy's forwarding modes provide flexibility for any proxy scenario:
- **Simple HTTP/HTTPS forwarding** for most web applications
- **TLS passthrough** for end-to-end encryption
- **TLS termination** for traffic inspection and modification
- **NFTables** for extreme performance requirements
- **Socket handlers** for custom protocols
Choose based on your security requirements, performance needs, and whether you need to inspect or modify traffic. The modern route-based configuration provides a consistent interface regardless of the forwarding mode you choose.
### Route Metadata and Prioritization
You can add metadata to routes to help with organization and control matching priority:
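For example (a minimal sketch; `priority` is an assumed field name based on the description above, while `name` is used elsewhere in this guide):
```typescript
{
  name: 'api-route',   // human-readable identifier for logs and metrics
  priority: 100,       // assumed: higher-priority routes are matched first
  match: { ports: 443, domains: 'api.example.com' },
  action: {
    type: 'forward',
    target: { host: 'api-backend', port: 443 },
    tls: { mode: 'passthrough' }
  }
}
```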
@ -970,6 +1625,34 @@ The `IProxyStats` interface provides the following methods:
- `getConnectionsByRoute()`: Connection count per route
- `getConnectionsByIP()`: Connection count per client IP
Additional extended methods available:
- `getThroughputRate()`: Bytes per second rate for the last minute
- `getTopIPs(limit?: number)`: Get top IPs by connection count
- `isIPBlocked(ip: string, maxConnectionsPerIP: number)`: Check if an IP has reached the connection limit
### Extended Metrics Example
```typescript
const stats = proxy.getStats() as any; // Extended methods are available
// Get throughput rate
const rate = stats.getThroughputRate();
console.log(`Incoming: ${rate.bytesInPerSec} bytes/sec`);
console.log(`Outgoing: ${rate.bytesOutPerSec} bytes/sec`);
// Get top 10 IPs by connection count
const topIPs = stats.getTopIPs(10);
topIPs.forEach(({ ip, connections }) => {
console.log(`${ip}: ${connections} connections`);
});
// Check if an IP should be rate limited
if (stats.isIPBlocked('192.168.1.100', 100)) {
console.log('IP has too many connections');
}
```
### Monitoring Example
```typescript
@ -1736,6 +2419,62 @@ createHttpToHttpsRedirect('old.example.com', 443)
}
```
## WebSocket Keep-Alive Configuration
If your WebSocket connections are disconnecting every 30 seconds in SNI passthrough mode, here's how to configure keep-alive settings:
### Extended Keep-Alive Treatment (Recommended)
```typescript
const proxy = new SmartProxy({
// Extend timeout for keep-alive connections
keepAliveTreatment: 'extended',
keepAliveInactivityMultiplier: 10, // 10x the base timeout
inactivityTimeout: 14400000, // 4 hours base (40 hours with multiplier)
routes: [
{
name: 'websocket-passthrough',
match: {
ports: 443,
domains: ['ws.example.com', 'wss.example.com']
},
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' }
}
}
]
});
```
### Immortal Connections (Never Timeout)
```typescript
const proxy = new SmartProxy({
// Never timeout keep-alive connections
keepAliveTreatment: 'immortal',
routes: [
// ... same as above
]
});
```
### Understanding the Issue
In SNI passthrough mode:
1. **WebSocket Heartbeat**: The HTTP proxy's WebSocket handler sends ping frames every 30 seconds
2. **SNI Passthrough**: In passthrough mode, traffic is encrypted end-to-end
3. **Can't Inject Pings**: The proxy can't inject ping frames into encrypted traffic
4. **Connection Terminated**: After 30 seconds, connection is marked inactive and closed
The solution involves:
- Longer grace periods for encrypted connections (5 minutes vs 30 seconds)
- Relying on OS-level TCP keep-alive instead of application-level heartbeat
- Different timeout strategies per route type
## Configuration Options
### SmartProxy (IRoutedSmartProxyOptions)
@ -1746,6 +2485,7 @@ createHttpToHttpsRedirect('old.example.com', 443)
- `httpProxyPort` (number, default 8443) - Port where HttpProxy listens for forwarded connections
- Connection timeouts: `initialDataTimeout`, `socketTimeout`, `inactivityTimeout`, etc.
- Socket opts: `noDelay`, `keepAlive`, `enableKeepAliveProbes`
- Keep-alive configuration: `keepAliveTreatment` ('standard'|'extended'|'immortal'), `keepAliveInactivityMultiplier`
- `certProvisionFunction` (callback) - Custom certificate provisioning
#### SmartProxy Dynamic Port Management Methods


@ -1,45 +0,0 @@
# Memory Leaks Fixed in SmartProxy
## Summary of Issues Found and Fixed
### 1. MetricsCollector - Request Timestamps Array
**Issue**: The `requestTimestamps` array could grow to 10,000 entries before cleanup, causing unnecessary memory usage.
**Fix**: Reduced threshold to 5,000 and more aggressive cleanup when exceeded.
### 2. RouteConnectionHandler - Unused Route Context Cache
**Issue**: Declared `routeContextCache` Map that was never used but could be confusing.
**Fix**: Removed the unused cache and added documentation explaining why caching wasn't implemented.
### 3. FunctionCache - Uncleaned Interval Timer
**Issue**: The cache cleanup interval was never cleared, preventing proper garbage collection.
**Fix**: Added `destroy()` method to properly clear the interval timer.
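The general pattern, sketched (not the actual FunctionCache code):
```typescript
class IntervalOwningCache {
  private cleanupInterval: NodeJS.Timeout;

  constructor() {
    this.cleanupInterval = setInterval(() => this.evictExpiredEntries(), 60000);
    this.cleanupInterval.unref(); // the timer alone should never keep the process alive
  }

  private evictExpiredEntries(): void {
    // drop stale cache entries here
  }

  public destroy(): void {
    clearInterval(this.cleanupInterval); // lets the instance be garbage collected
  }
}
```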
### 4. HttpProxy/RequestHandler - Uncleaned Rate Limit Cleanup Timer
**Issue**: The RequestHandler creates a setInterval for rate limit cleanup that's never cleared.
**Status**: Needs fix - add destroy method and call it from HttpProxy.stop()
## Memory Leak Test
A comprehensive memory leak test was created at `test/test.memory-leak-check.node.ts` that:
- Tests with 1000 requests to same routes
- Tests with 1000 requests to different routes (cache growth)
- Tests rapid 10,000 requests (timestamp array growth)
- Monitors memory usage throughout
- Verifies specific data structures don't grow unbounded
## Recommendations
1. Always use `unref()` on intervals that shouldn't keep the process alive
2. Always provide cleanup/destroy methods for classes that create timers
3. Implement size limits on all caches and Maps
4. Consider using WeakMap for caches where appropriate
5. Run memory leak tests regularly, especially after adding new features
## Running the Memory Leak Test
```bash
# Run with garbage collection exposed for accurate measurements
node --expose-gc test/test.memory-leak-check.node.ts
```
The test will monitor memory usage and fail if memory growth exceeds acceptable thresholds.


@ -1,591 +0,0 @@
# SmartProxy Metrics Implementation Plan
This document outlines the plan for implementing comprehensive metrics tracking in SmartProxy.
## Overview
The metrics system will provide real-time insights into proxy performance, connection statistics, and throughput data. The implementation will be efficient, thread-safe, and have minimal impact on proxy performance.
**Key Design Decisions**:
1. **On-demand computation**: Instead of maintaining duplicate state, the MetricsCollector computes metrics on-demand from existing data structures.
2. **SmartProxy-centric architecture**: MetricsCollector receives the SmartProxy instance, providing access to all components:
- ConnectionManager for connection data
- RouteManager for route metadata
- Settings for configuration
- Future components without API changes
This approach:
- Eliminates synchronization issues
- Reduces memory overhead
- Simplifies the implementation
- Guarantees metrics accuracy
- Leverages existing battle-tested components
- Provides flexibility for future enhancements
## Metrics Interface
```typescript
interface IProxyStats {
getActiveConnections(): number;
getConnectionsByRoute(): Map<string, number>;
getConnectionsByIP(): Map<string, number>;
getTotalConnections(): number;
getRequestsPerSecond(): number;
getThroughput(): { bytesIn: number, bytesOut: number };
}
```
## Implementation Plan
### 1. Create MetricsCollector Class
**Location**: `/ts/proxies/smart-proxy/metrics-collector.ts`
```typescript
import type { SmartProxy } from './smart-proxy.js';
export class MetricsCollector implements IProxyStats {
constructor(
private smartProxy: SmartProxy
) {}
// RPS tracking (the only state we need to maintain)
private requestTimestamps: number[] = [];
private readonly RPS_WINDOW_SIZE = 60000; // 1 minute window
// All other metrics are computed on-demand from SmartProxy's components
}
```
### 2. Integration Points
Since metrics are computed on-demand from ConnectionManager's records, we only need minimal integration:
#### A. Request Tracking for RPS
**File**: `/ts/proxies/smart-proxy/route-connection-handler.ts`
```typescript
// In handleNewConnection when a new connection is accepted
this.metricsCollector.recordRequest();
```
#### B. SmartProxy Component Access
Through the SmartProxy instance, MetricsCollector can access:
- `smartProxy.connectionManager` - All active connections and their details
- `smartProxy.routeManager` - Route configurations and metadata
- `smartProxy.settings` - Configuration for thresholds and limits
- `smartProxy.servers` - Server instances and port information
- Any other components as needed for future metrics
No additional hooks needed!
### 3. Metric Implementations
#### A. Active Connections
```typescript
getActiveConnections(): number {
return this.smartProxy.connectionManager.getConnectionCount();
}
```
#### B. Connections by Route
```typescript
getConnectionsByRoute(): Map<string, number> {
const routeCounts = new Map<string, number>();
// Compute from active connections
for (const [_, record] of this.smartProxy.connectionManager.getConnections()) {
const routeName = record.routeName || 'unknown';
const current = routeCounts.get(routeName) || 0;
routeCounts.set(routeName, current + 1);
}
return routeCounts;
}
```
#### C. Connections by IP
```typescript
getConnectionsByIP(): Map<string, number> {
const ipCounts = new Map<string, number>();
// Compute from active connections
for (const [_, record] of this.smartProxy.connectionManager.getConnections()) {
const ip = record.remoteIP;
const current = ipCounts.get(ip) || 0;
ipCounts.set(ip, current + 1);
}
return ipCounts;
}
// Additional helper methods for IP tracking
getTopIPs(limit: number = 10): Array<{ip: string, connections: number}> {
const ipCounts = this.getConnectionsByIP();
const sorted = Array.from(ipCounts.entries())
.sort((a, b) => b[1] - a[1])
.slice(0, limit)
.map(([ip, connections]) => ({ ip, connections }));
return sorted;
}
isIPBlocked(ip: string, maxConnectionsPerIP: number): boolean {
const ipCounts = this.getConnectionsByIP();
const currentConnections = ipCounts.get(ip) || 0;
return currentConnections >= maxConnectionsPerIP;
}
```
#### D. Total Connections
```typescript
getTotalConnections(): number {
// Get from termination stats
const stats = this.smartProxy.connectionManager.getTerminationStats();
let total = this.smartProxy.connectionManager.getConnectionCount(); // Start with active connections
// Add all terminated connections
for (const reason in stats.incoming) {
total += stats.incoming[reason];
}
return total;
}
```
#### E. Requests Per Second
```typescript
getRequestsPerSecond(): number {
const now = Date.now();
const windowStart = now - this.RPS_WINDOW_SIZE;
// Clean old timestamps
this.requestTimestamps = this.requestTimestamps.filter(ts => ts > windowStart);
// Calculate RPS based on window
const requestsInWindow = this.requestTimestamps.length;
return requestsInWindow / (this.RPS_WINDOW_SIZE / 1000);
}
recordRequest(): void {
this.requestTimestamps.push(Date.now());
// Prevent unbounded growth
if (this.requestTimestamps.length > 10000) {
this.cleanupOldRequests();
}
}
```
#### F. Throughput Tracking
```typescript
getThroughput(): { bytesIn: number, bytesOut: number } {
let bytesIn = 0;
let bytesOut = 0;
// Sum bytes from all active connections
for (const [_, record] of this.smartProxy.connectionManager.getConnections()) {
bytesIn += record.bytesReceived;
bytesOut += record.bytesSent;
}
return { bytesIn, bytesOut };
}
// Get throughput rate (bytes per second) for last minute
getThroughputRate(): { bytesInPerSec: number, bytesOutPerSec: number } {
const now = Date.now();
let recentBytesIn = 0;
let recentBytesOut = 0;
let connectionCount = 0;
// Calculate bytes transferred in last minute from active connections
for (const [_, record] of this.smartProxy.connectionManager.getConnections()) {
const connectionAge = now - record.incomingStartTime;
if (connectionAge < 60000) { // Connection started within last minute
recentBytesIn += record.bytesReceived;
recentBytesOut += record.bytesSent;
connectionCount++;
} else {
// For older connections, estimate rate based on average
const rate = connectionAge / 60000;
recentBytesIn += record.bytesReceived / rate;
recentBytesOut += record.bytesSent / rate;
connectionCount++;
}
}
return {
bytesInPerSec: Math.round(recentBytesIn / 60),
bytesOutPerSec: Math.round(recentBytesOut / 60)
};
}
```
### 4. Performance Optimizations
Since metrics are computed on-demand from existing data structures, performance optimizations are minimal:
#### A. Caching for Frequent Queries
```typescript
private cachedMetrics: {
timestamp: number;
connectionsByRoute?: Map<string, number>;
connectionsByIP?: Map<string, number>;
} = { timestamp: 0 };
private readonly CACHE_TTL = 1000; // 1 second cache
getConnectionsByRoute(): Map<string, number> {
const now = Date.now();
// Return cached value if fresh
if (this.cachedMetrics.connectionsByRoute &&
now - this.cachedMetrics.timestamp < this.CACHE_TTL) {
return this.cachedMetrics.connectionsByRoute;
}
// Compute fresh value
const routeCounts = new Map<string, number>();
for (const [_, record] of this.smartProxy.connectionManager.getConnections()) {
const routeName = record.routeName || 'unknown';
const current = routeCounts.get(routeName) || 0;
routeCounts.set(routeName, current + 1);
}
// Cache and return
this.cachedMetrics.connectionsByRoute = routeCounts;
this.cachedMetrics.timestamp = now;
return routeCounts;
}
```
#### B. RPS Cleanup
```typescript
// Only cleanup needed is for RPS timestamps
private cleanupOldRequests(): void {
const cutoff = Date.now() - this.RPS_WINDOW_SIZE;
this.requestTimestamps = this.requestTimestamps.filter(ts => ts > cutoff);
}
```
### 5. SmartProxy Integration
#### A. Add to SmartProxy Class
```typescript
export class SmartProxy {
private metricsCollector: MetricsCollector;
constructor(options: ISmartProxyOptions) {
// ... existing code ...
// Pass SmartProxy instance to MetricsCollector
this.metricsCollector = new MetricsCollector(this);
}
// Public API
public getStats(): IProxyStats {
return this.metricsCollector;
}
}
```
#### B. Configuration Options
```typescript
interface ISmartProxyOptions {
// ... existing options ...
metrics?: {
enabled?: boolean; // Default: true
rpsWindowSize?: number; // Default: 60000 (1 minute)
throughputWindowSize?: number; // Default: 60000 (1 minute)
cleanupInterval?: number; // Default: 60000 (1 minute)
};
}
```
### 6. Advanced Metrics (Future Enhancement)
```typescript
interface IAdvancedProxyStats extends IProxyStats {
// Latency metrics
getAverageLatency(): number;
getLatencyPercentiles(): { p50: number, p95: number, p99: number };
// Error metrics
getErrorRate(): number;
getErrorsByType(): Map<string, number>;
// Route-specific metrics
getRouteMetrics(routeName: string): IRouteMetrics;
// Time-series data
getHistoricalMetrics(duration: number): IHistoricalMetrics;
// Server/Port metrics (leveraging SmartProxy access)
getPortUtilization(): Map<number, { connections: number, maxConnections: number }>;
getCertificateExpiry(): Map<string, Date>;
}
// Example implementation showing SmartProxy component access
getPortUtilization(): Map<number, { connections: number, maxConnections: number }> {
const portStats = new Map();
// Access servers through SmartProxy
for (const [port, server] of this.smartProxy.servers) {
const connections = Array.from(this.smartProxy.connectionManager.getConnections())
.filter(([_, record]) => record.localPort === port).length;
// Access route configuration through SmartProxy
const routes = this.smartProxy.routeManager.getRoutesForPort(port);
const maxConnections = routes[0]?.advanced?.maxConnections ||
this.smartProxy.settings.defaults?.security?.maxConnections ||
10000;
portStats.set(port, { connections, maxConnections });
}
return portStats;
}
```
### 7. HTTP Metrics Endpoint (Optional)
```typescript
// Expose metrics via HTTP endpoint
class MetricsHttpHandler {
handleRequest(req: IncomingMessage, res: ServerResponse): void {
if (req.url === '/metrics') {
const stats = this.proxy.getStats();
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
activeConnections: stats.getActiveConnections(),
totalConnections: stats.getTotalConnections(),
requestsPerSecond: stats.getRequestsPerSecond(),
throughput: stats.getThroughput(),
connectionsByRoute: Object.fromEntries(stats.getConnectionsByRoute()),
connectionsByIP: Object.fromEntries(stats.getConnectionsByIP()),
topIPs: stats.getTopIPs(20)
}));
}
}
}
```
### 8. Testing Strategy
The simplified design makes testing much easier since we can mock the ConnectionManager's data:
#### A. Unit Tests
```typescript
// test/test.metrics-collector.ts
tap.test('MetricsCollector computes metrics correctly', async () => {
// Mock ConnectionManager with test data
const mockConnectionManager = {
getConnectionCount: () => 2,
getConnections: () => new Map([
['conn1', { remoteIP: '192.168.1.1', routeName: 'api', bytesReceived: 1000, bytesSent: 500 }],
['conn2', { remoteIP: '192.168.1.1', routeName: 'web', bytesReceived: 2000, bytesSent: 1000 }]
]),
getTerminationStats: () => ({ incoming: { normal: 10, timeout: 2 } })
};
const collector = new MetricsCollector({ connectionManager: mockConnectionManager } as any); // minimal SmartProxy-like stub
expect(collector.getActiveConnections()).toEqual(2);
expect(collector.getConnectionsByIP().get('192.168.1.1')).toEqual(2);
expect(collector.getTotalConnections()).toEqual(14); // 2 active + 12 terminated
});
```
#### B. Integration Tests
```typescript
// test/test.metrics-integration.ts
tap.test('SmartProxy provides accurate metrics', async () => {
const proxy = new SmartProxy({ /* config */ });
await proxy.start();
// Create connections and verify metrics
const stats = proxy.getStats();
expect(stats.getActiveConnections()).toEqual(0);
});
```
#### C. Performance Tests
```typescript
// test/test.metrics-performance.ts
tap.test('Metrics collection has minimal performance impact', async () => {
// Measure proxy performance with and without metrics
// Ensure overhead is < 1%
});
```
### 9. Implementation Phases
#### Phase 1: Core Metrics (Days 1-2)
- [ ] Create MetricsCollector class
- [ ] Implement all metric methods (reading from ConnectionManager)
- [ ] Add RPS tracking
- [ ] Add to SmartProxy with getStats() method
#### Phase 2: Testing & Optimization (Days 3-4)
- [ ] Add comprehensive unit tests with mocked data
- [ ] Add integration tests with real proxy
- [ ] Implement caching for performance
- [ ] Add RPS cleanup mechanism
#### Phase 3: Advanced Features (Days 5-7)
- [ ] Add HTTP metrics endpoint
- [ ] Implement Prometheus export format
- [ ] Add IP-based rate limiting helpers
- [ ] Create monitoring dashboard example
**Note**: The simplified design reduces implementation time from 4 weeks to 1 week!
### 10. Usage Examples
```typescript
// Basic usage
const proxy = new SmartProxy({
routes: [...],
metrics: { enabled: true }
});
await proxy.start();
// Get metrics
const stats = proxy.getStats();
console.log(`Active connections: ${stats.getActiveConnections()}`);
console.log(`RPS: ${stats.getRequestsPerSecond()}`);
console.log(`Throughput: ${JSON.stringify(stats.getThroughput())}`);
// Monitor specific routes
const routeConnections = stats.getConnectionsByRoute();
for (const [route, count] of routeConnections) {
console.log(`Route ${route}: ${count} connections`);
}
// Monitor connections by IP
const ipConnections = stats.getConnectionsByIP();
for (const [ip, count] of ipConnections) {
console.log(`IP ${ip}: ${count} connections`);
}
// Get top IPs by connection count
const topIPs = stats.getTopIPs(10);
console.log('Top 10 IPs:', topIPs);
// Check if IP should be rate limited
if (stats.isIPBlocked('192.168.1.100', 100)) {
console.log('IP has too many connections');
}
```
### 11. Monitoring Integration
```typescript
// Export to monitoring systems
class PrometheusExporter {
export(stats: IProxyStats): string {
return `
# HELP smartproxy_active_connections Current number of active connections
# TYPE smartproxy_active_connections gauge
smartproxy_active_connections ${stats.getActiveConnections()}
# HELP smartproxy_total_connections Total connections since start
# TYPE smartproxy_total_connections counter
smartproxy_total_connections ${stats.getTotalConnections()}
# HELP smartproxy_requests_per_second Current requests per second
# TYPE smartproxy_requests_per_second gauge
smartproxy_requests_per_second ${stats.getRequestsPerSecond()}
`;
}
}
```
### 12. Documentation
- Add metrics section to main README
- Create metrics API documentation
- Add monitoring setup guide
- Provide dashboard configuration examples
## Success Criteria
1. **Performance**: Metrics collection adds < 1% overhead
2. **Accuracy**: All metrics are accurate within 1% margin
3. **Memory**: No memory leaks over 24-hour operation
4. **Thread Safety**: No race conditions under high load
5. **Usability**: Simple, intuitive API for accessing metrics
## Privacy and Security Considerations
### IP Address Tracking
1. **Privacy Compliance**:
- Consider GDPR and other privacy regulations when storing IP addresses
- Implement configurable IP anonymization (e.g., mask last octet) - a small sketch follows this section
- Add option to disable IP tracking entirely
2. **Security**:
- Use IP metrics for rate limiting and DDoS protection
- Implement automatic blocking for IPs exceeding connection limits
- Consider integration with IP reputation services
3. **Implementation Options**:
```typescript
interface IMetricsOptions {
trackIPs?: boolean; // Default: true
anonymizeIPs?: boolean; // Default: false
maxConnectionsPerIP?: number; // Default: 100
ipBlockDuration?: number; // Default: 3600000 (1 hour)
}
```
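A tiny sketch of the last-octet anonymization mentioned in point 1 (assumed helper, not part of the current API):
```typescript
function anonymizeIP(ip: string): string {
  if (ip.includes('.')) {
    return ip.replace(/\.\d+$/, '.0');         // 192.168.1.100 -> 192.168.1.0
  }
  return ip.replace(/:[0-9a-fA-F]*$/, ':0');   // 2001:db8::1 -> 2001:db8::0
}
```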
## Future Enhancements
1. **Distributed Metrics**: Aggregate metrics across multiple proxy instances
2. **Historical Storage**: Store metrics in time-series database
3. **Alerting**: Built-in alerting based on metric thresholds
4. **Custom Metrics**: Allow users to define custom metrics
5. **GraphQL API**: Provide GraphQL endpoint for flexible metric queries
6. **IP Analytics**:
- Geographic distribution of connections
- Automatic anomaly detection for IP patterns
- Integration with threat intelligence feeds
## Benefits of the Simplified Design
By using a SmartProxy-centric architecture with on-demand computation:
1. **Zero Synchronization Issues**: Metrics always reflect the true state
2. **Minimal Memory Overhead**: No duplicate data structures
3. **Simpler Implementation**: ~200 lines instead of ~1000 lines
4. **Easier Testing**: Can mock SmartProxy components
5. **Better Performance**: No overhead from state updates
6. **Guaranteed Accuracy**: Single source of truth
7. **Faster Development**: 1 week instead of 4 weeks
8. **Future Flexibility**: Access to all SmartProxy components without API changes
9. **Holistic Metrics**: Can correlate data across components (connections, routes, settings, certificates, etc.)
10. **Clean Architecture**: MetricsCollector is a true SmartProxy component, not an isolated module
This approach leverages the existing, well-tested SmartProxy infrastructure while providing a clean, simple metrics API that can grow with the proxy's capabilities.


@ -1,202 +0,0 @@
# Production Connection Monitoring
This document explains how to use the ProductionConnectionMonitor to diagnose connection accumulation issues in real-time.
## Quick Start
```typescript
import ProductionConnectionMonitor from './.nogit/debug/production-connection-monitor.js';
// After starting your proxy
const monitor = new ProductionConnectionMonitor(proxy);
monitor.start(5000); // Check every 5 seconds
// The monitor will automatically capture diagnostics when:
// - Connections exceed 50 (default threshold)
// - Sudden spike of 20+ connections occurs
// - You manually call monitor.forceCaptureNow()
```
## What Gets Captured
When accumulation is detected, the monitor saves a JSON file with:
### Connection Details
- Socket states (destroyed, readable, writable, readyState)
- Connection age and activity timestamps
- Data transfer statistics (bytes sent/received)
- Target host and port information
- Keep-alive status
- Event listener counts
### System State
- Memory usage
- Event loop lag
- Connection count trends
- Termination statistics
## Reading Diagnostic Files
Files are saved to `.nogit/connection-diagnostics/` with names like:
```
accumulation_2025-06-07T20-20-43-733Z_force_capture.json
```
### Key Fields to Check
1. **Socket States**
```json
"incomingState": {
"destroyed": false,
"readable": true,
"writable": true,
"readyState": "open"
}
```
- Both destroyed = zombie connection
- One destroyed = half-zombie
- Both alive but old = potential stuck connection
2. **Data Transfer**
```json
"bytesReceived": 36,
"bytesSent": 0,
"timeSinceLastActivity": 60000
```
- No bytes sent back = stuck connection
- High bytes but old = slow backend
- No activity = idle connection
3. **Connection Flags**
```json
"hasReceivedInitialData": false,
"hasKeepAlive": true,
"connectionClosed": false
```
- hasReceivedInitialData=false on non-TLS = immediate routing
- hasKeepAlive=true = extended timeout applies
- connectionClosed=false = still tracked
## Common Patterns
### 1. Hanging Backend Pattern
```json
{
"bytesReceived": 36,
"bytesSent": 0,
"age": 120000,
"targetHost": "backend.example.com",
"incomingState": { "destroyed": false },
"outgoingState": { "destroyed": false }
}
```
**Fix**: The stuck connection detection (60s timeout) should clean these up.
### 2. Zombie Connection Pattern
```json
{
"incomingState": { "destroyed": true },
"outgoingState": { "destroyed": true },
"connectionClosed": false
}
```
**Fix**: The zombie detection should clean these up within 30s.
### 3. Event Listener Leak Pattern
```json
{
"incomingListeners": {
"data": 15,
"error": 20,
"close": 18
}
}
```
**Issue**: Event listeners accumulating, potential memory leak.
### 4. No Outgoing Socket Pattern
```json
{
"outgoingState": { "exists": false },
"connectionClosed": false,
"age": 5000
}
```
**Issue**: Connection setup failed but cleanup didn't trigger.
## Forcing Diagnostic Capture
To capture current state immediately:
```typescript
monitor.forceCaptureNow();
```
This is useful when you notice accumulation starting.
## Automated Analysis
The monitor automatically analyzes patterns and logs:
- Zombie/half-zombie counts
- Stuck connection counts
- Old connection counts
- Memory usage
- Recommendations
## Integration Example
```typescript
// In your proxy startup script
import { SmartProxy } from '@push.rocks/smartproxy';
import ProductionConnectionMonitor from './production-connection-monitor.js';
async function startProxyWithMonitoring() {
const proxy = new SmartProxy({
// your config
});
await proxy.start();
// Start monitoring
const monitor = new ProductionConnectionMonitor(proxy);
monitor.start(5000);
// Optional: Capture on specific events
process.on('SIGUSR1', () => {
console.log('Manual diagnostic capture triggered');
monitor.forceCaptureNow();
});
// Graceful shutdown
process.on('SIGTERM', async () => {
monitor.stop();
await proxy.stop();
process.exit(0);
});
}
```
## Troubleshooting
### Monitor Not Detecting Accumulation
- Check threshold settings (default: 50 connections)
- Reduce check interval for faster detection
- Use forceCaptureNow() to capture current state
### Too Many False Positives
- Increase accumulation threshold
- Increase spike threshold
- Adjust check interval
### Missing Diagnostic Data
- Ensure output directory exists and is writable
- Check disk space
- Verify process has write permissions
## Next Steps
1. Deploy the monitor to production
2. Wait for accumulation to occur
3. Share diagnostic files for analysis
4. Apply targeted fixes based on patterns found
The diagnostic data will reveal the exact state of connections when accumulation occurs, enabling precise fixes for your specific scenario.


@ -1,625 +0,0 @@
# PROXY Protocol Implementation Plan
## ⚠️ CRITICAL: Implementation Order
**Phase 1 (ProxyProtocolSocket/WrappedSocket) MUST be completed first!**
The ProxyProtocolSocket class is the foundation that enables all PROXY protocol functionality. No protocol parsing or integration can happen until this wrapper class is fully implemented and tested.
1. **FIRST**: Implement ProxyProtocolSocket (the WrappedSocket)
2. **THEN**: Add PROXY protocol parser
3. **THEN**: Integrate with connection handlers
4. **FINALLY**: Add security and validation
## Overview
Implement PROXY protocol support in SmartProxy to preserve client IP information through proxy chains, solving the connection limit accumulation issue where inner proxies see all connections as coming from the outer proxy's IP.
## Problem Statement
- In proxy chains, the inner proxy sees all connections from the outer proxy's IP
- This causes the inner proxy to hit per-IP connection limits (default: 100)
- Results in connection rejections while outer proxy accumulates connections
## Solution Design
### 1. Core Features
#### 1.1 PROXY Protocol Parsing
- Support PROXY protocol v1 (text format) initially
- Parse incoming PROXY headers to extract:
- Real client IP address
- Real client port
- Proxy IP address
- Proxy port
- Protocol (TCP4/TCP6)
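For reference, a v1 header is a single CRLF-terminated text line such as `PROXY TCP4 203.0.113.7 10.0.0.5 51234 443`. A minimal parsing sketch (illustrative only; the real parser in `ts/core/utils/proxy-protocol.ts` validates much more strictly):
```typescript
interface IProxyInfoSketch {
  protocol: 'TCP4' | 'TCP6';
  sourceIP: string;
  sourcePort: number;
  destinationIP: string;
  destinationPort: number;
}

function parseProxyV1Header(line: string): IProxyInfoSketch | null {
  const parts = line.trim().split(' ');
  if (parts[0] !== 'PROXY') return null;
  if (parts[1] === 'UNKNOWN') return null; // sender has no client info
  if (parts.length !== 6) return null;
  const [, protocol, sourceIP, destinationIP, srcPort, dstPort] = parts;
  if (protocol !== 'TCP4' && protocol !== 'TCP6') return null;
  return {
    protocol: protocol as 'TCP4' | 'TCP6',
    sourceIP,
    destinationIP,
    sourcePort: parseInt(srcPort, 10),
    destinationPort: parseInt(dstPort, 10)
  };
}
```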
#### 1.2 PROXY Protocol Generation
- Add ability to send PROXY protocol headers when forwarding connections
- Configurable per route or target
#### 1.3 Trusted Proxy IPs
- New `proxyIPs` array in SmartProxy options
- Auto-enable PROXY protocol acceptance for connections from these IPs
- Reject PROXY protocol from untrusted sources (security)
### 2. Configuration Schema
```typescript
interface ISmartProxyOptions {
// ... existing options
// List of trusted proxy IPs that can send PROXY protocol
proxyIPs?: string[];
// Global option to accept PROXY protocol (defaults based on proxyIPs)
acceptProxyProtocol?: boolean;
// Global option to send PROXY protocol to all targets
sendProxyProtocol?: boolean;
}
interface IRouteAction {
// ... existing options
// Send PROXY protocol to this specific target
sendProxyProtocol?: boolean;
}
```
### 3. Implementation Steps
#### IMPORTANT: Phase 1 Must Be Completed First
The `ProxyProtocolSocket` (WrappedSocket) is the foundation for all PROXY protocol functionality. This wrapper class must be implemented and integrated BEFORE any PROXY protocol parsing can begin.
#### Phase 1: ProxyProtocolSocket (WrappedSocket) Foundation - ✅ COMPLETED (v19.5.19)
This phase creates the socket wrapper infrastructure that all subsequent phases depend on.
1. **Create WrappedSocket class** in `ts/core/models/wrapped-socket.ts`
- Used JavaScript Proxy pattern instead of EventEmitter (avoids infinite loops)
- Properties for real client IP and port
- Transparent getters that return real or socket IP/port
- All socket methods/properties delegated via Proxy
2. **Implement core wrapper functionality**
- Constructor accepts regular socket + optional metadata
- `remoteAddress` getter returns real IP or falls back to socket IP
- `remotePort` getter returns real port or falls back to socket port
- `isFromTrustedProxy` property to check if it has real client info
- `setProxyInfo()` method to update real client details
3. **Update ConnectionManager to handle wrapped sockets**
- Accept either `net.Socket` or `WrappedSocket`
- Created `getUnderlyingSocket()` helper for socket utilities
- All socket utility functions extract underlying socket
4. **Integration completed**
- All incoming sockets wrapped in RouteConnectionHandler
- Socket forwarding verified working with wrapped sockets
- Type safety maintained with index signature
**Deliverables**: ✅ Working WrappedSocket that can wrap any socket and provide transparent access to client info.
#### Phase 2: PROXY Protocol Parser - ✅ COMPLETED (v19.5.21)
Only after WrappedSocket is working can we add protocol parsing.
1. ✅ Created `ProxyProtocolParser` class in `ts/core/utils/proxy-protocol.ts`
2. ✅ Implemented v1 text format parsing with full validation
3. ✅ Added comprehensive error handling and IP validation
4. ✅ Integrated parser to work WITH WrappedSocket in RouteConnectionHandler
**Deliverables**: ✅ Working PROXY protocol v1 parser that validates headers, extracts client info, and handles both TCP4 and TCP6 protocols.
#### Phase 3: Connection Handler Integration - ✅ COMPLETED (v19.5.21)
1. ✅ Modify `RouteConnectionHandler` to create WrappedSocket for all connections
2. ✅ Check if connection is from trusted proxy IP
3. ✅ If trusted, attempt to parse PROXY protocol header
4. ✅ Update wrapped socket with real client info
5. ✅ Continue normal connection handling with wrapped socket
**Deliverables**: ✅ RouteConnectionHandler now parses PROXY protocol from trusted proxies and updates connection records with real client info.
#### Phase 4: Outbound PROXY Protocol - ✅ COMPLETED (v19.5.21)
1. ✅ Add PROXY header generation in `setupDirectConnection`
2. ✅ Make it configurable per route via `sendProxyProtocol` option
3. ✅ Send header immediately after TCP connection
4. ✅ Added remotePort tracking to connection records
**Deliverables**: ✅ SmartProxy can now send PROXY protocol headers to backend servers when configured, preserving client IP through proxy chains.
#### Phase 5: Security & Validation - FINAL PHASE
1. Validate PROXY headers strictly
2. Reject malformed headers
3. Only accept from trusted IPs
4. Add rate limiting for PROXY protocol parsing
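A minimal sketch of what strict validation plus a simple per-IP rate limit could look like; the class, method names, and limits below are illustrative only and not part of the current codebase:
```typescript
// Illustrative sketch only - names and thresholds are assumptions.
const MAX_V1_HEADER_BYTES = 107; // a complete v1 header (incl. CRLF) never exceeds this

class ProxyProtocolGuard {
  private attempts = new Map<string, { count: number; windowStart: number }>();

  /** Allow at most `limit` PROXY header parses per source IP per minute. */
  allowParse(sourceIP: string, limit = 100): boolean {
    const now = Date.now();
    const entry = this.attempts.get(sourceIP);
    if (!entry || now - entry.windowStart > 60_000) {
      this.attempts.set(sourceIP, { count: 1, windowStart: now });
      return true;
    }
    entry.count++;
    return entry.count <= limit;
  }

  /** Strict shape check for a complete v1 header; anything else is rejected. */
  isWellFormedV1(header: Buffer): boolean {
    if (header.length > MAX_V1_HEADER_BYTES) return false;
    const line = header.toString('ascii');
    return /^PROXY (TCP4|TCP6) \S+ \S+ \d{1,5} \d{1,5}\r\n$/.test(line)
      || /^PROXY UNKNOWN(\s[^\r\n]*)?\r\n$/.test(line);
  }
}
```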
### 4. Design Decision: Socket Wrapper Architecture
#### Option A: Minimal Single Socket Wrapper
- **Scope**: Wraps individual sockets with metadata
- **Use Case**: PROXY protocol support with minimal refactoring
- **Pros**: Simple, low risk, easy migration
- **Cons**: Still need separate connection management
#### Option B: Comprehensive Connection Wrapper
- **Scope**: Manages socket pairs (incoming + outgoing) with all utilities
- **Use Case**: Complete connection lifecycle management
- **Pros**:
- Encapsulates all socket utilities (forwarding, cleanup, backpressure)
- Single object represents entire connection
- Cleaner API for connection handling
- **Cons**:
- Major architectural change
- Higher implementation risk
- More complex migration
#### Recommendation
Start with **Option A** (ProxyProtocolSocket) for immediate PROXY protocol support, then evaluate Option B based on:
- Performance impact of additional abstraction
- Code simplification benefits
- Team comfort with architectural change
### 5. Code Implementation Details
#### 5.1 ProxyProtocolSocket (WrappedSocket) - PHASE 1 IMPLEMENTATION
This is the foundational wrapper class that MUST be implemented first. It wraps a regular socket and provides transparent access to the real client IP/port.
```typescript
// ts/core/models/proxy-protocol-socket.ts
import { EventEmitter } from 'events';
import * as plugins from '../../../plugins.js';
/**
* ProxyProtocolSocket wraps a regular net.Socket to provide transparent access
* to the real client IP and port when behind a proxy using PROXY protocol.
*
* This is the FOUNDATION for all PROXY protocol support and must be implemented
* before any protocol parsing can occur.
*/
export class ProxyProtocolSocket extends EventEmitter {
private realClientIP?: string;
private realClientPort?: number;
constructor(
public readonly socket: plugins.net.Socket,
realClientIP?: string,
realClientPort?: number
) {
super();
this.realClientIP = realClientIP;
this.realClientPort = realClientPort;
// Forward all socket events
this.forwardSocketEvents();
}
/**
* Returns the real client IP if available, otherwise the socket's remote address
*/
get remoteAddress(): string | undefined {
return this.realClientIP || this.socket.remoteAddress;
}
/**
* Returns the real client port if available, otherwise the socket's remote port
*/
get remotePort(): number | undefined {
return this.realClientPort || this.socket.remotePort;
}
/**
* Indicates if this connection came through a trusted proxy
*/
get isFromTrustedProxy(): boolean {
return !!this.realClientIP;
}
/**
* Updates the real client information (called after parsing PROXY protocol)
*/
setProxyInfo(ip: string, port: number): void {
this.realClientIP = ip;
this.realClientPort = port;
}
// Pass-through all socket methods
write(data: any, encoding?: any, callback?: any): boolean {
return this.socket.write(data, encoding, callback);
}
end(data?: any, encoding?: any, callback?: any): this {
this.socket.end(data, encoding, callback);
return this;
}
destroy(error?: Error): this {
this.socket.destroy(error);
return this;
}
// ... implement all other socket methods as pass-through
/**
* Forward all events from the underlying socket
*/
private forwardSocketEvents(): void {
const events = ['data', 'end', 'close', 'error', 'drain', 'timeout'];
events.forEach(event => {
this.socket.on(event, (...args) => {
this.emit(event, ...args);
});
});
}
}
```
**KEY POINT**: This wrapper must be fully functional and tested BEFORE moving to Phase 2.
#### 5.2 ProxyProtocolParser (new file)
```typescript
// ts/core/utils/proxy-protocol.ts
export class ProxyProtocolParser {
static readonly PROXY_V1_SIGNATURE = 'PROXY ';
static parse(chunk: Buffer): IProxyInfo | null {
// Implementation
}
static generate(info: IProxyInfo): Buffer {
// Implementation
}
}
```
#### 5.3 Connection Handler Updates
```typescript
// In handleConnection method
let wrappedSocket: ProxyProtocolSocket | plugins.net.Socket = socket;
// Wrap socket if from trusted proxy
if (this.settings.proxyIPs?.includes(socket.remoteAddress)) {
wrappedSocket = new ProxyProtocolSocket(socket);
}
// Create connection record with wrapped socket
const record = this.connectionManager.createConnection(wrappedSocket);
// In handleInitialData method
if (wrappedSocket instanceof ProxyProtocolSocket) {
const proxyInfo = await this.checkForProxyProtocol(chunk);
if (proxyInfo) {
wrappedSocket.setProxyInfo(proxyInfo.sourceIP, proxyInfo.sourcePort);
// Continue with remaining data after PROXY header
}
}
```
#### 5.4 Security Manager Updates
- Accept socket or ProxyProtocolSocket
- Use `socket.remoteAddress` getter for real client IP
- Transparent handling of both socket types
### 6. Configuration Examples
#### Basic Setup (IMPLEMENTED ✅)
```typescript
// Outer proxy - sends PROXY protocol
const outerProxy = new SmartProxy({
routes: [{
name: 'to-inner-proxy',
match: { ports: 443 },
action: {
type: 'forward',
target: { host: '195.201.98.232', port: 443 },
sendProxyProtocol: true // Enable for this route
}
}]
});
// Inner proxy - accepts PROXY protocol from outer proxy
const innerProxy = new SmartProxy({
proxyIPs: ['212.95.99.130'], // Outer proxy IP
acceptProxyProtocol: true, // Optional - defaults to true when proxyIPs is set
routes: [{
name: 'to-backend',
match: { ports: 443 },
action: {
type: 'forward',
target: { host: '192.168.5.247', port: 443 }
}
}]
});
```
### 7. Testing Plan
#### Unit Tests
- PROXY protocol v1 parsing (valid/invalid formats)
- Header generation
- Trusted IP validation
- Connection record updates
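A rough unit-test sketch for the parser, assuming the project's tap/expect test helpers and the `parse()` signature from section 5.2; the import paths and the `sourceIP`/`sourcePort` field names are assumptions:
```typescript
// Sketch only - import paths and matcher style are assumptions based on the
// tests referenced elsewhere in this plan.
import { tap, expect } from '@push.rocks/tapbundle';
import { ProxyProtocolParser } from '../ts/core/utils/proxy-protocol.js';

tap.test('parses a valid PROXY v1 header', async () => {
  const header = Buffer.from('PROXY TCP4 203.0.113.45 10.0.0.5 54321 443\r\n', 'ascii');
  const info = ProxyProtocolParser.parse(header);
  expect(info?.sourceIP).toEqual('203.0.113.45');
  expect(info?.sourcePort).toEqual(54321);
});

tap.test('rejects a malformed header', async () => {
  const malformed = Buffer.from('PROXY TCP4 not-an-ip\r\n', 'ascii');
  expect(ProxyProtocolParser.parse(malformed)).toEqual(null);
});

tap.start();
```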
#### Integration Tests
- Single proxy with PROXY protocol
- Proxy chain with PROXY protocol
- Security: reject from untrusted IPs
- Performance: minimal overhead
- Compatibility: works with TLS passthrough
#### Test Scenarios
1. **Connection limit test**: Verify inner proxy sees real client IPs
2. **Security test**: Ensure PROXY protocol rejected from untrusted sources
3. **Compatibility test**: Verify no impact on non-PROXY connections
4. **Performance test**: Measure overhead of PROXY protocol parsing
### 8. Security Considerations
1. **IP Spoofing Prevention**
- Only accept PROXY protocol from explicitly trusted IPs
- Validate all header fields
- Reject malformed headers immediately
2. **Resource Protection**
- Limit PROXY header size (107 bytes for v1)
- Timeout for incomplete headers
- Rate limit connection attempts
3. **Logging**
- Log all PROXY protocol acceptance/rejection
- Include real client IP in all connection logs
### 9. Rollout Strategy
1. **Phase 1**: Deploy parser and acceptance (backward compatible)
2. **Phase 2**: Enable between controlled proxy pairs
3. **Phase 3**: Monitor for issues and performance impact
4. **Phase 4**: Expand to all proxy chains
### 10. Success Metrics
- Inner proxy connection distribution matches outer proxy
- No more connection limit rejections in proxy chains
- Accurate client IP logging throughout the chain
- No performance degradation (<1ms added latency)
### 11. Future Enhancements
- PROXY protocol v2 (binary format) support
- TLV extensions for additional metadata
- AWS VPC endpoint ID support
- Custom metadata fields
## WrappedSocket Class Design
### Overview
A WrappedSocket class has been evaluated and recommended to provide cleaner PROXY protocol integration and better socket management architecture.
### Rationale for WrappedSocket
#### Current Challenges
- Sockets handled directly as `net.Socket` instances throughout codebase
- Metadata tracked separately in `IConnectionRecord` objects
- Socket augmentation via TypeScript module augmentation for TLS properties
- PROXY protocol would require modifying socket handling in multiple places
#### Benefits
1. **Clean PROXY Protocol Integration** - Parse and store real client IP/port without modifying existing socket handling
2. **Better Encapsulation** - Bundle socket + metadata + behavior together
3. **Type Safety** - No more module augmentation needed
4. **Future Extensibility** - Easy to add compression, metrics, etc.
5. **Simplified Testing** - Easier to mock and test socket behavior
### Implementation Strategy
#### Phase 1: Minimal ProxyProtocolSocket (Immediate)
Create a minimal wrapper for PROXY protocol support:
```typescript
class ProxyProtocolSocket {
constructor(
public socket: net.Socket,
public realClientIP?: string,
public realClientPort?: number
) {}
get remoteAddress(): string {
return this.realClientIP || this.socket.remoteAddress || '';
}
get remotePort(): number {
return this.realClientPort || this.socket.remotePort || 0;
}
get isFromTrustedProxy(): boolean {
return !!this.realClientIP;
}
}
```
Integration points:
- Use in `RouteConnectionHandler` when receiving from trusted proxy IPs
- Update `ConnectionManager` to accept wrapped sockets
- Modify security checks to use `socket.remoteAddress` getter
#### Phase 2: Connection-Aware WrappedSocket (Alternative Design)
A more comprehensive design that manages both sides of a connection:
```typescript
// Option A: Single Socket Wrapper (simpler)
class WrappedSocket extends EventEmitter {
private socket: net.Socket;
private connectionId: string;
private metadata: ISocketMetadata;
constructor(socket: net.Socket, metadata?: Partial<ISocketMetadata>) {
super();
this.socket = socket;
this.connectionId = this.generateId();
this.metadata = { ...defaultMetadata, ...metadata };
this.setupHandlers();
}
// ... single socket management
}
// Option B: Connection Pair Wrapper (comprehensive)
class WrappedConnection extends EventEmitter {
private connectionId: string;
private incoming: WrappedSocket;
private outgoing?: WrappedSocket;
private forwardingActive: boolean = false;
constructor(incomingSocket: net.Socket) {
super();
this.connectionId = this.generateId();
this.incoming = new WrappedSocket(incomingSocket);
}
// Connect to backend and set up forwarding
async connectToBackend(target: ITarget): Promise<void> {
const outgoingSocket = await this.createOutgoingConnection(target);
this.outgoing = new WrappedSocket(outgoingSocket);
await this.setupBidirectionalForwarding();
}
// Built-in forwarding logic from socket-utils
private async setupBidirectionalForwarding(): Promise<void> {
if (!this.outgoing) throw new Error('No outgoing socket');
// Handle data forwarding with backpressure
this.incoming.on('data', (chunk) => {
this.outgoing!.write(chunk, () => {
// Handle backpressure
});
});
this.outgoing.on('data', (chunk) => {
this.incoming.write(chunk, () => {
// Handle backpressure
});
});
// Handle connection lifecycle
const cleanup = (reason: string) => {
this.forwardingActive = false;
this.incoming.destroy();
this.outgoing?.destroy();
this.emit('closed', reason);
};
this.incoming.once('close', () => cleanup('incoming_closed'));
this.outgoing.once('close', () => cleanup('outgoing_closed'));
this.forwardingActive = true;
}
// PROXY protocol support
async handleProxyProtocol(trustedProxies: string[]): Promise<boolean> {
if (trustedProxies.includes(this.incoming.socket.remoteAddress)) {
const parsed = await this.incoming.parseProxyProtocol();
if (parsed && this.outgoing) {
// Forward PROXY protocol to backend if configured
await this.outgoing.sendProxyProtocol(this.incoming.realClientIP);
}
return parsed;
}
return false;
}
// Consolidated metrics
getMetrics(): IConnectionMetrics {
return {
connectionId: this.connectionId,
duration: Date.now() - this.startTime,
incoming: this.incoming.getMetrics(),
outgoing: this.outgoing?.getMetrics(),
totalBytes: this.getTotalBytes(),
state: this.getConnectionState()
};
}
}
```
#### Phase 3: Full Migration (Long-term)
- Replace all `net.Socket` usage with `WrappedSocket`
- Remove socket augmentation from `socket-augmentation.ts`
- Update all socket utilities to work with wrapped sockets
- Standardize socket handling across all components
### Integration with PROXY Protocol
The WrappedSocket class integrates seamlessly with PROXY protocol:
1. **Connection Acceptance**:
```typescript
const wrappedSocket = new ProxyProtocolSocket(socket);
if (this.isFromTrustedProxy(socket.remoteAddress)) {
await wrappedSocket.parseProxyProtocol(this.settings.proxyIPs);
}
```
2. **Security Checks**:
```typescript
// Automatically uses real client IP if available
const clientIP = wrappedSocket.remoteAddress;
if (!this.securityManager.isIPAllowed(clientIP)) {
wrappedSocket.destroy();
}
```
3. **Connection Records**:
```typescript
const record = this.connectionManager.createConnection(wrappedSocket);
// ConnectionManager uses wrappedSocket.remoteAddress transparently
```
### Option B Example: How It Would Replace Current Architecture
Instead of current approach with separate components:
```typescript
// Current: Multiple separate components
const record = connectionManager.createConnection(socket);
const { cleanupClient, cleanupServer } = createIndependentSocketHandlers(
clientSocket, serverSocket, onBothClosed
);
setupBidirectionalForwarding(clientSocket, serverSocket, handlers);
```
Option B would consolidate everything:
```typescript
// Option B: Single connection object
const connection = new WrappedConnection(incomingSocket);
await connection.handleProxyProtocol(trustedProxies);
await connection.connectToBackend({ host: 'server', port: 443 });
// Everything is handled internally - forwarding, cleanup, metrics
connection.on('closed', (reason) => {
logger.log('Connection closed', connection.getMetrics());
});
```
This would replace:
- `IConnectionRecord` - absorbed into WrappedConnection
- `socket-utils.ts` functions - methods on WrappedConnection
- Separate incoming/outgoing tracking - unified in one object
- Manual cleanup coordination - automatic lifecycle management
Additional benefits with Option B:
- **Connection Pooling Integration**: WrappedConnection could integrate with EnhancedConnectionPool for backend connections
- **Unified Metrics**: Single point for all connection statistics
- **Protocol Negotiation**: Handle PROXY, TLS, HTTP/2 upgrade in one place
- **Resource Management**: Automatic cleanup with LifecycleComponent pattern
### Migration Path
1. **Week 1-2**: Implement minimal ProxyProtocolSocket (Option A)
2. **Week 3-4**: Test with PROXY protocol implementation
3. **Month 2**: Prototype WrappedConnection (Option B) if beneficial
4. **Month 3-6**: Gradual migration if Option B proves valuable
5. **Future**: Complete adoption in next major version
### Success Criteria
- PROXY protocol works transparently with wrapped sockets
- No performance regression (<0.1% overhead)
- Simplified code in connection handlers
- Better TypeScript type safety
- Easier to add new socket-level features
View File
@ -1,112 +0,0 @@
# SmartProxy: Proxy Protocol and Proxy Chaining Summary
## Quick Summary
SmartProxy supports proxy chaining through the **WrappedSocket** infrastructure, which is designed to handle PROXY protocol for preserving real client IP addresses across multiple proxy layers. While the infrastructure is in place (v19.5.19+), the actual PROXY protocol parsing is not yet implemented.
## Current State
### ✅ What's Implemented
- **WrappedSocket class** - Foundation for proxy protocol support
- **Proxy IP configuration** - `proxyIPs` setting to define trusted proxies
- **Socket wrapping** - All incoming connections wrapped automatically
- **Connection tracking** - Real client IP tracking in connection records
- **Test infrastructure** - Tests for proxy chaining scenarios
### ❌ What's Missing
- **PROXY protocol v1 parsing** - Header parsing not implemented
- **PROXY protocol v2 support** - Binary format not supported
- **Automatic header generation** - Must be manually implemented
- **Production testing** - No HAProxy/AWS ELB compatibility tests
## Key Files
### Core Implementation
- `ts/core/models/wrapped-socket.ts` - WrappedSocket class
- `ts/core/models/socket-types.ts` - Helper functions
- `ts/proxies/smart-proxy/route-connection-handler.ts` - Connection handling
- `ts/proxies/smart-proxy/models/interfaces.ts` - Configuration interfaces
### Tests
- `test/test.wrapped-socket.ts` - WrappedSocket unit tests
- `test/test.proxy-chain-simple.node.ts` - Basic proxy chain test
- `test/test.proxy-chaining-accumulation.node.ts` - Connection leak tests
### Documentation
- `readme.proxy-protocol.md` - Detailed implementation guide
- `readme.proxy-protocol-example.md` - Code examples and future implementation
- `readme.hints.md` - Project overview with WrappedSocket notes
## Quick Configuration Example
```typescript
// Outer proxy (internet-facing)
const outerProxy = new SmartProxy({
sendProxyProtocol: true, // Will send PROXY protocol (when implemented)
routes: [{
name: 'forward-to-inner',
match: { ports: 443 },
action: {
type: 'forward',
target: { host: 'inner-proxy.local', port: 443 },
tls: { mode: 'passthrough' }
}
}]
});
// Inner proxy (backend-facing)
const innerProxy = new SmartProxy({
proxyIPs: ['outer-proxy.local'], // Trust the outer proxy
acceptProxyProtocol: true, // Will parse PROXY protocol (when implemented)
routes: [{
name: 'forward-to-backend',
match: { ports: 443, domains: 'api.example.com' },
action: {
type: 'forward',
target: { host: 'backend.local', port: 8080 },
tls: { mode: 'terminate' }
}
}]
});
```
## How It Works (Conceptually)
1. **Client** connects to **Outer Proxy**
2. **Outer Proxy** wraps socket in WrappedSocket
3. **Outer Proxy** forwards to **Inner Proxy**
- Would prepend: `PROXY TCP4 <client-ip> <proxy-ip> <client-port> <proxy-port>\r\n`
4. **Inner Proxy** receives connection from trusted proxy
5. **Inner Proxy** would parse PROXY protocol header
6. **Inner Proxy** updates WrappedSocket with real client IP
7. **Backend** receives connection with preserved client information
## Important Notes
### Connection Cleanup
The fix for proxy chain connection accumulation (v19.5.14+) changed the default socket behavior:
- **Before**: Half-open connections supported by default (caused accumulation)
- **After**: Both sockets close when one closes (prevents accumulation)
- **Override**: Set `enableHalfOpen: true` if half-open needed
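If half-open behavior is genuinely needed for a particular route, it can be re-enabled explicitly when wiring up forwarding. This sketch mirrors the `setupBidirectionalForwarding()` call shown in the full PROXY protocol documentation and assumes `record` and `connectionManager` are in scope:
```typescript
// Only opt back in to half-open when the backend protocol relies on it;
// for proxy chains keep the default (enableHalfOpen: false) to avoid accumulation.
setupBidirectionalForwarding(clientSocket, serverSocket, {
  onClientData: (chunk) => { record.bytesReceived += chunk.length; },
  onServerData: (chunk) => { record.bytesSent += chunk.length; },
  onCleanup: (reason) => connectionManager.cleanupConnection(record, reason),
  enableHalfOpen: true // explicit override of the default introduced in v19.5.14
});
```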
### Security
- Only parse PROXY protocol from IPs listed in `proxyIPs`
- Never use `0.0.0.0/0` as a trusted proxy range
- Each proxy in chain must explicitly trust the previous proxy
### Testing
Use the test files as reference implementations:
- Simple chains: `test.proxy-chain-simple.node.ts`
- Connection leaks: `test.proxy-chaining-accumulation.node.ts`
- Rapid reconnects: `test.rapid-retry-cleanup.node.ts`
## Next Steps
To fully implement PROXY protocol support:
1. Implement the parser in `ProxyProtocolParser` class
2. Integrate parser into `handleConnection` method
3. Add header generation to `setupDirectConnection`
4. Test with real proxies (HAProxy, nginx, AWS ELB)
5. Add PROXY protocol v2 support for better performance
See `readme.proxy-protocol-example.md` for detailed implementation examples.
View File
@ -1,462 +0,0 @@
# SmartProxy PROXY Protocol Implementation Example
This document shows how PROXY protocol parsing could be implemented in SmartProxy. Note that this is a conceptual implementation guide - the actual parsing is not yet implemented in the current version.
## Conceptual PROXY Protocol v1 Parser Implementation
### Parser Class
```typescript
// This would go in ts/core/utils/proxy-protocol-parser.ts
import { logger } from './logger.js';
export interface IProxyProtocolInfo {
version: 1 | 2;
command: 'PROXY' | 'LOCAL';
family: 'TCP4' | 'TCP6' | 'UNKNOWN';
sourceIP: string;
destIP: string;
sourcePort: number;
destPort: number;
headerLength: number;
}
export class ProxyProtocolParser {
private static readonly PROXY_V1_SIGNATURE = 'PROXY ';
private static readonly MAX_V1_HEADER_LENGTH = 108; // Max possible v1 header
/**
* Parse PROXY protocol v1 header from buffer
* Returns null if not a valid PROXY protocol header
*/
static parseV1(buffer: Buffer): IProxyProtocolInfo | null {
// Need at least 8 bytes for "PROXY " + newline
if (buffer.length < 8) {
return null;
}
// Check for v1 signature
const possibleHeader = buffer.toString('ascii', 0, 6);
if (possibleHeader !== this.PROXY_V1_SIGNATURE) {
return null;
}
// Find the end of the header (CRLF)
let headerEnd = -1;
for (let i = 6; i < Math.min(buffer.length, this.MAX_V1_HEADER_LENGTH); i++) {
if (buffer[i] === 0x0D && buffer[i + 1] === 0x0A) { // \r\n
headerEnd = i + 2;
break;
}
}
if (headerEnd === -1) {
// No complete header found
return null;
}
// Parse the header line
const headerLine = buffer.toString('ascii', 0, headerEnd - 2);
const parts = headerLine.split(' ');
if (parts.length !== 6) {
logger.log('warn', 'Invalid PROXY v1 header format', {
headerLine,
partCount: parts.length
});
return null;
}
const [proxy, family, srcIP, dstIP, srcPort, dstPort] = parts;
// Validate family
if (!['TCP4', 'TCP6', 'UNKNOWN'].includes(family)) {
logger.log('warn', 'Invalid PROXY protocol family', { family });
return null;
}
// Validate ports
const sourcePort = parseInt(srcPort);
const destPort = parseInt(dstPort);
if (isNaN(sourcePort) || sourcePort < 1 || sourcePort > 65535 ||
isNaN(destPort) || destPort < 1 || destPort > 65535) {
logger.log('warn', 'Invalid PROXY protocol ports', { srcPort, dstPort });
return null;
}
return {
version: 1,
command: 'PROXY',
family: family as 'TCP4' | 'TCP6' | 'UNKNOWN',
sourceIP: srcIP,
destIP: dstIP,
sourcePort,
destPort,
headerLength: headerEnd
};
}
/**
* Check if buffer potentially contains PROXY protocol
*/
static mightBeProxyProtocol(buffer: Buffer): boolean {
if (buffer.length < 6) return false;
// Check for v1 signature
const start = buffer.toString('ascii', 0, 6);
if (start === this.PROXY_V1_SIGNATURE) return true;
// Check for v2 signature (12 bytes: \x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A)
if (buffer.length >= 12) {
const v2Sig = Buffer.from([0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D, 0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A]);
if (buffer.compare(v2Sig, 0, 12, 0, 12) === 0) return true;
}
return false;
}
}
```
### Integration with RouteConnectionHandler
```typescript
// This shows how it would be integrated into route-connection-handler.ts
private async handleProxyProtocol(
socket: plugins.net.Socket,
wrappedSocket: WrappedSocket,
record: IConnectionRecord
): Promise<Buffer | null> {
const remoteIP = socket.remoteAddress || '';
// Only parse PROXY protocol from trusted IPs
if (!this.settings.proxyIPs?.includes(remoteIP)) {
return null;
}
return new Promise((resolve) => {
let buffer = Buffer.alloc(0);
let headerParsed = false;
const parseHandler = (chunk: Buffer) => {
// Accumulate data
buffer = Buffer.concat([buffer, chunk]);
// Try to parse PROXY protocol
const proxyInfo = ProxyProtocolParser.parseV1(buffer);
if (proxyInfo) {
// Update wrapped socket with real client info
wrappedSocket.setProxyInfo(proxyInfo.sourceIP, proxyInfo.sourcePort);
// Update connection record
record.remoteIP = proxyInfo.sourceIP;
logger.log('info', 'PROXY protocol parsed', {
connectionId: record.id,
realIP: proxyInfo.sourceIP,
realPort: proxyInfo.sourcePort,
proxyIP: remoteIP
});
// Remove this handler
socket.removeListener('data', parseHandler);
headerParsed = true;
// Return remaining data after header
const remaining = buffer.slice(proxyInfo.headerLength);
resolve(remaining.length > 0 ? remaining : null);
} else if (buffer.length > 108) {
// Max v1 header length exceeded, not PROXY protocol
socket.removeListener('data', parseHandler);
headerParsed = true;
resolve(buffer);
}
};
// Set timeout for PROXY protocol parsing
const timeout = setTimeout(() => {
if (!headerParsed) {
socket.removeListener('data', parseHandler);
logger.log('warn', 'PROXY protocol parsing timeout', {
connectionId: record.id,
bufferLength: buffer.length
});
resolve(buffer.length > 0 ? buffer : null);
}
}, 1000); // 1 second timeout
socket.on('data', parseHandler);
// Clean up on early close
socket.once('close', () => {
clearTimeout(timeout);
if (!headerParsed) {
socket.removeListener('data', parseHandler);
resolve(null);
}
});
});
}
// Modified handleConnection to include PROXY protocol parsing
public async handleConnection(socket: plugins.net.Socket): Promise<void> {
const remoteIP = socket.remoteAddress || '';
const localPort = socket.localPort || 0;
// Always wrap the socket
const wrappedSocket = new WrappedSocket(socket);
// Create connection record
const record = this.connectionManager.createConnection(wrappedSocket);
if (!record) return;
// If from trusted proxy, parse PROXY protocol
if (this.settings.proxyIPs?.includes(remoteIP)) {
const remainingData = await this.handleProxyProtocol(socket, wrappedSocket, record);
if (remainingData) {
// Process remaining data as normal
this.handleInitialData(wrappedSocket, record, remainingData);
} else {
// Wait for more data
this.handleInitialData(wrappedSocket, record);
}
} else {
// Not from trusted proxy, handle normally
this.handleInitialData(wrappedSocket, record);
}
}
```
### Sending PROXY Protocol When Forwarding
```typescript
// This would be added to setupDirectConnection method
private setupDirectConnection(
socket: plugins.net.Socket | WrappedSocket,
record: IConnectionRecord,
serverName?: string,
initialChunk?: Buffer,
overridePort?: number,
targetHost?: string,
targetPort?: number
): void {
// ... existing code ...
// Create target socket
const targetSocket = createSocketWithErrorHandler({
port: finalTargetPort,
host: finalTargetHost,
onConnect: () => {
// If sendProxyProtocol is enabled, send PROXY header first
if (this.settings.sendProxyProtocol) {
      const proxyHeader = this.buildProxyProtocolHeader(socket as WrappedSocket, targetSocket);
targetSocket.write(proxyHeader);
}
// Then send any pending data
if (record.pendingData.length > 0) {
const combinedData = Buffer.concat(record.pendingData);
targetSocket.write(combinedData);
}
// ... rest of connection setup ...
}
});
}
private buildProxyProtocolHeader(
clientSocket: WrappedSocket,
serverSocket: net.Socket
): Buffer {
const family = clientSocket.remoteFamily === 'IPv6' ? 'TCP6' : 'TCP4';
const srcIP = clientSocket.remoteAddress || '0.0.0.0';
const srcPort = clientSocket.remotePort || 0;
const dstIP = serverSocket.localAddress || '0.0.0.0';
const dstPort = serverSocket.localPort || 0;
const header = `PROXY ${family} ${srcIP} ${dstIP} ${srcPort} ${dstPort}\r\n`;
return Buffer.from(header, 'ascii');
}
```
## Complete Example: HAProxy Compatible Setup
```typescript
// Example showing a complete HAProxy-compatible SmartProxy setup
import { SmartProxy } from '@push.rocks/smartproxy';
// Configuration matching HAProxy's proxy protocol behavior
const proxy = new SmartProxy({
// Accept PROXY protocol from these sources (like HAProxy's 'accept-proxy')
proxyIPs: [
'10.0.0.0/8', // Private network load balancers
'172.16.0.0/12', // Docker networks
'192.168.0.0/16' // Local networks
],
// Send PROXY protocol to backends (like HAProxy's 'send-proxy')
sendProxyProtocol: true,
routes: [
{
name: 'web-app',
match: {
ports: 443,
domains: ['app.example.com', 'www.example.com']
},
action: {
type: 'forward',
target: {
host: 'backend-pool.internal',
port: 8080
},
tls: {
mode: 'terminate',
certificate: 'auto',
acme: {
email: 'ssl@example.com'
}
}
}
}
]
});
// Start the proxy
await proxy.start();
// The proxy will now:
// 1. Accept connections on port 443
// 2. Parse PROXY protocol from trusted IPs
// 3. Terminate TLS
// 4. Forward to backend with PROXY protocol header
// 5. Backend sees real client IP
```
## Testing PROXY Protocol
```typescript
// Test client that sends PROXY protocol
import * as net from 'net';
function createProxyProtocolClient(
realClientIP: string,
realClientPort: number,
proxyHost: string,
proxyPort: number
): net.Socket {
const client = net.connect(proxyPort, proxyHost);
client.on('connect', () => {
// Send PROXY protocol header
const header = `PROXY TCP4 ${realClientIP} ${proxyHost} ${realClientPort} ${proxyPort}\r\n`;
client.write(header);
// Then send actual request
client.write('GET / HTTP/1.1\r\nHost: example.com\r\n\r\n');
});
return client;
}
// Usage
const client = createProxyProtocolClient(
'203.0.113.45', // Real client IP
54321, // Real client port
  '127.0.0.1',    // Proxy host (the PROXY header requires an IP address, not a hostname)
8080 // Proxy port
);
```
## AWS Network Load Balancer Example
```typescript
// Configuration for AWS NLB with PROXY protocol v2
const proxy = new SmartProxy({
// AWS NLB IP ranges (get current list from AWS)
proxyIPs: [
'10.0.0.0/8', // VPC CIDR
// Add specific NLB IPs or use AWS IP ranges
],
// AWS NLB uses PROXY protocol v2 by default
acceptProxyProtocolV2: true, // Future feature
routes: [{
name: 'aws-app',
match: { ports: 443 },
action: {
type: 'forward',
target: {
host: 'app-cluster.internal',
port: 8443
},
tls: { mode: 'passthrough' }
}
}]
});
// The proxy will:
// 1. Accept PROXY protocol v2 from AWS NLB
// 2. Preserve VPC endpoint IDs and other metadata
// 3. Forward to backend with real client information
```
## Debugging PROXY Protocol
```typescript
// Enable detailed logging to debug PROXY protocol parsing
const proxy = new SmartProxy({
enableDetailedLogging: true,
proxyIPs: ['10.0.0.1'],
// Add custom logging for debugging
routes: [{
name: 'debug-route',
match: { ports: 8080 },
action: {
type: 'socket-handler',
socketHandler: async (socket, context) => {
console.log('Socket handler called with context:', {
clientIp: context.clientIp, // Real IP from PROXY protocol
port: context.port,
connectionId: context.connectionId,
timestamp: context.timestamp
});
// Handle the socket...
}
}
}]
});
```
## Security Considerations
1. **Always validate trusted proxy IPs** - Never accept PROXY protocol from untrusted sources
2. **Use specific IP ranges** - Avoid wildcards like `0.0.0.0/0`
3. **Implement rate limiting** - PROXY protocol parsing has a computational cost
4. **Validate header format** - Reject malformed headers immediately
5. **Set parsing timeouts** - Prevent Slowloris-style attacks via incomplete PROXY headers
6. **Log parsing failures** - Monitor for potential attacks or misconfigurations
## Performance Considerations
1. **Header parsing overhead** - Minimal, one-time cost per connection
2. **Memory usage** - Small buffer for header accumulation (max 108 bytes for v1)
3. **Connection establishment** - Slight delay for PROXY protocol parsing
4. **Throughput impact** - None after initial header parsing
5. **CPU usage** - Negligible for well-formed headers
## Future Enhancements
1. **PROXY Protocol v2** - Binary format for better performance
2. **TLS information preservation** - Pass TLS version, cipher, SNI via PP2
3. **Custom type-length-value (TLV) fields** - Extended metadata support
4. **Connection pooling** - Reuse backend connections with different client IPs
5. **Health checks** - Skip PROXY protocol for health check connections
View File
@ -1,415 +0,0 @@
# SmartProxy PROXY Protocol and Proxy Chaining Documentation
## Overview
SmartProxy implements support for the PROXY protocol v1 to enable proxy chaining and preserve real client IP addresses across multiple proxy layers. This documentation covers the implementation details, configuration, and usage patterns for proxy chaining scenarios.
## Architecture
### WrappedSocket Implementation
The foundation of PROXY protocol support is the `WrappedSocket` class, which wraps regular `net.Socket` instances to provide transparent access to real client information when behind a proxy.
```typescript
// ts/core/models/wrapped-socket.ts
export class WrappedSocket {
public readonly socket: plugins.net.Socket;
private realClientIP?: string;
private realClientPort?: number;
constructor(
socket: plugins.net.Socket,
realClientIP?: string,
realClientPort?: number
) {
this.socket = socket;
this.realClientIP = realClientIP;
this.realClientPort = realClientPort;
// Uses JavaScript Proxy to delegate all methods to underlying socket
return new Proxy(this, {
get(target, prop, receiver) {
// Override specific properties
if (prop === 'remoteAddress') {
return target.remoteAddress;
}
if (prop === 'remotePort') {
return target.remotePort;
}
// ... delegate other properties to underlying socket
}
});
}
get remoteAddress(): string | undefined {
return this.realClientIP || this.socket.remoteAddress;
}
get remotePort(): number | undefined {
return this.realClientPort || this.socket.remotePort;
}
get isFromTrustedProxy(): boolean {
return !!this.realClientIP;
}
}
```
### Key Design Decisions
1. **All sockets are wrapped** - Every incoming connection is wrapped in a WrappedSocket, not just those from trusted proxies
2. **Proxy pattern for delegation** - Uses JavaScript Proxy to transparently delegate all Socket methods while allowing property overrides
3. **Not a Duplex stream** - Simple wrapper approach avoids complexity and infinite loops
4. **Trust-based parsing** - PROXY protocol parsing only occurs for connections from trusted proxy IPs
## Configuration
### Basic PROXY Protocol Configuration
```typescript
const proxy = new SmartProxy({
// List of trusted proxy IPs that can send PROXY protocol
proxyIPs: ['10.0.0.1', '10.0.0.2', '192.168.1.0/24'],
// Global option to accept PROXY protocol (defaults based on proxyIPs)
acceptProxyProtocol: true,
// Global option to send PROXY protocol to all targets
sendProxyProtocol: false,
routes: [
{
name: 'backend-app',
match: { ports: 443, domains: 'app.example.com' },
action: {
type: 'forward',
target: { host: 'backend.internal', port: 8443 },
tls: { mode: 'passthrough' }
}
}
]
});
```
### Proxy Chain Configuration
Setting up two SmartProxies in a chain:
```typescript
// Outer Proxy (Internet-facing)
const outerProxy = new SmartProxy({
proxyIPs: [], // No trusted proxies for outer proxy
sendProxyProtocol: true, // Send PROXY protocol to inner proxy
routes: [{
name: 'to-inner-proxy',
match: { ports: 443 },
action: {
type: 'forward',
target: {
host: 'inner-proxy.internal',
port: 443
},
tls: { mode: 'passthrough' }
}
}]
});
// Inner Proxy (Backend-facing)
const innerProxy = new SmartProxy({
proxyIPs: ['outer-proxy.internal'], // Trust the outer proxy
acceptProxyProtocol: true,
routes: [{
name: 'to-backend',
match: { ports: 443, domains: 'app.example.com' },
action: {
type: 'forward',
target: {
host: 'backend.internal',
port: 8080
},
tls: {
mode: 'terminate',
certificate: 'auto'
}
}
}]
});
```
## How Two SmartProxies Communicate
### Connection Flow
1. **Client connects to Outer Proxy**
```
Client (203.0.113.45:54321) → Outer Proxy (1.2.3.4:443)
```
2. **Outer Proxy wraps the socket**
```typescript
// In RouteConnectionHandler.handleConnection()
const wrappedSocket = new WrappedSocket(socket);
// At this point:
// wrappedSocket.remoteAddress = '203.0.113.45'
// wrappedSocket.remotePort = 54321
```
3. **Outer Proxy forwards to Inner Proxy**
- Creates new connection to inner proxy
- If `sendProxyProtocol` is enabled, prepends PROXY protocol header:
```
PROXY TCP4 203.0.113.45 1.2.3.4 54321 443\r\n
[Original TLS/HTTP data follows]
```
4. **Inner Proxy receives connection**
- Sees connection from outer proxy IP
- Checks if IP is in `proxyIPs` list
- If trusted, parses PROXY protocol header
- Updates WrappedSocket with real client info:
```typescript
wrappedSocket.setProxyInfo('203.0.113.45', 54321);
```
5. **Inner Proxy routes based on real client IP**
- Security checks use real client IP
- Connection records track real client IP
- Backend sees requests from the original client IP
### Connection Record Tracking
```typescript
// In ConnectionManager
interface IConnectionRecord {
id: string;
incoming: WrappedSocket; // Wrapped socket with real client info
outgoing: net.Socket | null;
remoteIP: string; // Real client IP from PROXY protocol or direct connection
localPort: number;
// ... other fields
}
```
## Implementation Details
### Socket Wrapping in Route Handler
```typescript
// ts/proxies/smart-proxy/route-connection-handler.ts
public handleConnection(socket: plugins.net.Socket): void {
const remoteIP = socket.remoteAddress || '';
// Always wrap the socket to prepare for potential PROXY protocol
const wrappedSocket = new WrappedSocket(socket);
// If this is from a trusted proxy, log it
if (this.settings.proxyIPs?.includes(remoteIP)) {
logger.log('debug', `Connection from trusted proxy ${remoteIP}, PROXY protocol parsing will be enabled`);
}
// Create connection record with wrapped socket
const record = this.connectionManager.createConnection(wrappedSocket);
// Continue with normal connection handling...
}
```
### Socket Utility Integration
When passing wrapped sockets to socket utility functions, the underlying socket must be extracted:
```typescript
import { getUnderlyingSocket } from '../../core/models/socket-types.js';
// In setupDirectConnection()
const incomingSocket = getUnderlyingSocket(socket); // Extract raw socket
setupBidirectionalForwarding(incomingSocket, targetSocket, {
onClientData: (chunk) => {
record.bytesReceived += chunk.length;
},
onServerData: (chunk) => {
record.bytesSent += chunk.length;
},
onCleanup: (reason) => {
this.connectionManager.cleanupConnection(record, reason);
},
enableHalfOpen: false // Required for proxy chains
});
```
## Current Status and Limitations
### Implemented (v19.5.19+)
- ✅ WrappedSocket foundation class
- ✅ Socket wrapping in connection handler
- ✅ Connection manager support for wrapped sockets
- ✅ Socket utility integration helpers
- ✅ Proxy IP configuration options
### Not Yet Implemented
- ❌ PROXY protocol v1 header parsing
- ❌ PROXY protocol v2 binary format support
- ❌ Automatic PROXY protocol header generation when forwarding
- ❌ HAProxy compatibility testing
- ❌ AWS ELB/NLB compatibility testing
### Known Issues
1. **No actual PROXY protocol parsing** - The infrastructure is in place but the protocol parsing is not yet implemented
2. **Manual configuration required** - No automatic detection of PROXY protocol support
3. **Limited to TCP connections** - WebSocket connections through proxy chains may not preserve client IPs
## Testing Proxy Chains
### Basic Proxy Chain Test
```typescript
// test/test.proxy-chain-simple.node.ts
tap.test('simple proxy chain test', async () => {
// Create backend server
const backend = net.createServer((socket) => {
console.log('Backend: Connection received');
socket.write('HTTP/1.1 200 OK\r\n\r\nHello from backend');
socket.end();
  });
  backend.listen(9999);
// Create inner proxy (downstream)
const innerProxy = new SmartProxy({
proxyIPs: ['127.0.0.1'], // Trust localhost for testing
routes: [{
name: 'to-backend',
match: { ports: 8591 },
action: {
type: 'forward',
target: { host: 'localhost', port: 9999 }
}
}]
});
// Create outer proxy (upstream)
const outerProxy = new SmartProxy({
sendProxyProtocol: true, // Send PROXY to inner
routes: [{
name: 'to-inner',
match: { ports: 8590 },
action: {
type: 'forward',
target: { host: 'localhost', port: 8591 }
}
}]
  });
  await innerProxy.start();
  await outerProxy.start();
// Test connection through chain
const client = net.connect(8590, 'localhost');
client.write('GET / HTTP/1.1\r\nHost: test.com\r\n\r\n');
// Verify no connection accumulation
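  // getConnectionCounts() is a small helper in the full test file that reads the
  // active connection count from each proxy's connection manager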
const counts = getConnectionCounts();
expect(counts.proxy1).toEqual(0);
expect(counts.proxy2).toEqual(0);
});
```
## Best Practices
### 1. Always Configure Trusted Proxies
```typescript
// Be specific about which IPs can send PROXY protocol
proxyIPs: ['10.0.0.1', '10.0.0.2'], // Good
proxyIPs: ['0.0.0.0/0'], // Bad - trusts everyone
```
### 2. Use CIDR Notation for Subnets
```typescript
proxyIPs: [
'10.0.0.0/24', // Trust entire subnet
'192.168.1.5', // Trust specific IP
'172.16.0.0/16' // Trust private network
]
```
### 3. Enable Half-Open Only When Needed
```typescript
// For proxy chains, always disable half-open
setupBidirectionalForwarding(client, server, {
enableHalfOpen: false // Ensures proper cascade cleanup
});
```
### 4. Monitor Connection Counts
```typescript
// Regular monitoring prevents connection leaks
setInterval(() => {
const stats = proxy.getStatistics();
console.log(`Active connections: ${stats.activeConnections}`);
if (stats.activeConnections > 1000) {
console.warn('High connection count detected');
}
}, 60000);
```
## Future Enhancements
### Phase 2: PROXY Protocol v1 Parser
```typescript
// Planned implementation
class ProxyProtocolParser {
static parse(buffer: Buffer): ProxyInfo | null {
// Parse "PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\r\n"
const header = buffer.toString('ascii', 0, 108);
const match = header.match(/^PROXY (TCP4|TCP6) (\S+) (\S+) (\d+) (\d+)\r\n/);
if (match) {
return {
protocol: match[1],
sourceIP: match[2],
destIP: match[3],
sourcePort: parseInt(match[4]),
destPort: parseInt(match[5]),
headerLength: match[0].length
};
}
return null;
}
}
```
### Phase 3: Automatic PROXY Protocol Detection
- Peek at first bytes to detect PROXY protocol signature
- Automatic fallback to direct connection if not present
- Configurable timeout for protocol detection
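One possible shape for that detection step is sketched below; it inlines the v1/v2 signature checks rather than depending on the not-yet-implemented parser, and the helper names and timeout value are illustrative:
```typescript
import * as net from 'net';

const PROXY_V1_SIGNATURE = 'PROXY ';
const PROXY_V2_SIGNATURE = Buffer.from([
  0x0d, 0x0a, 0x0d, 0x0a, 0x00, 0x0d, 0x0a, 0x51, 0x55, 0x49, 0x54, 0x0a,
]);

/** Quick check: does this chunk start with a PROXY protocol v1 or v2 signature? */
function startsWithProxySignature(chunk: Buffer): boolean {
  if (chunk.length >= 6 && chunk.toString('ascii', 0, 6) === PROXY_V1_SIGNATURE) return true;
  if (chunk.length >= 12 && chunk.compare(PROXY_V2_SIGNATURE, 0, 12, 0, 12) === 0) return true;
  return false;
}

interface IPeekResult {
  isProxyProtocol: boolean;
  firstChunk: Buffer | null;
}

/**
 * Peek at the first chunk on a socket. The chunk is handed back either way so
 * no bytes are lost; a production version would also handle headers split
 * across multiple chunks.
 */
function peekForProxyProtocol(socket: net.Socket, timeoutMs = 1000): Promise<IPeekResult> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      socket.removeListener('data', onData);
      resolve({ isProxyProtocol: false, firstChunk: null }); // nothing arrived - treat as direct
    }, timeoutMs);

    const onData = (chunk: Buffer) => {
      clearTimeout(timer);
      resolve({ isProxyProtocol: startsWithProxySignature(chunk), firstChunk: chunk });
    };
    socket.once('data', onData);
  });
}
```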
### Phase 4: PROXY Protocol v2 Support
- Binary protocol format for better performance
- Additional metadata support (TLS info, ALPN, etc.)
- AWS VPC endpoint ID preservation
## Troubleshooting
### Connection Accumulation in Proxy Chains
If connections accumulate when chaining proxies:
1. Verify `enableHalfOpen: false` in socket forwarding
2. Check that both proxies have proper cleanup handlers
3. Monitor with connection count logging
4. Use `test.proxy-chain-simple.node.ts` as reference
### Real Client IP Not Preserved
If the backend sees proxy IP instead of client IP:
1. Verify outer proxy has `sendProxyProtocol: true`
2. Verify inner proxy has outer proxy IP in `proxyIPs` list
3. Check logs for "Connection from trusted proxy" message
4. Ensure PROXY protocol parsing is implemented (currently pending)
### Performance Impact
PROXY protocol adds minimal overhead:
- One-time parsing cost per connection
- Small memory overhead for real client info storage
- No impact on data transfer performance
- Negligible CPU impact for header generation
## Related Documentation
- [Socket Utilities](./ts/core/utils/socket-utils.ts) - Low-level socket handling
- [Connection Manager](./ts/proxies/smart-proxy/connection-manager.ts) - Connection lifecycle
- [Route Handler](./ts/proxies/smart-proxy/route-connection-handler.ts) - Request routing
- [Test Suite](./test/test.wrapped-socket.ts) - WrappedSocket unit tests
View File
@ -1,341 +0,0 @@
# SmartProxy Routing Architecture Unification Plan
## Overview
This document analyzes the current state of routing in SmartProxy, identifies redundancies and inconsistencies, and proposes a unified architecture.
## Current State Analysis
### 1. Multiple Route Manager Implementations
#### 1.1 Core SharedRouteManager (`ts/core/utils/route-manager.ts`)
- **Purpose**: Designed as a shared component for SmartProxy and NetworkProxy
- **Features**:
- Port mapping and expansion (e.g., `[80, 443]` → individual routes)
- Comprehensive route matching (domain, path, IP, headers, TLS)
- Route validation and conflict detection
- Event emitter for route changes
- Detailed logging support
- **Status**: Well-designed but underutilized
#### 1.2 SmartProxy RouteManager (`ts/proxies/smart-proxy/route-manager.ts`)
- **Purpose**: SmartProxy-specific route management
- **Issues**:
- 95% duplicate code from SharedRouteManager
- Only difference is using `ISmartProxyOptions` instead of generic interface
- Contains deprecated security methods
- Unnecessary code duplication
- **Status**: Should be removed in favor of SharedRouteManager
#### 1.3 HttpProxy Route Management (`ts/proxies/http-proxy/`)
- **Purpose**: HTTP-specific routing
- **Implementation**: Minimal, inline route matching
- **Status**: Could benefit from SharedRouteManager
### 2. Multiple Router Implementations
#### 2.1 ProxyRouter (`ts/routing/router/proxy-router.ts`)
- **Purpose**: Legacy compatibility with `IReverseProxyConfig`
- **Features**: Domain-based routing with path patterns
- **Used by**: HttpProxy for backward compatibility
#### 2.2 RouteRouter (`ts/routing/router/route-router.ts`)
- **Purpose**: Modern routing with `IRouteConfig`
- **Features**: Nearly identical to ProxyRouter
- **Issues**: Code duplication with ProxyRouter
### 3. Scattered Route Utilities
#### 3.1 Core route-utils (`ts/core/utils/route-utils.ts`)
- **Purpose**: Shared matching functions
- **Features**: Domain, path, IP, CIDR matching
- **Status**: Well-implemented, should be the single source
#### 3.2 SmartProxy route-utils (`ts/proxies/smart-proxy/utils/route-utils.ts`)
- **Purpose**: Route configuration utilities
- **Features**: Different scope - config merging, not pattern matching
- **Status**: Keep separate as it serves different purpose
### 4. Other Route-Related Files
- `route-patterns.ts`: Constants for route patterns
- `route-validators.ts`: Route configuration validation
- `route-helpers.ts`: Additional utilities
- `route-connection-handler.ts`: Connection routing logic
## Problems Identified
### 1. Code Duplication
- **SharedRouteManager vs SmartProxy RouteManager**: ~1000 lines of duplicate code
- **ProxyRouter vs RouteRouter**: ~500 lines of duplicate code
- **Matching logic**: Implemented in 4+ different places
### 2. Inconsistent Implementations
```typescript
// Example: Domain matching appears in multiple places
// 1. In route-utils.ts
export function matchDomain(pattern: string, hostname: string): boolean
// 2. In SmartProxy RouteManager
private matchDomain(domain: string, hostname: string): boolean
// 3. In ProxyRouter
private matchesHostname(configName: string, hostname: string): boolean
// 4. In RouteRouter
private matchDomain(pattern: string, hostname: string): boolean
```
### 3. Unclear Separation of Concerns
- Route Managers handle both storage AND matching
- Routers also handle storage AND matching
- No clear boundaries between layers
### 4. Maintenance Burden
- Bug fixes need to be applied in multiple places
- New features must be implemented multiple times
- Testing effort multiplied
## Proposed Unified Architecture
### Layer 1: Core Routing Components
```
ts/core/routing/
├── types.ts # All route-related types
├── utils.ts # All matching logic (consolidated)
├── route-store.ts # Route storage and indexing
└── route-matcher.ts # Route matching engine
```
### Layer 2: Route Management
```
ts/core/routing/
└── route-manager.ts # Single RouteManager for all proxies
- Uses RouteStore for storage
- Uses RouteMatcher for matching
- Provides high-level API
```
### Layer 3: HTTP Routing
```
ts/routing/
└── http-router.ts # Single HTTP router implementation
- Uses RouteManager for route lookup
- Handles HTTP-specific concerns
- Legacy adapter built-in
```
### Layer 4: Proxy Integration
```
ts/proxies/
├── smart-proxy/
│ └── (uses core RouteManager directly)
├── http-proxy/
│ └── (uses core RouteManager + HttpRouter)
└── network-proxy/
└── (uses core RouteManager directly)
```
## Implementation Plan
### Phase 1: Consolidate Matching Logic (Week 1)
1. **Audit all matching implementations**
- Document differences in behavior
- Identify the most comprehensive implementation
- Create test suite covering all edge cases
2. **Create unified matching module**
```typescript
// ts/core/routing/matchers.ts
export class DomainMatcher {
static match(pattern: string, hostname: string): boolean
}
export class PathMatcher {
static match(pattern: string, path: string): MatchResult
}
export class IpMatcher {
static match(pattern: string, ip: string): boolean
static matchCidr(cidr: string, ip: string): boolean
}
```
3. **Update all components to use unified matchers**
- Replace local implementations
- Ensure backward compatibility
- Run comprehensive tests
### Phase 2: Unify Route Managers (Week 2)
1. **Enhance SharedRouteManager**
- Add any missing features from SmartProxy RouteManager
- Make it truly generic (no proxy-specific dependencies)
- Add adapter pattern for different options types
2. **Migrate SmartProxy to use SharedRouteManager**
```typescript
// Before
this.routeManager = new RouteManager(this.settings);
// After
this.routeManager = new SharedRouteManager({
logger: this.settings.logger,
enableDetailedLogging: this.settings.enableDetailedLogging
});
```
3. **Remove duplicate RouteManager**
- Delete `ts/proxies/smart-proxy/route-manager.ts`
- Update all imports
- Verify all tests pass
### Phase 3: Consolidate Routers (Week 3)
1. **Create unified HttpRouter**
```typescript
export class HttpRouter {
constructor(private routeManager: SharedRouteManager) {}
// Modern interface
route(req: IncomingMessage): RouteResult
// Legacy adapter
routeLegacy(config: IReverseProxyConfig): RouteResult
}
```
2. **Migrate HttpProxy**
- Replace both ProxyRouter and RouteRouter
- Use single HttpRouter with appropriate adapter
- Maintain backward compatibility
3. **Clean up legacy code**
- Mark old interfaces as deprecated
- Add migration guides
- Plan removal in next major version
### Phase 4: Architecture Cleanup (Week 4)
1. **Reorganize file structure**
```
ts/core/
├── routing/
│ ├── index.ts
│ ├── types.ts
│ ├── matchers/
│ │ ├── domain.ts
│ │ ├── path.ts
│ │ ├── ip.ts
│ │ └── index.ts
│ ├── route-store.ts
│ ├── route-matcher.ts
│ └── route-manager.ts
└── utils/
└── (remove route-specific utils)
```
2. **Update documentation**
- Architecture diagrams
- Migration guides
- API documentation
3. **Performance optimization**
- Add caching where beneficial
- Optimize hot paths
- Benchmark before/after
## Migration Strategy
### For SmartProxy RouteManager Users
```typescript
// Old way
import { RouteManager } from './route-manager.js';
const manager = new RouteManager(options);
// New way
import { SharedRouteManager as RouteManager } from '../core/utils/route-manager.js';
const manager = new RouteManager({
logger: options.logger,
enableDetailedLogging: options.enableDetailedLogging
});
```
### For Router Users
```typescript
// Old way
const proxyRouter = new ProxyRouter();
const routeRouter = new RouteRouter();
// New way
const router = new HttpRouter(routeManager);
// Automatically handles both modern and legacy configs
```
## Success Metrics
1. **Code Reduction**
- Target: Remove ~1,500 lines of duplicate code
- Measure: Lines of code before/after
2. **Performance**
- Target: No regression in routing performance
- Measure: Benchmark route matching operations
3. **Maintainability**
- Target: Single implementation for each concept
- Measure: Time to implement new features
4. **Test Coverage**
- Target: 100% coverage of routing logic
- Measure: Coverage reports
## Risks and Mitigations
### Risk 1: Breaking Changes
- **Mitigation**: Extensive adapter patterns and backward compatibility layers
- **Testing**: Run all existing tests plus new integration tests
### Risk 2: Performance Regression
- **Mitigation**: Benchmark critical paths before changes
- **Testing**: Load testing with production-like scenarios
### Risk 3: Hidden Dependencies
- **Mitigation**: Careful code analysis and dependency mapping
- **Testing**: Integration tests across all proxy types
## Long-term Vision
### Future Enhancements
1. **Route Caching**: LRU cache for frequently accessed routes
2. **Route Indexing**: Trie-based indexing for faster domain matching
3. **Route Priorities**: Explicit priority system instead of specificity
4. **Dynamic Routes**: Support for runtime route modifications
5. **Route Templates**: Reusable route configurations
### API Evolution
```typescript
// Future unified routing API
const routingEngine = new RoutingEngine({
stores: [fileStore, dbStore, dynamicStore],
matchers: [domainMatcher, pathMatcher, customMatcher],
cache: new LRUCache({ max: 1000 }),
indexes: {
domain: new TrieIndex(),
path: new RadixTree()
}
});
// Simple, powerful API
const route = await routingEngine.findRoute({
domain: 'example.com',
path: '/api/v1/users',
ip: '192.168.1.1',
headers: { 'x-custom': 'value' }
});
```
## Conclusion
The current routing architecture has significant duplication and inconsistencies. By following this unification plan, we can:
1. Reduce code by ~30%
2. Improve maintainability
3. Ensure consistent behavior
4. Enable future enhancements
The phased approach minimizes risk while delivering incremental value. Each phase is independently valuable and can be deployed separately.
View File
@ -1,140 +0,0 @@
# WebSocket Keep-Alive Configuration Guide
## Quick Fix for SNI Passthrough WebSocket Disconnections
If your WebSocket connections are disconnecting every 30 seconds in SNI passthrough mode, here's the immediate solution:
### Option 1: Extended Keep-Alive Treatment (Recommended)
```typescript
const proxy = new SmartProxy({
// Extend timeout for keep-alive connections
keepAliveTreatment: 'extended',
keepAliveInactivityMultiplier: 10, // 10x the base timeout
inactivityTimeout: 14400000, // 4 hours base (40 hours with multiplier)
routes: [
{
name: 'websocket-passthrough',
match: {
ports: 443,
domains: ['ws.example.com', 'wss.example.com']
},
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' }
}
}
]
});
```
### Option 2: Immortal Connections (Never Timeout)
```typescript
const proxy = new SmartProxy({
// Never timeout keep-alive connections
keepAliveTreatment: 'immortal',
routes: [
// ... same as above
]
});
```
### Option 3: Per-Route Security Settings
```typescript
const proxy = new SmartProxy({
routes: [
{
name: 'websocket-passthrough',
match: {
ports: 443,
domains: ['ws.example.com']
},
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' }
},
security: {
// Disable connection limits for this route
maxConnections: 0, // 0 = unlimited
maxConnectionsPerIP: 0 // 0 = unlimited
}
}
]
});
```
## Understanding the Issue
### Why Connections Drop at 30 Seconds
1. **WebSocket Heartbeat**: The HTTP proxy's WebSocket handler sends ping frames every 30 seconds
2. **SNI Passthrough**: In passthrough mode, traffic is encrypted end-to-end
3. **Can't Inject Pings**: The proxy can't inject ping frames into encrypted traffic
4. **No Pong Response**: Client doesn't respond to pings that were never sent
5. **Connection Terminated**: After 30 seconds, connection is marked inactive and closed
### Why Grace Periods Were Too Short
- Half-zombie detection: 30 seconds (now 5 minutes for TLS)
- Stuck connection detection: 60 seconds (now 5 minutes for TLS)
- These were too aggressive for encrypted long-lived connections
## Long-Term Solution
The fix involves:
1. **Detecting SNI Passthrough**: Skip WebSocket heartbeat for passthrough connections
2. **Longer Grace Periods**: 5-minute grace for encrypted connections
3. **TCP Keep-Alive**: Rely on OS-level TCP keep-alive instead
4. **Route-Aware Timeouts**: Different timeout strategies per route type (see the sketch below)
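A minimal sketch of the route-aware part of this, assuming routes shaped like the examples in this guide; `heartbeatPolicyFor` is a hypothetical helper name, not part of the SmartProxy API:
```typescript
// Hypothetical route shape: only action.tls.mode matters here.
interface RouteLike {
  action: { tls?: { mode?: string } };
}

// Decide heartbeat and grace-period handling from the route's TLS mode.
function heartbeatPolicyFor(route: RouteLike) {
  const passthrough = route.action.tls?.mode === 'passthrough';
  return {
    // Skip application-level ping/pong when we cannot see inside the stream.
    useWebSocketHeartbeat: !passthrough,
    // Longer grace periods for encrypted, long-lived connections.
    halfZombieGraceMs: passthrough ? 5 * 60_000 : 30_000,
    stuckGraceMs: passthrough ? 5 * 60_000 : 60_000,
  };
}
```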
## TCP Keep-Alive Configuration
For best results, also configure TCP keep-alive at the OS level:
### Linux
```bash
# /etc/sysctl.conf
net.ipv4.tcp_keepalive_time = 600 # Start probes after 10 minutes
net.ipv4.tcp_keepalive_intvl = 60 # Probe every minute
net.ipv4.tcp_keepalive_probes = 9 # Drop after 9 failed probes
```
### Node.js Socket Options
The proxy already enables TCP keep-alive on sockets:
- Keep-alive is enabled by default
- Initial delay can be configured via `keepAliveInitialDelay` (see the example below)
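For reference, the underlying mechanism is Node's `socket.setKeepAlive(enable, initialDelay)`; the proxy-level option below uses the `keepAliveInitialDelay` name from this guide, so treat the exact semantics as an assumption to verify:
```typescript
import * as net from 'net';

// OS-level TCP keep-alive on a raw Node.js socket: send the first probe
// after 2 minutes of idleness, then the kernel settings above take over.
const socket = net.connect({ host: 'backend', port: 443 }, () => {
  socket.setKeepAlive(true, 120_000);
});

// Roughly the same thing expressed at the proxy level:
const proxy = new SmartProxy({
  keepAliveInitialDelay: 120_000, // first keep-alive probe after 2 minutes of idle
  routes: [
    // ... routes as in the examples above
  ],
});
```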
## Monitoring
Check your connections:
```typescript
const stats = proxy.getStats();
console.log('Active connections:', stats.getActiveConnections());
console.log('Connections by route:', stats.getConnectionsByRoute());
// Monitor long-lived connections
setInterval(() => {
const connections = proxy.connectionManager.getConnections();
for (const [id, conn] of connections) {
const age = Date.now() - conn.incomingStartTime;
if (age > 300000) { // 5+ minutes
console.log(`Long-lived connection: ${id}, age: ${age}ms, route: ${conn.routeName}`);
}
}
}, 60000);
```
## Summary
- **Immediate Fix**: Use `keepAliveTreatment: 'extended'` or `'immortal'`
- **Applied Fix**: Increased grace periods for TLS connections to 5 minutes
- **Best Practice**: Use SNI passthrough for WebSocket when you need end-to-end encryption
- **Alternative**: Use TLS termination if you need application-level WebSocket features

View File

@@ -1,63 +0,0 @@
# WebSocket Keep-Alive Fix for SNI Passthrough
## Problem
WebSocket connections in SNI passthrough mode are being disconnected every 30 seconds due to:
1. **WebSocket Heartbeat**: The HTTP proxy's WebSocket handler performs heartbeat checks every 30 seconds using ping/pong frames. In SNI passthrough mode, these frames can't be injected into the encrypted stream, causing connections to be marked as inactive and terminated.
2. **Half-Zombie Detection**: The connection manager's aggressive cleanup gives only 30 seconds grace period for connections where one socket is destroyed.
## Solution
For SNI passthrough connections:
1. Disable WebSocket-specific heartbeat checking (these connections are handled as raw TCP)
2. Rely on TCP keepalive settings instead
3. Increase grace period for encrypted connections
## Current Settings
- Default inactivity timeout: 4 hours (14400000 ms)
- Keep-alive multiplier for extended mode: 6x (24 hours)
- WebSocket heartbeat interval: 30 seconds (problem!)
- Half-zombie grace period: 30 seconds (too aggressive)
## Recommended Configuration
```typescript
const proxy = new SmartProxy({
// Increase grace period for connection cleanup
inactivityTimeout: 14400000, // 4 hours default
keepAliveTreatment: 'extended', // or 'immortal' for no timeout
keepAliveInactivityMultiplier: 10, // 40 hours for keepalive connections
// For routes with WebSocket over SNI passthrough
routes: [
{
name: 'websocket-passthrough',
match: { ports: 443, domains: 'ws.example.com' },
action: {
type: 'forward',
target: { host: 'backend', port: 443 },
tls: { mode: 'passthrough' },
// No WebSocket-specific config needed for passthrough
}
}
]
});
```
## Temporary Workaround
Until a fix is implemented, you can:
1. Use `keepAliveTreatment: 'immortal'` to disable timeout-based cleanup
2. Increase the half-zombie grace period
3. Use TCP keepalive at the OS level
## Proper Fix Implementation
1. Detect when a connection is SNI passthrough
2. Skip WebSocket heartbeat for passthrough connections
3. Increase grace period for encrypted connections
4. Rely on TCP keepalive instead of application-level ping/pong (see the sketch below)
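A rough sketch of how steps 1, 3, and 4 could come together for a single tracked connection; the `TrackedConnection` shape and field names are hypothetical, not SmartProxy's actual connection record:
```typescript
import type { Socket } from 'net';

// Hypothetical connection record: one incoming and one outgoing socket,
// plus a flag derived from the matched route's tls.mode (step 1).
interface TrackedConnection {
  incoming: Socket;
  outgoing?: Socket;
  isTlsPassthrough: boolean;
}

// Returns the half-zombie grace period to use and, for passthrough
// connections, switches liveness checking to OS-level TCP keep-alive (step 4).
function applyPassthroughKeepAlive(conn: TrackedConnection): number {
  if (!conn.isTlsPassthrough) {
    return 30_000; // default grace for connections the proxy can inspect
  }
  conn.incoming.setKeepAlive(true, 60_000);
  conn.outgoing?.setKeepAlive(true, 60_000);
  return 5 * 60_000; // step 3: 5-minute grace for encrypted connections
}
```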