fix(proxy-service): handle HTTP/3 backend forwarding failures with protocol fallback and pool cleanup

2026-05-12 22:22:10 +00:00
parent 8415a82f21
commit e220208c16
9 changed files with 3854 additions and 4098 deletions
@@ -2,16 +2,16 @@
**A high-performance, Rust-powered proxy toolkit for Node.js** — unified route-based configuration for SSL/TLS termination, HTTP/HTTPS reverse proxying, WebSocket support, UDP/QUIC/HTTP3, load balancing, custom protocol handlers, and kernel-level NFTables forwarding via [`@push.rocks/smartnftables`](https://code.foss.global/push.rocks/smartnftables).
-## Issue Reporting and Security
-For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
## 📦 Installation
```bash
pnpm add @push.rocks/smartproxy
```
+## Issue Reporting and Security
+For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
## 🎯 What is SmartProxy?
SmartProxy is a production-ready proxy solution that takes the complexity out of traffic management. Under the hood, all networking — TCP, UDP, TLS, HTTP reverse proxy, QUIC/HTTP3, connection tracking, security enforcement, and NFTables — is handled by a **Rust engine** for maximum performance, while you configure everything through a clean TypeScript API with full type safety.
@@ -28,7 +28,7 @@ Whether you're building microservices, deploying edge infrastructure, proxying U
| 🎯 **Flexible Matching** | Route by port, domain, path, protocol, client IP, TLS version, headers, or custom logic |
| 🚄 **High-Performance** | Choose between user-space or kernel-level (NFTables) forwarding |
| 📡 **UDP & QUIC/HTTP3** | First-class UDP transport, datagram handlers, QUIC tunneling, and HTTP/3 support |
-| ⚖️ **Load Balancing** | Round-robin, least-connections, IP-hash with health checks |
+| ⚖️ **Load Balancing** | Round-robin, least-connections, and IP-hash selection across host arrays |
| 🛡️ **Enterprise Security** | IP filtering, rate limiting, basic auth, JWT auth, connection limits |
| 🔌 **WebSocket Support** | First-class WebSocket proxying with ping/pong keep-alive |
| 🎮 **Custom Protocols** | Socket and datagram handlers for implementing any protocol in TypeScript |
@@ -135,7 +135,9 @@ const proxy = new SmartProxy({
});
```
-### ⚖️ Load Balancer with Health Checks
+### ⚖️ Load Balancer
+For equivalent backends, put the backend hosts into one target's `host` array and choose a target-level load-balancing algorithm. Multiple `targets` are for sub-routing with `target.match` and `priority`.
```typescript
import { SmartProxy } from '@push.rocks/smartproxy';
@@ -146,22 +148,12 @@ const proxy = new SmartProxy({
match: { ports: 443, domains: 'app.example.com' },
action: {
type: 'forward',
-targets: [
-{ host: 'server1.internal', port: 8080 },
-{ host: 'server2.internal', port: 8080 },
-{ host: 'server3.internal', port: 8080 }
-],
-tls: { mode: 'terminate', certificate: 'auto' },
-loadBalancing: {
-algorithm: 'round-robin',
-healthCheck: {
-path: '/health',
-interval: 30000,
-timeout: 5000,
-unhealthyThreshold: 3,
-healthyThreshold: 2
-}
-}
+targets: [{
+host: ['server1.internal', 'server2.internal', 'server3.internal'],
+port: 8080,
+loadBalancing: { algorithm: 'round-robin' }
+}],
+tls: { mode: 'terminate', certificate: 'auto' }
}
}]
});
@@ -647,7 +639,9 @@ Supply your own certificates or integrate with external certificate providers:
```typescript
const proxy = new SmartProxy({
-certProvisionFunction: async (domain: string) => {
+certProvisionFunction: async (domain, eventComms) => {
+eventComms.setSource('custom-acme-provider');
// Return 'http01' to let the built-in ACME handle it
if (domain.endsWith('.example.com')) return 'http01';
@@ -670,7 +664,11 @@ SmartProxy **never writes certificates to disk**. Instead, you own all persisten
const proxy = new SmartProxy({
routes: [...],
-certProvisionFunction: async (domain) => myAcme.provision(domain),
+certProvisionFunction: async (domain, eventComms) => {
+const cert = await myAcme.provision(domain);
+eventComms.setExpiryDate(new Date(cert.validUntil));
+return cert;
+},
// Your persistence layer — SmartProxy calls these hooks
certStore: {
@@ -774,6 +772,8 @@ type TPortRange = number | Array<number | { from: number; to: number }>;
| `forward` | Proxy to one or more backend targets (with optional TLS, WebSocket, load balancing, UDP/QUIC) |
| `socket-handler` | Custom socket/datagram handling function in TypeScript |
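Where `socket-handler` fits, a minimal echo service can illustrate the idea. This is a hedged sketch: the `socketHandler` property name and its `(socket) => void` signature are assumptions for illustration, not a confirmed API shape — see the Target Options reference for the exact interface.

```typescript
import * as net from 'net';
import { SmartProxy } from '@push.rocks/smartproxy';

// Sketch: a socket-handler route implementing a TCP echo service.
// The handler property name and signature are illustrative assumptions.
const proxy = new SmartProxy({
  routes: [{
    name: 'echo-service',
    match: { ports: 7000 },
    action: {
      type: 'socket-handler',
      socketHandler: (socket: net.Socket) => {
        socket.on('data', (chunk) => socket.write(chunk)); // echo bytes back
        socket.on('error', () => socket.destroy());        // drop on error
      }
    }
  }]
});
```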
+`targets` are evaluated as route-internal sub-routes by `target.match` and `target.priority`. For load balancing across equivalent upstreams, use a single target with `host: ['a', 'b', 'c']` and target-level `loadBalancing`.
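A sketch of that sub-routing pattern, contrasting a matched target with a load-balanced fallback. The `match.path` shape, `priority` values, and backend host names are illustrative assumptions:

```typescript
import { SmartProxy } from '@push.rocks/smartproxy';

// Sketch: one route, two targets acting as sub-routes.
// The matched target wins for /api/* traffic; everything else
// falls through to the load-balanced host array.
const proxy = new SmartProxy({
  routes: [{
    name: 'app-subroutes',
    match: { ports: 443, domains: 'app.example.com' },
    action: {
      type: 'forward',
      targets: [
        {
          match: { path: '/api/*' },   // assumed matcher shape
          priority: 10,                // higher priority evaluated first
          host: 'api.internal',
          port: 9000
        },
        {
          // Fallback: equivalent upstreams in one host array
          host: ['web1.internal', 'web2.internal'],
          port: 8080,
          loadBalancing: { algorithm: 'round-robin' }
        }
      ],
      tls: { mode: 'terminate', certificate: 'auto' }
    }
  }]
});
```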
### Target Options
```typescript
@@ -845,6 +845,8 @@ interface IRouteLoadBalancing {
}
```
+Use this on an `IRouteTarget` with `host` as a string array. The `healthCheck` shape is accepted by the type layer, but active backend health polling is not currently performed by the Rust selector.
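To make the `ip-hash` algorithm concrete, here is a self-contained sketch of the general technique — hashing the client IP to pick a stable backend from a host array. This illustrates the idea only; the real selection happens inside the Rust engine and may use a different hash:

```typescript
// Minimal ip-hash selection sketch (illustrative, not the Rust selector):
// the same client IP always maps to the same backend host.
function ipHashSelect(clientIp: string, hosts: string[]): string {
  let hash = 0;
  for (const ch of clientIp) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // unsigned 32-bit rolling hash
  }
  return hosts[hash % hosts.length];
}

const hosts = ['server1.internal', 'server2.internal', 'server3.internal'];
// Stable: repeated calls with the same IP return the same host.
console.log(ipHashSelect('203.0.113.7', hosts) === ipHashSelect('203.0.113.7', hosts)); // true
```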
### Backend Protocol Options
```typescript
@@ -922,6 +924,7 @@ class SmartProxy extends EventEmitter {
// Route Management (atomic, mutex-locked)
updateRoutes(routes: IRouteConfig[]): Promise<void>;
+updateSecurityPolicy(policy: ISmartProxySecurityPolicy): Promise<void>;
// Port Management
addListeningPort(port: number): Promise<void>;
@@ -930,7 +933,7 @@ class SmartProxy extends EventEmitter {
// Monitoring & Metrics
getMetrics(): IMetrics; // Sync — returns cached metrics adapter
-getStatistics(): Promise<any>; // Async — queries Rust engine
+getStatistics(): Promise<IRustStatistics>; // Async — queries Rust engine
// Certificate Management
provisionCertificate(routeName: string): Promise<void>;
@@ -965,7 +968,10 @@ interface ISmartProxyOptions {
};
// Custom certificate provisioning
-certProvisionFunction?: (domain: string) => Promise<ICert | 'http01'>;
+certProvisionFunction?: (
+domain: string,
+eventComms: ICertProvisionEventComms
+) => Promise<TSmartProxyCertProvisionObject>;
certProvisionFallbackToAcme?: boolean; // Fall back to ACME on failure (default: true)
certProvisionTimeout?: number; // Timeout per provision call (ms)
certProvisionConcurrency?: number; // Max concurrent provisions
@@ -1001,6 +1007,10 @@ interface ISmartProxyOptions {
// Connection limits
maxConnectionsPerIP?: number; // Per-IP connection limit (default: 100)
connectionRateLimitPerMinute?: number; // Per-IP rate limit (default: 300/min)
+securityPolicy?: {
+blockedIps?: string[];
+blockedCidrs?: string[];
+}; // Global ingress block policy
// Keep-alive
keepAliveTreatment?: 'standard' | 'extended' | 'immortal';
@@ -1053,17 +1063,25 @@ metrics.connections.total(); // Total connections since start
metrics.connections.byRoute(); // Map<routeName, activeCount>
metrics.connections.byIP(); // Map<ip, activeCount>
metrics.connections.topIPs(10); // Top N IPs by connection count
+metrics.connections.domainRequestsByIP(); // Map<ip, Map<domain, requestCount>>
+metrics.connections.topDomainRequests(20); // Top IP/domain pairs by request count
+metrics.connections.frontendProtocols(); // H1/H2/H3/WS frontend distribution
+metrics.connections.backendProtocols(); // H1/H2/H3/WS backend distribution
// Throughput (bytes/sec)
metrics.throughput.instant(); // { in: number, out: number }
metrics.throughput.recent(); // Recent average
metrics.throughput.average(); // Overall average
+metrics.throughput.custom(30); // Custom window, if provided by Rust cache
+metrics.throughput.history(60); // Recent throughput samples
metrics.throughput.byRoute(); // Map<routeName, { in, out }>
metrics.throughput.byIP(); // Map<ip, { in, out }>
// Request rates
metrics.requests.perSecond(); // Requests per second
metrics.requests.perMinute(); // Requests per minute
metrics.requests.total(); // Total requests
+metrics.requests.byDomain(); // Map<domain, { perSecond, lastMinute }>
// UDP metrics
metrics.udp.activeSessions(); // Current active UDP sessions
@@ -1080,12 +1098,15 @@ metrics.totals.connections(); // Total connections
metrics.backends.byBackend(); // Map<backend, IBackendMetrics>
metrics.backends.protocols(); // Map<backend, protocol>
metrics.backends.topByErrors(10); // Top N error-prone backends
+metrics.backends.detectedProtocols(); // Backend protocol discovery cache
// Percentiles
metrics.percentiles.connectionDuration(); // { p50, p95, p99 }
metrics.percentiles.bytesTransferred(); // { in: { p50, p95, p99 }, out: { p50, p95, p99 } }
```
+The percentile methods are part of the public metrics shape. In the current Rust adapter they return zeroed values until percentile collection is implemented in the Rust metrics snapshot.
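The Map-returning methods above compose well with small formatting helpers. A self-contained sketch follows; the helper name and report format are ours for illustration, not part of the SmartProxy API, and the sample Map stands in for what `metrics.connections.byIP()` would return:

```typescript
// Format a per-IP connection map into sorted report lines,
// highest connection count first. Pure function, illustrative only.
function formatTopIps(byIp: Map<string, number>, limit: number): string[] {
  return Array.from(byIp.entries())
    .sort((a, b) => b[1] - a[1])   // descending by connection count
    .slice(0, limit)
    .map(([ip, count]) => `${ip}: ${count} active connections`);
}

// Sample data shaped like metrics.connections.byIP() output
const sample = new Map<string, number>([
  ['203.0.113.7', 12],
  ['198.51.100.2', 4],
  ['192.0.2.9', 31]
]);
console.log(formatTopIps(sample, 2));
// ['192.0.2.9: 31 active connections', '203.0.113.7: 12 active connections']
```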
## 🐛 Troubleshooting
### Certificate Issues