Compare commits

20 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 8c2042a2f5 | |
| | 3514260316 | |
| | f171cc8c5d | |
| | c7722c30f3 | |
| | 0ae882731a | |
| | 53d73c7dc6 | |
| | b4b8bd925d | |
| | 5ac44b898b | |
| | 9b4393b5ac | |
| | 02b4ed8018 | |
| | e4e4b4f1ec | |
| | d361a21543 | |
| | 106713a546 | |
| | 101675b5f8 | |
| | 9fac17bc39 | |
| | 2e3cf515a4 | |
| | 754d32fd34 | |
| | f0b7c27996 | |
| | db932e8acc | |
| | 455d5bb757 | |

changelog.md (87 changes)

@@ -1,5 +1,92 @@
# Changelog

## 2026-02-19 - 25.7.7 - fix(proxy)

restrict PROXY protocol parsing to configured trusted proxy IPs and parse PROXY headers before metrics/fast-path so client IPs reflect the real source

- Add proxy_ips: Vec<std::net::IpAddr> to ConnectionConfig with a default empty Vec
- Populate proxy_ips from options.proxy_ips strings in rust/crates/rustproxy/src/lib.rs, parsing each to IpAddr
- Only peek for and parse PROXY v1 headers when the remote IP is contained in proxy_ips (prevents untrusted clients from injecting PROXY headers); see the sketch below
- Move PROXY protocol parsing earlier so metrics and fast-path logic use the effective (real client) IP after PROXY parsing
- If proxy_ips is empty, behavior remains unchanged (no PROXY parsing)
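
A minimal sketch of the gating described above, assuming the caller hands over the peeked bytes; the function and parameter names are illustrative, not the crate's actual API:

```rust
use std::net::{IpAddr, SocketAddr};

/// Return the effective client IP: honor a PROXY v1 header only when the
/// peer is one of the configured trusted proxy IPs.
fn effective_client_ip(peer: SocketAddr, proxy_ips: &[IpAddr], peeked: &[u8]) -> IpAddr {
    // Untrusted peers never get their PROXY header honored.
    if !proxy_ips.contains(&peer.ip()) {
        return peer.ip();
    }
    // Example header: "PROXY TCP4 198.51.100.7 10.0.0.1 56324 443\r\n"
    if let Some(line) = peeked
        .split(|&b| b == b'\n')
        .next()
        .and_then(|l| std::str::from_utf8(l).ok())
    {
        let mut parts = line.trim_end_matches('\r').split(' ');
        if parts.next() == Some("PROXY") {
            let _family = parts.next();
            if let Some(src) = parts.next().and_then(|s| s.parse::<IpAddr>().ok()) {
                return src; // real client IP, used for metrics and fast-path checks
            }
        }
    }
    peer.ip()
}
```
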
## 2026-02-19 - 25.7.6 - fix(throughput)

add tests for per-IP connection tracking and throughput history; assert per-IP eviction after connection close to prevent memory leak

- Adds runtime assertions for per-IP TCP connection tracking (m.connections.byIP) while a connection is active
- Adds checks for throughput history (m.throughput.history) to ensure history length and timestamps are recorded
- Asserts that per-IP tracking data is evicted after connection close (byIP.size === 0) to verify memory leak fix
- Reorders test checks so per-IP and history metrics are validated during the active connection and totals are validated after close
## 2026-02-19 - 25.7.5 - fix(rustproxy)

prune stale per-route metrics, add per-route rate limiter caching and regex cache, and improve connection tracking cleanup to prevent memory growth

- Prune per-route metrics for routes removed from configuration via MetricsCollector::retain_routes invoked during route table updates
- Introduce per-route shared RateLimiter instances (DashMap) with a request-count-triggered periodic cleanup to avoid stale limiters
- Cache compiled URL-rewrite regexes (regex_cache) to avoid recompiling patterns on every request and insert compiled regex on first use (see the sketch below)
- Improve upstream connection tracking to remove zero-count entries and guard against underflow, preventing unbounded DashMap growth
- Evict per-IP metrics and timestamps when the last connection for an IP closes so per-IP DashMap entries are fully freed
- Add unit tests validating connection tracking cleanup, per-IP eviction, and route-metrics retention behavior
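
The lookup-or-compile pattern behind the regex cache, as a self-contained sketch (the crate keeps this cache as a `regex_cache: DashMap<String, Regex>` field on `HttpProxyService`; the helper name here is made up):

```rust
use dashmap::DashMap;
use regex::Regex;

/// Apply a URL-rewrite pattern, compiling the regex at most once per pattern.
fn rewrite_path(cache: &DashMap<String, Regex>, pattern: &str, target: &str, path: &str) -> String {
    // Fast path: pattern already compiled on an earlier request.
    if let Some(re) = cache.get(pattern) {
        return re.replace_all(path, target).into_owned();
    }
    match Regex::new(pattern) {
        Ok(re) => {
            let out = re.replace_all(path, target).into_owned();
            cache.insert(pattern.to_string(), re); // cached for subsequent requests
            out
        }
        Err(_) => path.to_string(), // invalid pattern: leave the path untouched
    }
}
```
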
## 2026-02-19 - 25.7.4 - fix(smart-proxy)

include proxy IPs in smart proxy configuration

- Add proxyIps: this.settings.proxyIPs to proxy options in ts/proxies/smart-proxy/smart-proxy.ts
- Ensures proxy IPs from settings are passed into the proxy implementation (enables proxy IP filtering/whitelisting)
## 2026-02-16 - 25.7.3 - fix(metrics)

centralize connection-closed reporting via ConnectionGuard and remove duplicate explicit metrics.connection_closed calls

- Removed numerous explicit metrics.connection_closed calls from rust/crates/rustproxy-http/src/proxy_service.rs so connection teardown and byte counting are handled by the connection guard / counting body instead of ad-hoc calls.
- Simplified ConnectionGuard in rust/crates/rustproxy-passthrough/src/tcp_listener.rs: removed the disarm flag and disarm() method so Drop always reports connection_closed (see the sketch below).
- Stopped disarming the TCP-level guard when handing connections off to HTTP proxy paths (HTTP/WebSocket/streaming flows) to avoid missing or double-reporting metrics.
- Fixes incorrect/duplicate connection-closed metric emission and ensures consistent byte/connection accounting during streaming and WebSocket upgrades.
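
A stripped-down sketch of the guard pattern after this change: with the disarm flag gone, Drop is the single place that reports teardown. Field names are simplified and `report_connection_closed` is a stand-in for the real `MetricsCollector::connection_closed` call:

```rust
/// RAII guard: reports connection teardown exactly once when it goes out of scope.
struct ConnectionGuard {
    route_id: Option<String>,
    source_ip: Option<String>,
}

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        // No disarm path: whichever flow (HTTP, WebSocket, streaming) drops the
        // guard, the close is reported once and only once.
        report_connection_closed(self.route_id.as_deref(), self.source_ip.as_deref());
    }
}

// Stand-in for the metrics collector call made by the real guard.
fn report_connection_closed(_route_id: Option<&str>, _source_ip: Option<&str>) {}
```
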
## 2026-02-16 - 25.7.2 - fix(rustproxy-http)

preserve original Host header when proxying and add X-Forwarded-* headers; add TLS WebSocket echo backend helper and integration test for terminate-and-reencrypt websocket

- Preserve the client's original Host header instead of replacing it with backend host:port when proxying requests.
- Add standard reverse-proxy headers: X-Forwarded-For (appends client IP), X-Forwarded-Host, and X-Forwarded-Proto for upstream requests (see the sketch below).
- Ensure raw TCP/HTTP upstream requests copy original headers and skip X-Forwarded-* (which are added explicitly).
- Add start_tls_ws_echo_backend test helper to start a TLS WebSocket echo backend for tests.
- Add integration test test_terminate_and_reencrypt_websocket to verify WS upgrade through terminate-and-reencrypt TLS path.
- Rename unused parameter upstream to _upstream in proxy_service functions to avoid warnings.
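
A sketch of the header handling described above, using hyper's header types; the function name and exact signature are illustrative, not the crate's actual helper:

```rust
use hyper::header::{HeaderMap, HeaderName, HeaderValue};

/// Append the client IP to any existing X-Forwarded-For chain and set
/// X-Forwarded-Host / X-Forwarded-Proto for the upstream request.
fn add_forwarded_headers(headers: &mut HeaderMap, client_ip: &str, original_host: &str, proto: &str) {
    let xff = match headers.get("x-forwarded-for").and_then(|v| v.to_str().ok()) {
        Some(existing) => format!("{}, {}", existing, client_ip),
        None => client_ip.to_string(),
    };
    if let Ok(val) = HeaderValue::from_str(&xff) {
        headers.insert(HeaderName::from_static("x-forwarded-for"), val);
    }
    if let Ok(val) = HeaderValue::from_str(original_host) {
        headers.insert(HeaderName::from_static("x-forwarded-host"), val);
    }
    if let Ok(val) = HeaderValue::from_str(proto) {
        headers.insert(HeaderName::from_static("x-forwarded-proto"), val);
    }
}
```
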
## 2026-02-16 - 25.7.1 - fix(proxy)

use TLS to backends for terminate-and-reencrypt routes

- Set upstream.use_tls = true when a route's TLS mode is TerminateAndReencrypt so the proxy re-encrypts to backend servers.
- Add start_tls_http_backend test helper and update integration tests to run TLS-enabled backend servers validating re-encryption behavior.
- Make the selected upstream mutable to allow toggling the use_tls flag during request handling.
## 2026-02-16 - 25.7.0 - feat(routes)

add protocol-based route matching and ensure terminate-and-reencrypt routes HTTP traffic through the full HTTP proxy; update docs and tests

- Introduce a new 'protocol' match field for routes (supports 'http' and 'tcp') and preserve it through cloning/merging.
- Add Rust integration test verifying terminate-and-reencrypt decrypts TLS and routes HTTP traffic via the HTTP proxy (per-request Host/path routing) instead of raw tunneling.
- Add TypeScript unit tests covering protocol field validation, preservation, interaction with terminate-and-reencrypt, cloning, merging, and matching behavior.
- Update README with a Protocol-Specific Routing section and clarify terminate-and-reencrypt behavior (HTTP routed via HTTP proxy; non-HTTP uses raw TLS-to-TLS tunnel).
- Example config: include health check thresholds (unhealthyThreshold and healthyThreshold) in the sample healthCheck settings.
## 2026-02-16 - 25.6.0 - feat(rustproxy)

add protocol-based routing and backend TLS re-encryption support

- Introduce optional route_match.protocol ("http" | "tcp") in Rust and TypeScript route types to allow protocol-restricted routing.
- RouteManager: respect protocol field during matching and treat TLS connections without SNI as not matching domain-restricted routes (except wildcard-only routes); see the sketch below.
- HTTP proxy: add BackendStream abstraction to unify plain TCP and tokio-rustls TLS backend streams, and support connecting to upstreams over TLS (upstream.use_tls) with an InsecureBackendVerifier for internal/self-signed backends.
- WebSocket and HTTP forwarding updated to use BackendStream so upstream TLS is handled transparently.
- Passthrough listener: perform post-termination protocol detection for TerminateAndReencrypt; route HTTP flows into HttpProxyService and handle non-HTTP as TLS-to-TLS tunnel.
- Add tests for protocol matching, TLS/no-SNI behavior, and other routing edge cases.
- Add rustls and tokio-rustls dependencies (Cargo.toml/Cargo.lock updates).
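
How the protocol gate behaves during matching, as a small illustrative function (not the crate's actual `RouteManager` code); the `None`-detected case reflects that the protocol is not yet known during the initial SNI-based match:

```rust
/// A route with no protocol restriction matches anything; otherwise the
/// detected protocol must agree with the route's "http"/"tcp" setting.
fn protocol_matches(route_protocol: Option<&str>, detected: Option<&str>) -> bool {
    match (route_protocol, detected) {
        (None, _) => true,                      // unrestricted route
        (Some(_), None) => true,                // protocol not detected yet (SNI phase)
        (Some(want), Some(got)) => want == got, // enforce after detection
    }
}

fn main() {
    assert!(protocol_matches(None, Some("tcp")));
    assert!(protocol_matches(Some("http"), None));
    assert!(protocol_matches(Some("http"), Some("http")));
    assert!(!protocol_matches(Some("http"), Some("tcp")));
}
```
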
## 2026-02-16 - 25.5.0 - feat(tls)

add shared TLS acceptor with SNI resolver and session resumption; prefer shared acceptor and fall back to per-connection when routes specify custom TLS versions

- Add CertResolver that pre-parses PEM certs/keys into CertifiedKey instances for SNI-based lookup and cheap runtime resolution
- Introduce build_shared_tls_acceptor to create a shared ServerConfig with session cache (4096) and Ticketer for session ticket resumption (see the sketch below)
- Add ArcSwap<Option<TlsAcceptor>> shared_tls_acceptor to tcp_listener for hot-reloadable, pre-built acceptor and update accept loop/handlers to use it
- set_tls_configs now attempts to build and store the shared TLS acceptor, falling back to per-connection acceptors on failure; raw PEM configs are still retained for route-level fallbacks
- Add get_tls_acceptor helper: prefer shared acceptor for performance and session resumption, but build per-connection acceptor when a route requests custom TLS versions
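
A sketch of building such a shared acceptor, assuming rustls 0.23 with the ring provider and a resolver implementing `ResolvesServerCert` (as `CertResolver` does); exact module paths are an assumption, not taken from the crate:

```rust
use std::sync::Arc;
use rustls::server::ResolvesServerCert;
use tokio_rustls::TlsAcceptor;

/// Build one acceptor shared by all listeners: SNI resolution via the resolver,
/// an in-memory session cache, and session tickets for resumption.
fn build_shared_acceptor(
    resolver: Arc<dyn ResolvesServerCert>,
) -> Result<TlsAcceptor, rustls::Error> {
    let mut config = rustls::ServerConfig::builder()
        .with_no_client_auth()
        .with_cert_resolver(resolver);
    // Session cache sized at 4096 entries, matching the changelog entry above.
    config.session_storage = rustls::server::ServerSessionMemoryCache::new(4096);
    config.ticketer = rustls::crypto::ring::Ticketer::new()?;
    Ok(TlsAcceptor::from(Arc::new(config)))
}
```
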
## 2026-02-16 - 25.4.0 - feat(rustproxy)

support dynamically loaded TLS certificates via loadCertificate IPC and include them in listener TLS configs for rebuilds and hot-swap

@@ -1,6 +1,6 @@
{
  "name": "@push.rocks/smartproxy",
  "version": "25.4.0",
  "version": "25.7.7",
  "private": false,
  "description": "A powerful proxy package with unified route-based configuration for high traffic management. Features include SSL/TLS support, flexible routing patterns, WebSocket handling, advanced security options, and automatic ACME certificate management.",
  "main": "dist_ts/index.js",

readme.md (57 changes)

@@ -27,7 +27,7 @@ Whether you're building microservices, deploying edge infrastructure, or need a
| 🦀 **Rust-Powered Engine** | All networking handled by a high-performance Rust binary via IPC |
| 🔀 **Unified Route-Based Config** | Clean match/action patterns for intuitive traffic routing |
| 🔒 **Automatic SSL/TLS** | Zero-config HTTPS with Let's Encrypt ACME integration |
| 🎯 **Flexible Matching** | Route by port, domain, path, client IP, TLS version, headers, or custom logic |
| 🎯 **Flexible Matching** | Route by port, domain, path, protocol, client IP, TLS version, headers, or custom logic |
| 🚄 **High-Performance** | Choose between user-space or kernel-level (NFTables) forwarding |
| ⚖️ **Load Balancing** | Round-robin, least-connections, IP-hash with health checks |
| 🛡️ **Enterprise Security** | IP filtering, rate limiting, basic auth, JWT auth, connection limits |

@@ -89,7 +89,7 @@ SmartProxy uses a powerful **match/action** pattern that makes routing predictab
```

Every route consists of:
- **Match** — What traffic to capture (ports, domains, paths, IPs, headers)
- **Match** — What traffic to capture (ports, domains, paths, protocol, IPs, headers)
- **Action** — What to do with it (`forward` or `socket-handler`)
- **Security** (optional) — IP allow/block lists, rate limits, authentication
- **Headers** (optional) — Request/response header manipulation with template variables

@@ -103,7 +103,7 @@ SmartProxy supports three TLS handling modes:
|------|-------------|----------|
| `passthrough` | Forward encrypted traffic as-is (SNI-based routing) | Backend handles TLS |
| `terminate` | Decrypt at proxy, forward plain HTTP to backend | Standard reverse proxy |
| `terminate-and-reencrypt` | Decrypt, then re-encrypt to backend | Zero-trust environments |
| `terminate-and-reencrypt` | Decrypt at proxy, re-encrypt to backend. HTTP traffic gets full per-request routing (Host header, path matching) via the HTTP proxy; non-HTTP traffic uses a raw TLS-to-TLS tunnel | Zero-trust / defense-in-depth environments |

## 💡 Common Use Cases

@@ -135,13 +135,13 @@ const proxy = new SmartProxy({
      ],
      {
        tls: { mode: 'terminate', certificate: 'auto' },
        loadBalancing: {
          algorithm: 'round-robin',
          healthCheck: {
            path: '/health',
            interval: 30000,
            timeout: 5000
          }
          algorithm: 'round-robin',
          healthCheck: {
            path: '/health',
            interval: 30000,
            timeout: 5000,
            unhealthyThreshold: 3,
            healthyThreshold: 2
          }
        }
      }
    )

@@ -318,6 +318,42 @@ const proxy = new SmartProxy({

> **Note:** Routes with dynamic functions (host/port callbacks) are automatically relayed through the TypeScript socket handler server, since JavaScript functions can't be serialized to Rust.

### 🔀 Protocol-Specific Routing

Restrict routes to specific application-layer protocols. When `protocol` is set, the Rust engine detects the protocol after connection (or after TLS termination) and only matches routes that accept that protocol:

```typescript
// HTTP-only route (rejects raw TCP connections)
const httpOnlyRoute: IRouteConfig = {
  name: 'http-api',
  match: {
    ports: 443,
    domains: 'api.example.com',
    protocol: 'http', // Only match HTTP/1.1, HTTP/2, and WebSocket upgrades
  },
  action: {
    type: 'forward',
    targets: [{ host: 'api-backend', port: 8080 }],
    tls: { mode: 'terminate', certificate: 'auto' }
  }
};

// Raw TCP route (rejects HTTP traffic)
const tcpOnlyRoute: IRouteConfig = {
  name: 'database-proxy',
  match: {
    ports: 5432,
    protocol: 'tcp', // Only match non-HTTP TCP streams
  },
  action: {
    type: 'forward',
    targets: [{ host: 'db-server', port: 5432 }]
  }
};
```

> **Note:** Omitting `protocol` (the default) matches any protocol. For TLS routes, protocol detection happens *after* TLS termination — during the initial SNI-based route match, `protocol` is not yet known and the route is allowed to match. The protocol restriction is enforced after the proxy peeks at the decrypted data.

### 🔒 Security Controls

Comprehensive per-route security options:

@@ -549,6 +585,7 @@ interface IRouteMatch {
  clientIp?: string[]; // ['10.0.0.0/8', '192.168.*']
  tlsVersion?: string[]; // ['TLSv1.2', 'TLSv1.3']
  headers?: Record<string, string | RegExp>; // Match by HTTP headers
  protocol?: 'http' | 'tcp'; // Match specific protocol ('http' includes h2 + WebSocket upgrades)
}
```

rust/Cargo.lock (2 changes, generated)

@@ -966,12 +966,14 @@ dependencies = [
 "hyper",
 "hyper-util",
 "regex",
 "rustls",
 "rustproxy-config",
 "rustproxy-metrics",
 "rustproxy-routing",
 "rustproxy-security",
 "thiserror 2.0.18",
 "tokio",
 "tokio-rustls",
 "tokio-util",
 "tracing",
]

@@ -17,6 +17,7 @@ pub fn create_http_route(
            client_ip: None,
            tls_version: None,
            headers: None,
            protocol: None,
        },
        action: RouteAction {
            action_type: RouteActionType::Forward,

@@ -108,6 +109,7 @@ pub fn create_http_to_https_redirect(
            client_ip: None,
            tls_version: None,
            headers: None,
            protocol: None,
        },
        action: RouteAction {
            action_type: RouteActionType::Forward,

@@ -200,6 +202,7 @@ pub fn create_load_balancer_route(
            client_ip: None,
            tls_version: None,
            headers: None,
            protocol: None,
        },
        action: RouteAction {
            action_type: RouteActionType::Forward,

@@ -114,6 +114,10 @@ pub struct RouteMatch {
    /// Match specific HTTP headers
    #[serde(skip_serializing_if = "Option::is_none")]
    pub headers: Option<HashMap<String, String>>,

    /// Match specific protocol: "http" (includes h2 + websocket) or "tcp"
    #[serde(skip_serializing_if = "Option::is_none")]
    pub protocol: Option<String>,
}

// ─── Target Match ────────────────────────────────────────────────────
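
For context, a cut-down stand-in showing how an optional `protocol` field with `skip_serializing_if` round-trips through serde; the real `RouteMatch` has many more fields:

```rust
use serde::{Deserialize, Serialize};

/// Illustrative subset of the route match shape, not the crate's real struct.
#[derive(Serialize, Deserialize, Debug)]
struct MatchSketch {
    #[serde(skip_serializing_if = "Option::is_none")]
    protocol: Option<String>,
}

fn main() {
    // An unrestricted route omits the field entirely on the wire...
    let none = MatchSketch { protocol: None };
    assert_eq!(serde_json::to_string(&none).unwrap(), "{}");
    // ...while a restricted route serializes it explicitly.
    let http = MatchSketch { protocol: Some("http".into()) };
    assert_eq!(serde_json::to_string(&http).unwrap(), r#"{"protocol":"http"}"#);
    // Configs written before this field existed still deserialize (protocol = None).
    let parsed: MatchSketch = serde_json::from_str("{}").unwrap();
    assert!(parsed.protocol.is_none());
}
```
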
@@ -18,6 +18,8 @@ http-body = { workspace = true }
http-body-util = { workspace = true }
bytes = { workspace = true }
tokio = { workspace = true }
rustls = { workspace = true }
tokio-rustls = { workspace = true }
tracing = { workspace = true }
thiserror = { workspace = true }
anyhow = { workspace = true }

@@ -9,6 +9,7 @@ use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

use bytes::Bytes;
use dashmap::DashMap;
use http_body_util::{BodyExt, Full, combinators::BoxBody};
use hyper::body::Incoming;
use hyper::{Request, Response, StatusCode};

@@ -18,8 +19,12 @@ use tokio::net::TcpStream;
use tokio_util::sync::CancellationToken;
use tracing::{debug, error, info, warn};

use std::pin::Pin;
use std::task::{Context, Poll};

use rustproxy_routing::RouteManager;
use rustproxy_metrics::MetricsCollector;
use rustproxy_security::RateLimiter;

use crate::counting_body::{CountingBody, Direction};
use crate::request_filter::RequestFilter;

@@ -35,6 +40,125 @@ const DEFAULT_WS_INACTIVITY_TIMEOUT: std::time::Duration = std::time::Duration::
/// Default WebSocket max lifetime (24 hours).
const DEFAULT_WS_MAX_LIFETIME: std::time::Duration = std::time::Duration::from_secs(86400);

/// Backend stream that can be either plain TCP or TLS-wrapped.
/// Used for `terminate-and-reencrypt` mode where the backend requires TLS.
pub(crate) enum BackendStream {
    Plain(TcpStream),
    Tls(tokio_rustls::client::TlsStream<TcpStream>),
}

impl tokio::io::AsyncRead for BackendStream {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut tokio::io::ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        match self.get_mut() {
            BackendStream::Plain(s) => Pin::new(s).poll_read(cx, buf),
            BackendStream::Tls(s) => Pin::new(s).poll_read(cx, buf),
        }
    }
}

impl tokio::io::AsyncWrite for BackendStream {
    fn poll_write(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<std::io::Result<usize>> {
        match self.get_mut() {
            BackendStream::Plain(s) => Pin::new(s).poll_write(cx, buf),
            BackendStream::Tls(s) => Pin::new(s).poll_write(cx, buf),
        }
    }

    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        match self.get_mut() {
            BackendStream::Plain(s) => Pin::new(s).poll_flush(cx),
            BackendStream::Tls(s) => Pin::new(s).poll_flush(cx),
        }
    }

    fn poll_shutdown(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
        match self.get_mut() {
            BackendStream::Plain(s) => Pin::new(s).poll_shutdown(cx),
            BackendStream::Tls(s) => Pin::new(s).poll_shutdown(cx),
        }
    }
}

/// Connect to a backend over TLS. Uses InsecureVerifier for internal backends
/// with self-signed certs (same pattern as tls_handler::connect_tls).
async fn connect_tls_backend(
    host: &str,
    port: u16,
) -> Result<tokio_rustls::client::TlsStream<TcpStream>, Box<dyn std::error::Error + Send + Sync>> {
    let _ = rustls::crypto::ring::default_provider().install_default();
    let config = rustls::ClientConfig::builder()
        .dangerous()
        .with_custom_certificate_verifier(Arc::new(InsecureBackendVerifier))
        .with_no_client_auth();

    let connector = tokio_rustls::TlsConnector::from(Arc::new(config));
    let stream = TcpStream::connect(format!("{}:{}", host, port)).await?;
    stream.set_nodelay(true)?;

    let server_name = rustls::pki_types::ServerName::try_from(host.to_string())?;
    let tls_stream = connector.connect(server_name, stream).await?;
    debug!("Backend TLS connection established to {}:{}", host, port);
    Ok(tls_stream)
}

/// Insecure certificate verifier for backend TLS connections.
/// Internal backends may use self-signed certs.
#[derive(Debug)]
struct InsecureBackendVerifier;

impl rustls::client::danger::ServerCertVerifier for InsecureBackendVerifier {
    fn verify_server_cert(
        &self,
        _end_entity: &rustls::pki_types::CertificateDer<'_>,
        _intermediates: &[rustls::pki_types::CertificateDer<'_>],
        _server_name: &rustls::pki_types::ServerName<'_>,
        _ocsp_response: &[u8],
        _now: rustls::pki_types::UnixTime,
    ) -> Result<rustls::client::danger::ServerCertVerified, rustls::Error> {
        Ok(rustls::client::danger::ServerCertVerified::assertion())
    }

    fn verify_tls12_signature(
        &self,
        _message: &[u8],
        _cert: &rustls::pki_types::CertificateDer<'_>,
        _dss: &rustls::DigitallySignedStruct,
    ) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error> {
        Ok(rustls::client::danger::HandshakeSignatureValid::assertion())
    }

    fn verify_tls13_signature(
        &self,
        _message: &[u8],
        _cert: &rustls::pki_types::CertificateDer<'_>,
        _dss: &rustls::DigitallySignedStruct,
    ) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error> {
        Ok(rustls::client::danger::HandshakeSignatureValid::assertion())
    }

    fn supported_verify_schemes(&self) -> Vec<rustls::SignatureScheme> {
        vec![
            rustls::SignatureScheme::RSA_PKCS1_SHA256,
            rustls::SignatureScheme::RSA_PKCS1_SHA384,
            rustls::SignatureScheme::RSA_PKCS1_SHA512,
            rustls::SignatureScheme::ECDSA_NISTP256_SHA256,
            rustls::SignatureScheme::ECDSA_NISTP384_SHA384,
            rustls::SignatureScheme::ED25519,
            rustls::SignatureScheme::RSA_PSS_SHA256,
            rustls::SignatureScheme::RSA_PSS_SHA384,
            rustls::SignatureScheme::RSA_PSS_SHA512,
        ]
    }
}

/// HTTP proxy service that processes HTTP traffic.
|
||||
pub struct HttpProxyService {
|
||||
route_manager: Arc<RouteManager>,
|
||||
@@ -42,6 +166,12 @@ pub struct HttpProxyService {
|
||||
upstream_selector: UpstreamSelector,
|
||||
/// Timeout for connecting to upstream backends.
|
||||
connect_timeout: std::time::Duration,
|
||||
/// Per-route rate limiters (keyed by route ID).
|
||||
route_rate_limiters: Arc<DashMap<String, Arc<RateLimiter>>>,
|
||||
/// Request counter for periodic rate limiter cleanup.
|
||||
request_counter: AtomicU64,
|
||||
/// Cache of compiled URL rewrite regexes (keyed by pattern string).
|
||||
regex_cache: DashMap<String, Regex>,
|
||||
}
|
||||
|
||||
impl HttpProxyService {
|
||||
@@ -51,6 +181,9 @@ impl HttpProxyService {
|
||||
metrics,
|
||||
upstream_selector: UpstreamSelector::new(),
|
||||
connect_timeout: DEFAULT_CONNECT_TIMEOUT,
|
||||
route_rate_limiters: Arc::new(DashMap::new()),
|
||||
request_counter: AtomicU64::new(0),
|
||||
regex_cache: DashMap::new(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -65,6 +198,9 @@ impl HttpProxyService {
|
||||
metrics,
|
||||
upstream_selector: UpstreamSelector::new(),
|
||||
connect_timeout,
|
||||
route_rate_limiters: Arc::new(DashMap::new()),
|
||||
request_counter: AtomicU64::new(0),
|
||||
regex_cache: DashMap::new(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -173,6 +309,7 @@ impl HttpProxyService {
|
||||
tls_version: None,
|
||||
headers: Some(&headers),
|
||||
is_tls: false,
|
||||
protocol: Some("http"),
|
||||
};
|
||||
|
||||
let route_match = match self.route_manager.find_route(&ctx) {
|
||||
@@ -186,20 +323,37 @@ impl HttpProxyService {
|
||||
let route_id = route_match.route.id.as_deref();
|
||||
let ip_str = peer_addr.ip().to_string();
|
||||
self.metrics.record_http_request();
|
||||
self.metrics.connection_opened(route_id, Some(&ip_str));
|
||||
|
||||
// Apply request filters (IP check, rate limiting, auth)
|
||||
if let Some(ref security) = route_match.route.security {
|
||||
if let Some(response) = RequestFilter::apply(security, &req, &peer_addr) {
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
// Look up or create a shared rate limiter for this route
|
||||
let rate_limiter = security.rate_limit.as_ref()
|
||||
.filter(|rl| rl.enabled)
|
||||
.map(|rl| {
|
||||
let route_key = route_id.unwrap_or("__default__").to_string();
|
||||
self.route_rate_limiters
|
||||
.entry(route_key)
|
||||
.or_insert_with(|| Arc::new(RateLimiter::new(rl.max_requests, rl.window)))
|
||||
.clone()
|
||||
});
|
||||
if let Some(response) = RequestFilter::apply_with_rate_limiter(
|
||||
security, &req, &peer_addr, rate_limiter.as_ref(),
|
||||
) {
|
||||
return Ok(response);
|
||||
}
|
||||
}
|
||||
|
||||
// Periodic rate limiter cleanup (every 1000 requests)
|
||||
let count = self.request_counter.fetch_add(1, Ordering::Relaxed);
|
||||
if count % 1000 == 0 {
|
||||
for entry in self.route_rate_limiters.iter() {
|
||||
entry.value().cleanup();
|
||||
}
|
||||
}
|
||||
|
||||
// Check for test response (returns immediately, no upstream needed)
|
||||
if let Some(ref advanced) = route_match.route.action.advanced {
|
||||
if let Some(ref test_response) = advanced.test_response {
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
return Ok(Self::build_test_response(test_response));
|
||||
}
|
||||
}
|
||||
@@ -207,7 +361,6 @@ impl HttpProxyService {
|
||||
// Check for static file serving
|
||||
if let Some(ref advanced) = route_match.route.action.advanced {
|
||||
if let Some(ref static_files) = advanced.static_files {
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
return Ok(Self::serve_static_file(&path, static_files));
|
||||
}
|
||||
}
|
||||
@@ -216,12 +369,19 @@ impl HttpProxyService {
|
||||
let target = match route_match.target {
|
||||
Some(t) => t,
|
||||
None => {
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "No target available"));
|
||||
}
|
||||
};
|
||||
|
||||
let upstream = self.upstream_selector.select(target, &peer_addr, port);
|
||||
let mut upstream = self.upstream_selector.select(target, &peer_addr, port);
|
||||
|
||||
// If the route uses terminate-and-reencrypt, always re-encrypt to backend
|
||||
if let Some(ref tls) = route_match.route.action.tls {
|
||||
if tls.mode == rustproxy_config::TlsMode::TerminateAndReencrypt {
|
||||
upstream.use_tls = true;
|
||||
}
|
||||
}
|
||||
|
||||
let upstream_key = format!("{}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_started(&upstream_key);
|
||||
|
||||
@@ -253,7 +413,7 @@ impl HttpProxyService {
|
||||
Some(q) => format!("{}?{}", path, q),
|
||||
None => path.clone(),
|
||||
};
|
||||
Self::apply_url_rewrite(&raw_path, &route_match.route)
|
||||
self.apply_url_rewrite(&raw_path, &route_match.route)
|
||||
};
|
||||
|
||||
// Build upstream request - stream body instead of buffering
|
||||
@@ -273,28 +433,92 @@ impl HttpProxyService {
|
||||
}
|
||||
}
|
||||
|
||||
// Connect to upstream with timeout
|
||||
let upstream_stream = match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
TcpStream::connect(format!("{}:{}", upstream.host, upstream.port)),
|
||||
).await {
|
||||
Ok(Ok(s)) => s,
|
||||
Ok(Err(e)) => {
|
||||
error!("Failed to connect to upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend unavailable"));
|
||||
// Add standard reverse-proxy headers (X-Forwarded-*)
|
||||
{
|
||||
let original_host = parts.headers.get("host")
|
||||
.and_then(|h| h.to_str().ok())
|
||||
.unwrap_or("");
|
||||
let forwarded_proto = if route_match.route.action.tls.as_ref()
|
||||
.map(|t| matches!(t.mode,
|
||||
rustproxy_config::TlsMode::Terminate
|
||||
| rustproxy_config::TlsMode::TerminateAndReencrypt))
|
||||
.unwrap_or(false)
|
||||
{
|
||||
"https"
|
||||
} else {
|
||||
"http"
|
||||
};
|
||||
|
||||
// X-Forwarded-For: append client IP to existing chain
|
||||
let client_ip = peer_addr.ip().to_string();
|
||||
let xff_value = if let Some(existing) = upstream_headers.get("x-forwarded-for") {
|
||||
format!("{}, {}", existing.to_str().unwrap_or(""), client_ip)
|
||||
} else {
|
||||
client_ip
|
||||
};
|
||||
if let Ok(val) = hyper::header::HeaderValue::from_str(&xff_value) {
|
||||
upstream_headers.insert(
|
||||
hyper::header::HeaderName::from_static("x-forwarded-for"),
|
||||
val,
|
||||
);
|
||||
}
|
||||
Err(_) => {
|
||||
error!("Upstream connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(&ip_str));
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend connect timeout"));
|
||||
// X-Forwarded-Host: original Host header
|
||||
if let Ok(val) = hyper::header::HeaderValue::from_str(original_host) {
|
||||
upstream_headers.insert(
|
||||
hyper::header::HeaderName::from_static("x-forwarded-host"),
|
||||
val,
|
||||
);
|
||||
}
|
||||
// X-Forwarded-Proto: original client protocol
|
||||
if let Ok(val) = hyper::header::HeaderValue::from_str(forwarded_proto) {
|
||||
upstream_headers.insert(
|
||||
hyper::header::HeaderName::from_static("x-forwarded-proto"),
|
||||
val,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// Connect to upstream with timeout (TLS if upstream.use_tls is set)
|
||||
let backend = if upstream.use_tls {
|
||||
match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
connect_tls_backend(&upstream.host, upstream.port),
|
||||
).await {
|
||||
Ok(Ok(tls)) => BackendStream::Tls(tls),
|
||||
Ok(Err(e)) => {
|
||||
error!("Failed TLS connect to upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend TLS unavailable"));
|
||||
}
|
||||
Err(_) => {
|
||||
error!("Upstream TLS connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend TLS connect timeout"));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
TcpStream::connect(format!("{}:{}", upstream.host, upstream.port)),
|
||||
).await {
|
||||
Ok(Ok(s)) => {
|
||||
s.set_nodelay(true).ok();
|
||||
BackendStream::Plain(s)
|
||||
}
|
||||
Ok(Err(e)) => {
|
||||
error!("Failed to connect to upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend unavailable"));
|
||||
}
|
||||
Err(_) => {
|
||||
error!("Upstream connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(&upstream_key);
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend connect timeout"));
|
||||
}
|
||||
}
|
||||
};
|
||||
upstream_stream.set_nodelay(true).ok();
|
||||
|
||||
let io = TokioIo::new(upstream_stream);
|
||||
let io = TokioIo::new(backend);
|
||||
|
||||
let result = if use_h2 {
|
||||
// HTTP/2 backend
|
||||
@@ -310,12 +534,12 @@ impl HttpProxyService {
|
||||
/// Forward request to backend via HTTP/1.1 with body streaming.
|
||||
async fn forward_h1(
|
||||
&self,
|
||||
io: TokioIo<TcpStream>,
|
||||
io: TokioIo<BackendStream>,
|
||||
parts: hyper::http::request::Parts,
|
||||
body: Incoming,
|
||||
upstream_headers: hyper::HeaderMap,
|
||||
upstream_path: &str,
|
||||
upstream: &crate::upstream_selector::UpstreamSelection,
|
||||
_upstream: &crate::upstream_selector::UpstreamSelection,
|
||||
route: &rustproxy_config::RouteConfig,
|
||||
route_id: Option<&str>,
|
||||
source_ip: &str,
|
||||
@@ -324,7 +548,6 @@ impl HttpProxyService {
|
||||
Ok(h) => h,
|
||||
Err(e) => {
|
||||
error!("Upstream handshake failed: {}", e);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend handshake failed"));
|
||||
}
|
||||
};
|
||||
@@ -342,11 +565,6 @@ impl HttpProxyService {
|
||||
|
||||
if let Some(headers) = upstream_req.headers_mut() {
|
||||
*headers = upstream_headers;
|
||||
if let Ok(host_val) = hyper::header::HeaderValue::from_str(
|
||||
&format!("{}:{}", upstream.host, upstream.port)
|
||||
) {
|
||||
headers.insert(hyper::header::HOST, host_val);
|
||||
}
|
||||
}
|
||||
|
||||
// Wrap the request body in CountingBody to track bytes_in
|
||||
@@ -365,7 +583,6 @@ impl HttpProxyService {
|
||||
Ok(resp) => resp,
|
||||
Err(e) => {
|
||||
error!("Upstream request failed: {}", e);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend request failed"));
|
||||
}
|
||||
};
|
||||
@@ -376,12 +593,12 @@ impl HttpProxyService {
|
||||
/// Forward request to backend via HTTP/2 with body streaming.
|
||||
async fn forward_h2(
|
||||
&self,
|
||||
io: TokioIo<TcpStream>,
|
||||
io: TokioIo<BackendStream>,
|
||||
parts: hyper::http::request::Parts,
|
||||
body: Incoming,
|
||||
upstream_headers: hyper::HeaderMap,
|
||||
upstream_path: &str,
|
||||
upstream: &crate::upstream_selector::UpstreamSelection,
|
||||
_upstream: &crate::upstream_selector::UpstreamSelection,
|
||||
route: &rustproxy_config::RouteConfig,
|
||||
route_id: Option<&str>,
|
||||
source_ip: &str,
|
||||
@@ -391,7 +608,6 @@ impl HttpProxyService {
|
||||
Ok(h) => h,
|
||||
Err(e) => {
|
||||
error!("HTTP/2 upstream handshake failed: {}", e);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend H2 handshake failed"));
|
||||
}
|
||||
};
|
||||
@@ -408,11 +624,6 @@ impl HttpProxyService {
|
||||
|
||||
if let Some(headers) = upstream_req.headers_mut() {
|
||||
*headers = upstream_headers;
|
||||
if let Ok(host_val) = hyper::header::HeaderValue::from_str(
|
||||
&format!("{}:{}", upstream.host, upstream.port)
|
||||
) {
|
||||
headers.insert(hyper::header::HOST, host_val);
|
||||
}
|
||||
}
|
||||
|
||||
// Wrap the request body in CountingBody to track bytes_in
|
||||
@@ -431,7 +642,6 @@ impl HttpProxyService {
|
||||
Ok(resp) => resp,
|
||||
Err(e) => {
|
||||
error!("HTTP/2 upstream request failed: {}", e);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend H2 request failed"));
|
||||
}
|
||||
};
|
||||
@@ -442,8 +652,7 @@ impl HttpProxyService {
|
||||
/// Build the client-facing response from an upstream response, streaming the body.
|
||||
///
|
||||
/// The response body is wrapped in a `CountingBody` that counts bytes as they
|
||||
/// stream from upstream to client. When the body is fully consumed (or dropped),
|
||||
/// it reports byte counts to the metrics collector and calls `connection_closed`.
|
||||
/// stream from upstream to client.
|
||||
async fn build_streaming_response(
|
||||
&self,
|
||||
upstream_response: Response<Incoming>,
|
||||
@@ -472,11 +681,6 @@ impl HttpProxyService {
|
||||
Direction::Out,
|
||||
);
|
||||
|
||||
// Close the connection metric now — the HTTP request/response cycle is done
|
||||
// from the proxy's perspective once we hand the streaming body to hyper.
|
||||
// Bytes will still be counted as they flow.
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
|
||||
let body: BoxBody<Bytes, hyper::Error> = BoxBody::new(counting_body);
|
||||
|
||||
Ok(response.body(body).unwrap())
|
||||
@@ -508,7 +712,6 @@ impl HttpProxyService {
|
||||
.unwrap_or("");
|
||||
if !allowed_origins.is_empty() && !allowed_origins.iter().any(|o| o == "*" || o == origin) {
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::FORBIDDEN, "Origin not allowed"));
|
||||
}
|
||||
}
|
||||
@@ -516,26 +719,45 @@ impl HttpProxyService {
|
||||
|
||||
info!("WebSocket upgrade from {} -> {}:{}", peer_addr, upstream.host, upstream.port);
|
||||
|
||||
// Connect to upstream with timeout
|
||||
let mut upstream_stream = match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
TcpStream::connect(format!("{}:{}", upstream.host, upstream.port)),
|
||||
).await {
|
||||
Ok(Ok(s)) => s,
|
||||
Ok(Err(e)) => {
|
||||
error!("WebSocket: failed to connect upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend unavailable"));
|
||||
// Connect to upstream with timeout (TLS if upstream.use_tls is set)
|
||||
let mut upstream_stream: BackendStream = if upstream.use_tls {
|
||||
match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
connect_tls_backend(&upstream.host, upstream.port),
|
||||
).await {
|
||||
Ok(Ok(tls)) => BackendStream::Tls(tls),
|
||||
Ok(Err(e)) => {
|
||||
error!("WebSocket: failed TLS connect upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend TLS unavailable"));
|
||||
}
|
||||
Err(_) => {
|
||||
error!("WebSocket: upstream TLS connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend TLS connect timeout"));
|
||||
}
|
||||
}
|
||||
Err(_) => {
|
||||
error!("WebSocket: upstream connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend connect timeout"));
|
||||
} else {
|
||||
match tokio::time::timeout(
|
||||
self.connect_timeout,
|
||||
TcpStream::connect(format!("{}:{}", upstream.host, upstream.port)),
|
||||
).await {
|
||||
Ok(Ok(s)) => {
|
||||
s.set_nodelay(true).ok();
|
||||
BackendStream::Plain(s)
|
||||
}
|
||||
Ok(Err(e)) => {
|
||||
error!("WebSocket: failed to connect upstream {}:{}: {}", upstream.host, upstream.port, e);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend unavailable"));
|
||||
}
|
||||
Err(_) => {
|
||||
error!("WebSocket: upstream connect timeout for {}:{}", upstream.host, upstream.port);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
return Ok(error_response(StatusCode::GATEWAY_TIMEOUT, "Backend connect timeout"));
|
||||
}
|
||||
}
|
||||
};
|
||||
upstream_stream.set_nodelay(true).ok();
|
||||
|
||||
let path = req.uri().path().to_string();
|
||||
let upstream_path = {
|
||||
@@ -562,13 +784,44 @@ impl HttpProxyService {
|
||||
parts.method, upstream_path
|
||||
);
|
||||
|
||||
let upstream_host = format!("{}:{}", upstream.host, upstream.port);
|
||||
// Copy all original headers (preserving the client's Host header).
|
||||
// Skip X-Forwarded-* since we set them ourselves below.
|
||||
for (name, value) in parts.headers.iter() {
|
||||
if name == hyper::header::HOST {
|
||||
raw_request.push_str(&format!("host: {}\r\n", upstream_host));
|
||||
} else {
|
||||
raw_request.push_str(&format!("{}: {}\r\n", name, value.to_str().unwrap_or("")));
|
||||
let name_str = name.as_str();
|
||||
if name_str == "x-forwarded-for"
|
||||
|| name_str == "x-forwarded-host"
|
||||
|| name_str == "x-forwarded-proto"
|
||||
{
|
||||
continue;
|
||||
}
|
||||
raw_request.push_str(&format!("{}: {}\r\n", name, value.to_str().unwrap_or("")));
|
||||
}
|
||||
|
||||
// Add standard reverse-proxy headers (X-Forwarded-*)
|
||||
{
|
||||
let original_host = parts.headers.get("host")
|
||||
.and_then(|h| h.to_str().ok())
|
||||
.unwrap_or("");
|
||||
let forwarded_proto = if route.action.tls.as_ref()
|
||||
.map(|t| matches!(t.mode,
|
||||
rustproxy_config::TlsMode::Terminate
|
||||
| rustproxy_config::TlsMode::TerminateAndReencrypt))
|
||||
.unwrap_or(false)
|
||||
{
|
||||
"https"
|
||||
} else {
|
||||
"http"
|
||||
};
|
||||
|
||||
let client_ip = peer_addr.ip().to_string();
|
||||
let xff_value = if let Some(existing) = parts.headers.get("x-forwarded-for") {
|
||||
format!("{}, {}", existing.to_str().unwrap_or(""), client_ip)
|
||||
} else {
|
||||
client_ip
|
||||
};
|
||||
raw_request.push_str(&format!("x-forwarded-for: {}\r\n", xff_value));
|
||||
raw_request.push_str(&format!("x-forwarded-host: {}\r\n", original_host));
|
||||
raw_request.push_str(&format!("x-forwarded-proto: {}\r\n", forwarded_proto));
|
||||
}
|
||||
|
||||
if let Some(ref route_headers) = route.headers {
|
||||
@@ -593,7 +846,6 @@ impl HttpProxyService {
|
||||
if let Err(e) = upstream_stream.write_all(raw_request.as_bytes()).await {
|
||||
error!("WebSocket: failed to send upgrade request to upstream: {}", e);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend write failed"));
|
||||
}
|
||||
|
||||
@@ -604,7 +856,6 @@ impl HttpProxyService {
|
||||
Ok(0) => {
|
||||
error!("WebSocket: upstream closed before completing handshake");
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend closed"));
|
||||
}
|
||||
Ok(_) => {
|
||||
@@ -618,14 +869,12 @@ impl HttpProxyService {
|
||||
if response_buf.len() > 8192 {
|
||||
error!("WebSocket: upstream response headers too large");
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend response too large"));
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
error!("WebSocket: failed to read upstream response: {}", e);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(StatusCode::BAD_GATEWAY, "Backend read failed"));
|
||||
}
|
||||
}
|
||||
@@ -643,7 +892,6 @@ impl HttpProxyService {
|
||||
if status_code != 101 {
|
||||
debug!("WebSocket: upstream rejected upgrade with status {}", status_code);
|
||||
self.upstream_selector.connection_ended(upstream_key);
|
||||
self.metrics.connection_closed(route_id, Some(source_ip));
|
||||
return Ok(error_response(
|
||||
StatusCode::from_u16(status_code).unwrap_or(StatusCode::BAD_GATEWAY),
|
||||
"WebSocket upgrade rejected by backend",
|
||||
@@ -687,9 +935,6 @@ impl HttpProxyService {
|
||||
Err(e) => {
|
||||
debug!("WebSocket: client upgrade failed: {}", e);
|
||||
upstream_selector.connection_ended(&upstream_key_owned);
|
||||
if let Some(ref rid) = route_id_owned {
|
||||
metrics.connection_closed(Some(rid.as_str()), Some(&source_ip_owned));
|
||||
}
|
||||
return;
|
||||
}
|
||||
};
|
||||
@@ -794,7 +1039,6 @@ impl HttpProxyService {
|
||||
upstream_selector.connection_ended(&upstream_key_owned);
|
||||
if let Some(ref rid) = route_id_owned {
|
||||
metrics.record_bytes(bytes_in, bytes_out, Some(rid.as_str()), Some(&source_ip_owned));
|
||||
metrics.connection_closed(Some(rid.as_str()), Some(&source_ip_owned));
|
||||
}
|
||||
});
|
||||
|
||||
@@ -824,8 +1068,8 @@ impl HttpProxyService {
|
||||
response.body(BoxBody::new(body)).unwrap()
|
||||
}
|
||||
|
||||
/// Apply URL rewriting rules from route config.
|
||||
fn apply_url_rewrite(path: &str, route: &rustproxy_config::RouteConfig) -> String {
|
||||
/// Apply URL rewriting rules from route config, using the compiled regex cache.
|
||||
fn apply_url_rewrite(&self, path: &str, route: &rustproxy_config::RouteConfig) -> String {
|
||||
let rewrite = match route.action.advanced.as_ref()
|
||||
.and_then(|a| a.url_rewrite.as_ref())
|
||||
{
|
||||
@@ -844,10 +1088,20 @@ impl HttpProxyService {
|
||||
(path.to_string(), String::new())
|
||||
};
|
||||
|
||||
// Look up or compile the regex, caching for future requests
|
||||
let cached = self.regex_cache.get(&rewrite.pattern);
|
||||
if let Some(re) = cached {
|
||||
let result = re.replace_all(&subject, rewrite.target.as_str());
|
||||
return format!("{}{}", result, suffix);
|
||||
}
|
||||
|
||||
// Not cached — compile and insert
|
||||
match Regex::new(&rewrite.pattern) {
|
||||
Ok(re) => {
|
||||
let result = re.replace_all(&subject, rewrite.target.as_str());
|
||||
format!("{}{}", result, suffix)
|
||||
let out = format!("{}{}", result, suffix);
|
||||
self.regex_cache.insert(rewrite.pattern.clone(), re);
|
||||
out
|
||||
}
|
||||
Err(e) => {
|
||||
warn!("Invalid URL rewrite pattern '{}': {}", rewrite.pattern, e);
|
||||
@@ -974,6 +1228,9 @@ impl Default for HttpProxyService {
|
||||
metrics: Arc::new(MetricsCollector::new()),
|
||||
upstream_selector: UpstreamSelector::new(),
|
||||
connect_timeout: DEFAULT_CONNECT_TIMEOUT,
|
||||
route_rate_limiters: Arc::new(DashMap::new()),
|
||||
request_counter: AtomicU64::new(0),
|
||||
regex_cache: DashMap::new(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -115,10 +115,18 @@ impl UpstreamSelector {
|
||||
/// Record that a connection to the given host has ended.
|
||||
pub fn connection_ended(&self, host: &str) {
|
||||
if let Some(counter) = self.active_connections.get(host) {
|
||||
let prev = counter.value().fetch_sub(1, Ordering::Relaxed);
|
||||
// Guard against underflow (shouldn't happen, but be safe)
|
||||
let prev = counter.value().load(Ordering::Relaxed);
|
||||
if prev == 0 {
|
||||
counter.value().store(0, Ordering::Relaxed);
|
||||
// Already at zero — just clean up the entry
|
||||
drop(counter);
|
||||
self.active_connections.remove(host);
|
||||
return;
|
||||
}
|
||||
counter.value().fetch_sub(1, Ordering::Relaxed);
|
||||
// Clean up zero-count entries to prevent memory growth
|
||||
if prev <= 1 {
|
||||
drop(counter);
|
||||
self.active_connections.remove(host);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -204,6 +212,31 @@ mod tests {
|
||||
assert_eq!(r4.host, "a");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_connection_tracking_cleanup() {
|
||||
let selector = UpstreamSelector::new();
|
||||
|
||||
selector.connection_started("backend:8080");
|
||||
selector.connection_started("backend:8080");
|
||||
assert_eq!(
|
||||
selector.active_connections.get("backend:8080").unwrap().load(Ordering::Relaxed),
|
||||
2
|
||||
);
|
||||
|
||||
selector.connection_ended("backend:8080");
|
||||
assert_eq!(
|
||||
selector.active_connections.get("backend:8080").unwrap().load(Ordering::Relaxed),
|
||||
1
|
||||
);
|
||||
|
||||
// Last connection ends — entry should be removed entirely
|
||||
selector.connection_ended("backend:8080");
|
||||
assert!(selector.active_connections.get("backend:8080").is_none());
|
||||
|
||||
// Ending on a non-existent key should not panic
|
||||
selector.connection_ended("nonexistent:9999");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_ip_hash_consistent() {
|
||||
let selector = UpstreamSelector::new();
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
use dashmap::DashMap;
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::collections::HashSet;
|
||||
use std::sync::atomic::{AtomicU64, Ordering};
|
||||
use std::sync::Mutex;
|
||||
|
||||
@@ -196,6 +197,12 @@ impl MetricsCollector {
|
||||
if val <= 1 {
|
||||
drop(counter);
|
||||
self.ip_connections.remove(ip);
|
||||
// Evict all per-IP tracking data for this IP
|
||||
self.ip_total_connections.remove(ip);
|
||||
self.ip_bytes_in.remove(ip);
|
||||
self.ip_bytes_out.remove(ip);
|
||||
self.ip_pending_tp.remove(ip);
|
||||
self.ip_throughput.remove(ip);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -342,6 +349,17 @@ impl MetricsCollector {
|
||||
}
|
||||
}
|
||||
|
||||
/// Remove per-route metrics for route IDs that are no longer active.
|
||||
/// Call this after `update_routes()` to prune stale entries.
|
||||
pub fn retain_routes(&self, active_route_ids: &HashSet<String>) {
|
||||
self.route_connections.retain(|k, _| active_route_ids.contains(k));
|
||||
self.route_total_connections.retain(|k, _| active_route_ids.contains(k));
|
||||
self.route_bytes_in.retain(|k, _| active_route_ids.contains(k));
|
||||
self.route_bytes_out.retain(|k, _| active_route_ids.contains(k));
|
||||
self.route_pending_tp.retain(|k, _| active_route_ids.contains(k));
|
||||
self.route_throughput.retain(|k, _| active_route_ids.contains(k));
|
||||
}
|
||||
|
||||
/// Get current active connection count.
|
||||
pub fn active_connections(&self) -> u64 {
|
||||
self.active_connections.load(Ordering::Relaxed)
|
||||
@@ -633,6 +651,42 @@ mod tests {
|
||||
assert!(collector.ip_connections.get("1.2.3.4").is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_per_ip_full_eviction_on_last_close() {
|
||||
let collector = MetricsCollector::with_retention(60);
|
||||
|
||||
// Open connections from two IPs
|
||||
collector.connection_opened(Some("route-a"), Some("10.0.0.1"));
|
||||
collector.connection_opened(Some("route-a"), Some("10.0.0.1"));
|
||||
collector.connection_opened(Some("route-b"), Some("10.0.0.2"));
|
||||
|
||||
// Record bytes to populate per-IP DashMaps
|
||||
collector.record_bytes(100, 200, Some("route-a"), Some("10.0.0.1"));
|
||||
collector.record_bytes(300, 400, Some("route-b"), Some("10.0.0.2"));
|
||||
collector.sample_all();
|
||||
|
||||
// Verify per-IP data exists
|
||||
assert!(collector.ip_total_connections.get("10.0.0.1").is_some());
|
||||
assert!(collector.ip_bytes_in.get("10.0.0.1").is_some());
|
||||
assert!(collector.ip_throughput.get("10.0.0.1").is_some());
|
||||
|
||||
// Close all connections for 10.0.0.1
|
||||
collector.connection_closed(Some("route-a"), Some("10.0.0.1"));
|
||||
collector.connection_closed(Some("route-a"), Some("10.0.0.1"));
|
||||
|
||||
// All per-IP data for 10.0.0.1 should be evicted
|
||||
assert!(collector.ip_connections.get("10.0.0.1").is_none());
|
||||
assert!(collector.ip_total_connections.get("10.0.0.1").is_none());
|
||||
assert!(collector.ip_bytes_in.get("10.0.0.1").is_none());
|
||||
assert!(collector.ip_bytes_out.get("10.0.0.1").is_none());
|
||||
assert!(collector.ip_pending_tp.get("10.0.0.1").is_none());
|
||||
assert!(collector.ip_throughput.get("10.0.0.1").is_none());
|
||||
|
||||
// 10.0.0.2 should still have data
|
||||
assert!(collector.ip_connections.get("10.0.0.2").is_some());
|
||||
assert!(collector.ip_total_connections.get("10.0.0.2").is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_http_request_tracking() {
|
||||
let collector = MetricsCollector::with_retention(60);
|
||||
@@ -650,6 +704,35 @@ mod tests {
|
||||
assert_eq!(snapshot.http_requests_per_sec, 3);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_retain_routes_prunes_stale() {
|
||||
let collector = MetricsCollector::with_retention(60);
|
||||
|
||||
// Create metrics for 3 routes
|
||||
collector.connection_opened(Some("route-a"), None);
|
||||
collector.connection_opened(Some("route-b"), None);
|
||||
collector.connection_opened(Some("route-c"), None);
|
||||
collector.record_bytes(100, 200, Some("route-a"), None);
|
||||
collector.record_bytes(100, 200, Some("route-b"), None);
|
||||
collector.record_bytes(100, 200, Some("route-c"), None);
|
||||
collector.sample_all();
|
||||
|
||||
// Now "route-b" is removed from config
|
||||
let active = HashSet::from(["route-a".to_string(), "route-c".to_string()]);
|
||||
collector.retain_routes(&active);
|
||||
|
||||
// route-b entries should be gone
|
||||
assert!(collector.route_connections.get("route-b").is_none());
|
||||
assert!(collector.route_total_connections.get("route-b").is_none());
|
||||
assert!(collector.route_bytes_in.get("route-b").is_none());
|
||||
assert!(collector.route_bytes_out.get("route-b").is_none());
|
||||
assert!(collector.route_throughput.get("route-b").is_none());
|
||||
|
||||
// route-a and route-c should still exist
|
||||
assert!(collector.route_total_connections.get("route-a").is_some());
|
||||
assert!(collector.route_total_connections.get("route-c").is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_throughput_history_in_snapshot() {
|
||||
let collector = MetricsCollector::with_retention(60);
|
||||
|
||||
@@ -95,10 +95,11 @@ impl ConnectionTracker {
|
||||
pub fn connection_closed(&self, ip: &IpAddr) {
|
||||
if let Some(counter) = self.active.get(ip) {
|
||||
let prev = counter.value().fetch_sub(1, Ordering::Relaxed);
|
||||
// Clean up zero entries
|
||||
// Clean up zero entries to prevent memory growth
|
||||
if prev <= 1 {
|
||||
drop(counter);
|
||||
self.active.remove(ip);
|
||||
self.timestamps.remove(ip);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -205,10 +206,13 @@ impl ConnectionTracker {
|
||||
let zombies = tracker.scan_zombies();
|
||||
if !zombies.is_empty() {
|
||||
warn!(
|
||||
"Detected {} zombie connection(s): {:?}",
|
||||
"Cleaning up {} zombie connection(s): {:?}",
|
||||
zombies.len(),
|
||||
zombies
|
||||
);
|
||||
for id in &zombies {
|
||||
tracker.unregister_connection(*id);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -304,6 +308,30 @@ mod tests {
|
||||
assert_eq!(tracker.tracked_ips(), 1);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_timestamps_cleaned_on_last_close() {
|
||||
let tracker = ConnectionTracker::new(None, Some(100));
|
||||
let ip: IpAddr = "10.0.0.1".parse().unwrap();
|
||||
|
||||
// try_accept populates the timestamps map (when rate limiting is enabled)
|
||||
assert!(tracker.try_accept(&ip));
|
||||
tracker.connection_opened(&ip);
|
||||
assert!(tracker.try_accept(&ip));
|
||||
tracker.connection_opened(&ip);
|
||||
|
||||
// Timestamps should exist
|
||||
assert!(tracker.timestamps.get(&ip).is_some());
|
||||
|
||||
// Close one connection — timestamps should still exist
|
||||
tracker.connection_closed(&ip);
|
||||
assert!(tracker.timestamps.get(&ip).is_some());
|
||||
|
||||
// Close last connection — timestamps should be cleaned up
|
||||
tracker.connection_closed(&ip);
|
||||
assert!(tracker.timestamps.get(&ip).is_none());
|
||||
assert!(tracker.active.get(&ip).is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_register_unregister_connection() {
|
||||
let tracker = ConnectionTracker::new(None, None);
|
||||
|
||||
@@ -2,6 +2,7 @@ use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
use arc_swap::ArcSwap;
|
||||
use tokio::net::TcpListener;
|
||||
use tokio_rustls::TlsAcceptor;
|
||||
use tokio_util::sync::CancellationToken;
|
||||
use tracing::{info, error, debug, warn};
|
||||
use thiserror::Error;
|
||||
@@ -21,7 +22,6 @@ struct ConnectionGuard {
|
||||
metrics: Arc<MetricsCollector>,
|
||||
route_id: Option<String>,
|
||||
source_ip: Option<String>,
|
||||
disarmed: bool,
|
||||
}
|
||||
|
||||
impl ConnectionGuard {
|
||||
@@ -30,22 +30,13 @@ impl ConnectionGuard {
|
||||
metrics,
|
||||
route_id: route_id.map(|s| s.to_string()),
|
||||
source_ip: source_ip.map(|s| s.to_string()),
|
||||
disarmed: false,
|
||||
}
|
||||
}
|
||||
|
||||
/// Disarm the guard — prevents the Drop from running.
|
||||
/// Use when handing off to a path that manages its own cleanup (e.g., HTTP proxy).
|
||||
fn disarm(mut self) {
|
||||
self.disarmed = true;
|
||||
}
|
||||
}
|
||||
|
||||
impl Drop for ConnectionGuard {
|
||||
fn drop(&mut self) {
|
||||
if !self.disarmed {
|
||||
self.metrics.connection_closed(self.route_id.as_deref(), self.source_ip.as_deref());
|
||||
}
|
||||
self.metrics.connection_closed(self.route_id.as_deref(), self.source_ip.as_deref());
|
||||
}
|
||||
}
|
||||
|
||||
@@ -93,6 +84,9 @@ pub struct ConnectionConfig {
|
||||
pub accept_proxy_protocol: bool,
|
||||
/// Whether to send PROXY protocol
|
||||
pub send_proxy_protocol: bool,
|
||||
/// Trusted IPs that may send PROXY protocol headers.
|
||||
/// When non-empty, only connections from these IPs will have PROXY headers parsed.
|
||||
pub proxy_ips: Vec<std::net::IpAddr>,
|
||||
}
|
||||
|
||||
impl Default for ConnectionConfig {
|
||||
@@ -110,6 +104,7 @@ impl Default for ConnectionConfig {
|
||||
extended_keep_alive_lifetime_ms: None,
|
||||
accept_proxy_protocol: false,
|
||||
send_proxy_protocol: false,
|
||||
proxy_ips: Vec::new(),
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -122,8 +117,10 @@ pub struct TcpListenerManager {
|
||||
route_manager: Arc<ArcSwap<RouteManager>>,
|
||||
/// Shared metrics collector
|
||||
metrics: Arc<MetricsCollector>,
|
||||
/// TLS acceptors indexed by domain (ArcSwap for hot-reload visibility in accept loops)
|
||||
/// Raw PEM TLS configs indexed by domain (kept for fallback with custom TLS versions)
|
||||
tls_configs: Arc<ArcSwap<HashMap<String, TlsCertConfig>>>,
|
||||
/// Shared TLS acceptor (pre-parsed certs + session cache). None when no certs configured.
|
||||
shared_tls_acceptor: Arc<ArcSwap<Option<TlsAcceptor>>>,
|
||||
/// HTTP proxy service for HTTP-level forwarding
|
||||
http_proxy: Arc<HttpProxyService>,
|
||||
/// Connection configuration
|
||||
@@ -154,6 +151,7 @@ impl TcpListenerManager {
|
||||
route_manager: Arc::new(ArcSwap::from(route_manager)),
|
||||
metrics,
|
||||
tls_configs: Arc::new(ArcSwap::from(Arc::new(HashMap::new()))),
|
||||
shared_tls_acceptor: Arc::new(ArcSwap::from(Arc::new(None))),
|
||||
http_proxy,
|
||||
conn_config: Arc::new(conn_config),
|
||||
conn_tracker,
|
||||
@@ -179,6 +177,7 @@ impl TcpListenerManager {
|
||||
route_manager: Arc::new(ArcSwap::from(route_manager)),
|
||||
metrics,
|
||||
tls_configs: Arc::new(ArcSwap::from(Arc::new(HashMap::new()))),
|
||||
shared_tls_acceptor: Arc::new(ArcSwap::from(Arc::new(None))),
|
||||
http_proxy,
|
||||
conn_config: Arc::new(conn_config),
|
||||
conn_tracker,
|
||||
@@ -197,8 +196,26 @@ impl TcpListenerManager {
|
||||
}
|
||||
|
||||
/// Set TLS certificate configurations.
|
||||
/// Builds a shared TLS acceptor with pre-parsed certs and session resumption support.
|
||||
/// Uses ArcSwap so running accept loops immediately see the new certs.
|
||||
pub fn set_tls_configs(&self, configs: HashMap<String, TlsCertConfig>) {
|
||||
if !configs.is_empty() {
|
||||
match tls_handler::CertResolver::new(&configs)
|
||||
.and_then(tls_handler::build_shared_tls_acceptor)
|
||||
{
|
||||
Ok(acceptor) => {
|
||||
info!("Built shared TLS acceptor for {} domain(s)", configs.len());
|
||||
self.shared_tls_acceptor.store(Arc::new(Some(acceptor)));
|
||||
}
|
||||
Err(e) => {
|
||||
warn!("Failed to build shared TLS acceptor: {}, falling back to per-connection", e);
|
||||
self.shared_tls_acceptor.store(Arc::new(None));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
self.shared_tls_acceptor.store(Arc::new(None));
|
||||
}
|
||||
// Keep raw PEM configs for fallback (routes with custom TLS versions)
|
||||
self.tls_configs.store(Arc::new(configs));
|
||||
}
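// Illustrative sketch only (not part of the diff), assuming the `arc_swap` crate the
// file above already imports: `store` publishes a new value atomically and every later
// `load_full` in an accept loop sees it, with no lock held across await points.
use std::collections::HashMap;
use std::sync::Arc;

use arc_swap::ArcSwap;

fn main() {
    let configs: ArcSwap<HashMap<String, String>> = ArcSwap::from(Arc::new(HashMap::new()));

    // A connection handled before the reload sees no certs.
    assert!(configs.load_full().is_empty());

    // Operator installs a cert; the swap is atomic.
    let mut updated = HashMap::new();
    updated.insert("example.com".to_string(), "-----BEGIN CERTIFICATE-----".to_string());
    configs.store(Arc::new(updated));

    // The next connection immediately sees the new map.
    assert_eq!(configs.load_full().len(), 1);
}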
|
||||
|
||||
@@ -224,6 +241,7 @@ impl TcpListenerManager {
|
||||
let route_manager_swap = Arc::clone(&self.route_manager);
|
||||
let metrics = Arc::clone(&self.metrics);
|
||||
let tls_configs = Arc::clone(&self.tls_configs);
|
||||
let shared_tls_acceptor = Arc::clone(&self.shared_tls_acceptor);
|
||||
let http_proxy = Arc::clone(&self.http_proxy);
|
||||
let conn_config = Arc::clone(&self.conn_config);
|
||||
let conn_tracker = Arc::clone(&self.conn_tracker);
|
||||
@@ -233,7 +251,7 @@ impl TcpListenerManager {
|
||||
let handle = tokio::spawn(async move {
|
||||
Self::accept_loop(
|
||||
listener, port, route_manager_swap, metrics, tls_configs,
|
||||
http_proxy, conn_config, conn_tracker, cancel, relay,
|
||||
shared_tls_acceptor, http_proxy, conn_config, conn_tracker, cancel, relay,
|
||||
).await;
|
||||
});
|
||||
|
||||
@@ -322,6 +340,7 @@ impl TcpListenerManager {
|
||||
route_manager_swap: Arc<ArcSwap<RouteManager>>,
|
||||
metrics: Arc<MetricsCollector>,
|
||||
tls_configs: Arc<ArcSwap<HashMap<String, TlsCertConfig>>>,
|
||||
shared_tls_acceptor: Arc<ArcSwap<Option<TlsAcceptor>>>,
|
||||
http_proxy: Arc<HttpProxyService>,
|
||||
conn_config: Arc<ConnectionConfig>,
|
||||
conn_tracker: Arc<ConnectionTracker>,
|
||||
@@ -353,6 +372,8 @@ impl TcpListenerManager {
|
||||
let m = Arc::clone(&metrics);
|
||||
// Load the latest TLS configs from ArcSwap on each connection
|
||||
let tc = tls_configs.load_full();
|
||||
// Load the latest shared TLS acceptor from ArcSwap
|
||||
let sa = shared_tls_acceptor.load_full();
|
||||
let hp = Arc::clone(&http_proxy);
|
||||
let cc = Arc::clone(&conn_config);
|
||||
let ct = Arc::clone(&conn_tracker);
|
||||
@@ -362,7 +383,7 @@ impl TcpListenerManager {
|
||||
|
||||
tokio::spawn(async move {
|
||||
let result = Self::handle_connection(
|
||||
stream, port, peer_addr, rm, m, tc, hp, cc, cn, sr,
|
||||
stream, port, peer_addr, rm, m, tc, sa, hp, cc, cn, sr,
|
||||
).await;
|
||||
if let Err(e) = result {
|
||||
debug!("Connection error from {}: {}", peer_addr, e);
|
||||
@@ -388,6 +409,7 @@ impl TcpListenerManager {
|
||||
route_manager: Arc<RouteManager>,
|
||||
metrics: Arc<MetricsCollector>,
|
||||
tls_configs: Arc<HashMap<String, TlsCertConfig>>,
|
||||
shared_tls_acceptor: Arc<Option<TlsAcceptor>>,
|
||||
http_proxy: Arc<HttpProxyService>,
|
||||
conn_config: Arc<ConnectionConfig>,
|
||||
cancel: CancellationToken,
|
||||
@@ -397,7 +419,41 @@ impl TcpListenerManager {
|
||||
|
||||
stream.set_nodelay(true)?;
|
||||
|
||||
// Extract source IP once for all metric calls
|
||||
// --- PROXY protocol: must happen BEFORE ip_str and fast path ---
|
||||
// Only parse PROXY headers from trusted proxy IPs (security).
|
||||
// Non-proxy connections skip the peek entirely (no latency cost).
|
||||
let mut effective_peer_addr = peer_addr;
|
||||
if !conn_config.proxy_ips.is_empty() && conn_config.proxy_ips.contains(&peer_addr.ip()) {
|
||||
// Trusted proxy IP — peek for PROXY protocol header
|
||||
let mut proxy_peek = vec![0u8; 256];
|
||||
let pn = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.initial_data_timeout_ms),
|
||||
stream.peek(&mut proxy_peek),
|
||||
).await {
|
||||
Ok(Ok(n)) => n,
|
||||
Ok(Err(e)) => return Err(e.into()),
|
||||
Err(_) => return Err("Initial data timeout (proxy protocol peek)".into()),
|
||||
};
|
||||
|
||||
if pn > 0 && crate::proxy_protocol::is_proxy_protocol_v1(&proxy_peek[..pn]) {
|
||||
match crate::proxy_protocol::parse_v1(&proxy_peek[..pn]) {
|
||||
Ok((header, consumed)) => {
|
||||
debug!("PROXY protocol: real client {} -> {}", header.source_addr, header.dest_addr);
|
||||
effective_peer_addr = header.source_addr;
|
||||
// Consume the proxy protocol header bytes
|
||||
let mut discard = vec![0u8; consumed];
|
||||
stream.read_exact(&mut discard).await?;
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Failed to parse PROXY protocol header: {}", e);
|
||||
// Not a PROXY protocol header, continue normally
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
let peer_addr = effective_peer_addr;
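// For reference, a PROXY protocol v1 line as it appears on the wire (HAProxy spec),
// source address and port first, e.g.:
//   "PROXY TCP4 203.0.113.7 192.0.2.10 51234 443\r\n"
// Because only peers listed in proxy_ips reach the parser above, an untrusted client
// sending such a line cannot spoof 203.0.113.7 as its source address.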
|
||||
|
||||
// Extract source IP once for all metric calls (reflects real client IP after PROXY parsing)
|
||||
let ip_str = peer_addr.ip().to_string();
|
||||
|
||||
// === Fast path: try port-only matching before peeking at data ===
|
||||
@@ -418,6 +474,7 @@ impl TcpListenerManager {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
if let Some(quick_match) = route_manager.find_route(&quick_ctx) {
|
||||
@@ -529,37 +586,6 @@ impl TcpListenerManager {
|
||||
}
|
||||
// === End fast path ===
|
||||
|
||||
// Handle PROXY protocol if configured
|
||||
let mut effective_peer_addr = peer_addr;
|
||||
if conn_config.accept_proxy_protocol {
|
||||
let mut proxy_peek = vec![0u8; 256];
|
||||
let pn = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.initial_data_timeout_ms),
|
||||
stream.peek(&mut proxy_peek),
|
||||
).await {
|
||||
Ok(Ok(n)) => n,
|
||||
Ok(Err(e)) => return Err(e.into()),
|
||||
Err(_) => return Err("Initial data timeout (proxy protocol peek)".into()),
|
||||
};
|
||||
|
||||
if pn > 0 && crate::proxy_protocol::is_proxy_protocol_v1(&proxy_peek[..pn]) {
|
||||
match crate::proxy_protocol::parse_v1(&proxy_peek[..pn]) {
|
||||
Ok((header, consumed)) => {
|
||||
debug!("PROXY protocol: real client {} -> {}", header.source_addr, header.dest_addr);
|
||||
effective_peer_addr = header.source_addr;
|
||||
// Consume the proxy protocol header bytes
|
||||
let mut discard = vec![0u8; consumed];
|
||||
stream.read_exact(&mut discard).await?;
|
||||
}
|
||||
Err(e) => {
|
||||
debug!("Failed to parse PROXY protocol header: {}", e);
|
||||
// Not a PROXY protocol header, continue normally
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
let peer_addr = effective_peer_addr;
|
||||
|
||||
// Peek at initial bytes with timeout
|
||||
let mut peek_buf = vec![0u8; 4096];
|
||||
let n = match tokio::time::timeout(
|
||||
@@ -622,6 +648,8 @@ impl TcpListenerManager {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls,
|
||||
// For TLS connections, protocol is unknown until after termination
|
||||
protocol: if is_http { Some("http") } else if !is_tls { Some("tcp") } else { None },
|
||||
};
|
||||
|
||||
let route_match = route_manager.find_route(&ctx);
|
||||
@@ -777,13 +805,9 @@ impl TcpListenerManager {
|
||||
Ok(())
|
||||
}
|
||||
Some(rustproxy_config::TlsMode::Terminate) => {
|
||||
let tls_config = Self::find_tls_config(&domain, &tls_configs)?;
|
||||
|
||||
// TLS accept with timeout, applying route-level TLS settings
|
||||
// Use shared acceptor (session resumption) or fall back to per-connection
|
||||
let route_tls = route_match.route.action.tls.as_ref();
|
||||
let acceptor = tls_handler::build_tls_acceptor_with_config(
|
||||
&tls_config.cert_pem, &tls_config.key_pem, route_tls,
|
||||
)?;
|
||||
let acceptor = Self::get_tls_acceptor(&domain, &tls_configs, &*shared_tls_acceptor, route_tls)?;
|
||||
let tls_stream = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.initial_data_timeout_ms),
|
||||
tls_handler::accept_tls(stream, &acceptor),
|
||||
@@ -803,13 +827,20 @@ impl TcpListenerManager {
|
||||
}
|
||||
};
|
||||
|
||||
// Check protocol restriction from route config
|
||||
if let Some(ref required_protocol) = route_match.route.route_match.protocol {
|
||||
let detected = if peeked { "http" } else { "tcp" };
|
||||
if required_protocol != detected {
|
||||
debug!("Protocol mismatch: route requires '{}', got '{}'", required_protocol, detected);
|
||||
return Err("Protocol mismatch".into());
|
||||
}
|
||||
}
|
||||
|
||||
if peeked {
|
||||
debug!(
|
||||
"TLS Terminate + HTTP: {} -> {}:{} (domain: {:?})",
|
||||
peer_addr, target_host, target_port, domain
|
||||
);
|
||||
// HTTP proxy manages its own per-request metrics — disarm TCP-level guard
|
||||
_conn_guard.disarm();
|
||||
http_proxy.handle_io(buf_stream, peer_addr, port, cancel.clone()).await;
|
||||
} else {
|
||||
debug!(
|
||||
@@ -843,18 +874,63 @@ impl TcpListenerManager {
|
||||
Ok(())
|
||||
}
|
||||
Some(rustproxy_config::TlsMode::TerminateAndReencrypt) => {
|
||||
// Inline TLS accept + HTTP detection (same pattern as Terminate mode)
|
||||
let route_tls = route_match.route.action.tls.as_ref();
|
||||
Self::handle_tls_terminate_reencrypt(
|
||||
stream, n, &domain, &target_host, target_port,
|
||||
peer_addr, &tls_configs, Arc::clone(&metrics), route_id, &conn_config, route_tls,
|
||||
).await
|
||||
let acceptor = Self::get_tls_acceptor(&domain, &tls_configs, &*shared_tls_acceptor, route_tls)?;
|
||||
let tls_stream = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.initial_data_timeout_ms),
|
||||
tls_handler::accept_tls(stream, &acceptor),
|
||||
).await {
|
||||
Ok(Ok(s)) => s,
|
||||
Ok(Err(e)) => return Err(e),
|
||||
Err(_) => return Err("TLS handshake timeout".into()),
|
||||
};
|
||||
|
||||
// Peek at decrypted data to detect protocol
|
||||
let mut buf_stream = tokio::io::BufReader::new(tls_stream);
|
||||
let is_http_data = {
|
||||
use tokio::io::AsyncBufReadExt;
|
||||
match buf_stream.fill_buf().await {
|
||||
Ok(data) => sni_parser::is_http(data),
|
||||
Err(_) => false,
|
||||
}
|
||||
};
|
||||
|
||||
// Check protocol restriction from route config
|
||||
if let Some(ref required_protocol) = route_match.route.route_match.protocol {
|
||||
let detected = if is_http_data { "http" } else { "tcp" };
|
||||
if required_protocol != detected {
|
||||
debug!("Protocol mismatch: route requires '{}', got '{}'", required_protocol, detected);
|
||||
return Err("Protocol mismatch".into());
|
||||
}
|
||||
}
|
||||
|
||||
if is_http_data {
|
||||
// HTTP: full per-request routing via HttpProxyService
|
||||
// (backend TLS handled by HttpProxyService when upstream.use_tls is set)
|
||||
debug!(
|
||||
"TLS Terminate+Reencrypt + HTTP: {} (domain: {:?})",
|
||||
peer_addr, domain
|
||||
);
|
||||
http_proxy.handle_io(buf_stream, peer_addr, port, cancel.clone()).await;
|
||||
} else {
|
||||
// Non-HTTP: TLS-to-TLS tunnel (existing behavior for raw TCP protocols)
|
||||
debug!(
|
||||
"TLS Terminate+Reencrypt + TCP: {} -> {}:{}",
|
||||
peer_addr, target_host, target_port
|
||||
);
|
||||
Self::handle_tls_reencrypt_tunnel(
|
||||
buf_stream, &target_host, target_port,
|
||||
peer_addr, Arc::clone(&metrics), route_id,
|
||||
&conn_config,
|
||||
).await?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
None => {
|
||||
if is_http {
|
||||
// Plain HTTP - use HTTP proxy for request-level routing
|
||||
debug!("HTTP proxy: {} on port {}", peer_addr, port);
|
||||
// HTTP proxy manages its own per-request metrics — disarm TCP-level guard
|
||||
_conn_guard.disarm();
|
||||
http_proxy.handle_connection(stream, peer_addr, port, cancel.clone()).await;
|
||||
Ok(())
|
||||
} else {
|
||||
@@ -982,40 +1058,18 @@ impl TcpListenerManager {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Handle TLS terminate-and-reencrypt: accept TLS from client, connect TLS to backend.
|
||||
async fn handle_tls_terminate_reencrypt(
|
||||
stream: tokio::net::TcpStream,
|
||||
_peek_len: usize,
|
||||
domain: &Option<String>,
|
||||
/// Handle non-HTTP TLS-to-TLS tunnel for terminate-and-reencrypt mode.
|
||||
/// TLS accept has already been done by the caller; this only connects to the
|
||||
/// backend over TLS and forwards bidirectionally.
|
||||
async fn handle_tls_reencrypt_tunnel(
|
||||
buf_stream: tokio::io::BufReader<tokio_rustls::server::TlsStream<tokio::net::TcpStream>>,
|
||||
target_host: &str,
|
||||
target_port: u16,
|
||||
peer_addr: std::net::SocketAddr,
|
||||
tls_configs: &HashMap<String, TlsCertConfig>,
|
||||
metrics: Arc<MetricsCollector>,
|
||||
route_id: Option<&str>,
|
||||
conn_config: &ConnectionConfig,
|
||||
route_tls: Option<&rustproxy_config::RouteTls>,
|
||||
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
|
||||
let tls_config = Self::find_tls_config(domain, tls_configs)?;
|
||||
let acceptor = tls_handler::build_tls_acceptor_with_config(
|
||||
&tls_config.cert_pem, &tls_config.key_pem, route_tls,
|
||||
)?;
|
||||
|
||||
// Accept TLS from client with timeout
|
||||
let client_tls = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.initial_data_timeout_ms),
|
||||
tls_handler::accept_tls(stream, &acceptor),
|
||||
).await {
|
||||
Ok(Ok(s)) => s,
|
||||
Ok(Err(e)) => return Err(e),
|
||||
Err(_) => return Err("TLS handshake timeout".into()),
|
||||
};
|
||||
|
||||
debug!(
|
||||
"TLS Terminate+Reencrypt: {} -> {}:{} (domain: {:?})",
|
||||
peer_addr, target_host, target_port, domain
|
||||
);
|
||||
|
||||
// Connect to backend over TLS with timeout
|
||||
let backend_tls = match tokio::time::timeout(
|
||||
std::time::Duration::from_millis(conn_config.connection_timeout_ms),
|
||||
@@ -1026,8 +1080,9 @@ impl TcpListenerManager {
|
||||
Err(_) => return Err("Backend TLS connection timeout".into()),
|
||||
};
|
||||
|
||||
// Forward between two TLS streams
|
||||
let (client_read, client_write) = tokio::io::split(client_tls);
|
||||
// Forward between decrypted client stream and backend TLS stream
|
||||
// (BufReader preserves any already-buffered data from the peek)
|
||||
let (client_read, client_write) = tokio::io::split(buf_stream);
|
||||
let (backend_read, backend_write) = tokio::io::split(backend_tls);
|
||||
|
||||
let base_inactivity_ms = conn_config.socket_timeout_ms;
|
||||
@@ -1069,6 +1124,30 @@ impl TcpListenerManager {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Get a TLS acceptor, preferring the shared one (with session resumption)
|
||||
/// and falling back to per-connection when custom TLS versions are configured.
|
||||
fn get_tls_acceptor(
|
||||
domain: &Option<String>,
|
||||
tls_configs: &HashMap<String, TlsCertConfig>,
|
||||
shared_tls_acceptor: &Option<TlsAcceptor>,
|
||||
route_tls: Option<&rustproxy_config::RouteTls>,
|
||||
) -> Result<TlsAcceptor, Box<dyn std::error::Error + Send + Sync>> {
|
||||
let has_custom_versions = route_tls
|
||||
.and_then(|t| t.versions.as_ref())
|
||||
.map(|v| !v.is_empty())
|
||||
.unwrap_or(false);
|
||||
|
||||
if !has_custom_versions {
|
||||
if let Some(shared) = shared_tls_acceptor {
|
||||
return Ok(shared.clone()); // TlsAcceptor wraps Arc<ServerConfig>, clone is cheap
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback: per-connection acceptor (custom TLS versions or shared build failed)
|
||||
let tls_config = Self::find_tls_config(domain, tls_configs)?;
|
||||
tls_handler::build_tls_acceptor_with_config(&tls_config.cert_pem, &tls_config.key_pem, route_tls)
|
||||
}
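// Illustrative reduction of the choice above to a pure function (names hypothetical):
// reuse the shared acceptor unless the route pins specific TLS versions, in which case
// a per-connection acceptor has to be built from the raw PEM config.
fn use_shared_acceptor(shared_available: bool, pinned_versions: Option<&[String]>) -> bool {
    let wants_custom = pinned_versions.map(|v| !v.is_empty()).unwrap_or(false);
    shared_available && !wants_custom
}

fn main() {
    let pinned = vec!["TLSv1.3".to_string()];
    let unrestricted: Vec<String> = Vec::new();
    assert!(use_shared_acceptor(true, None));
    assert!(use_shared_acceptor(true, Some(unrestricted.as_slice()))); // empty list = no restriction
    assert!(!use_shared_acceptor(true, Some(pinned.as_slice())));      // custom versions: build per-connection
    assert!(!use_shared_acceptor(false, None));                        // shared build failed: fall back
}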
|
||||
|
||||
/// Find the TLS config for a given domain.
|
||||
fn find_tls_config<'a>(
|
||||
domain: &Option<String>,
|
||||
|
||||
@@ -1,17 +1,99 @@
|
||||
use std::collections::HashMap;
|
||||
use std::io::BufReader;
|
||||
use std::sync::Arc;
|
||||
|
||||
use rustls::pki_types::{CertificateDer, PrivateKeyDer};
|
||||
use rustls::server::ResolvesServerCert;
|
||||
use rustls::sign::CertifiedKey;
|
||||
use rustls::ServerConfig;
|
||||
use tokio::net::TcpStream;
|
||||
use tokio_rustls::{TlsAcceptor, TlsConnector, server::TlsStream as ServerTlsStream};
|
||||
use tracing::debug;
|
||||
use tracing::{debug, info};
|
||||
|
||||
use crate::tcp_listener::TlsCertConfig;
|
||||
|
||||
/// Ensure the default crypto provider is installed.
|
||||
fn ensure_crypto_provider() {
|
||||
let _ = rustls::crypto::ring::default_provider().install_default();
|
||||
}
|
||||
|
||||
/// SNI-based certificate resolver with pre-parsed CertifiedKeys.
|
||||
/// Enables shared ServerConfig across connections — avoids per-connection PEM parsing
|
||||
/// and enables TLS session resumption.
|
||||
#[derive(Debug)]
|
||||
pub struct CertResolver {
|
||||
certs: HashMap<String, Arc<CertifiedKey>>,
|
||||
fallback: Option<Arc<CertifiedKey>>,
|
||||
}
|
||||
|
||||
impl CertResolver {
|
||||
/// Build a resolver from PEM-encoded cert/key configs.
|
||||
/// Parses all PEM data upfront so connections only do a cheap HashMap lookup.
|
||||
pub fn new(configs: &HashMap<String, TlsCertConfig>) -> Result<Self, Box<dyn std::error::Error + Send + Sync>> {
|
||||
ensure_crypto_provider();
|
||||
let provider = rustls::crypto::ring::default_provider();
|
||||
let mut certs = HashMap::new();
|
||||
let mut fallback = None;
|
||||
|
||||
for (domain, cfg) in configs {
|
||||
let cert_chain = load_certs(&cfg.cert_pem)?;
|
||||
let key = load_private_key(&cfg.key_pem)?;
|
||||
let ck = Arc::new(CertifiedKey::from_der(cert_chain, key, &provider)
|
||||
.map_err(|e| format!("CertifiedKey for {}: {}", domain, e))?);
|
||||
if domain == "*" {
|
||||
fallback = Some(Arc::clone(&ck));
|
||||
}
|
||||
certs.insert(domain.clone(), ck);
|
||||
}
|
||||
|
||||
// If no explicit "*" fallback, use the first available cert
|
||||
if fallback.is_none() {
|
||||
fallback = certs.values().next().map(Arc::clone);
|
||||
}
|
||||
|
||||
Ok(Self { certs, fallback })
|
||||
}
|
||||
}
|
||||
|
||||
impl ResolvesServerCert for CertResolver {
|
||||
fn resolve(&self, client_hello: rustls::server::ClientHello<'_>) -> Option<Arc<CertifiedKey>> {
|
||||
let domain = match client_hello.server_name() {
|
||||
Some(name) => name,
|
||||
None => return self.fallback.clone(),
|
||||
};
|
||||
// Exact match
|
||||
if let Some(ck) = self.certs.get(domain) {
|
||||
return Some(Arc::clone(ck));
|
||||
}
|
||||
// Wildcard: sub.example.com → *.example.com
|
||||
if let Some(dot) = domain.find('.') {
|
||||
let wc = format!("*.{}", &domain[dot + 1..]);
|
||||
if let Some(ck) = self.certs.get(&wc) {
|
||||
return Some(Arc::clone(ck));
|
||||
}
|
||||
}
|
||||
self.fallback.clone()
|
||||
}
|
||||
}
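// Illustrative sketch only (not part of the diff): the lookup order implemented above,
// extracted into a pure function so it can be exercised without rustls types. Exact
// match wins, then a one-level wildcard, then the fallback used when SNI is absent.
use std::collections::HashMap;

fn pick<'a>(
    certs: &HashMap<String, &'a str>,
    fallback: Option<&'a str>,
    sni: Option<&str>,
) -> Option<&'a str> {
    let domain = match sni {
        Some(d) => d,
        None => return fallback, // no SNI: same as the resolver's fallback branch
    };
    if let Some(ck) = certs.get(domain) {
        return Some(*ck);
    }
    if let Some(dot) = domain.find('.') {
        if let Some(ck) = certs.get(&format!("*.{}", &domain[dot + 1..])) {
            return Some(*ck);
        }
    }
    fallback
}

fn main() {
    let mut certs: HashMap<String, &str> = HashMap::new();
    certs.insert("api.example.com".to_string(), "api-cert");
    certs.insert("*.example.com".to_string(), "wildcard-cert");
    let fallback = Some("wildcard-cert");
    assert_eq!(pick(&certs, fallback, Some("api.example.com")), Some("api-cert"));
    assert_eq!(pick(&certs, fallback, Some("www.example.com")), Some("wildcard-cert"));
    assert_eq!(pick(&certs, fallback, None), Some("wildcard-cert"));
}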
|
||||
|
||||
/// Build a shared TLS acceptor with SNI resolution, session cache, and session tickets.
|
||||
/// The returned acceptor can be reused across all connections (cheap Arc clone).
|
||||
pub fn build_shared_tls_acceptor(resolver: CertResolver) -> Result<TlsAcceptor, Box<dyn std::error::Error + Send + Sync>> {
|
||||
ensure_crypto_provider();
|
||||
let mut config = ServerConfig::builder()
|
||||
.with_no_client_auth()
|
||||
.with_cert_resolver(Arc::new(resolver));
|
||||
|
||||
// Shared session cache — enables session ID resumption across connections
|
||||
config.session_storage = rustls::server::ServerSessionMemoryCache::new(4096);
|
||||
// Session ticket resumption (12-hour lifetime, Chacha20Poly1305 encrypted)
|
||||
config.ticketer = rustls::crypto::ring::Ticketer::new()
|
||||
.map_err(|e| format!("Ticketer: {}", e))?;
|
||||
|
||||
info!("Built shared TLS config with session cache (4096) and ticket support");
|
||||
Ok(TlsAcceptor::from(Arc::new(config)))
|
||||
}
|
||||
|
||||
/// Build a TLS acceptor from PEM-encoded cert and key data.
|
||||
pub fn build_tls_acceptor(cert_pem: &str, key_pem: &str) -> Result<TlsAcceptor, Box<dyn std::error::Error + Send + Sync>> {
|
||||
build_tls_acceptor_with_config(cert_pem, key_pem, None)
|
||||
|
||||
@@ -12,6 +12,8 @@ pub struct MatchContext<'a> {
|
||||
pub tls_version: Option<&'a str>,
|
||||
pub headers: Option<&'a HashMap<String, String>>,
|
||||
pub is_tls: bool,
|
||||
/// Detected protocol: "http" or "tcp". None when unknown (e.g. pre-TLS-termination).
|
||||
pub protocol: Option<&'a str>,
|
||||
}
|
||||
|
||||
/// Result of a route match.
|
||||
@@ -87,9 +89,17 @@ impl RouteManager {
|
||||
if !matchers::domain_matches_any(&patterns, domain) {
|
||||
return false;
|
||||
}
|
||||
} else if ctx.is_tls {
|
||||
// TLS connection without SNI cannot match a domain-restricted route.
|
||||
// This prevents session-ticket resumption from misrouting when clients
|
||||
// omit SNI (RFC 8446 recommends but doesn't mandate SNI on resumption).
|
||||
// Wildcard-only routes (domains: ["*"]) still match since they accept all.
|
||||
let patterns = domains.to_vec();
|
||||
let is_wildcard_only = patterns.iter().all(|d| *d == "*");
|
||||
if !is_wildcard_only {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
// If no domain provided but route requires domain, it depends on context
|
||||
// For TLS passthrough, we need SNI; for other cases we may still match
|
||||
}
|
||||
|
||||
// Path matching
|
||||
@@ -137,6 +147,17 @@ impl RouteManager {
|
||||
}
|
||||
}
|
||||
|
||||
// Protocol matching
|
||||
if let Some(ref required_protocol) = rm.protocol {
|
||||
if let Some(protocol) = ctx.protocol {
|
||||
if required_protocol != protocol {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
// If protocol not yet known (None), allow match — protocol will be
|
||||
// validated after detection (post-TLS-termination peek)
|
||||
}
|
||||
|
||||
true
|
||||
}
|
||||
|
||||
@@ -277,6 +298,7 @@ mod tests {
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
protocol: None,
|
||||
},
|
||||
action: RouteAction {
|
||||
action_type: RouteActionType::Forward,
|
||||
@@ -327,6 +349,7 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
let result = manager.find_route(&ctx);
|
||||
@@ -349,6 +372,7 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
let result = manager.find_route(&ctx).unwrap();
|
||||
@@ -372,6 +396,7 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_none());
|
||||
@@ -457,6 +482,116 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_tls_no_sni_rejects_domain_restricted_route() {
|
||||
let routes = vec![make_route(443, Some("example.com"), 0)];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
// TLS connection without SNI should NOT match a domain-restricted route
|
||||
let ctx = MatchContext {
|
||||
port: 443,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: true,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_tls_no_sni_rejects_wildcard_subdomain_route() {
|
||||
let routes = vec![make_route(443, Some("*.example.com"), 0)];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
// TLS connection without SNI should NOT match *.example.com
|
||||
let ctx = MatchContext {
|
||||
port: 443,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: true,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_tls_no_sni_matches_wildcard_only_route() {
|
||||
let routes = vec![make_route(443, Some("*"), 0)];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
// TLS connection without SNI SHOULD match a wildcard-only route
|
||||
let ctx = MatchContext {
|
||||
port: 443,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: true,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_tls_no_sni_skips_domain_restricted_matches_fallback() {
|
||||
// Two routes: first is domain-restricted, second is wildcard catch-all
|
||||
let routes = vec![
|
||||
make_route(443, Some("specific.com"), 10),
|
||||
make_route(443, Some("*"), 0),
|
||||
];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
// TLS without SNI should skip specific.com and fall through to wildcard
|
||||
let ctx = MatchContext {
|
||||
port: 443,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: true,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
let result = manager.find_route(&ctx);
|
||||
assert!(result.is_some());
|
||||
let matched_domains = result.unwrap().route.route_match.domains.as_ref()
|
||||
.map(|d| d.to_vec()).unwrap();
|
||||
assert!(matched_domains.contains(&"*"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_non_tls_no_domain_still_matches_domain_restricted() {
|
||||
// Non-TLS (plain HTTP) without domain should still match domain-restricted routes
|
||||
// (the HTTP proxy layer handles Host-based routing)
|
||||
let routes = vec![make_route(80, Some("example.com"), 0)];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
let ctx = MatchContext {
|
||||
port: 80,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
@@ -475,6 +610,7 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
@@ -525,6 +661,7 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
let result = manager.find_route(&ctx).unwrap();
|
||||
assert_eq!(result.target.unwrap().host.first(), "api-backend");
|
||||
@@ -538,8 +675,102 @@ mod tests {
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: None,
|
||||
};
|
||||
let result = manager.find_route(&ctx).unwrap();
|
||||
assert_eq!(result.target.unwrap().host.first(), "default-backend");
|
||||
}
|
||||
|
||||
fn make_route_with_protocol(port: u16, domain: Option<&str>, protocol: Option<&str>) -> RouteConfig {
|
||||
let mut route = make_route(port, domain, 0);
|
||||
route.route_match.protocol = protocol.map(|s| s.to_string());
|
||||
route
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_protocol_http_matches_http() {
|
||||
let routes = vec![make_route_with_protocol(80, None, Some("http"))];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
let ctx = MatchContext {
|
||||
port: 80,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: Some("http"),
|
||||
};
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_protocol_http_rejects_tcp() {
|
||||
let routes = vec![make_route_with_protocol(80, None, Some("http"))];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
let ctx = MatchContext {
|
||||
port: 80,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: Some("tcp"),
|
||||
};
|
||||
assert!(manager.find_route(&ctx).is_none());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_protocol_none_matches_any() {
|
||||
// Route with no protocol restriction matches any protocol
|
||||
let routes = vec![make_route_with_protocol(80, None, None)];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
let ctx_http = MatchContext {
|
||||
port: 80,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: Some("http"),
|
||||
};
|
||||
assert!(manager.find_route(&ctx_http).is_some());
|
||||
|
||||
let ctx_tcp = MatchContext {
|
||||
port: 80,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: false,
|
||||
protocol: Some("tcp"),
|
||||
};
|
||||
assert!(manager.find_route(&ctx_tcp).is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_protocol_http_matches_when_unknown() {
|
||||
// Route with protocol: "http" should match when ctx.protocol is None
|
||||
// (pre-TLS-termination, protocol not yet known)
|
||||
let routes = vec![make_route_with_protocol(443, None, Some("http"))];
|
||||
let manager = RouteManager::new(routes);
|
||||
|
||||
let ctx = MatchContext {
|
||||
port: 443,
|
||||
domain: None,
|
||||
path: None,
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
is_tls: true,
|
||||
protocol: None,
|
||||
};
|
||||
assert!(manager.find_route(&ctx).is_some());
|
||||
}
|
||||
}
|
||||
|
||||
@@ -27,7 +27,7 @@
|
||||
pub mod challenge_server;
|
||||
pub mod management;
|
||||
|
||||
use std::collections::HashMap;
|
||||
use std::collections::{HashMap, HashSet};
|
||||
use std::sync::Arc;
|
||||
use std::time::Instant;
|
||||
|
||||
@@ -217,6 +217,10 @@ impl RustProxy {
|
||||
extended_keep_alive_lifetime_ms: options.extended_keep_alive_lifetime,
|
||||
accept_proxy_protocol: options.accept_proxy_protocol.unwrap_or(false),
|
||||
send_proxy_protocol: options.send_proxy_protocol.unwrap_or(false),
|
||||
proxy_ips: options.proxy_ips.as_deref().unwrap_or(&[])
|
||||
.iter()
|
||||
.filter_map(|s| s.parse::<std::net::IpAddr>().ok())
|
||||
.collect(),
|
||||
}
|
||||
}
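// Illustrative sketch only (not part of the diff) of the same parse-and-filter step:
// strings that do not parse as IP addresses are silently dropped rather than failing
// startup, and both IPv4 and IPv6 literals are accepted.
use std::net::IpAddr;

fn parse_proxy_ips(raw: &[String]) -> Vec<IpAddr> {
    raw.iter().filter_map(|s| s.parse::<IpAddr>().ok()).collect()
}

fn main() {
    let raw = vec![
        "10.0.0.1".to_string(),
        "not-an-ip".to_string(),
        "::1".to_string(),
    ];
    let parsed = parse_proxy_ips(&raw);
    assert_eq!(parsed.len(), 2); // "not-an-ip" was discarded
    assert!(parsed.contains(&"10.0.0.1".parse().unwrap()));
}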
|
||||
|
||||
@@ -565,6 +569,12 @@ impl RustProxy {
|
||||
vec![]
|
||||
};
|
||||
|
||||
// Prune per-route metrics for route IDs that no longer exist
|
||||
let active_route_ids: HashSet<String> = routes.iter()
|
||||
.filter_map(|r| r.id.clone())
|
||||
.collect();
|
||||
self.metrics.retain_routes(&active_route_ids);
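// Illustrative sketch only (not part of the diff): the retain-style pruning that
// `retain_routes` performs, shown with a plain HashMap instead of the collector's
// concurrent maps. Metrics for route IDs absent from the new config are dropped.
use std::collections::{HashMap, HashSet};

fn main() {
    let mut per_route: HashMap<String, u64> = HashMap::new();
    per_route.insert("route-a".to_string(), 10);
    per_route.insert("route-b".to_string(), 3);

    // New route table only contains route-a.
    let active: HashSet<String> = ["route-a".to_string()].into_iter().collect();

    per_route.retain(|id, _| active.contains(id));
    assert_eq!(per_route.len(), 1);
    assert!(per_route.contains_key("route-a"));
}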
|
||||
|
||||
// Atomically swap the route table
|
||||
let new_manager = Arc::new(new_manager);
|
||||
self.route_table.store(Arc::clone(&new_manager));
|
||||
|
||||
@@ -185,6 +185,76 @@ pub async fn wait_for_port(port: u16, timeout_ms: u64) -> bool {
|
||||
false
|
||||
}
|
||||
|
||||
/// Start a TLS HTTP echo backend: accepts TLS, then responds with HTTP JSON
|
||||
/// containing request details. Combines TLS acceptance with HTTP echo behavior.
|
||||
pub async fn start_tls_http_backend(
|
||||
port: u16,
|
||||
backend_name: &str,
|
||||
cert_pem: &str,
|
||||
key_pem: &str,
|
||||
) -> JoinHandle<()> {
|
||||
use std::sync::Arc;
|
||||
|
||||
let acceptor = rustproxy_passthrough::build_tls_acceptor(cert_pem, key_pem)
|
||||
.expect("Failed to build TLS acceptor");
|
||||
let acceptor = Arc::new(acceptor);
|
||||
let name = backend_name.to_string();
|
||||
|
||||
let listener = TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||
.await
|
||||
.unwrap_or_else(|_| panic!("Failed to bind TLS HTTP backend on port {}", port));
|
||||
|
||||
tokio::spawn(async move {
|
||||
loop {
|
||||
let (stream, _) = match listener.accept().await {
|
||||
Ok(conn) => conn,
|
||||
Err(_) => break,
|
||||
};
|
||||
let acc = acceptor.clone();
|
||||
let backend = name.clone();
|
||||
tokio::spawn(async move {
|
||||
let mut tls_stream = match acc.accept(stream).await {
|
||||
Ok(s) => s,
|
||||
Err(_) => return,
|
||||
};
|
||||
|
||||
let mut buf = vec![0u8; 16384];
|
||||
let n = match tls_stream.read(&mut buf).await {
|
||||
Ok(0) | Err(_) => return,
|
||||
Ok(n) => n,
|
||||
};
|
||||
let req_str = String::from_utf8_lossy(&buf[..n]);
|
||||
|
||||
// Parse first line: METHOD PATH HTTP/x.x
|
||||
let first_line = req_str.lines().next().unwrap_or("");
|
||||
let parts: Vec<&str> = first_line.split_whitespace().collect();
|
||||
let method = parts.first().copied().unwrap_or("UNKNOWN");
|
||||
let path = parts.get(1).copied().unwrap_or("/");
|
||||
|
||||
// Extract Host header
|
||||
let host = req_str
|
||||
.lines()
|
||||
.find(|l| l.to_lowercase().starts_with("host:"))
|
||||
.map(|l| l[5..].trim())
|
||||
.unwrap_or("unknown");
|
||||
|
||||
let body = format!(
|
||||
r#"{{"method":"{}","path":"{}","host":"{}","backend":"{}"}}"#,
|
||||
method, path, host, backend
|
||||
);
|
||||
|
||||
let response = format!(
|
||||
"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
|
||||
body.len(),
|
||||
body,
|
||||
);
|
||||
let _ = tls_stream.write_all(response.as_bytes()).await;
|
||||
let _ = tls_stream.shutdown().await;
|
||||
});
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
/// Helper to create a minimal route config for testing.
|
||||
pub fn make_test_route(
|
||||
port: u16,
|
||||
@@ -201,6 +271,7 @@ pub fn make_test_route(
|
||||
client_ip: None,
|
||||
tls_version: None,
|
||||
headers: None,
|
||||
protocol: None,
|
||||
},
|
||||
action: rustproxy_config::RouteAction {
|
||||
action_type: rustproxy_config::RouteActionType::Forward,
|
||||
@@ -381,6 +452,86 @@ pub fn make_tls_terminate_route(
|
||||
route
|
||||
}
|
||||
|
||||
/// Start a TLS WebSocket echo backend: accepts TLS, performs WS handshake, then echoes data.
|
||||
/// Combines TLS acceptance (like `start_tls_http_backend`) with WebSocket echo (like `start_ws_echo_backend`).
|
||||
pub async fn start_tls_ws_echo_backend(
|
||||
port: u16,
|
||||
cert_pem: &str,
|
||||
key_pem: &str,
|
||||
) -> JoinHandle<()> {
|
||||
use std::sync::Arc;
|
||||
|
||||
let acceptor = rustproxy_passthrough::build_tls_acceptor(cert_pem, key_pem)
|
||||
.expect("Failed to build TLS acceptor");
|
||||
let acceptor = Arc::new(acceptor);
|
||||
|
||||
let listener = TcpListener::bind(format!("127.0.0.1:{}", port))
|
||||
.await
|
||||
.unwrap_or_else(|_| panic!("Failed to bind TLS WS echo backend on port {}", port));
|
||||
|
||||
tokio::spawn(async move {
|
||||
loop {
|
||||
let (stream, _) = match listener.accept().await {
|
||||
Ok(conn) => conn,
|
||||
Err(_) => break,
|
||||
};
|
||||
let acc = acceptor.clone();
|
||||
tokio::spawn(async move {
|
||||
let mut tls_stream = match acc.accept(stream).await {
|
||||
Ok(s) => s,
|
||||
Err(_) => return,
|
||||
};
|
||||
|
||||
// Read the HTTP upgrade request
|
||||
let mut buf = vec![0u8; 4096];
|
||||
let n = match tls_stream.read(&mut buf).await {
|
||||
Ok(0) | Err(_) => return,
|
||||
Ok(n) => n,
|
||||
};
|
||||
|
||||
let req_str = String::from_utf8_lossy(&buf[..n]);
|
||||
|
||||
// Extract Sec-WebSocket-Key for handshake
|
||||
let ws_key = req_str
|
||||
.lines()
|
||||
.find(|l| l.to_lowercase().starts_with("sec-websocket-key:"))
|
||||
.map(|l| l.split(':').nth(1).unwrap_or("").trim().to_string())
|
||||
.unwrap_or_default();
|
||||
|
||||
// Send 101 Switching Protocols
|
||||
let accept_response = format!(
|
||||
"HTTP/1.1 101 Switching Protocols\r\n\
|
||||
Upgrade: websocket\r\n\
|
||||
Connection: Upgrade\r\n\
|
||||
Sec-WebSocket-Accept: {}\r\n\
|
||||
\r\n",
|
||||
ws_key
|
||||
);
|
||||
|
||||
if tls_stream
|
||||
.write_all(accept_response.as_bytes())
|
||||
.await
|
||||
.is_err()
|
||||
{
|
||||
return;
|
||||
}
|
||||
|
||||
// Echo all data back (raw TCP after upgrade)
|
||||
let mut echo_buf = vec![0u8; 65536];
|
||||
loop {
|
||||
let n = match tls_stream.read(&mut echo_buf).await {
|
||||
Ok(0) | Err(_) => break,
|
||||
Ok(n) => n,
|
||||
};
|
||||
if tls_stream.write_all(&echo_buf[..n]).await.is_err() {
|
||||
break;
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
/// Helper to create a TLS passthrough route for testing.
|
||||
pub fn make_tls_passthrough_route(
|
||||
port: u16,
|
||||
|
||||
@@ -407,6 +407,305 @@ async fn test_websocket_through_proxy() {
|
||||
proxy.stop().await.unwrap();
|
||||
}
|
||||
|
||||
/// Test that terminate-and-reencrypt mode routes HTTP traffic through the
|
||||
/// full HTTP proxy with per-request Host-based routing.
|
||||
///
|
||||
/// This verifies the new behavior: after TLS termination, HTTP data is detected
|
||||
/// and routed through HttpProxyService (like nginx) instead of being blindly tunneled.
|
||||
#[tokio::test]
|
||||
async fn test_terminate_and_reencrypt_http_routing() {
|
||||
let backend1_port = next_port();
|
||||
let backend2_port = next_port();
|
||||
let proxy_port = next_port();
|
||||
|
||||
let (cert1, key1) = generate_self_signed_cert("alpha.example.com");
|
||||
let (cert2, key2) = generate_self_signed_cert("beta.example.com");
|
||||
|
||||
// Generate separate backend certs (backends are independent TLS servers)
|
||||
let (backend_cert1, backend_key1) = generate_self_signed_cert("localhost");
|
||||
let (backend_cert2, backend_key2) = generate_self_signed_cert("localhost");
|
||||
|
||||
// Start TLS HTTP echo backends (proxy re-encrypts to these)
|
||||
let _b1 = start_tls_http_backend(backend1_port, "alpha", &backend_cert1, &backend_key1).await;
|
||||
let _b2 = start_tls_http_backend(backend2_port, "beta", &backend_cert2, &backend_key2).await;
|
||||
|
||||
// Create terminate-and-reencrypt routes
|
||||
let mut route1 = make_tls_terminate_route(
|
||||
proxy_port, "alpha.example.com", "127.0.0.1", backend1_port, &cert1, &key1,
|
||||
);
|
||||
route1.action.tls.as_mut().unwrap().mode = rustproxy_config::TlsMode::TerminateAndReencrypt;
|
||||
|
||||
let mut route2 = make_tls_terminate_route(
|
||||
proxy_port, "beta.example.com", "127.0.0.1", backend2_port, &cert2, &key2,
|
||||
);
|
||||
route2.action.tls.as_mut().unwrap().mode = rustproxy_config::TlsMode::TerminateAndReencrypt;
|
||||
|
||||
let options = RustProxyOptions {
|
||||
routes: vec![route1, route2],
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let mut proxy = RustProxy::new(options).unwrap();
|
||||
proxy.start().await.unwrap();
|
||||
assert!(wait_for_port(proxy_port, 2000).await);
|
||||
|
||||
// Test alpha domain - HTTP request through TLS terminate-and-reencrypt
|
||||
let alpha_result = with_timeout(async {
|
||||
let _ = rustls::crypto::ring::default_provider().install_default();
|
||||
let tls_config = rustls::ClientConfig::builder()
|
||||
.dangerous()
|
||||
.with_custom_certificate_verifier(std::sync::Arc::new(InsecureVerifier))
|
||||
.with_no_client_auth();
|
||||
let connector = tokio_rustls::TlsConnector::from(std::sync::Arc::new(tls_config));
|
||||
|
||||
let stream = tokio::net::TcpStream::connect(format!("127.0.0.1:{}", proxy_port))
|
||||
.await
|
||||
.unwrap();
|
||||
let server_name = rustls::pki_types::ServerName::try_from("alpha.example.com".to_string()).unwrap();
|
||||
let mut tls_stream = connector.connect(server_name, stream).await.unwrap();
|
||||
|
||||
let request = "GET /api/data HTTP/1.1\r\nHost: alpha.example.com\r\nConnection: close\r\n\r\n";
|
||||
tls_stream.write_all(request.as_bytes()).await.unwrap();
|
||||
|
||||
let mut response = Vec::new();
|
||||
tls_stream.read_to_end(&mut response).await.unwrap();
|
||||
String::from_utf8_lossy(&response).to_string()
|
||||
}, 10)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let alpha_body = extract_body(&alpha_result);
|
||||
assert!(
|
||||
alpha_body.contains(r#""backend":"alpha"#),
|
||||
"Expected alpha backend, got: {}",
|
||||
alpha_body
|
||||
);
|
||||
assert!(
|
||||
alpha_body.contains(r#""method":"GET"#),
|
||||
"Expected GET method, got: {}",
|
||||
alpha_body
|
||||
);
|
||||
assert!(
|
||||
alpha_body.contains(r#""path":"/api/data"#),
|
||||
"Expected /api/data path, got: {}",
|
||||
alpha_body
|
||||
);
|
||||
// Verify original Host header is preserved (not replaced with backend IP:port)
|
||||
assert!(
|
||||
alpha_body.contains(r#""host":"alpha.example.com"#),
|
||||
"Expected original Host header alpha.example.com, got: {}",
|
||||
alpha_body
|
||||
);
|
||||
|
||||
// Test beta domain - different host goes to different backend
|
||||
let beta_result = with_timeout(async {
|
||||
let _ = rustls::crypto::ring::default_provider().install_default();
|
||||
let tls_config = rustls::ClientConfig::builder()
|
||||
.dangerous()
|
||||
.with_custom_certificate_verifier(std::sync::Arc::new(InsecureVerifier))
|
||||
.with_no_client_auth();
|
||||
let connector = tokio_rustls::TlsConnector::from(std::sync::Arc::new(tls_config));
|
||||
|
||||
let stream = tokio::net::TcpStream::connect(format!("127.0.0.1:{}", proxy_port))
|
||||
.await
|
||||
.unwrap();
|
||||
let server_name = rustls::pki_types::ServerName::try_from("beta.example.com".to_string()).unwrap();
|
||||
let mut tls_stream = connector.connect(server_name, stream).await.unwrap();
|
||||
|
||||
let request = "GET /other HTTP/1.1\r\nHost: beta.example.com\r\nConnection: close\r\n\r\n";
|
||||
tls_stream.write_all(request.as_bytes()).await.unwrap();
|
||||
|
||||
let mut response = Vec::new();
|
||||
tls_stream.read_to_end(&mut response).await.unwrap();
|
||||
String::from_utf8_lossy(&response).to_string()
|
||||
}, 10)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
let beta_body = extract_body(&beta_result);
|
||||
assert!(
|
||||
beta_body.contains(r#""backend":"beta"#),
|
||||
"Expected beta backend, got: {}",
|
||||
beta_body
|
||||
);
|
||||
assert!(
|
||||
beta_body.contains(r#""path":"/other"#),
|
||||
"Expected /other path, got: {}",
|
||||
beta_body
|
||||
);
|
||||
// Verify original Host header is preserved for beta too
|
||||
assert!(
|
||||
beta_body.contains(r#""host":"beta.example.com"#),
|
||||
"Expected original Host header beta.example.com, got: {}",
|
||||
beta_body
|
||||
);
|
||||
|
||||
proxy.stop().await.unwrap();
|
||||
}
|
||||
|
||||
/// Test that WebSocket upgrade works through terminate-and-reencrypt mode.
|
||||
///
|
||||
/// Verifies the full chain: client→TLS→proxy terminates→re-encrypts→TLS→backend WebSocket.
|
||||
/// The proxy's `handle_websocket_upgrade` checks `upstream.use_tls` and calls
|
||||
/// `connect_tls_backend()` when true. This test covers that path.
|
||||
#[tokio::test]
|
||||
async fn test_terminate_and_reencrypt_websocket() {
|
||||
let backend_port = next_port();
|
||||
let proxy_port = next_port();
|
||||
let domain = "ws.example.com";
|
||||
|
||||
// Frontend cert (client→proxy TLS)
|
||||
let (frontend_cert, frontend_key) = generate_self_signed_cert(domain);
|
||||
// Backend cert (proxy→backend TLS)
|
||||
let (backend_cert, backend_key) = generate_self_signed_cert("localhost");
|
||||
|
||||
// Start TLS WebSocket echo backend
|
||||
let _backend = start_tls_ws_echo_backend(backend_port, &backend_cert, &backend_key).await;
|
||||
|
||||
// Create terminate-and-reencrypt route
|
||||
let mut route = make_tls_terminate_route(
|
||||
proxy_port,
|
||||
domain,
|
||||
"127.0.0.1",
|
||||
backend_port,
|
||||
&frontend_cert,
|
||||
&frontend_key,
|
||||
);
|
||||
route.action.tls.as_mut().unwrap().mode = rustproxy_config::TlsMode::TerminateAndReencrypt;
|
||||
|
||||
let options = RustProxyOptions {
|
||||
routes: vec![route],
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let mut proxy = RustProxy::new(options).unwrap();
|
||||
proxy.start().await.unwrap();
|
||||
assert!(wait_for_port(proxy_port, 2000).await);
|
||||
|
||||
let result = with_timeout(
|
||||
async {
|
||||
let _ = rustls::crypto::ring::default_provider().install_default();
|
||||
let tls_config = rustls::ClientConfig::builder()
|
||||
.dangerous()
|
||||
.with_custom_certificate_verifier(std::sync::Arc::new(InsecureVerifier))
|
||||
.with_no_client_auth();
|
||||
let connector =
|
||||
tokio_rustls::TlsConnector::from(std::sync::Arc::new(tls_config));
|
||||
|
||||
let stream = tokio::net::TcpStream::connect(format!("127.0.0.1:{}", proxy_port))
|
||||
.await
|
||||
.unwrap();
|
||||
let server_name =
|
||||
rustls::pki_types::ServerName::try_from(domain.to_string()).unwrap();
|
||||
let mut tls_stream = connector.connect(server_name, stream).await.unwrap();
|
||||
|
||||
// Send WebSocket upgrade request through TLS
|
||||
let request = format!(
|
||||
"GET /ws HTTP/1.1\r\n\
|
||||
Host: {}\r\n\
|
||||
Upgrade: websocket\r\n\
|
||||
Connection: Upgrade\r\n\
|
||||
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n\
|
||||
Sec-WebSocket-Version: 13\r\n\
|
||||
\r\n",
|
||||
domain
|
||||
);
|
||||
tls_stream.write_all(request.as_bytes()).await.unwrap();
|
||||
|
||||
// Read the 101 response (byte-by-byte until \r\n\r\n)
|
||||
let mut response_buf = Vec::with_capacity(4096);
|
||||
let mut temp = [0u8; 1];
|
||||
loop {
|
||||
let n = tls_stream.read(&mut temp).await.unwrap();
|
||||
if n == 0 {
|
||||
break;
|
||||
}
|
||||
response_buf.push(temp[0]);
|
||||
if response_buf.len() >= 4 {
|
||||
let len = response_buf.len();
|
||||
if response_buf[len - 4..] == *b"\r\n\r\n" {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let response_str = String::from_utf8_lossy(&response_buf).to_string();
|
||||
assert!(
|
||||
response_str.contains("101"),
|
||||
"Expected 101 Switching Protocols, got: {}",
|
||||
response_str
|
||||
);
|
||||
assert!(
|
||||
response_str.to_lowercase().contains("upgrade: websocket"),
|
||||
"Expected Upgrade header, got: {}",
|
||||
response_str
|
||||
);
|
||||
|
||||
// After upgrade, send data and verify echo
|
||||
let test_data = b"Hello TLS WebSocket!";
|
||||
tls_stream.write_all(test_data).await.unwrap();
|
||||
|
||||
// Read echoed data
|
||||
let mut echo_buf = vec![0u8; 256];
|
||||
let n = tls_stream.read(&mut echo_buf).await.unwrap();
|
||||
let echoed = &echo_buf[..n];
|
||||
|
||||
assert_eq!(echoed, test_data, "Expected echo of sent data");
|
||||
|
||||
"ok".to_string()
|
||||
},
|
||||
10,
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert_eq!(result, "ok");
|
||||
proxy.stop().await.unwrap();
|
||||
}
|
||||
|
||||
/// Test that the protocol field on route config is accepted and processed.
|
||||
#[tokio::test]
|
||||
async fn test_protocol_field_in_route_config() {
|
||||
let backend_port = next_port();
|
||||
let proxy_port = next_port();
|
||||
|
||||
let _backend = start_http_echo_backend(backend_port, "main").await;
|
||||
|
||||
// Create a route with protocol: "http" - should only match HTTP traffic
|
||||
let mut route = make_test_route(proxy_port, None, "127.0.0.1", backend_port);
|
||||
route.route_match.protocol = Some("http".to_string());
|
||||
|
||||
let options = RustProxyOptions {
|
||||
routes: vec![route],
|
||||
..Default::default()
|
||||
};
|
||||
|
||||
let mut proxy = RustProxy::new(options).unwrap();
|
||||
proxy.start().await.unwrap();
|
||||
assert!(wait_for_port(proxy_port, 2000).await);
|
||||
|
||||
// HTTP request should match the route and get proxied
|
||||
let result = with_timeout(async {
|
||||
let response = send_http_request(proxy_port, "example.com", "GET", "/test").await;
|
||||
extract_body(&response).to_string()
|
||||
}, 10)
|
||||
.await
|
||||
.unwrap();
|
||||
|
||||
assert!(
|
||||
result.contains(r#""backend":"main"#),
|
||||
"Expected main backend, got: {}",
|
||||
result
|
||||
);
|
||||
assert!(
|
||||
result.contains(r#""path":"/test"#),
|
||||
"Expected /test path, got: {}",
|
||||
result
|
||||
);
|
||||
|
||||
proxy.stop().await.unwrap();
|
||||
}
|
||||
|
||||
/// InsecureVerifier for test TLS client connections.
|
||||
#[derive(Debug)]
|
||||
struct InsecureVerifier;
|
||||
|
||||
@@ -562,4 +562,168 @@ tap.test('Route Integration - Combining Multiple Route Types', async () => {
|
||||
}
|
||||
});
|
||||
|
||||
// --------------------------------- Protocol Match Field Tests ---------------------------------
|
||||
|
||||
tap.test('Routes: Should accept protocol field on route match', async () => {
|
||||
// Create a route with protocol: 'http'
|
||||
const httpOnlyRoute: IRouteConfig = {
|
||||
match: {
|
||||
ports: 443,
|
||||
domains: 'api.example.com',
|
||||
protocol: 'http',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'backend', port: 8080 }],
|
||||
tls: {
|
||||
mode: 'terminate',
|
||||
certificate: 'auto',
|
||||
},
|
||||
},
|
||||
name: 'HTTP-only Route',
|
||||
};
|
||||
|
||||
// Validate the route - protocol field should not cause errors
|
||||
const validation = validateRouteConfig(httpOnlyRoute);
|
||||
expect(validation.valid).toBeTrue();
|
||||
|
||||
// Verify the protocol field is preserved
|
||||
expect(httpOnlyRoute.match.protocol).toEqual('http');
|
||||
});
|
||||
|
||||
tap.test('Routes: Should accept protocol tcp on route match', async () => {
|
||||
// Create a route with protocol: 'tcp'
|
||||
const tcpOnlyRoute: IRouteConfig = {
|
||||
match: {
|
||||
ports: 443,
|
||||
domains: 'db.example.com',
|
||||
protocol: 'tcp',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'db-server', port: 5432 }],
|
||||
tls: {
|
||||
mode: 'passthrough',
|
||||
},
|
||||
},
|
||||
name: 'TCP-only Route',
|
||||
};
|
||||
|
||||
const validation = validateRouteConfig(tcpOnlyRoute);
|
||||
expect(validation.valid).toBeTrue();
|
||||
|
||||
expect(tcpOnlyRoute.match.protocol).toEqual('tcp');
|
||||
});
|
||||
|
||||
tap.test('Routes: Protocol field should work with terminate-and-reencrypt', async () => {
|
||||
// Create a terminate-and-reencrypt route that only accepts HTTP
|
||||
const reencryptRoute = createHttpsTerminateRoute(
|
||||
'secure.example.com',
|
||||
{ host: 'backend', port: 443 },
|
||||
{ reencrypt: true, certificate: 'auto', name: 'Reencrypt HTTP Route' }
|
||||
);
|
||||
|
||||
// Set protocol restriction to http
|
||||
reencryptRoute.match.protocol = 'http';
|
||||
|
||||
// Validate the route
|
||||
const validation = validateRouteConfig(reencryptRoute);
|
||||
expect(validation.valid).toBeTrue();
|
||||
|
||||
// Verify TLS mode
|
||||
expect(reencryptRoute.action.tls?.mode).toEqual('terminate-and-reencrypt');
|
||||
// Verify protocol field is preserved
|
||||
expect(reencryptRoute.match.protocol).toEqual('http');
|
||||
});
|
||||
|
||||
tap.test('Routes: Protocol field should not affect domain/port matching', async () => {
|
||||
// Routes with and without protocol field should both match the same domain/port
|
||||
const routeWithProtocol: IRouteConfig = {
|
||||
match: {
|
||||
ports: 443,
|
||||
domains: 'example.com',
|
||||
protocol: 'http',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'backend', port: 8080 }],
|
||||
tls: { mode: 'terminate', certificate: 'auto' },
|
||||
},
|
||||
name: 'With Protocol',
|
||||
priority: 10,
|
||||
};
|
||||
|
||||
const routeWithoutProtocol: IRouteConfig = {
|
||||
match: {
|
||||
ports: 443,
|
||||
domains: 'example.com',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'fallback', port: 8081 }],
|
||||
tls: { mode: 'terminate', certificate: 'auto' },
|
||||
},
|
||||
name: 'Without Protocol',
|
||||
priority: 5,
|
||||
};
|
||||
|
||||
const routes = [routeWithProtocol, routeWithoutProtocol];
|
||||
|
||||
// Both routes should match the domain/port (protocol is a hint for Rust-side matching)
|
||||
const matches = findMatchingRoutes(routes, { domain: 'example.com', port: 443 });
|
||||
expect(matches.length).toEqual(2);
|
||||
|
||||
// The one with higher priority should be first
|
||||
const best = findBestMatchingRoute(routes, { domain: 'example.com', port: 443 });
|
||||
expect(best).not.toBeUndefined();
|
||||
expect(best!.name).toEqual('With Protocol');
|
||||
});
|
||||
|
||||
tap.test('Routes: Protocol field preserved through route cloning', async () => {
|
||||
const original: IRouteConfig = {
|
||||
match: {
|
||||
ports: 8443,
|
||||
domains: 'clone-test.example.com',
|
||||
protocol: 'http',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'backend', port: 3000 }],
|
||||
tls: { mode: 'terminate-and-reencrypt', certificate: 'auto' },
|
||||
},
|
||||
name: 'Clone Test',
|
||||
};
|
||||
|
||||
const cloned = cloneRoute(original);
|
||||
|
||||
// Verify protocol is preserved in clone
|
||||
expect(cloned.match.protocol).toEqual('http');
|
||||
expect(cloned.action.tls?.mode).toEqual('terminate-and-reencrypt');
|
||||
|
||||
// Modify clone should not affect original
|
||||
cloned.match.protocol = 'tcp';
|
||||
expect(original.match.protocol).toEqual('http');
|
||||
});
|
||||
|
||||
tap.test('Routes: Protocol field preserved through route merging', async () => {
|
||||
const base: IRouteConfig = {
|
||||
match: {
|
||||
ports: 443,
|
||||
domains: 'merge-test.example.com',
|
||||
protocol: 'http',
|
||||
},
|
||||
action: {
|
||||
type: 'forward',
|
||||
targets: [{ host: 'backend', port: 3000 }],
|
||||
tls: { mode: 'terminate-and-reencrypt', certificate: 'auto' },
|
||||
},
|
||||
name: 'Merge Base',
|
||||
};
|
||||
|
||||
// Merge with override that changes name but not protocol
|
||||
const merged = mergeRouteConfigs(base, { name: 'Merged Route' });
|
||||
expect(merged.match.protocol).toEqual('http');
|
||||
expect(merged.name).toEqual('Merged Route');
|
||||
});
|
||||
|
||||
export default tap.start();
|
||||
@@ -151,11 +151,28 @@ tap.test('TCP forward - real-time byte tracking', async (tools) => {
|
||||
console.log(`TCP forward (during) — recent throughput: in=${tpDuring.in}, out=${tpDuring.out}`);
|
||||
expect(tpDuring.in + tpDuring.out).toBeGreaterThan(0);
|
||||
|
||||
// ── v25.2.0: Per-IP tracking (TCP connections) ──
|
||||
// Must check WHILE connection is active — per-IP data is evicted on last close
|
||||
const byIP = mDuring.connections.byIP();
|
||||
console.log('TCP forward — connections byIP:', Array.from(byIP.entries()));
|
||||
expect(byIP.size).toBeGreaterThan(0);
|
||||
|
||||
const topIPs = mDuring.connections.topIPs(10);
|
||||
console.log('TCP forward — topIPs:', topIPs);
|
||||
expect(topIPs.length).toBeGreaterThan(0);
|
||||
expect(topIPs[0].ip).toBeTruthy();
|
||||
|
||||
// ── v25.2.0: Throughput history ──
|
||||
const history = mDuring.throughput.history(10);
|
||||
console.log('TCP forward — throughput history length:', history.length);
|
||||
expect(history.length).toBeGreaterThan(0);
|
||||
expect(history[0].timestamp).toBeGreaterThan(0);
|
||||
|
||||
// Close connection
|
||||
client.destroy();
|
||||
await tools.delayFor(500);
|
||||
|
||||
// Final check
|
||||
// Final check — totals persist even after connection close
|
||||
await pollMetrics(proxy);
|
||||
const m = proxy.getMetrics();
|
||||
const bytesIn = m.totals.bytesIn();
|
||||
@@ -168,21 +185,10 @@ tap.test('TCP forward - real-time byte tracking', async (tools) => {
|
||||
const byRoute = m.throughput.byRoute();
|
||||
console.log('TCP forward — throughput byRoute:', Array.from(byRoute.entries()));
|
||||
|
||||
// ── v25.2.0: Per-IP tracking (TCP connections) ──
|
||||
const byIP = m.connections.byIP();
|
||||
console.log('TCP forward — connections byIP:', Array.from(byIP.entries()));
|
||||
expect(byIP.size).toBeGreaterThan(0);
|
||||
|
||||
const topIPs = m.connections.topIPs(10);
|
||||
console.log('TCP forward — topIPs:', topIPs);
|
||||
expect(topIPs.length).toBeGreaterThan(0);
|
||||
expect(topIPs[0].ip).toBeTruthy();
|
||||
|
||||
// ── v25.2.0: Throughput history ──
|
||||
const history = m.throughput.history(10);
|
||||
console.log('TCP forward — throughput history length:', history.length);
|
||||
expect(history.length).toBeGreaterThan(0);
|
||||
expect(history[0].timestamp).toBeGreaterThan(0);
|
||||
// After close, per-IP data should be evicted (memory leak fix)
|
||||
const byIPAfter = m.connections.byIP();
|
||||
console.log('TCP forward — connections byIP after close:', Array.from(byIPAfter.entries()));
|
||||
expect(byIPAfter.size).toEqual(0);
|
||||
|
||||
await proxy.stop();
|
||||
await tools.delayFor(200);
|
||||
|
||||
@@ -3,6 +3,6 @@
|
||||
*/
|
||||
export const commitinfo = {
|
||||
name: '@push.rocks/smartproxy',
|
||||
version: '25.4.0',
|
||||
version: '25.7.7',
|
||||
description: 'A powerful proxy package with unified route-based configuration for high traffic management. Features include SSL/TLS support, flexible routing patterns, WebSocket handling, advanced security options, and automatic ACME certificate management.'
|
||||
}
|
||||
|
||||
@@ -39,6 +39,7 @@ export interface IRouteMatch {
|
||||
clientIp?: string[]; // Match specific client IPs
|
||||
tlsVersion?: string[]; // Match specific TLS versions
|
||||
headers?: Record<string, string | RegExp>; // Match specific HTTP headers
|
||||
protocol?: 'http' | 'tcp'; // Match specific protocol (http includes h2 + websocket upgrades)
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -409,6 +409,7 @@ export class SmartProxy extends plugins.EventEmitter {
|
||||
keepAliveTreatment: this.settings.keepAliveTreatment,
|
||||
keepAliveInactivityMultiplier: this.settings.keepAliveInactivityMultiplier,
|
||||
extendedKeepAliveLifetime: this.settings.extendedKeepAliveLifetime,
|
||||
proxyIps: this.settings.proxyIPs,
|
||||
acceptProxyProtocol: this.settings.acceptProxyProtocol,
|
||||
sendProxyProtocol: this.settings.sendProxyProtocol,
|
||||
metrics: this.settings.metrics,
|
||||
|
||||