feat(rustbridge): add streaming responses and robust large-payload/backpressure handling to RustBridge

readme.md
@@ -1,6 +1,6 @@
 # @push.rocks/smartrust
 
-A type-safe, standardized bridge between TypeScript and Rust binaries via JSON-over-stdin/stdout IPC.
+A type-safe, production-ready bridge between TypeScript and Rust binaries via JSON-over-stdin/stdout IPC — with support for request/response, streaming, and event patterns.
 
 ## Issue Reporting and Security
 
@@ -16,18 +16,19 @@ pnpm install @push.rocks/smartrust
 
 ## Overview 🔭
 
-`@push.rocks/smartrust` provides a production-ready bridge for TypeScript applications that need to communicate with Rust binaries. It handles the entire lifecycle — binary discovery, process spawning, request/response correlation with timeouts, event streaming, and graceful shutdown — so you can focus on your command definitions instead of IPC plumbing.
+`@push.rocks/smartrust` provides a complete bridge for TypeScript applications that need to communicate with Rust binaries. It handles the entire lifecycle — binary discovery, process spawning, request/response correlation, **streaming responses**, event pub/sub, and graceful shutdown — so you can focus on your command definitions instead of IPC plumbing.
 
-### Why?
+### Why? 🤔
 
 If you're integrating Rust into a Node.js project, you'll inevitably need:
 - A way to **find** the compiled Rust binary across different environments (dev, CI, production, platform packages)
 - A way to **spawn** it and establish reliable two-way communication
 - **Type-safe** request/response patterns with proper error handling
+- **Streaming responses** for progressive data processing, log tailing, or chunked transfers
 - **Event streaming** from Rust to TypeScript
 - **Graceful lifecycle management** (ready detection, clean shutdown, force kill)
 
-`smartrust` wraps all of this into two classes: `RustBridge` and `RustBinaryLocator`.
+`smartrust` wraps all of this into three classes: `RustBridge`, `RustBinaryLocator`, and `StreamingResponse`.
 
 ## Usage 🚀
 
@@ -38,8 +39,9 @@ If you're integrating Rust into a Node.js project, you'll inevitably need:
 | Direction | Format | Description |
 |-----------|--------|-------------|
 | **TS → Rust** (Request) | `{"id": "req_1", "method": "start", "params": {...}}` | Command with unique ID |
-| **Rust → TS** (Response) | `{"id": "req_1", "success": true, "result": {...}}` | Response correlated by ID |
+| **Rust → TS** (Response) | `{"id": "req_1", "success": true, "result": {...}}` | Final response correlated by ID |
 | **Rust → TS** (Error) | `{"id": "req_1", "success": false, "error": "msg"}` | Error correlated by ID |
+| **Rust → TS** (Stream Chunk) | `{"id": "req_1", "stream": true, "data": {...}}` | Intermediate chunk (zero or more) |
 | **Rust → TS** (Event) | `{"event": "ready", "data": {...}}` | Unsolicited event (no ID) |
 
 Your Rust binary reads JSON lines from stdin and writes JSON lines to stdout. That's it. Stderr is free for logging.
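The four message shapes in this table can be told apart purely by their fields. As an illustration (not code from the package), a TypeScript classifier for a received stdout line might look like this:

```typescript
// Illustrative only: maps one stdout line onto the protocol shapes above.
// The package's internal parsing may differ.
type TParsedMessage =
  | { kind: 'response'; id: string; success: boolean }
  | { kind: 'streamChunk'; id: string; data: unknown }
  | { kind: 'event'; name: string; data: unknown }
  | { kind: 'invalid' };

function classifyLine(line: string): TParsedMessage {
  let msg: any;
  try {
    msg = JSON.parse(line);
  } catch {
    return { kind: 'invalid' }; // not JSON: ignore or log such lines
  }
  if (typeof msg?.event === 'string') {
    return { kind: 'event', name: msg.event, data: msg.data }; // no id: unsolicited event
  }
  if (typeof msg?.id === 'string' && msg.stream === true) {
    return { kind: 'streamChunk', id: msg.id, data: msg.data }; // intermediate chunk
  }
  if (typeof msg?.id === 'string' && typeof msg.success === 'boolean') {
    return { kind: 'response', id: msg.id, success: msg.success }; // final response or error
  }
  return { kind: 'invalid' };
}

console.log(classifyLine('{"id":"req_1","stream":true,"data":{"index":0}}').kind); // streamChunk
console.log(classifyLine('{"event":"ready","data":{}}').kind); // event
```

Note that the check order matters: a stream chunk carries an `id` like a response does, so the `stream: true` flag must be tested before the `success` field.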
@@ -49,7 +51,7 @@ Your Rust binary reads JSON lines from stdin and writes JSON lines to stdout. Th
 Start by defining a type map of commands your Rust binary supports:
 
 ```typescript
-import { RustBridge, type ICommandDefinition } from '@push.rocks/smartrust';
+import { RustBridge } from '@push.rocks/smartrust';
 
 // Define your command types
 type TMyCommands = {
@@ -92,7 +94,91 @@ bridge.on('management:configChanged', (data) => {
 bridge.kill();
 ```
 
-### Binary Locator
+### Streaming Commands 🌊
+
+For commands where the Rust binary sends a series of chunks before a final result, use `sendCommandStreaming`. This is perfect for progressive data processing, log tailing, search results, or any scenario where you want incremental output.
+
+#### Defining Streaming Commands
+
+Add a `chunk` field to your command type definition to mark it as streamable:
+
+```typescript
+type TMyCommands = {
+  // Regular command (request → response)
+  ping: { params: {}; result: { pong: boolean } };
+
+  // Streaming command (request → chunks... → final result)
+  processData: { params: { count: number }; chunk: { index: number; progress: number }; result: { totalProcessed: number } };
+  tailLogs: { params: { lines: number }; chunk: string; result: { linesRead: number } };
+};
+```
+
+#### Consuming Streams
+
+```typescript
+// Returns a StreamingResponse immediately (does NOT block)
+const stream = bridge.sendCommandStreaming('processData', { count: 1000 });
+
+// Consume chunks with for-await-of
+for await (const chunk of stream) {
+  console.log(`Processing item ${chunk.index}, progress: ${chunk.progress}%`);
+}
+
+// Get the final result after all chunks are consumed
+const result = await stream.result;
+console.log(`Done! Processed ${result.totalProcessed} items`);
+```
+
+#### Error Handling in Streams
+
+Errors propagate to both the iterator and the `.result` promise:
+
+```typescript
+const stream = bridge.sendCommandStreaming('processData', { count: 100 });
+
+try {
+  for await (const chunk of stream) {
+    console.log(chunk);
+  }
+} catch (err) {
+  console.error('Stream failed:', err.message);
+}
+
+// .result also rejects on error
+try {
+  await stream.result;
+} catch (err) {
+  console.error('Same error here:', err.message);
+}
+```
+
+#### Stream Timeout
+
+By default, streaming commands use the same timeout as regular commands (`requestTimeoutMs`). The timeout **resets on each chunk received**, so it acts as an inactivity timeout rather than an absolute timeout. You can configure it separately:
+
+```typescript
+const bridge = new RustBridge<TMyCommands>({
+  binaryName: 'my-server',
+  requestTimeoutMs: 30000, // regular command timeout: 30s
+  streamTimeoutMs: 60000, // streaming inactivity timeout: 60s
+});
+```
+
+#### Implementing Streaming on the Rust Side
+
+Your Rust binary sends stream chunks by writing lines with `"stream": true` before the final response:
+
+```rust
+// For each chunk:
+println!(r#"{{"id":"{}","stream":true,"data":{{"index":{},"progress":{}}}}}"#, req.id, i, pct);
+io::stdout().flush().unwrap();
+
+// When done, send the final response (same as non-streaming):
+println!(r#"{{"id":"{}","success":true,"result":{{"totalProcessed":{}}}}}"#, req.id, total);
+io::stdout().flush().unwrap();
+```
+
+### Binary Locator 🔍
 
 The `RustBinaryLocator` searches for your binary using a priority-ordered strategy:
 
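The chunks-then-result contract added above can be made concrete with a small self-contained sketch: an object that is both async-iterable and carries a `.result` promise. This is an illustration of the pattern only, not the package's actual `StreamingResponse` implementation:

```typescript
// Minimal sketch of the chunks-then-result pattern (illustrative; the real
// StreamingResponse in the package may be implemented differently).
// A producer pushes chunks in; a final result or error terminates both the
// async iterator and the `result` promise.
class SimpleStream<TChunk, TResult> {
  private queue: TChunk[] = [];
  private done = false;
  private error: Error | null = null;
  private wake: (() => void) | null = null;
  private resolveResult!: (r: TResult) => void;
  private rejectResult!: (e: Error) => void;
  readonly result: Promise<TResult>;

  constructor() {
    this.result = new Promise<TResult>((res, rej) => {
      this.resolveResult = res;
      this.rejectResult = rej;
    });
  }

  // Producer side: called as protocol messages arrive.
  pushChunk(chunk: TChunk): void { this.queue.push(chunk); this.wake?.(); }
  finish(result: TResult): void { this.done = true; this.resolveResult(result); this.wake?.(); }
  fail(err: Error): void { this.done = true; this.error = err; this.rejectResult(err); this.wake?.(); }

  // Consumer side: drains queued chunks, waits for more, ends when finished.
  async *[Symbol.asyncIterator](): AsyncGenerator<TChunk> {
    for (;;) {
      while (this.queue.length > 0) yield this.queue.shift()!;
      if (this.error) throw this.error;
      if (this.done) return;
      await new Promise<void>((res) => { this.wake = res; });
      this.wake = null;
    }
  }
}

// Demo: producer pushes two chunks, then finishes.
(async () => {
  const stream = new SimpleStream<number, string>();
  const consumer = (async () => {
    const seen: number[] = [];
    for await (const c of stream) seen.push(c);
    return { seen, final: await stream.result };
  })();
  stream.pushChunk(1);
  stream.pushChunk(2);
  stream.finish('done');
  const { seen, final } = await consumer;
  console.log(seen, final); // seen = [1, 2], final = 'done'
})();
```

The key design point, which matches the documented behavior, is that a failure terminates both channels: the `for await` loop throws and `.result` rejects with the same error.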
@@ -119,7 +205,7 @@ const binaryPath = await locator.findBinary();
 // Result is cached — call clearCache() to force re-search
 ```
 
-### Configuration Reference
+### Configuration Reference ⚙️
 
 The `RustBridge` constructor accepts an `IRustBridgeOptions` object:
 
@@ -136,14 +222,16 @@ const bridge = new RustBridge<TMyCommands>({
   // --- Bridge Options ---
   cliArgs: ['--management'], // optional: args passed to binary (default: ['--management'])
   requestTimeoutMs: 30000, // optional: per-request timeout (default: 30000)
+  streamTimeoutMs: 30000, // optional: streaming inactivity timeout (default: requestTimeoutMs)
   readyTimeoutMs: 10000, // optional: ready event timeout (default: 10000)
+  maxPayloadSize: 50 * 1024 * 1024, // optional: max message size in bytes (default: 50MB)
   env: { RUST_LOG: 'debug' }, // optional: extra env vars for the child process
   readyEventName: 'ready', // optional: name of the ready event (default: 'ready')
   logger: myLogger, // optional: logger implementing IRustBridgeLogger
 });
 ```
 
-### Events
+### Events 📡
 
 `RustBridge` extends `EventEmitter` and emits the following events:
 
@@ -154,7 +242,7 @@ const bridge = new RustBridge<TMyCommands>({
 | `stderr` | `string` | A line from the binary's stderr |
 | `management:<name>` | `any` | Custom event from Rust (e.g. `management:configChanged`) |
 
-### Custom Logger
+### Custom Logger 📝
 
 Plug in your own logger by implementing the `IRustBridgeLogger` interface:
 
@@ -173,7 +261,7 @@ const bridge = new RustBridge<TMyCommands>({
 });
 ```
 
-### Writing the Rust Side
+### Writing the Rust Side 🦀
 
 Your Rust binary needs to implement a simple protocol:
 
@@ -186,9 +274,11 @@ Your Rust binary needs to implement a simple protocol:
 
 3. **Write JSON responses to stdout**, each as `{"id": "...", "success": true, "result": {...}}\n`
 
-4. **Emit events** anytime by writing `{"event": "name", "data": {...}}\n` to stdout
+4. **For streaming commands**, write zero or more `{"id": "...", "stream": true, "data": {...}}\n` chunks before the final response
 
-5. **Use stderr** for logging — it won't interfere with the IPC protocol
+5. **Emit events** anytime by writing `{"event": "name", "data": {...}}\n` to stdout
+
+6. **Use stderr** for logging — it won't interfere with the IPC protocol
 
 Here's a minimal Rust skeleton:
 
@@ -213,6 +303,13 @@ struct Response {
     error: Option<String>,
 }
 
+#[derive(Serialize)]
+struct StreamChunk {
+    id: String,
+    stream: bool,
+    data: serde_json::Value,
+}
+
 fn main() {
     // Signal ready
     println!(r#"{{"event":"ready","data":{{"version":"1.0.0"}}}}"#);
@@ -223,24 +320,50 @@ fn main() {
         let line = line.unwrap();
         let req: Request = serde_json::from_str(&line).unwrap();
 
-        let response = match req.method.as_str() {
-            "ping" => Response {
-                id: req.id,
-                success: true,
-                result: Some(serde_json::json!({"pong": true})),
-                error: None,
-            },
-            _ => Response {
-                id: req.id,
-                success: false,
-                result: None,
-                error: Some(format!("Unknown method: {}", req.method)),
-            },
-        };
-
-        let json = serde_json::to_string(&response).unwrap();
-        println!("{json}");
-        io::stdout().flush().unwrap();
+        match req.method.as_str() {
+            "ping" => {
+                let resp = Response {
+                    id: req.id,
+                    success: true,
+                    result: Some(serde_json::json!({"pong": true})),
+                    error: None,
+                };
+                println!("{}", serde_json::to_string(&resp).unwrap());
+                io::stdout().flush().unwrap();
+            }
+            "processData" => {
+                let count = req.params["count"].as_u64().unwrap_or(0);
+                // Send stream chunks
+                for i in 0..count {
+                    let chunk = StreamChunk {
+                        id: req.id.clone(),
+                        stream: true,
+                        data: serde_json::json!({"index": i, "progress": ((i + 1) * 100 / count)}),
+                    };
+                    println!("{}", serde_json::to_string(&chunk).unwrap());
+                    io::stdout().flush().unwrap();
+                }
+                // Send final response
+                let resp = Response {
+                    id: req.id,
+                    success: true,
+                    result: Some(serde_json::json!({"totalProcessed": count})),
+                    error: None,
+                };
+                println!("{}", serde_json::to_string(&resp).unwrap());
+                io::stdout().flush().unwrap();
+            }
+            _ => {
+                let resp = Response {
+                    id: req.id,
+                    success: false,
+                    result: None,
+                    error: Some(format!("Unknown method: {}", req.method)),
+                };
+                println!("{}", serde_json::to_string(&resp).unwrap());
+                io::stdout().flush().unwrap();
+            }
+        }
     }
 }
 ```
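The skeleton above can be smoke-tested from TypeScript without compiling any Rust. The sketch below is illustrative only: it spawns a `node -e` child as a stand-in for the binary and speaks the same line protocol (ready event, then a `ping` request/response):

```typescript
// Illustrative round-trip over the line protocol, using a node child process
// as a stand-in for the Rust binary (not part of the package).
import { spawn } from 'node:child_process';
import * as readline from 'node:readline';

// The stand-in "binary": emits the ready event, then answers one ping request.
const fakeBinary = `
  console.log(JSON.stringify({ event: 'ready', data: { version: '1.0.0' } }));
  const rl = require('readline').createInterface({ input: process.stdin });
  rl.on('line', (line) => {
    const req = JSON.parse(line);
    console.log(JSON.stringify({ id: req.id, success: true, result: { pong: true } }));
    process.exit(0);
  });
`;

async function pingRoundTrip(): Promise<boolean> {
  const child = spawn(process.execPath, ['-e', fakeBinary], {
    stdio: ['pipe', 'pipe', 'inherit'], // stderr passes through, as the protocol allows
  });
  const rl = readline.createInterface({ input: child.stdout! });
  child.stdin!.write(JSON.stringify({ id: 'req_1', method: 'ping', params: {} }) + '\n');
  for await (const line of rl) {
    const msg = JSON.parse(line);
    if (msg.event === 'ready') continue; // unsolicited event, no id
    if (msg.id === 'req_1') return msg.success === true && msg.result.pong === true;
  }
  return false;
}

pingRoundTrip().then((ok) => console.log('pong received:', ok));
```

This mirrors what `spawn()` plus `sendCommand('ping', {})` do for you: wait for the ready event, write one request line, and correlate the response by `id`.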
@@ -254,9 +377,17 @@ fn main() {
 | `constructor` | `new RustBridge<T>(options: IRustBridgeOptions)` | Create a new bridge instance |
 | `spawn()` | `Promise<boolean>` | Spawn the binary and wait for ready; returns `false` on failure |
 | `sendCommand(method, params)` | `Promise<TCommands[K]['result']>` | Send a typed command and await the response |
+| `sendCommandStreaming(method, params)` | `StreamingResponse<TChunk, TResult>` | Send a streaming command; returns immediately |
 | `kill()` | `void` | SIGTERM the process, reject pending requests, force SIGKILL after 5s |
 | `running` | `boolean` | Whether the bridge is currently connected |
 
+### `StreamingResponse<TChunk, TResult>`
+
+| Method / Property | Type | Description |
+|---|---|---|
+| `[Symbol.asyncIterator]()` | `AsyncIterator<TChunk>` | Enables `for await...of` consumption of chunks |
+| `result` | `Promise<TResult>` | Resolves with the final result after stream ends |
+
 ### `RustBinaryLocator`
 
 | Method / Property | Signature | Description |
@@ -265,9 +396,9 @@ fn main() {
 | `findBinary()` | `Promise<string \| null>` | Find the binary using the priority search; result is cached |
 | `clearCache()` | `void` | Clear the cached path to force a fresh search |
 
-### Exported Interfaces
+### Exported Interfaces & Types
 
-| Interface | Description |
+| Interface / Type | Description |
 |---|---|
 | `IRustBridgeOptions` | Full configuration for `RustBridge` |
 | `IBinaryLocatorOptions` | Configuration for `RustBinaryLocator` |
@@ -275,8 +406,11 @@ fn main() {
 | `IManagementRequest` | IPC request shape: `{ id, method, params }` |
 | `IManagementResponse` | IPC response shape: `{ id, success, result?, error? }` |
 | `IManagementEvent` | IPC event shape: `{ event, data }` |
+| `IManagementStreamChunk` | IPC stream chunk shape: `{ id, stream: true, data }` |
 | `ICommandDefinition` | Single command definition: `{ params, result }` |
 | `TCommandMap` | `Record<string, ICommandDefinition>` |
+| `TStreamingCommandKeys<T>` | Extracts keys from a command map that have a `chunk` field |
+| `TExtractChunk<T>` | Extracts the chunk type from a streaming command definition |
 
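The two type utilities added at the bottom of this table can be sketched with conditional types. The definitions below show what such helpers typically look like; they are an illustration, not necessarily the package's exact source:

```typescript
// Illustrative definitions (assumed shapes, not copied from the package).
// The command map allows an optional `chunk` field marking streaming commands.
type TCommandMap = Record<string, { params: unknown; result: unknown; chunk?: unknown }>;

// Keys of commands that declare a `chunk` field, i.e. streaming commands.
type TStreamingCommandKeys<T extends TCommandMap> = {
  [K in keyof T]: T[K] extends { chunk: unknown } ? K : never;
}[keyof T];

// The chunk type carried by one streaming command definition.
type TExtractChunk<T> = T extends { chunk: infer C } ? C : never;

// Example command map from earlier in the README:
type TMyCommands = {
  ping: { params: {}; result: { pong: boolean } };
  processData: { params: { count: number }; chunk: { index: number; progress: number }; result: { totalProcessed: number } };
};

// 'processData' type-checks here; 'ping' would be a compile error.
const streamingKey: TStreamingCommandKeys<TMyCommands> = 'processData';
const chunk: TExtractChunk<TMyCommands['processData']> = { index: 0, progress: 0 };
console.log(streamingKey, chunk.index);
```

This is how `sendCommandStreaming` can restrict its `method` parameter to streaming commands only, and type each yielded chunk, at compile time.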
 ## License and Legal Information
 