8 Commits (v2.1.3 ... main)

SHA1        Message                                                                                                                         Date
48f158a98b  2.3.0                                                                                                                           2025-08-30 23:02:49 +00:00
            Some checks failed: Default (tags) / security (push) successful in 42s; test (push) failing after 3m52s; release (push) skipped; metadata (push) skipped
994b1d20fb  feat(streaming): Add streaming support: chunked stream transfers, file send/receive, stream events and helpers                  2025-08-30 23:02:49 +00:00
7ba064584b  2.2.2                                                                                                                           2025-08-29 17:02:50 +00:00
            Some checks failed: Default (tags) / security (push) successful in 36s; test (push) failing after 3m49s; release (push) skipped; metadata (push) skipped
1c08df8e6a  fix(ipc): Propagate per-client disconnects, add proper routing for targeted messages, and remove unused node-ipc deps           2025-08-29 17:02:50 +00:00
44770bf820  2.2.1                                                                                                                           2025-08-29 08:49:04 +00:00
            Some checks failed: Default (tags) / security (push) successful in 27s; test (push) failing after 3m50s; release (push) skipped; metadata (push) skipped
6c77ca1e4c  fix(tests): Remove redundant manual topic handlers from tests and rely on server built-in pub/sub                               2025-08-29 08:49:04 +00:00
350b3f1359  2.2.0                                                                                                                           2025-08-29 08:48:38 +00:00
            Some checks failed: Default (tags) / security (push) successful in 42s; test (push) failing after 3m50s; release (push) skipped; metadata (push) skipped
fa53dcfc4f  feat(ipcclient): Add clientOnly mode to prevent clients from auto-starting servers and improve registration/reconnect behavior  2025-08-29 08:48:38 +00:00
14 changed files with 922 additions and 284 deletions


@@ -1,5 +1,46 @@
# Changelog
## 2025-08-30 - 2.3.0 - feat(streaming)
Add streaming support: chunked stream transfers, file send/receive, stream events and helpers
- Implement chunked streaming protocol in IpcChannel (init / chunk / end / error / cancel messages; wire shapes are sketched after this list)
- Add sendStream, cancelOutgoingStream and cancelIncomingStream methods to IpcChannel
- Expose high-level streaming API on client: sendStream, sendFile, cancelOutgoingStream, cancelIncomingStream
- Expose high-level streaming API on server: sendStreamToClient, sendFileToClient, cancelIncomingStreamFromClient, cancelOutgoingStreamToClient
- Emit 'stream' events from channels/servers/clients with (info, readable) where info includes streamId, meta, headers and clientId
- Add maxConcurrentStreams option (default 32) and enforce concurrent stream limits for incoming/outgoing
- Add SmartIpc.pipeStreamToFile helper to persist incoming streams to disk
- Export stream in smartipc.plugins and update README with streaming usage and examples
- Add comprehensive streaming tests (test/test.streaming.ts) covering large payloads, file transfer, cancellation and concurrency limits
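For readers skimming the changelog, here is a non-normative sketch of the control messages the chunked protocol exchanges. The message types and payload field names are taken verbatim from the `IpcChannel` diff further down; the union type itself is illustrative and not part of the published typings.

```typescript
// Control messages ride inside the normal IIpcMessageEnvelope
// (id, type, timestamp, payload, headers); only type and payload are shown here.
type StreamControlMessage =
  | { type: '__stream_init__'; payload: { streamId: string; meta: Record<string, any> } }
  | { type: '__stream_chunk__'; payload: { streamId: string; chunk: string /* base64-encoded bytes */ } }
  | { type: '__stream_end__'; payload: { streamId: string } }
  | { type: '__stream_error__'; payload: { streamId: string; error: string } }
  | { type: '__stream_cancel__'; payload: { streamId: string } };
```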
## 2025-08-29 - 2.2.2 - fix(ipc)
Propagate per-client disconnects, add proper routing for targeted messages, and remove unused node-ipc deps
- Forward per-client 'clientDisconnected' events from transports up through IpcChannel and IpcServer so higher layers can react and clean up state.
- IpcChannel re-emits 'clientDisconnected' and allows registering handlers for it.
- IpcServer now listens for 'clientDisconnected' to clean up topic subscriptions, remove clients from the map, and emit 'clientDisconnect' (a consumer sketch follows this list).
- sendToClient injects the target clientId into headers so transports can route messages to the correct socket instead of broadcasting.
- broadcast and broadcastTo delegate to sendToClient to ensure messages are routed to intended recipients and errors are attributed to the correct client.
- Transports now emit 'clientDisconnected' with the clientId when known.
- package.json: removed unused node-ipc and @types/node-ipc dependencies (dependency cleanup).
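A minimal consumer of the new per-client disconnect signal, assuming the usual `SmartIpc.createServer` setup from the README. The event names come straight from this diff; the surrounding setup and handler body are illustrative.

```typescript
import { SmartIpc } from '@push.rocks/smartipc';

const server = SmartIpc.createServer({ id: 'svc', socketPath: '/tmp/svc.sock' });
await server.start();

// Emitted by IpcServer after it has cleaned up topic subscriptions and
// removed the client from its map (see the clientDisconnected handling below).
server.on('clientDisconnect', (clientId: string) => {
  console.log(`client ${clientId} disconnected, dropping its session state`);
});
```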
## 2025-08-29 - 2.2.1 - fix(tests)
Remove redundant manual topic handlers from tests and rely on server built-in pub/sub
- Removed manual server.onMessage('__subscribe__') and server.onMessage('__publish__') handlers from test/test.ts
- Tests now rely on the server's built-in publish/subscribe behavior: clients publish directly and subscribers receive messages
- Test code simplified without changing public API or runtime behavior
## 2025-08-29 - 2.2.0 - feat(ipcclient)
Add clientOnly mode to prevent clients from auto-starting servers and improve registration/reconnect behavior
- Introduce a clientOnly option on transports and clients, and support SMARTIPC_CLIENT_ONLY=1 env override to prevent a client from auto-starting a server when connect() encounters ECONNREFUSED/ENOENT (env usage is sketched after this list).
- Update UnixSocketTransport/TcpTransport connect behavior: if clientOnly (or env override) is enabled, reject connect with a descriptive error instead of starting a server (preserves backward compatibility when disabled).
- Make SmartIpc.waitForServer use clientOnly probing to avoid accidental server creation during readiness checks.
- Refactor IpcClient registration flow: extract attemptRegistrationInternal, set didRegisterOnce flag, and automatically re-register on reconnects when previously registered.
- Add and update tests to cover clientOnly behavior, SMARTIPC_CLIENT_ONLY env enforcement, temporary socket paths and automatic cleanup, and other reliability improvements.
- Update README with a new 'Client-Only Mode' section documenting the option, env override, and examples.
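The env override is easiest to see in code. A hedged sketch mirroring the test added in this release: the socket path and ids are made up, while the error text matches the transport change below.

```typescript
import { SmartIpc } from '@push.rocks/smartipc';

// Normally exported by the shell or a service unit; shown inline for clarity.
process.env.SMARTIPC_CLIENT_ONLY = '1';

const client = SmartIpc.createClient({
  id: 'my-daemon',
  socketPath: '/tmp/my-daemon.sock',
  clientId: 'ops-cli',
  connectRetry: { enabled: false }, // fail fast instead of retrying
});

try {
  await client.connect();
} catch (err: any) {
  // e.g. "Server not available (ENOENT); clientOnly prevents auto-start"
  console.error(err.message);
  process.exitCode = 1;
}
```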
## 2025-08-28 - 2.1.3 - fix(classes.ipcchannel)
Normalize heartbeatThrowOnTimeout option parsing and allow registering 'heartbeatTimeout' via IpcChannel.on
@@ -82,3 +123,10 @@ Initial release and a series of patch fixes to core components.
- 1.0.1: initial release.
- 1.0.2 → 1.0.7: a sequence of small core fixes and maintenance updates (repeated "fix(core): update" commits).
## 2025-08-29 - 2.1.4 - feat(transports)
Add client-only mode to prevent unintended server auto-start in Unix/NamedPipe transports; safer probing
- Add `clientOnly?: boolean` to transport options; when true (or `SMARTIPC_CLIENT_ONLY=1`), a client will fail fast on `ECONNREFUSED`/`ENOENT` instead of auto-starting a server.
- Update `SmartIpc.waitForServer()` to probe with `clientOnly: true` to avoid races during readiness checks.
- Extend tests to cover option and env override; update core test to use unique socket path and auto-cleanup.
- Docs: add README section for client-only mode.


@@ -1,6 +1,6 @@
{
"name": "@push.rocks/smartipc",
"version": "2.1.3",
"version": "2.3.0",
"private": false,
"description": "A library for node inter process communication, providing an easy-to-use API for IPC.",
"exports": {
@@ -24,9 +24,7 @@
},
"dependencies": {
"@push.rocks/smartdelay": "^3.0.1",
"@push.rocks/smartrx": "^3.0.10",
"@types/node-ipc": "^9.1.1",
"node-ipc": "^12.0.0"
"@push.rocks/smartrx": "^3.0.10"
},
"keywords": [
"IPC",

pnpm-lock.yaml (generated)

@@ -14,12 +14,6 @@ importers:
'@push.rocks/smartrx':
specifier: ^3.0.10
version: 3.0.10
'@types/node-ipc':
specifier: ^9.1.1
version: 9.2.3
node-ipc:
specifier: ^12.0.0
version: 12.0.0
devDependencies:
'@git.zone/tsbuild':
specifier: ^2.0.22
@@ -1573,9 +1567,6 @@ packages:
'@types/node-forge@1.3.14':
resolution: {integrity: sha512-mhVF2BnD4BO+jtOp7z1CdzaK4mbuK0LLQYAvdOLqHTavxFNq4zA1EmYkpnFjP8HOUzedfQkRnp0E2ulSAYSzAw==}
'@types/node-ipc@9.2.3':
resolution: {integrity: sha512-/MvSiF71fYf3+zwqkh/zkVkZj1hl1Uobre9EMFy08mqfJNAmpR0vmPgOUdEIDVgifxHj6G1vYMPLSBLLxoDACQ==}
'@types/node@22.13.8':
resolution: {integrity: sha512-G3EfaZS+iOGYWLLRCEAXdWK9my08oHNZ+FHluRiggIYJPOXzhOiDgpVCUHaUvyIC5/fj7C/p637jdzC666AOKQ==}
@@ -1879,9 +1870,6 @@ packages:
resolution: {integrity: sha512-4CCmhqt4yqbQQI9REDKCf+N6U3SToC5o7PoKCq4veHvr30TJ2Vmz1mYYF23VC0E7Z13tf4CXh9jXY0VC+Jtdng==}
engines: {node: '>=4'}
cliui@7.0.4:
resolution: {integrity: sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ==}
cliui@8.0.1:
resolution: {integrity: sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==}
engines: {node: '>=12'}
@@ -1967,13 +1955,6 @@ packages:
resolution: {integrity: sha512-TG2hpqe4ELx54QER/S3HQ9SRVnQnGBtKUz5bLQWtYAQ+o6GpgMs6sYUvaiJjVxb+UXwhRhAEP3m7LbsIZ77Hmw==}
engines: {node: '>= 0.8'}
copyfiles@2.4.1:
resolution: {integrity: sha512-fereAvAvxDrQDOXybk3Qu3dPbOoKoysFMWtkY3mv5BsL8//OSZVL5DCLYqgRfY5cWirgRzlC+WSrxp6Bo3eNZg==}
hasBin: true
core-util-is@1.0.3:
resolution: {integrity: sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==}
cors@2.8.5:
resolution: {integrity: sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==}
engines: {node: '>= 0.10'}
@@ -2122,10 +2103,6 @@ packages:
eastasianwidth@0.2.0:
resolution: {integrity: sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==}
easy-stack@1.0.1:
resolution: {integrity: sha512-wK2sCs4feiiJeFXn3zvY0p41mdU5VUgbgs1rNsc/y5ngFUijdWd+iIN8eoyuZHKB8xN6BL4PdWmzqFmxNg6V2w==}
engines: {node: '>=6.0.0'}
ee-first@1.1.1:
resolution: {integrity: sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==}
@@ -2236,10 +2213,6 @@ packages:
resolution: {integrity: sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==}
engines: {node: '>= 0.6'}
event-pubsub@5.0.3:
resolution: {integrity: sha512-2QiHxshejKgJrYMzSI9MEHrvhmzxBL+eLyiM5IiyjDBySkgwS2+tdtnO3gbx8pEisu/yOFCIhfCb63gCEu0yBQ==}
engines: {node: '>=13.0.0'}
eventemitter3@4.0.7:
resolution: {integrity: sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==}
@@ -2635,12 +2608,6 @@ packages:
resolution: {integrity: sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==}
engines: {node: '>=8'}
isarray@0.0.1:
resolution: {integrity: sha1-ihis/Kmo9Bd+Cav8YDiTmwXR7t8=}
isarray@1.0.0:
resolution: {integrity: sha1-u5NdSFgsuhaMBoNJV6VKPgcSTxE=}
isexe@2.0.0:
resolution: {integrity: sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==}
@@ -2658,14 +2625,6 @@ packages:
js-base64@3.7.7:
resolution: {integrity: sha512-7rCnleh0z2CkXhH67J8K1Ytz0b2Y+yxTPL+/KOJoa20hfnVQ/3/T6W/KflYI4bRHRagNeXeU2bkNGI3v1oS/lw==}
js-message@1.0.7:
resolution: {integrity: sha512-efJLHhLjIyKRewNS9EGZ4UpI8NguuL6fKkhRxVuMmrGV2xN/0APGdQYwLFky5w9naebSZ0OwAGp0G6/2Cg90rA==}
engines: {node: '>=0.6.0'}
js-queue@2.0.2:
resolution: {integrity: sha512-pbKLsbCfi7kriM3s1J4DDCo7jQkI58zPLHi0heXPzPlj0hjUsm+FesPUbE0DSbIVIK503A36aUBoCN7eMFedkA==}
engines: {node: '>=1.0.0'}
js-tokens@4.0.0:
resolution: {integrity: sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==}
@@ -3032,11 +2991,6 @@ packages:
mitt@3.0.1:
resolution: {integrity: sha512-vKivATfr97l2/QBCYAkXYDbrIWPM2IIKEl7YPhjCvKlG3kE2gm+uBo6nEXK3M5/Ffh/FLpKExzOQ3JJoJGFKBw==}
mkdirp@1.0.4:
resolution: {integrity: sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==}
engines: {node: '>=10'}
hasBin: true
mongodb-connection-string-url@3.0.2:
resolution: {integrity: sha512-rMO7CGo/9BFwyZABcKAWL8UJwH/Kc2x0g72uhDWzG48URRax5TCIcJ7Rc3RZqffZzO/Gwff/jyKwCU9TN8gehA==}
@@ -3106,13 +3060,6 @@ packages:
resolution: {integrity: sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==}
engines: {node: '>= 6.13.0'}
node-ipc@12.0.0:
resolution: {integrity: sha512-QHJ2gAJiqA3cM7cQiRjLsfCOBRB0TwQ6axYD4FSllQWipEbP6i7Se1dP8EzPKk5J1nCe27W69eqPmCoKyQ61Vg==}
engines: {node: '>=14'}
noms@0.0.0:
resolution: {integrity: sha512-lNDU9VJaOPxUmXcLb+HQFeUgQQPtMI24Gt6hgfuMHRJgMRHMF/qZ4HJD3GDru4sSw9IQl2jPjAYnQrdIeLbwow==}
normalize-newline@4.1.0:
resolution: {integrity: sha512-ff4jKqMI8Xl50/4Mms/9jPobzAV/UK+kXG2XJ/7AqOmxIx8mqfqTIHYxuAnEgJ2AQeBbLnlbmZ5+38Y9A0w/YA==}
engines: {node: '>=12'}
@@ -3291,9 +3238,6 @@ packages:
resolution: {integrity: sha512-4yf0QO/sllf/1zbZWYnvWw3NxCQwLXKzIj0G849LSufP15BXKM0rbD2Z3wVnkMfjdn/CB0Dpp444gYAACdsplg==}
engines: {node: '>=18'}
process-nextick-args@2.0.1:
resolution: {integrity: sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==}
progress@2.0.3:
resolution: {integrity: sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==}
engines: {node: '>=0.4.0'}
@@ -3362,12 +3306,6 @@ packages:
resolution: {integrity: sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==}
hasBin: true
readable-stream@1.0.34:
resolution: {integrity: sha1-Elgg40vIQtLyqq+v5MKRbuMsFXw=}
readable-stream@2.3.8:
resolution: {integrity: sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA==}
readable-stream@3.6.2:
resolution: {integrity: sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==}
engines: {node: '>= 6'}
@@ -3452,9 +3390,6 @@ packages:
engines: {node: '>=8.3.0'}
hasBin: true
safe-buffer@5.1.2:
resolution: {integrity: sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==}
safe-buffer@5.2.1:
resolution: {integrity: sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==}
@@ -3612,12 +3547,6 @@ packages:
resolution: {integrity: sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==}
engines: {node: '>=12'}
string_decoder@0.10.31:
resolution: {integrity: sha1-YuIDvEF2bGwoyfyEMB2rHFMQ+pQ=}
string_decoder@1.1.1:
resolution: {integrity: sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==}
string_decoder@1.3.0:
resolution: {integrity: sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==}
@@ -3646,14 +3575,6 @@ packages:
strnum@2.1.1:
resolution: {integrity: sha512-7ZvoFTiCnGxBtDqJ//Cu6fWtZtc7Y3x+QOirG15wztbdngGSkht27o2pyGWrVy0b4WAy3jbKmnoK6g5VlVNUUw==}
strong-type@0.1.6:
resolution: {integrity: sha512-eJe5caH6Pi5oMMeQtIoBPpvNu/s4jiyb63u5tkHNnQXomK+puyQ5i+Z5iTLBr/xUz/pIcps0NSfzzFI34+gAXg==}
engines: {node: '>=12.0.0'}
strong-type@1.1.0:
resolution: {integrity: sha512-X5Z6riticuH5GnhUyzijfDi1SoXas8ODDyN7K8lJeQK+Jfi4dKdoJGL4CXTskY/ATBcN+rz5lROGn1tAUkOX7g==}
engines: {node: '>=12.21.0'}
strtok3@10.3.4:
resolution: {integrity: sha512-KIy5nylvC5le1OdaaoCJ07L+8iQzJHGH6pWDuzS+d07Cu7n1MZ2x26P8ZKIWfbK02+XIL8Mp4RkWeqdUCrDMfg==}
engines: {node: '>=18'}
@@ -3697,9 +3618,6 @@ packages:
threads@1.7.0:
resolution: {integrity: sha512-Mx5NBSHX3sQYR6iI9VYbgHKBLisyB+xROCBGjjWm1O9wb9vfLxdaGtmT/KCjUqMsSNW6nERzCW3T6H43LqjDZQ==}
through2@2.0.5:
resolution: {integrity: sha512-/mrRod8xqpA+IHSLyGCQ2s8SPHiCDEeQJSep1jqLYeEUClOFG2Qsh+4FU6G9VeqpZnGW/Su8LQGc4YKni5rYSQ==}
through2@4.0.2:
resolution: {integrity: sha512-iOqSav00cVxEEICeD7TjLB1sueEL+81Wpzp2bY17uZjZN0pWZPuo4suZ/61VujxmqSGFfgOcNuTZ85QJwNZQpw==}
@@ -3842,10 +3760,6 @@ packages:
resolution: {integrity: sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==}
engines: {node: '>= 0.8'}
untildify@4.0.0:
resolution: {integrity: sha512-KK8xQ1mkzZeg9inewmFVDNkg3l5LUhoq9kN6iWYB/CC9YMG8HA+c1Q8HwDe6dEX7kErrEVNVBO3fWsVq5iDgtw==}
engines: {node: '>=8'}
upper-case@1.1.3:
resolution: {integrity: sha512-WRbjgmYzgXkCV7zNVpy5YgrHgbBv126rMALQQMrmzOVC4GM2waQ9x7xtm8VU+1yF2kWyPzI9zbZ48n4vSxwfSA==}
@@ -3955,26 +3869,14 @@ packages:
resolution: {integrity: sha512-TEU+nJVUUnA4CYJFLvK5X9AOeH4KvDvhIfm0vV1GaQRtchnG0hgK5p8hw/xjv8cunWYCsiPCSDzObPyhEwq3KQ==}
engines: {node: '>=0.4.0'}
xtend@4.0.2:
resolution: {integrity: sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==}
engines: {node: '>=0.4'}
y18n@5.0.8:
resolution: {integrity: sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==}
engines: {node: '>=10'}
yargs-parser@20.2.9:
resolution: {integrity: sha512-y11nGElTIV+CT3Zv9t7VKl+Q3hTQoT9a1Qzezhhl6Rp21gJ/IVTW7Z3y9EWXhuUBC2Shnf+DX0antecpAwSP8w==}
engines: {node: '>=10'}
yargs-parser@21.1.1:
resolution: {integrity: sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==}
engines: {node: '>=12'}
yargs@16.2.0:
resolution: {integrity: sha512-D1mvvtDG0L5ft/jGWkLpG1+m0eQxOfaBvTNELraWj22wSVUMWxZUvYgJYcKh6jGGIkJFhH4IZPQhR4TKpc8mBw==}
engines: {node: '>=10'}
yargs@17.7.2:
resolution: {integrity: sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==}
engines: {node: '>=12'}
@@ -6988,10 +6890,6 @@ snapshots:
dependencies:
'@types/node': 22.13.8
'@types/node-ipc@9.2.3':
dependencies:
'@types/node': 22.13.8
'@types/node@22.13.8':
dependencies:
undici-types: 6.20.0
@@ -7287,12 +7185,6 @@ snapshots:
clean-stack@1.3.0: {}
cliui@7.0.4:
dependencies:
string-width: 4.2.3
strip-ansi: 6.0.1
wrap-ansi: 7.0.0
cliui@8.0.1:
dependencies:
string-width: 4.2.3
@@ -7372,18 +7264,6 @@ snapshots:
depd: 2.0.0
keygrip: 1.1.0
copyfiles@2.4.1:
dependencies:
glob: 7.2.3
minimatch: 3.1.2
mkdirp: 1.0.4
noms: 0.0.0
through2: 2.0.5
untildify: 4.0.0
yargs: 16.2.0
core-util-is@1.0.3: {}
cors@2.8.5:
dependencies:
object-assign: 4.1.1
@@ -7502,8 +7382,6 @@ snapshots:
eastasianwidth@0.2.0: {}
easy-stack@1.0.1: {}
ee-first@1.1.1: {}
emoji-regex@8.0.0: {}
@@ -7630,11 +7508,6 @@ snapshots:
etag@1.8.1: {}
event-pubsub@5.0.3:
dependencies:
copyfiles: 2.4.1
strong-type: 0.1.6
eventemitter3@4.0.7: {}
express-force-ssl@0.3.2:
@@ -8107,10 +7980,6 @@ snapshots:
dependencies:
is-docker: 2.2.1
isarray@0.0.1: {}
isarray@1.0.0: {}
isexe@2.0.0: {}
isexe@3.1.1: {}
@@ -8123,12 +7992,6 @@ snapshots:
js-base64@3.7.7: {}
js-message@1.0.7: {}
js-queue@2.0.2:
dependencies:
easy-stack: 1.0.1
js-tokens@4.0.0: {}
js-yaml@3.14.1:
@@ -8683,8 +8546,6 @@ snapshots:
mitt@3.0.1: {}
mkdirp@1.0.4: {}
mongodb-connection-string-url@3.0.2:
dependencies:
'@types/whatwg-url': 11.0.5
@@ -8759,18 +8620,6 @@ snapshots:
node-forge@1.3.1: {}
node-ipc@12.0.0:
dependencies:
event-pubsub: 5.0.3
js-message: 1.0.7
js-queue: 2.0.2
strong-type: 1.1.0
noms@0.0.0:
dependencies:
inherits: 2.0.4
readable-stream: 1.0.34
normalize-newline@4.1.0:
dependencies:
replace-buffer: 1.2.1
@@ -8928,8 +8777,6 @@ snapshots:
dependencies:
parse-ms: 4.0.0
process-nextick-args@2.0.1: {}
progress@2.0.3: {}
property-information@7.1.0: {}
@@ -9028,23 +8875,6 @@ snapshots:
minimist: 1.2.8
strip-json-comments: 2.0.1
readable-stream@1.0.34:
dependencies:
core-util-is: 1.0.3
inherits: 2.0.4
isarray: 0.0.1
string_decoder: 0.10.31
readable-stream@2.3.8:
dependencies:
core-util-is: 1.0.3
inherits: 2.0.4
isarray: 1.0.0
process-nextick-args: 2.0.1
safe-buffer: 5.1.2
string_decoder: 1.1.1
util-deprecate: 1.0.2
readable-stream@3.6.2:
dependencies:
inherits: 2.0.4
@@ -9183,8 +9013,6 @@ snapshots:
transitivePeerDependencies:
- supports-color
safe-buffer@5.1.2: {}
safe-buffer@5.2.1: {}
safe-regex-test@1.1.0:
@@ -9395,12 +9223,6 @@ snapshots:
emoji-regex: 9.2.2
strip-ansi: 7.1.0
string_decoder@0.10.31: {}
string_decoder@1.1.1:
dependencies:
safe-buffer: 5.1.2
string_decoder@1.3.0:
dependencies:
safe-buffer: 5.2.1
@@ -9428,10 +9250,6 @@ snapshots:
strnum@2.1.1: {}
strong-type@0.1.6: {}
strong-type@1.1.0: {}
strtok3@10.3.4:
dependencies:
'@tokenizer/token': 0.3.0
@@ -9490,11 +9308,6 @@ snapshots:
transitivePeerDependencies:
- supports-color
through2@2.0.5:
dependencies:
readable-stream: 2.3.8
xtend: 4.0.2
through2@4.0.2:
dependencies:
readable-stream: 3.6.2
@@ -9619,8 +9432,6 @@ snapshots:
unpipe@1.0.0: {}
untildify@4.0.0: {}
upper-case@1.1.3: {}
url@0.11.4:
@@ -9719,24 +9530,10 @@ snapshots:
xmlhttprequest-ssl@2.1.2: {}
xtend@4.0.2: {}
y18n@5.0.8: {}
yargs-parser@20.2.9: {}
yargs-parser@21.1.1: {}
yargs@16.2.0:
dependencies:
cliui: 7.0.4
escalade: 3.2.0
get-caller-file: 2.0.5
require-directory: 2.1.1
string-width: 4.2.3
y18n: 5.0.8
yargs-parser: 20.2.9
yargs@17.7.2:
dependencies:
cliui: 8.0.1

readme.md

@@ -16,6 +16,7 @@ SmartIPC delivers bulletproof Inter-Process Communication for Node.js applicatio
- **CI/Test Ready** - Built-in helpers and race condition prevention for testing
- **Observable** - Real-time metrics, connection tracking, and health monitoring
- **Multiple Patterns** - Request/Response, Pub/Sub, and Fire-and-Forget messaging
- **Streaming Support** - Efficient, backpressure-aware streaming for large data and files
## 📦 Installation
@@ -184,6 +185,90 @@ await publisher.publish('user.login', {
## 💪 Advanced Features
### 📦 Streaming Large Data & Files
SmartIPC supports efficient, backpressure-aware streaming of large payloads using chunked messages. Streams work in both directions and emit a high-level `stream` event for consumption.
Client → Server streaming:
```typescript
// Server side: receive stream
server.on('stream', async (info, readable) => {
if (info.meta?.type === 'file') {
console.log('Receiving file', info.meta.basename, 'from', info.clientId);
}
// Pipe to disk or process chunks
await SmartIpc.pipeStreamToFile(readable, '/tmp/incoming.bin');
});
// Client side: send a stream
const readable = fs.createReadStream('/path/to/local.bin');
await client.sendStream(readable, {
meta: { type: 'file', basename: 'local.bin' },
chunkSize: 64 * 1024 // optional, defaults to 64k
});
```
Server → Client streaming:
```typescript
client.on('stream', async (info, readable) => {
console.log('Got stream from server', info.meta);
await SmartIpc.pipeStreamToFile(readable, '/tmp/from-server.bin');
});
await server.sendStreamToClient(client.getClientId(), fs.createReadStream('/path/server.bin'), {
meta: { type: 'file', basename: 'server.bin' }
});
```
High-level helpers for files:
```typescript
// Client → Server
await client.sendFile('/path/to/bigfile.iso');
// Server → Client
await server.sendFileToClient(clientId, '/path/to/backup.tar');
// Save an incoming stream to a file (both sides)
server.on('stream', async (info, readable) => {
await SmartIpc.pipeStreamToFile(readable, '/data/uploaded/' + info.meta?.basename);
});
```
Events & metadata:
- `channel/server/client` emit `stream` with `(info, readable)`
- `info` contains: `streamId`, `meta` (your metadata, e.g., filename/size), `headers`, and `clientId` (if available)
API summary:
- Client: `sendStream(readable, opts)`, `sendFile(filePath, opts)`, `cancelOutgoingStream(id)`, `cancelIncomingStream(id)`
- Server: `sendStreamToClient(clientId, readable, opts)`, `sendFileToClient(clientId, filePath, opts)`, `cancelIncomingStreamFromClient(clientId, id)`, `cancelOutgoingStreamToClient(clientId, id)`
- Utility: `SmartIpc.pipeStreamToFile(readable, filePath)`
Concurrency and cancellation:
```typescript
// Limit concurrent streams per connection
const server = SmartIpc.createServer({
id: 'svc', socketPath: '/tmp/svc.sock', maxConcurrentStreams: 2
});
// Cancel a stream from the receiver side
server.on('stream', (info, readable) => {
if (info.meta?.shouldCancel) {
(server as any).primaryChannel.cancelIncomingStream(info.streamId, { clientId: info.clientId });
}
});
```
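The example above cancels from the receiver side. Here is a hedged sketch of the sender-side counterpart, using the `streamId` option that `sendStream` accepts so the id is known up front; the file name and timing are illustrative.

```typescript
const streamId = `upload-${Date.now()}`;

const upload = client
  .sendStream(fs.createReadStream('/path/to/huge.bin'), {
    streamId,
    meta: { type: 'file', basename: 'huge.bin' }
  })
  .catch(() => { /* expected if we cancel mid-transfer */ });

// Later, e.g. when the user aborts:
await client.cancelOutgoingStream(streamId);
await upload;
```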
Notes:
- Streaming uses chunked messages under the hood and respects socket backpressure.
- Include `meta` to share context like filename/size; it's delivered with the `stream` event.
- Configure `maxConcurrentStreams` (default: 32) to guard resources.
### 🏁 Server Readiness Detection
Eliminate race conditions in tests and production:
@@ -238,6 +323,33 @@ await client.connect({
});
```
### 🛑 Client-Only Mode (No Auto-Start)
In some setups (CLI + long-running daemon), you want clients to fail fast when no server is available, rather than implicitly becoming the server. Enable client-only mode to prevent the “client becomes server” fallback for Unix domain sockets and Windows named pipes.
```typescript
// Strict client that never auto-starts a server on connect failure
const client = SmartIpc.createClient({
id: 'my-service',
socketPath: '/tmp/my-service.sock',
clientId: 'my-cli',
clientOnly: true, // NEW: disable auto-start fallback
connectRetry: { enabled: false } // optional: fail fast
});
try {
await client.connect();
} catch (err) {
// With clientOnly: true, errors become descriptive
// e.g. "Server not available (ENOENT); clientOnly prevents auto-start"
console.error(err.message);
}
```
- Default: `clientOnly` is `false` to preserve backward compatibility.
- Env override: set `SMARTIPC_CLIENT_ONLY=1` to enforce client-only behavior without code changes.
- Note: `SmartIpc.waitForServer()` internally uses `clientOnly: true` for safe probing.
### 💓 Graceful Heartbeat Monitoring
Keep connections alive without crashing on timeouts:


@@ -192,6 +192,61 @@ tap.test('Client retry should work with delayed server', async () => {
await server.stop();
});
// Test 7: clientOnly prevents client from auto-starting a server
tap.test('clientOnly should prevent auto-start and fail fast', async () => {
const uniqueSocketPath = path.join(os.tmpdir(), `smartipc-clientonly-${Date.now()}.sock`);
const client = smartipc.SmartIpc.createClient({
id: 'clientonly-test',
socketPath: uniqueSocketPath,
clientId: 'co-client-1',
clientOnly: true,
connectRetry: { enabled: false }
});
let failed = false;
try {
await client.connect();
} catch (err: any) {
failed = true;
expect(err.message).toContain('clientOnly prevents auto-start');
}
expect(failed).toBeTrue();
// Ensure no server-side socket was created
expect(fs.existsSync(uniqueSocketPath)).toBeFalse();
});
// Test 8: env SMARTIPC_CLIENT_ONLY enforces clientOnly behavior
tap.test('SMARTIPC_CLIENT_ONLY=1 should enforce clientOnly', async () => {
const uniqueSocketPath = path.join(os.tmpdir(), `smartipc-clientonly-env-${Date.now()}.sock`);
const prev = process.env.SMARTIPC_CLIENT_ONLY;
process.env.SMARTIPC_CLIENT_ONLY = '1';
const client = smartipc.SmartIpc.createClient({
id: 'clientonly-test-env',
socketPath: uniqueSocketPath,
clientId: 'co-client-2',
connectRetry: { enabled: false }
});
let failed = false;
try {
await client.connect();
} catch (err: any) {
failed = true;
expect(err.message).toContain('clientOnly prevents auto-start');
}
expect(failed).toBeTrue();
expect(fs.existsSync(uniqueSocketPath)).toBeFalse();
// restore env
if (prev === undefined) {
delete process.env.SMARTIPC_CLIENT_ONLY;
} else {
process.env.SMARTIPC_CLIENT_ONLY = prev;
}
});
// Cleanup
tap.test('Cleanup test socket', async () => {
try {

test/test.streaming.ts (new file)

@@ -0,0 +1,218 @@
import { expect, tap } from '@git.zone/tstest/tapbundle';
import * as smartipc from '../ts/index.js';
import * as smartdelay from '@push.rocks/smartdelay';
import * as plugins from '../ts/smartipc.plugins.js';
import * as fs from 'fs';
import * as path from 'path';
let server: smartipc.IpcServer;
let client: smartipc.IpcClient;
tap.test('setup TCP server and client (streaming)', async () => {
server = smartipc.SmartIpc.createServer({
id: 'stream-test-server',
host: '127.0.0.1',
port: 19876,
heartbeat: false
});
await server.start();
client = smartipc.SmartIpc.createClient({
id: 'stream-test-server',
host: '127.0.0.1',
port: 19876,
clientId: 'stream-client-1',
heartbeat: false
});
await client.connect();
expect(client.getIsConnected()).toBeTrue();
});
tap.test('client -> server streaming large payload', async () => {
// Create ~5MB buffer
const size = 5 * 1024 * 1024 + 123; // add some non-chunk-aligned bytes
const data = Buffer.alloc(size);
for (let i = 0; i < size; i++) data[i] = i % 251;
const received: Buffer[] = [];
const done = new Promise<void>((resolve, reject) => {
server.on('stream', (info: any, readable: plugins.stream.Readable) => {
// only handle our test stream
if (info?.meta?.direction === 'client-to-server') {
readable.on('data', chunk => received.push(Buffer.from(chunk)));
readable.on('end', resolve);
readable.on('error', reject);
}
});
});
// Send stream from client
const readable = plugins.stream.Readable.from(data);
await client.sendStream(readable, { meta: { direction: 'client-to-server' }, chunkSize: 64 * 1024 });
await done;
const result = Buffer.concat(received);
expect(result.length).toEqual(data.length);
expect(result.equals(data)).toBeTrue();
});
tap.test('server -> client streaming large payload', async () => {
const size = 6 * 1024 * 1024 + 7;
const data = Buffer.alloc(size);
for (let i = 0; i < size; i++) data[i] = (i * 7) % 255;
const received: Buffer[] = [];
const done = new Promise<void>((resolve, reject) => {
client.on('stream', (info: any, readable: plugins.stream.Readable) => {
if (info?.meta?.direction === 'server-to-client') {
readable.on('data', chunk => received.push(Buffer.from(chunk)));
readable.on('end', resolve);
readable.on('error', reject);
}
});
});
const readable = plugins.stream.Readable.from(data);
await server.sendStreamToClient('stream-client-1', readable, { meta: { direction: 'server-to-client' }, chunkSize: 64 * 1024 });
await done;
const result = Buffer.concat(received);
expect(result.length).toEqual(data.length);
expect(result.equals(data)).toBeTrue();
});
tap.test('client -> server file transfer to disk', async () => {
const baseTmp1 = path.join(process.cwd(), '.nogit', 'tmp');
fs.mkdirSync(baseTmp1, { recursive: true });
const tmpDir = fs.mkdtempSync(path.join(baseTmp1, 'tmp-'));
const srcPath = path.join(tmpDir, 'src.bin');
const dstPath = path.join(tmpDir, 'dst.bin');
// Prepare file ~1MB
const size = 1024 * 1024 + 333;
const buf = Buffer.alloc(size);
for (let i = 0; i < size; i++) buf[i] = (i * 11) % 255;
fs.writeFileSync(srcPath, buf);
const done = new Promise<void>((resolve, reject) => {
server.on('stream', async (info: any, readable: plugins.stream.Readable) => {
if (info?.meta?.type === 'file' && info?.meta?.basename === 'src.bin') {
try {
await smartipc.pipeStreamToFile(readable, dstPath);
resolve();
} catch (e) {
reject(e);
}
}
});
});
await client.sendFile(srcPath);
await done;
const out = fs.readFileSync(dstPath);
expect(out.equals(buf)).toBeTrue();
});
tap.test('server -> client file transfer to disk', async () => {
const baseTmp2 = path.join(process.cwd(), '.nogit', 'tmp');
fs.mkdirSync(baseTmp2, { recursive: true });
const tmpDir = fs.mkdtempSync(path.join(baseTmp2, 'tmp-'));
const srcPath = path.join(tmpDir, 'serverfile.bin');
const dstPath = path.join(tmpDir, 'clientfile.bin');
const size = 512 * 1024 + 77;
const buf = Buffer.alloc(size);
for (let i = 0; i < size; i++) buf[i] = (i * 17) % 251;
fs.writeFileSync(srcPath, buf);
const done = new Promise<void>((resolve, reject) => {
client.on('stream', async (info: any, readable: plugins.stream.Readable) => {
if (info?.meta?.type === 'file' && info?.meta?.basename === 'serverfile.bin') {
try {
await smartipc.pipeStreamToFile(readable, dstPath);
resolve();
} catch (e) {
reject(e);
}
}
});
});
await server.sendFileToClient('stream-client-1', srcPath);
await done;
const out = fs.readFileSync(dstPath);
expect(out.equals(buf)).toBeTrue();
});
tap.test('receiver cancels an incoming stream', async () => {
// Create a slow readable that emits many chunks
const bigChunk = Buffer.alloc(128 * 1024, 1);
let pushed = 0;
const readable = new plugins.stream.Readable({
read() {
setTimeout(() => {
if (pushed > 200) {
this.push(null);
} else {
this.push(bigChunk);
pushed++;
}
}, 5);
}
});
let cancelled = false;
const cancelPromise = new Promise<void>((resolve) => {
server.on('stream', (info: any, r: plugins.stream.Readable) => {
if (info?.meta?.direction === 'client-to-server-cancel') {
// cancel after first chunk
r.once('data', async () => {
cancelled = true;
// send cancel back to sender
await (server as any).primaryChannel.cancelIncomingStream(info.streamId, { clientId: info.clientId });
resolve();
});
r.on('error', () => { /* ignore cancellation error */ });
// drain to trigger data
r.resume();
}
});
});
const sendPromise = client
.sendStream(readable, { meta: { direction: 'client-to-server-cancel' } })
.catch(() => { /* expected due to cancel */ });
await cancelPromise;
expect(cancelled).toBeTrue();
await sendPromise;
});
tap.test('enforce maxConcurrentStreams option', async () => {
// Setup separate low-limit server/client
const srv = smartipc.SmartIpc.createServer({ id: 'limit-srv', host: '127.0.0.1', port: 19999, heartbeat: false, maxConcurrentStreams: 1 });
await srv.start();
const cli = smartipc.SmartIpc.createClient({ id: 'limit-srv', host: '127.0.0.1', port: 19999, clientId: 'limit-client', heartbeat: false, maxConcurrentStreams: 1 });
await cli.connect();
const r1 = plugins.stream.Readable.from(Buffer.alloc(256 * 1024));
const r2 = plugins.stream.Readable.from(Buffer.alloc(256 * 1024));
const p1 = cli.sendStream(r1, { meta: { n: 1 } });
let threw = false;
try {
await cli.sendStream(r2, { meta: { n: 2 } });
} catch (e) {
threw = true;
}
expect(threw).toBeTrue();
await p1;
await cli.disconnect();
await srv.stop();
});
tap.test('cleanup streaming test', async () => {
await client.disconnect();
await server.stop();
await smartdelay.delayFor(50);
});
export default tap.start();


@@ -2,6 +2,10 @@ import { expect, tap } from '@git.zone/tstest/tapbundle';
import * as smartipc from '../ts/index.js';
import * as smartdelay from '@push.rocks/smartdelay';
import * as smartpromise from '@push.rocks/smartpromise';
import * as path from 'path';
import * as os from 'os';
const testSocketPath = path.join(os.tmpdir(), `test-smartipc-${Date.now()}.sock`);
let server: smartipc.IpcServer;
let client1: smartipc.IpcClient;
@@ -11,12 +15,13 @@ let client2: smartipc.IpcClient;
tap.test('should create and start an IPC server', async () => {
server = smartipc.SmartIpc.createServer({
id: 'test-server',
socketPath: '/tmp/test-smartipc.sock',
socketPath: testSocketPath,
autoCleanupSocketFile: true,
heartbeat: true,
heartbeatInterval: 2000
});
await server.start();
await server.start({ readyWhen: 'accepting' });
expect(server.getStats().isRunning).toBeTrue();
});
@@ -24,11 +29,12 @@ tap.test('should create and start an IPC server', async () => {
tap.test('should create and connect a client', async () => {
client1 = smartipc.SmartIpc.createClient({
id: 'test-server',
socketPath: '/tmp/test-smartipc.sock',
socketPath: testSocketPath,
clientId: 'client-1',
metadata: { name: 'Test Client 1' },
autoReconnect: true,
heartbeat: true
heartbeat: true,
clientOnly: true
});
await client1.connect();
@@ -76,9 +82,10 @@ tap.test('should handle request/response pattern', async () => {
tap.test('should handle multiple clients', async () => {
client2 = smartipc.SmartIpc.createClient({
id: 'test-server',
socketPath: '/tmp/test-smartipc.sock',
socketPath: testSocketPath,
clientId: 'client-2',
metadata: { name: 'Test Client 2' }
metadata: { name: 'Test Client 2' },
clientOnly: true
});
await client2.connect();
@@ -154,17 +161,6 @@ tap.test('should handle pub/sub pattern', async () => {
messageReceived.resolve();
});
// Server handles the subscription
server.onMessage('__subscribe__', async (payload, clientId) => {
expect(payload.topic).toEqual('news');
});
// Server handles publishing
server.onMessage('__publish__', async (payload, clientId) => {
// Broadcast to all subscribers of the topic
await server.broadcast(`topic:${payload.topic}`, payload.payload);
});
// Client 2 publishes to the topic
await client2.publish('news', { headline: 'Breaking news!' });


@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smartipc',
version: '2.1.3',
version: '2.3.0',
description: 'A library for node inter process communication, providing an easy-to-use API for IPC.'
}


@@ -26,6 +26,8 @@ export interface IIpcChannelOptions extends IIpcTransportOptions {
heartbeatInitialGracePeriodMs?: number;
/** Throw on heartbeat timeout (default: true, set false to emit event instead) */
heartbeatThrowOnTimeout?: boolean;
/** Maximum concurrent streams (incoming/outgoing) */
maxConcurrentStreams?: number;
}
/**
@@ -54,6 +56,12 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
private connectionStartTime: number = Date.now();
private isReconnecting = false;
private isClosing = false;
// Streaming state
private incomingStreams = new Map<string, plugins.stream.PassThrough>();
private incomingStreamMeta = new Map<string, Record<string, any>>();
private outgoingStreams = new Map<string, { cancelled: boolean; abort?: () => void }>();
private activeIncomingStreams = 0;
private activeOutgoingStreams = 0;
// Metrics
private metrics = {
@@ -79,6 +87,7 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
heartbeat: true,
heartbeatInterval: 5000,
heartbeatTimeout: 10000,
maxConcurrentStreams: 32,
...options
};
@@ -128,6 +137,13 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
this.handleMessage(message);
});
// Forward per-client disconnects from transports that support multi-client servers
// We re-emit a 'clientDisconnected' event with the clientId if known so higher layers can act.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(this.transport as any).on?.('clientDisconnected', (_socket: any, clientId?: string) => {
this.emit('clientDisconnected', clientId);
});
this.transport.on('drain', () => {
this.emit('drain');
});
@@ -318,6 +334,105 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
return;
}
// Handle streaming control messages
if (message.type === '__stream_init__') {
const streamId = (message.payload as any)?.streamId as string;
const meta = (message.payload as any)?.meta as Record<string, any> | undefined;
if (typeof streamId === 'string' && streamId.length) {
// Enforce max concurrent incoming streams
if (this.activeIncomingStreams >= (this.options.maxConcurrentStreams || Infinity)) {
const response: IIpcMessageEnvelope = {
id: plugins.crypto.randomUUID(),
type: '__stream_error__',
timestamp: Date.now(),
payload: { streamId, error: 'Max concurrent streams exceeded' },
headers: message.headers?.clientId ? { clientId: message.headers.clientId } : undefined
};
this.transport.send(response).catch(() => {});
return;
}
const pass = new plugins.stream.PassThrough();
this.incomingStreams.set(streamId, pass);
if (meta) this.incomingStreamMeta.set(streamId, meta);
this.activeIncomingStreams++;
// Emit a high-level stream event
const headersClientId = message.headers?.clientId;
const eventPayload = {
streamId,
meta: meta || {},
headers: message.headers || {},
clientId: headersClientId,
};
// Emit as ('stream', info, readable)
this.emit('stream', eventPayload, pass);
}
return;
}
if (message.type === '__stream_chunk__') {
const streamId = (message.payload as any)?.streamId as string;
const chunkB64 = (message.payload as any)?.chunk as string;
const pass = this.incomingStreams.get(streamId);
if (pass && typeof chunkB64 === 'string') {
try {
const chunk = Buffer.from(chunkB64, 'base64');
pass.write(chunk);
} catch (e) {
// If decode fails, destroy stream
pass.destroy(e as Error);
this.incomingStreams.delete(streamId);
this.incomingStreamMeta.delete(streamId);
}
}
return;
}
if (message.type === '__stream_end__') {
const streamId = (message.payload as any)?.streamId as string;
const pass = this.incomingStreams.get(streamId);
if (pass) {
pass.end();
this.incomingStreams.delete(streamId);
this.incomingStreamMeta.delete(streamId);
this.activeIncomingStreams = Math.max(0, this.activeIncomingStreams - 1);
}
return;
}
if (message.type === '__stream_error__') {
const streamId = (message.payload as any)?.streamId as string;
const errMsg = (message.payload as any)?.error as string;
const pass = this.incomingStreams.get(streamId);
if (pass) {
pass.destroy(new Error(errMsg || 'stream error'));
this.incomingStreams.delete(streamId);
this.incomingStreamMeta.delete(streamId);
this.activeIncomingStreams = Math.max(0, this.activeIncomingStreams - 1);
}
return;
}
if (message.type === '__stream_cancel__') {
const streamId = (message.payload as any)?.streamId as string;
// Cancel outgoing stream with same id if present
const ctrl = this.outgoingStreams.get(streamId);
if (ctrl) {
ctrl.cancelled = true;
try { ctrl.abort?.(); } catch {}
this.outgoingStreams.delete(streamId);
this.activeOutgoingStreams = Math.max(0, this.activeOutgoingStreams - 1);
}
// Also cancel any incoming stream if tracked
const pass = this.incomingStreams.get(streamId);
if (pass) {
try { pass.destroy(new Error('stream cancelled')); } catch {}
this.incomingStreams.delete(streamId);
this.incomingStreamMeta.delete(streamId);
this.activeIncomingStreams = Math.max(0, this.activeIncomingStreams - 1);
}
return;
}
// Handle request/response
if (message.correlationId && this.pendingRequests.has(message.correlationId)) {
const pending = this.pendingRequests.get(message.correlationId)!;
@@ -461,7 +576,7 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
* Register a message handler
*/
public on(event: string, handler: (payload: any) => any | Promise<any>): this {
if (event === 'message' || event === 'connect' || event === 'disconnect' || event === 'error' || event === 'reconnecting' || event === 'drain' || event === 'heartbeatTimeout') {
if (event === 'message' || event === 'connect' || event === 'disconnect' || event === 'error' || event === 'reconnecting' || event === 'drain' || event === 'heartbeatTimeout' || event === 'clientDisconnected' || event === 'stream') {
// Special handling for channel events
super.on(event, handler);
} else {
@@ -523,3 +638,129 @@ export class IpcChannel<TRequest = any, TResponse = any> extends plugins.EventEm
};
}
}
/**
* Streaming helpers
*/
export interface IStreamSendOptions {
headers?: Record<string, any>;
chunkSize?: number; // bytes, default 64k
streamId?: string;
meta?: Record<string, any>;
}
export type ReadableLike = NodeJS.ReadableStream | plugins.stream.Readable;
// Extend IpcChannel with a sendStream method
export interface IpcChannel<TRequest, TResponse> {
sendStream(readable: ReadableLike, options?: IStreamSendOptions): Promise<void>;
cancelOutgoingStream(streamId: string, headers?: Record<string, any>): Promise<void>;
cancelIncomingStream(streamId: string, headers?: Record<string, any>): Promise<void>;
}
IpcChannel.prototype.sendStream = async function(this: IpcChannel, readable: ReadableLike, options?: IStreamSendOptions): Promise<void> {
const streamId = options?.streamId || (plugins.crypto.randomUUID ? plugins.crypto.randomUUID() : `${Date.now()}-${Math.random()}`);
const headers = options?.headers || {};
const chunkSize = Math.max(1024, Math.min(options?.chunkSize || 64 * 1024, (this as any).options.maxMessageSize || 8 * 1024 * 1024));
const self: any = this;
// Enforce max concurrent outgoing streams (reserve a slot synchronously)
if (self.activeOutgoingStreams >= (self.options.maxConcurrentStreams || Infinity)) {
throw new Error('Max concurrent streams exceeded');
}
self.activeOutgoingStreams++;
self.outgoingStreams.set(streamId, {
cancelled: false,
abort: () => {
try { (readable as any).destroy?.(new Error('stream cancelled')); } catch {}
}
});
try {
// Send init after reserving slot
await (this as any).sendMessage('__stream_init__', { streamId, meta: options?.meta || {} }, headers);
} catch (e) {
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
throw e;
}
const readChunkAndSend = async (buf: Buffer) => {
// Slice into chunkSize frames if needed
for (let offset = 0; offset < buf.length; offset += chunkSize) {
const ctrl = self.outgoingStreams.get(streamId);
if (ctrl?.cancelled) {
return;
}
const slice = buf.subarray(offset, Math.min(offset + chunkSize, buf.length));
const chunkB64 = slice.toString('base64');
await (this as any).sendMessage('__stream_chunk__', { streamId, chunk: chunkB64 }, headers);
}
};
await new Promise<void>((resolve, reject) => {
let sending = Promise.resolve();
readable.on('data', (chunk: any) => {
const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk);
// Ensure sequential sending to avoid write races
sending = sending.then(() => readChunkAndSend(buf));
sending.catch(reject);
});
readable.on('end', async () => {
try {
await sending;
await (this as any).sendMessage('__stream_end__', { streamId }, headers);
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
resolve();
} catch (e) {
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
reject(e);
}
});
readable.on('error', async (err: Error) => {
try {
await sending.catch(() => {});
await (this as any).sendMessage('__stream_error__', { streamId, error: err.message }, headers);
} finally {
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
reject(err);
}
});
// In case the stream is already ended
const r = readable as any;
if (r.readableEnded) {
(async () => {
await (this as any).sendMessage('__stream_end__', { streamId }, headers);
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
resolve();
})().catch(reject);
}
});
};
IpcChannel.prototype.cancelOutgoingStream = async function(this: IpcChannel, streamId: string, headers?: Record<string, any>): Promise<void> {
const self: any = this;
const ctrl = self.outgoingStreams.get(streamId);
if (ctrl) {
ctrl.cancelled = true;
try { ctrl.abort?.(); } catch {}
self.outgoingStreams.delete(streamId);
self.activeOutgoingStreams = Math.max(0, self.activeOutgoingStreams - 1);
}
await (this as any).sendMessage('__stream_cancel__', { streamId }, headers || {});
};
IpcChannel.prototype.cancelIncomingStream = async function(this: IpcChannel, streamId: string, headers?: Record<string, any>): Promise<void> {
const self: any = this;
const pass = self.incomingStreams.get(streamId);
if (pass) {
try { pass.destroy(new Error('stream cancelled')); } catch {}
self.incomingStreams.delete(streamId);
self.incomingStreamMeta.delete(streamId);
self.activeIncomingStreams = Math.max(0, self.activeIncomingStreams - 1);
}
await (this as any).sendMessage('__stream_cancel__', { streamId }, headers || {});
};


@@ -45,6 +45,7 @@ export class IpcClient extends plugins.EventEmitter {
private messageHandlers = new Map<string, (payload: any) => any | Promise<any>>();
private isConnected = false;
private clientId: string;
private didRegisterOnce = false;
constructor(options: IIpcClientOptions) {
super();
@@ -66,30 +67,7 @@ export class IpcClient extends plugins.EventEmitter {
// Helper function to attempt registration
const attemptRegistration = async (): Promise<void> => {
const registerTimeoutMs = this.options.registerTimeoutMs || 5000;
try {
const response = await this.channel.request<any, any>(
'__register__',
{
clientId: this.clientId,
metadata: this.options.metadata
},
{
timeout: registerTimeoutMs,
headers: { clientId: this.clientId } // Include clientId in headers for proper routing
}
);
if (!response.success) {
throw new Error(response.error || 'Registration failed');
}
this.isConnected = true;
this.emit('connect');
} catch (error) {
throw new Error(`Failed to register with server: ${error.message}`);
}
await this.attemptRegistrationInternal();
};
// Helper function to attempt connection with retry
@@ -144,25 +122,27 @@ export class IpcClient extends plugins.EventEmitter {
// If waitForReady is specified, wait for server socket to exist first
if (connectOptions.waitForReady) {
const waitTimeout = connectOptions.waitTimeout || 10000;
// For Unix domain sockets / named pipes: wait explicitly using helper that probes with clientOnly
if (this.options.socketPath) {
const { SmartIpc } = await import('./index.js');
await (SmartIpc as any).waitForServer({ socketPath: this.options.socketPath, timeoutMs: waitTimeout });
await attemptConnection();
return;
}
// Fallback (e.g., TCP): retry-connect loop
const startTime = Date.now();
while (Date.now() - startTime < waitTimeout) {
try {
// Try to connect
await attemptConnection();
return; // Success!
} catch (error) {
// If it's a connection refused error, server might not be ready yet
if ((error as any).message?.includes('ECONNREFUSED') ||
(error as any).message?.includes('ENOENT')) {
if ((error as any).message?.includes('ECONNREFUSED')) {
await new Promise(resolve => setTimeout(resolve, 100));
continue;
}
// Other errors should be thrown
throw error;
}
}
throw new Error(`Server not ready after ${waitTimeout}ms`);
} else {
// Normal connection attempt
@@ -170,6 +150,38 @@ export class IpcClient extends plugins.EventEmitter {
}
}
/**
* Attempt to register this client over the current channel connection.
* Sets connection flags and emits 'connect' on success.
*/
private async attemptRegistrationInternal(): Promise<void> {
const registerTimeoutMs = this.options.registerTimeoutMs || 5000;
try {
const response = await this.channel.request<any, any>(
'__register__',
{
clientId: this.clientId,
metadata: this.options.metadata
},
{
timeout: registerTimeoutMs,
headers: { clientId: this.clientId }
}
);
if (!response.success) {
throw new Error(response.error || 'Registration failed');
}
this.isConnected = true;
this.didRegisterOnce = true;
this.emit('connect');
} catch (error: any) {
throw new Error(`Failed to register with server: ${error.message}`);
}
}
/**
* Disconnect from the server
*/
@@ -188,8 +200,16 @@ export class IpcClient extends plugins.EventEmitter {
*/
private setupChannelHandlers(): void {
// Forward channel events
this.channel.on('connect', () => {
// Don't emit connect here, wait for successful registration
this.channel.on('connect', async () => {
// On reconnects, re-register automatically when we had connected before
if (this.didRegisterOnce && !this.isConnected) {
try {
await this.attemptRegistrationInternal();
} catch (error) {
this.emit('error', error);
}
}
// For initial connect(), registration is handled explicitly there
});
this.channel.on('disconnect', (reason) => {
@@ -215,6 +235,13 @@ export class IpcClient extends plugins.EventEmitter {
this.emit('reconnecting', info);
});
// Forward streaming events
// Emitted as ('stream', info, readable)
// info contains { streamId, meta, headers, clientId }
this.channel.on('stream', (info: any, readable: plugins.stream.Readable) => {
this.emit('stream', info, readable);
});
// Handle messages
this.channel.on('message', (message) => {
// Check if we have a handler for this message type
@@ -343,4 +370,40 @@ export class IpcClient extends plugins.EventEmitter {
public getStats(): any {
return this.channel.getStats();
}
/**
* Send a Node.js readable stream to the server
*/
public async sendStream(readable: plugins.stream.Readable | NodeJS.ReadableStream, options?: { headers?: Record<string, any>; chunkSize?: number; streamId?: string; meta?: Record<string, any> }): Promise<void> {
const headers = { ...(options?.headers || {}), clientId: this.clientId };
await (this as any).channel.sendStream(readable as any, { ...options, headers });
}
/**
* Send a file to the server via streaming
*/
public async sendFile(filePath: string, options?: { headers?: Record<string, any>; chunkSize?: number; streamId?: string; meta?: Record<string, any> }): Promise<void> {
const fs = plugins.fs;
const path = plugins.path;
const stat = fs.statSync(filePath);
const meta = {
...(options?.meta || {}),
type: 'file',
basename: path.basename(filePath),
size: stat.size,
mtimeMs: stat.mtimeMs
};
const rs = fs.createReadStream(filePath);
await this.sendStream(rs, { ...options, meta });
}
/** Cancel an outgoing stream by id */
public async cancelOutgoingStream(streamId: string): Promise<void> {
await (this as any).channel.cancelOutgoingStream(streamId, { clientId: this.clientId });
}
/** Cancel an incoming stream by id */
public async cancelIncomingStream(streamId: string): Promise<void> {
await (this as any).channel.cancelIncomingStream(streamId, { clientId: this.clientId });
}
}


@@ -200,6 +200,12 @@ export class IpcServer extends plugins.EventEmitter {
this.emit('error', error, 'server');
});
// Forward streaming events to server level
this.primaryChannel.on('stream', (info: any, readable: plugins.stream.Readable) => {
// Emit ('stream', info, readable)
this.emit('stream', info, readable);
});
this.primaryChannel.on('heartbeatTimeout', (error) => {
// Forward heartbeatTimeout event (when heartbeatThrowOnTimeout is false)
this.emit('heartbeatTimeout', error, 'server');
@@ -212,6 +218,17 @@ export class IpcServer extends plugins.EventEmitter {
this.startClientIdleCheck();
this.emit('start');
// Track individual client disconnects forwarded by the channel/transport
this.primaryChannel.on('clientDisconnected', (clientId?: string) => {
if (!clientId) return;
// Clean up any topic subscriptions and client map entry
this.cleanupClientSubscriptions(clientId);
if (this.clients.has(clientId)) {
this.clients.delete(clientId);
this.emit('clientDisconnect', clientId);
}
});
// Handle readiness based on options
if (options.readyWhen === 'accepting') {
// Wait a bit to ensure handlers are fully set up
@@ -375,7 +392,60 @@ export class IpcServer extends plugins.EventEmitter {
throw new Error(`Client ${clientId} not found`);
}
await client.channel.sendMessage(type, payload, headers);
// Ensure the target clientId is part of the headers so the transport
// can route the message to the correct socket instead of broadcasting.
const routedHeaders: Record<string, any> | undefined = {
...(headers || {}),
clientId,
};
await client.channel.sendMessage(type, payload, routedHeaders);
}
/**
* Send a Node.js readable stream to a specific client
*/
public async sendStreamToClient(clientId: string, readable: plugins.stream.Readable | NodeJS.ReadableStream, options?: { headers?: Record<string, any>; chunkSize?: number; streamId?: string; meta?: Record<string, any> }): Promise<void> {
const client = this.clients.get(clientId);
if (!client) {
throw new Error(`Client ${clientId} not found`);
}
const headers = { ...(options?.headers || {}), clientId };
await (client.channel as any).sendStream(readable as any, { ...options, headers });
}
/**
* Send a file to a specific client via streaming
*/
public async sendFileToClient(clientId: string, filePath: string, options?: { headers?: Record<string, any>; chunkSize?: number; streamId?: string; meta?: Record<string, any> }): Promise<void> {
const client = this.clients.get(clientId);
if (!client) {
throw new Error(`Client ${clientId} not found`);
}
const fs = plugins.fs;
const path = plugins.path;
const stat = fs.statSync(filePath);
const meta = {
...(options?.meta || {}),
type: 'file',
basename: path.basename(filePath),
size: stat.size,
mtimeMs: stat.mtimeMs
};
const rs = fs.createReadStream(filePath);
await this.sendStreamToClient(clientId, rs, { ...options, meta });
}
/** Cancel a stream incoming from a client (server side) */
public async cancelIncomingStreamFromClient(clientId: string, streamId: string): Promise<void> {
if (!this.primaryChannel) return;
await (this.primaryChannel as any).cancelIncomingStream(streamId, { clientId });
}
/** Cancel a server->client outgoing stream */
public async cancelOutgoingStreamToClient(clientId: string, streamId: string): Promise<void> {
if (!this.primaryChannel) return;
await (this.primaryChannel as any).cancelOutgoingStream(streamId, { clientId });
}
/**
@@ -401,10 +471,9 @@ export class IpcServer extends plugins.EventEmitter {
public async broadcast(type: string, payload: any, headers?: Record<string, any>): Promise<void> {
const promises: Promise<void>[] = [];
for (const [clientId, client] of this.clients) {
for (const [clientId] of this.clients) {
promises.push(
client.channel.sendMessage(type, payload, headers)
.catch((error) => {
this.sendToClient(clientId, type, payload, headers).catch((error) => {
this.emit('error', error, clientId);
})
);
@@ -427,8 +496,7 @@ export class IpcServer extends plugins.EventEmitter {
for (const [clientId, client] of this.clients) {
if (filter(clientId, client.metadata)) {
promises.push(
client.channel.sendMessage(type, payload, headers)
.catch((error) => {
this.sendToClient(clientId, type, payload, headers).catch((error) => {
this.emit('error', error, clientId);
})
);


@@ -18,6 +18,12 @@ export interface IIpcMessageEnvelope<T = any> {
export interface IIpcTransportOptions {
/** Unique identifier for this transport */
id: string;
/**
* When true, a client transport will NOT auto-start a server when connect()
* encounters ECONNREFUSED/ENOENT. Useful for strict client/daemon setups.
* Default: false. Can also be overridden by env SMARTIPC_CLIENT_ONLY=1.
*/
clientOnly?: boolean;
/** Socket path for Unix domain sockets or pipe name for Windows */
socketPath?: string;
/** TCP host for network transport */
@@ -195,7 +201,21 @@ export class UnixSocketTransport extends IpcTransport {
this.socket.on('error', (error: any) => {
if (error.code === 'ECONNREFUSED' || error.code === 'ENOENT') {
// No server exists, we should become the server
// Determine if we must NOT auto-start server
const envVal = process.env.SMARTIPC_CLIENT_ONLY;
const envClientOnly = !!envVal && (envVal === '1' || envVal === 'true' || envVal === 'TRUE');
const clientOnly = this.options.clientOnly === true || envClientOnly;
if (clientOnly) {
// Reject instead of starting a server to avoid races
const reason = error.code || 'UNKNOWN';
const err = new Error(`Server not available (${reason}); clientOnly prevents auto-start`);
(err as any).code = reason;
reject(err);
return;
}
// No server exists and clientOnly is false: become the server (back-compat)
this.socket = null;
this.startServer(socketPath).then(resolve).catch(reject);
} else {
@@ -247,7 +267,8 @@ export class UnixSocketTransport extends IpcTransport {
this.clientIdToSocket.delete(clientId);
}
this.socketToClientId.delete(socket);
this.emit('clientDisconnected', socket);
// Emit with clientId if known so higher layers can react
this.emit('clientDisconnected', socket, clientId);
});
socket.on('drain', () => {


@@ -6,6 +6,7 @@ export * from './classes.ipcclient.js';
import { IpcServer } from './classes.ipcserver.js';
import { IpcClient } from './classes.ipcclient.js';
import { IpcChannel } from './classes.ipcchannel.js';
import { stream as nodeStream, fs as nodeFs, path as nodePath } from './smartipc.plugins.js';
import type { IIpcServerOptions } from './classes.ipcserver.js';
import type { IIpcClientOptions, IConnectRetryConfig } from './classes.ipcclient.js';
import type { IIpcChannelOptions } from './classes.ipcchannel.js';
@@ -35,6 +36,7 @@ export class SmartIpc {
socketPath: options.socketPath,
clientId: `probe-${process.pid}-${Date.now()}`,
heartbeat: false,
clientOnly: true,
connectRetry: {
enabled: false // Don't retry, we're handling retries here
},
@@ -128,3 +130,21 @@ export class SmartIpc {
// Export the main class as default
export default SmartIpc;
/**
* Helper: pipe an incoming SmartIPC readable stream to a file path.
* Ensures directory exists; resolves on finish.
*/
export async function pipeStreamToFile(readable: NodeJS.ReadableStream, filePath: string): Promise<void> {
// Ensure directory exists
try {
nodeFs.mkdirSync(nodePath.dirname(filePath), { recursive: true });
} catch {}
await new Promise<void>((resolve, reject) => {
const ws = nodeFs.createWriteStream(filePath);
ws.on('finish', () => resolve());
ws.on('error', reject);
readable.on('error', reject);
(readable as any).pipe(ws);
});
}


@@ -11,6 +11,7 @@ import * as os from 'os';
import * as path from 'path';
import * as fs from 'fs';
import * as crypto from 'crypto';
import * as stream from 'stream';
import { EventEmitter } from 'events';
export { net, os, path, fs, crypto, EventEmitter };
export { net, os, path, fs, crypto, stream, EventEmitter };