18 Commits

Author SHA1 Message Date
3c010a3b1b v1.25.2
2026-04-14 20:19:34 +00:00
88768f0586 fix(proxy-engine): improve inbound SIP routing diagnostics and enrich leg media state reporting 2026-04-14 20:19:34 +00:00
0d82a626b5 v1.25.1
2026-04-14 18:58:48 +00:00
30d056f376 fix(proxy-engine): respect explicit inbound route targets and store voicemail in the configured mailbox 2026-04-14 18:58:48 +00:00
89ae12318e v1.25.0
2026-04-14 18:52:13 +00:00
feb3514de4 feat(proxy-engine): add live TTS streaming interactions and incoming number range support 2026-04-14 18:52:13 +00:00
adfc4726fd v1.24.0
2026-04-14 16:35:54 +00:00
06c86d7e81 feat(routing): require explicit inbound DID routes and normalize SIP identities for provider-based number matching 2026-04-14 16:35:54 +00:00
cff70ab179 v1.23.0
2026-04-14 10:45:59 +00:00
51f7560730 feat(runtime): refactor runtime state and proxy event handling for typed WebRTC linking and shared status models 2026-04-14 10:45:59 +00:00
5a280c5c41 v1.22.0
2026-04-12 20:45:08 +00:00
59d8c2557c feat(proxy-engine): add on-demand TTS caching for voicemail and IVR prompts 2026-04-12 20:45:08 +00:00
cfadd7a2b6 v1.21.0
2026-04-11 20:04:56 +00:00
80f710f6d8 feat(providers): replace provider creation modal with a guided multi-step setup flow 2026-04-11 20:04:56 +00:00
9ea57cd659 v1.20.5
2026-04-11 19:20:14 +00:00
c40c726dc3 fix(readme): improve architecture and call flow documentation with Mermaid diagrams 2026-04-11 19:20:14 +00:00
37ba7501fa v1.20.4
2026-04-11 19:17:39 +00:00
24924a1aea fix(deps): bump @design.estate/dees-catalog to ^3.71.1 2026-04-11 19:17:38 +00:00
82 changed files with 560475 additions and 2288 deletions


@@ -1,5 +1,74 @@
# Changelog
## 2026-04-14 - 1.25.2 - fix(proxy-engine)
improve inbound SIP routing diagnostics and enrich leg media state reporting
- Extract inbound called numbers from DID-related SIP headers when the request URI contains a provider account username.
- Emit detailed sip_unhandled diagnostics for inbound route misses, missing devices, and RTP allocation failures.
- Include codec, RTP port, remote media, and metadata in leg state change events and preserve those fields in runtime status/history views.
- Match hostname-based providers against resolved inbound source IPs to accept provider traffic sent from resolved addresses.
- Invalidate cached TTS WAV metadata across engine restarts and vendor the kokoro-tts crate via a local patch.
## 2026-04-14 - 1.25.1 - fix(proxy-engine)
respect explicit inbound route targets and store voicemail in the configured mailbox
- Prevent inbound routes with an explicit empty target list from ringing arbitrary registered devices by distinguishing omitted targets from empty targets.
- Route unrouted or no-target inbound calls to voicemail with a generated unrouted greeting instead of falling back to random devices.
- Pass voicemail box identifiers through proxy events and runtime handling so recordings are saved and indexed under the correct mailbox instead of always using default.
## 2026-04-14 - 1.25.0 - feat(proxy-engine)
add live TTS streaming interactions and incoming number range support
- add a new start_tts_interaction command and bridge API to begin IVR or leg interactions before full TTS rendering completes
- stream synthesized TTS chunks into the mixer with cancellation handling so prompts can stop cleanly on digit match, leg removal, or shutdown
- extract PCM-to-mixer frame conversion for reusable live prompt processing
- extend routing pattern matching to support numeric number ranges like start..end, including + prefixed values
- add incomingNumbers config typing and frontend config update support for single, range, and regex number modes
## 2026-04-14 - 1.24.0 - feat(routing)
require explicit inbound DID routes and normalize SIP identities for provider-based number matching
- Inbound route resolution now returns no match unless a configured inbound route explicitly matches the provider and called number.
- Normalized routing identities were added for SIP/TEL URIs so inbound DIDs and outbound dialed numbers match consistently across provider-specific formats.
- Call handling and incoming call events now use normalized numbers, improving routing accuracy for shared trunk providers.
- Route configuration docs and the web route editor were updated to support explicit inbound DID ownership, voicemail fallback, and IVR selection.
- Mixer RTP handling was enhanced to better support variable packet durations, timestamp-based gap fill, and non-blocking output drop reporting.
## 2026-04-14 - 1.23.0 - feat(runtime)
refactor runtime state and proxy event handling for typed WebRTC linking and shared status models
- extract proxy event handling into dedicated runtime modules for status tracking and WebRTC session-to-call linking
- introduce shared typed proxy event and status interfaces used by both backend and web UI
- update web UI server initialization to use structured options and await async config save hooks
- simplify browser signaling by routing WebRTC offer/ICE handling through frontend-to-Rust integration
- align device status rendering with the new address/port fields in dashboard views
## 2026-04-12 - 1.22.0 - feat(proxy-engine)
add on-demand TTS caching for voicemail and IVR prompts
- Route inbound calls directly to configured IVR menus and track them with a dedicated IVR call state
- Generate voicemail greetings and IVR menu prompts inside the Rust proxy engine on demand instead of precomputing prompts in TypeScript
- Add cacheable TTS output with sidecar metadata and enable Kokoro CMUdict support for improved prompt generation
- Extend proxy configuration to include voiceboxes and IVR menus, and update documentation to reflect Kokoro-only prompt generation
## 2026-04-11 - 1.21.0 - feat(providers)
replace provider creation modal with a guided multi-step setup flow
- Adds a stepper-based provider creation flow with provider type selection, connection, credentials, advanced settings, and review steps.
- Applies built-in templates for Sipgate and O2/Alice from the selected provider type instead of separate add actions.
- Adds a final review step with generated provider ID preview and duplicate ID collision handling before saving.
## 2026-04-11 - 1.20.5 - fix(readme)
improve architecture and call flow documentation with Mermaid diagrams
- Replace ASCII architecture and audio pipeline diagrams with Mermaid diagrams for better readability
- Document the WebRTC browser call setup sequence, including offer handling and session-to-call linking
## 2026-04-11 - 1.20.4 - fix(deps)
bump @design.estate/dees-catalog to ^3.71.1
- Updates the @design.estate/dees-catalog dependency from ^3.70.0 to ^3.71.1 in package.json.
## 2026-04-11 - 1.20.3 - fix(ts-config,proxybridge,voicebox)
align voicebox config types and add missing proxy bridge command definitions


@@ -1,6 +1,6 @@
{
"name": "siprouter",
-"version": "1.20.3",
+"version": "1.25.2",
"private": true,
"type": "module",
"scripts": {
@@ -13,7 +13,7 @@
"restartBackground": "pnpm run buildRust && pnpm run bundle; test -f .server.pid && kill $(cat .server.pid) 2>/dev/null; sleep 1; rm -f sip_trace.log proxy.out && nohup tsx ts/sipproxy.ts > proxy.out 2>&1 & echo $! > .server.pid; sleep 2; cat proxy.out"
},
"dependencies": {
-"@design.estate/dees-catalog": "^3.70.0",
+"@design.estate/dees-catalog": "^3.77.0",
"@design.estate/dees-element": "^2.2.4",
"@push.rocks/smartrust": "^1.3.2",
"@push.rocks/smartstate": "^2.3.0",

pnpm-lock.yaml generated

@@ -9,8 +9,8 @@ importers:
.:
dependencies:
'@design.estate/dees-catalog':
-specifier: ^3.70.0
-version: 3.70.0(@tiptap/pm@2.27.2)
+specifier: ^3.77.0
+version: 3.77.0(@tiptap/pm@2.27.2)
'@design.estate/dees-element':
specifier: ^2.2.4
version: 2.2.4
@@ -81,8 +81,8 @@ packages:
'@configvault.io/interfaces@1.0.17':
resolution: {integrity: sha512-bEcCUR2VBDJsTin8HQh8Uw/mlYl2v8A3jMIaQ+MTB9Hrqd6CZL2dL7iJdWyFl/3EIX+LDxWFR+Oq7liIq7w+1Q==}
-'@design.estate/dees-catalog@3.70.0':
-resolution: {integrity: sha512-bNqOxxl83FNCCV+7QoUj6oeRC0VTExWOClrLrHNMoLIU0TCtzhcmQqiuJhdWrcCwZ5RBhXHGMSFsR62d2RcWpw==}
+'@design.estate/dees-catalog@3.77.0':
+resolution: {integrity: sha512-2IfvH390WXCF733XcmEcUP9skqogTz9xlqQw5PUJZy0u2Hf6+hJTyQOi4mcKmhpTE/kCpaD51uw21Lr4ncW6cg==}
'@design.estate/dees-comms@1.0.30':
resolution: {integrity: sha512-KchMlklJfKAjQiJiR0xmofXtQ27VgZtBIxcMwPE9d+h3jJRv+lPZxzBQVOM0eyM0uS44S5vJMZ11IeV4uDXSHg==}
@@ -93,8 +93,8 @@ packages:
'@design.estate/dees-element@2.2.4':
resolution: {integrity: sha512-O9cA6flBMMd+pBwMQrZXwAWel9yVxgokolb+Em6gvkXxPJ0P/B5UDn4Vc2d4ts3ta55PTBm+l2dPeDVGx/bl7Q==}
-'@design.estate/dees-wcctools@3.8.0':
-resolution: {integrity: sha512-CC14iVKUrguzD9jIrdPBd9fZ4egVJEZMxl5y8iy0l7WLumeoYvGsoXj5INVkRPLRVLqziIdi4Je1hXqHt2NU+g==}
+'@design.estate/dees-wcctools@3.8.4':
+resolution: {integrity: sha512-KpFK/azK+a/Xpq33pXKcho+tdFKVHhKZM5ArvHqo9QMwTczgp5DZZgowTDUuqAofjZwnuVfCPHK/Pw9e64N46A==}
'@emnapi/core@1.9.2':
resolution: {integrity: sha512-UC+ZhH3XtczQYfOlu3lNEkdW/p4dsJ1r/bP7H8+rhao3TTTMO1ATq/4DdIi23XuGoFY+Cz0JmCbdVl0hz9jZcA==}
@@ -1566,8 +1566,8 @@ packages:
humanize-ms@1.2.1:
resolution: {integrity: sha1-xG4xWaKT9riW2ikxbYtv6Lt5u+0=}
-ibantools@4.5.2:
-resolution: {integrity: sha512-is+8TgZcKS/AMv/z9nW1zz0bhjhoyjpA1p0nc3A6GkW/InOdcQiUZpkufADzh/aO/LY/TOD/P3oPWncNRn5QMA==}
+ibantools@4.5.4:
+resolution: {integrity: sha512-6jX1gh4aH6XH+o0ey+wtkMTzkcvsEta7DakIOZSng9voZYpMw3U+gK1+tZChk3aRcPcloEt0NOzksjaRZiqXbw==}
iconv-lite@0.4.24:
resolution: {integrity: sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==}
@@ -1694,8 +1694,8 @@ packages:
resolution: {integrity: sha512-JvNw9Y81y33E+BEYPr0U7omo+U9AySnsMsEiXgwT6yqd31VQWTLNQqmT4ou5eqPFUrTfIDFta2wKhB1hyohtAQ==}
engines: {node: 20 || >=22}
-lucide@0.577.0:
-resolution: {integrity: sha512-PpC/m5eOItp/WU/GlQPFBXDOhq6HibL73KzYP37OX3LM7VmzWQF8voEj8QRWUFvy9FIKfeDQkWYoyS1D/MdWFA==}
+lucide@1.8.0:
+resolution: {integrity: sha512-JjV/QnadgFLj1Pyu9IKl0lknrolFEzo04B64QcYLLeRzZl/iEHpdbSrRRKbyXcv45SZNv+WGjIUCT33e7xHO6Q==}
make-dir@3.1.0:
resolution: {integrity: sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==}
@@ -2462,7 +2462,7 @@ snapshots:
'@api.global/typedrequest-interfaces': 3.0.19
'@api.global/typedsocket': 4.1.2(@push.rocks/smartserve@2.0.3)
'@cloudflare/workers-types': 4.20260409.1
-'@design.estate/dees-catalog': 3.70.0(@tiptap/pm@2.27.2)
+'@design.estate/dees-catalog': 3.77.0(@tiptap/pm@2.27.2)
'@design.estate/dees-comms': 1.0.30
'@push.rocks/lik': 6.4.0
'@push.rocks/smartdelay': 3.0.5
@@ -2529,11 +2529,11 @@ snapshots:
dependencies:
'@api.global/typedrequest-interfaces': 3.0.19
-'@design.estate/dees-catalog@3.70.0(@tiptap/pm@2.27.2)':
+'@design.estate/dees-catalog@3.77.0(@tiptap/pm@2.27.2)':
dependencies:
'@design.estate/dees-domtools': 2.5.4
'@design.estate/dees-element': 2.2.4
-'@design.estate/dees-wcctools': 3.8.0
+'@design.estate/dees-wcctools': 3.8.4
'@fortawesome/fontawesome-svg-core': 7.2.0
'@fortawesome/free-brands-svg-icons': 7.2.0
'@fortawesome/free-regular-svg-icons': 7.2.0
@@ -2551,9 +2551,9 @@ snapshots:
'@tsclass/tsclass': 9.5.0
echarts: 5.6.0
highlight.js: 11.11.1
-ibantools: 4.5.2
+ibantools: 4.5.4
lightweight-charts: 5.1.0
-lucide: 0.577.0
+lucide: 1.8.0
monaco-editor: 0.55.1
pdfjs-dist: 4.10.38
xterm: 5.3.0
@@ -2610,7 +2610,7 @@ snapshots:
- supports-color
- vue
-'@design.estate/dees-wcctools@3.8.0':
+'@design.estate/dees-wcctools@3.8.4':
dependencies:
'@design.estate/dees-domtools': 2.5.4
'@design.estate/dees-element': 2.2.4
@@ -4369,7 +4369,7 @@ snapshots:
dependencies:
ms: 2.1.3
-ibantools@4.5.2: {}
+ibantools@4.5.4: {}
iconv-lite@0.4.24:
dependencies:
@@ -4487,7 +4487,7 @@ snapshots:
lru-cache@11.3.3: {}
-lucide@0.577.0: {}
+lucide@1.8.0: {}
make-dir@3.1.0:
dependencies:

readme.md

@@ -20,7 +20,7 @@ siprouter sits between your SIP trunk providers and your endpoints — hardware
- 🎯 **Adaptive Jitter Buffer** — Per-leg jitter buffering with sequence-based reordering, adaptive depth (60–120ms), Opus PLC for lost packets, and hold/resume detection
- 📧 **Voicemail** — Configurable voicemail boxes with TTS greetings, recording, and web playback
- 🔢 **IVR Menus** — DTMF-navigable interactive voice response with nested menus, routing actions, and custom prompts
-- 🗣️ **Neural TTS** — Kokoro-powered announcements and greetings with 25+ voice presets, backed by espeak-ng fallback
+- 🗣️ **Neural TTS** — Kokoro-powered greetings and IVR prompts with 25+ voice presets
- 🎙️ **Call Recording** — Per-source separated WAV recording at 48kHz via tool legs
- 🖥️ **Web Dashboard** — Real-time SPA with 9 views: live calls, browser phone, routing, voicemail, IVR, contacts, providers, and streaming logs
@@ -28,39 +28,26 @@ siprouter sits between your SIP trunk providers and your endpoints — hardware
## 🏗️ Architecture
-```
-┌─────────────────────────────────────┐
-│ Browser Softphone │
-│ (WebRTC via WebSocket signaling) │
-└──────────────┬──────────────────────┘
-│ Opus/WebRTC
-┌──────────────────────────────────────┐
-│ siprouter │
-│ TypeScript Control Plane │
-│ ┌────────────────────────────────┐ │
-│ │ Config · WebRTC Signaling │ │
-│ │ REST API · Web Dashboard │ │
-│ │ Voicebox Manager · TTS Cache │ │
-│ └────────────┬───────────────────┘ │
-│ │ JSON-over-stdio IPC │
-│ ┌────────────┴───────────────────┐ │
-│ │ Rust proxy-engine (data plane) │ │
-│ │ │ │
-│ │ SIP Stack · Dialog SM · Auth │ │
-│ │ Call Manager · N-Leg Mixer │ │
-│ │ 48kHz f32 Bus · Jitter Buffer │ │
-│ │ Codec Engine · RTP Port Pool │ │
-│ │ WebRTC Engine · Kokoro TTS │ │
-│ │ Voicemail · IVR · Recording │ │
-│ └────┬──────────────────┬────────┘ │
-└───────┤──────────────────┤───────────┘
-│ │
-┌──────┴──────┐ ┌──────┴──────┐
-│ SIP Devices │ │ SIP Trunk │
-│ (HT801 etc) │ │ Providers │
-└─────────────┘ └─────────────┘
-```
+```mermaid
+flowchart TB
+Browser["🌐 Browser Softphone<br/>(WebRTC via WebSocket signaling)"]
+Devices["📞 SIP Devices<br/>(HT801, desk phones, ATAs)"]
+Trunks["☎️ SIP Trunk Providers<br/>(sipgate, easybell, …)"]
+subgraph Router["siprouter"]
+direction TB
+subgraph TS["TypeScript Control Plane"]
+TSBits["Config · WebRTC Signaling<br/>REST API · Web Dashboard<br/>Voicebox Manager · TTS Cache"]
+end
+subgraph Rust["Rust proxy-engine (data plane)"]
+RustBits["SIP Stack · Dialog SM · Auth<br/>Call Manager · N-Leg Mixer<br/>48kHz f32 Bus · Jitter Buffer<br/>Codec Engine · RTP Port Pool<br/>WebRTC Engine · Kokoro TTS<br/>Voicemail · IVR · Recording"]
+end
+TS <-->|"JSON-over-stdio IPC"| Rust
+end
+Browser <-->|"Opus / WebRTC"| TS
+Rust <-->|"SIP / RTP"| Devices
+Rust <-->|"SIP / RTP"| Trunks
+```
### 🧠 Key Design Decisions
@@ -71,6 +58,37 @@ siprouter sits between your SIP trunk providers and your endpoints — hardware
- **Per-Session Codec Isolation** — Each call leg gets its own encoder/decoder/resampler/denoiser state — no cross-call corruption.
- **SDP Codec Negotiation** — Outbound encoding uses the codec actually negotiated in SDP answers, not just the first offered codec.
### 📲 WebRTC Browser Call Flow
Browser calls are set up in a strict three-step dance — the WebRTC leg cannot be attached at call-creation time because the browser's session ID is only known once the SDP offer arrives:
```mermaid
sequenceDiagram
participant B as Browser
participant TS as TypeScript (sipproxy.ts)
participant R as Rust proxy-engine
participant P as SIP Provider
B->>TS: POST /api/call
TS->>R: make_call (pending call, no WebRTC leg yet)
R-->>TS: call_created
TS-->>B: webrtc-incoming (callId)
B->>TS: webrtc-offer (sessionId, SDP)
TS->>R: handle_webrtc_offer
R-->>TS: webrtc-answer (SDP)
TS-->>B: webrtc-answer
Note over R: Standalone WebRTC session<br/>(not yet attached to call)
B->>TS: webrtc_link (callId + sessionId)
TS->>R: link session → call
R->>R: wire WebRTC leg through mixer
R->>P: SIP INVITE
P-->>R: 200 OK + SDP
R-->>TS: call_answered
Note over B,P: Bidirectional Opus ↔ codec-transcoded<br/>audio flows through the mixer
```
---
## 🚀 Getting Started
@@ -80,7 +98,6 @@ siprouter sits between your SIP trunk providers and your endpoints — hardware
- **Node.js** ≥ 20 with `tsx` globally available
- **pnpm** for package management
- **Rust** toolchain (for building the proxy engine)
-- **espeak-ng** (optional, for TTS fallback)
### Install & Build
@@ -131,24 +148,41 @@ Create `.nogit/config.json`:
"routing": {
"routes": [
{
-"id": "inbound-default",
-"name": "Ring all devices",
-"priority": 100,
-"direction": "inbound",
-"match": {},
+"id": "inbound-main-did",
+"name": "Main DID",
+"priority": 200,
+"enabled": true,
+"match": {
+"direction": "inbound",
+"sourceProvider": "my-trunk",
+"numberPattern": "+49421219694"
+},
"action": {
"targets": ["desk-phone"],
"ringBrowsers": true,
-"voicemailBox": "main",
-"noAnswerTimeout": 25
+"voicemailBox": "main"
}
},
+{
+"id": "inbound-support-did",
+"name": "Support DID",
+"priority": 190,
+"enabled": true,
+"match": {
+"direction": "inbound",
+"sourceProvider": "my-trunk",
+"numberPattern": "+49421219695"
+},
+"action": {
+"ivrMenuId": "support-menu"
+}
+},
{
"id": "outbound-default",
"name": "Route via trunk",
"priority": 100,
-"direction": "outbound",
-"match": {},
+"enabled": true,
+"match": { "direction": "outbound" },
"action": { "provider": "my-trunk" }
}
]
@@ -170,9 +204,11 @@ Create `.nogit/config.json`:
}
```
Inbound number ownership is explicit: add one inbound route per DID (or DID prefix) and scope it with `sourceProvider` when a provider delivers multiple external numbers.
### TTS Setup (Optional)
-For neural announcements and voicemail greetings, download the Kokoro TTS model:
+For neural voicemail greetings and IVR prompts, download the Kokoro TTS model:
```bash
mkdir -p .nogit/tts
@@ -182,7 +218,7 @@ curl -L -o .nogit/tts/voices.bin \
https://github.com/mzdk100/kokoro/releases/download/V1.0/voices.bin
```
-Without the model files, TTS falls back to `espeak-ng`. Without either, announcements are skipped — everything else works fine.
+Without the model files, TTS prompts (IVR menus, voicemail greetings) are skipped — everything else works fine.
### Run
@@ -209,7 +245,6 @@ siprouter/
│ ├── frontend.ts # Web dashboard HTTP/WS server + REST API
│ ├── webrtcbridge.ts # WebRTC signaling layer
│ ├── registrar.ts # Browser softphone registration
-│ ├── announcement.ts # TTS announcement generator (espeak-ng / Kokoro)
│ ├── voicebox.ts # Voicemail box management
│ └── call/
│ └── prompt-cache.ts # Named audio prompt WAV management
@@ -246,9 +281,17 @@ The `proxy-engine` binary handles all real-time audio processing with a **48kHz
### Audio Pipeline
```
Inbound: Wire RTP → Jitter Buffer → Decode → Resample to 48kHz → Denoise (RNNoise) → Mix Bus
Outbound: Mix Bus → Mix-Minus → Resample to codec rate → Encode → Wire RTP
```mermaid
flowchart LR
subgraph Inbound["Inbound path (per leg)"]
direction LR
IN_RTP["Wire RTP"] --> IN_JB["Jitter Buffer"] --> IN_DEC["Decode"] --> IN_RS["Resample → 48 kHz"] --> IN_DN["Denoise (RNNoise)"] --> IN_BUS["Mix Bus"]
end
subgraph Outbound["Outbound path (per leg)"]
direction LR
OUT_BUS["Mix Bus"] --> OUT_MM["Mix-Minus"] --> OUT_RS["Resample → codec rate"] --> OUT_ENC["Encode"] --> OUT_RTP["Wire RTP"]
end
```
- **Adaptive jitter buffer** — per-leg `BTreeMap`-based buffer keyed by RTP sequence number. Delivers exactly one frame per 20ms mixer tick in sequence order. Adaptive target depth starts at 3 frames (60ms) and adjusts between 2–6 frames based on observed network jitter. Handles hold/resume by detecting large forward sequence jumps and resetting cleanly.
@@ -262,13 +305,12 @@ Outbound: Mix Bus → Mix-Minus → Resample to codec rate → Encode → Wire
## 🗣️ Neural TTS
-Announcements and voicemail greetings are synthesized using [Kokoro TTS](https://github.com/mzdk100/kokoro) — an 82M parameter neural model running via ONNX Runtime directly in the Rust process:
+Voicemail greetings and IVR prompts are synthesized using [Kokoro TTS](https://github.com/mzdk100/kokoro) — an 82M parameter neural model running via ONNX Runtime directly in the Rust process:
- **24 kHz, 16-bit mono** output
- **25+ voice presets** — American/British, male/female (e.g., `af_bella`, `am_adam`, `bf_emma`, `bm_george`)
- **~800ms** synthesis time for a 3-second phrase
- Lazy-loaded on first use — no startup cost if TTS is unused
-- Falls back to `espeak-ng` if the ONNX model is not available
---

rust/Cargo.lock generated

@@ -532,6 +532,15 @@ dependencies = [
"cc",
]
[[package]]
name = "cmudict-fast"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c9f73004e928ed46c3e7fd7406d2b12c8674153295f08af084b49860276dc02"
dependencies = [
"thiserror",
]
[[package]]
name = "codec-lib"
version = "0.1.0"
@@ -1724,12 +1733,11 @@ dependencies = [
[[package]]
name = "kokoro-tts"
version = "0.3.2"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "68e5d46e20a28fa5fd313d9ffcf4bbcf41570e64841d3944c832eef6b98d208b"
dependencies = [
"bincode 2.0.1",
"cc",
"chinese-number",
+"cmudict-fast",
"futures",
"jieba-rs",
"log",


@@ -9,3 +9,6 @@ resolver = "2"
[profile.release]
opt-level = 3
lto = true
+[patch.crates-io]
+kokoro-tts = { path = "vendor/kokoro-tts" }


@@ -115,9 +115,8 @@ pub struct TranscodeState {
impl TranscodeState {
/// Create a new transcoding session with fresh codec state.
pub fn new() -> Result<Self, String> {
-let mut opus_enc =
-OpusEncoder::new(SampleRate::Hz48000, Channels::Mono, Application::Voip)
-.map_err(|e| format!("opus encoder: {e}"))?;
+let mut opus_enc = OpusEncoder::new(SampleRate::Hz48000, Channels::Mono, Application::Voip)
+.map_err(|e| format!("opus encoder: {e}"))?;
opus_enc
.set_complexity(5)
.map_err(|e| format!("opus set_complexity: {e}"))?;
@@ -160,14 +159,9 @@ impl TranscodeState {
let key = (from_rate, to_rate, canonical_chunk);
if !self.resamplers.contains_key(&key) {
-let r = FftFixedIn::<f64>::new(
-from_rate as usize,
-to_rate as usize,
-canonical_chunk,
-1,
-1,
-)
-.map_err(|e| format!("resampler {from_rate}->{to_rate}: {e}"))?;
+let r =
+FftFixedIn::<f64>::new(from_rate as usize, to_rate as usize, canonical_chunk, 1, 1)
+.map_err(|e| format!("resampler {from_rate}->{to_rate}: {e}"))?;
self.resamplers.insert(key, r);
}
let resampler = self.resamplers.get_mut(&key).unwrap();
@@ -284,8 +278,7 @@ impl TranscodeState {
match pt {
PT_OPUS => {
let mut pcm = vec![0i16; 5760]; // up to 120ms at 48kHz
-let packet =
-OpusPacket::try_from(data).map_err(|e| format!("opus packet: {e}"))?;
+let packet = OpusPacket::try_from(data).map_err(|e| format!("opus packet: {e}"))?;
let out =
MutSignals::try_from(&mut pcm[..]).map_err(|e| format!("opus signals: {e}"))?;
let n: usize = self
@@ -343,8 +336,7 @@ impl TranscodeState {
match pt {
PT_OPUS => {
let mut pcm = vec![0.0f32; 5760]; // up to 120ms at 48kHz
-let packet =
-OpusPacket::try_from(data).map_err(|e| format!("opus packet: {e}"))?;
+let packet = OpusPacket::try_from(data).map_err(|e| format!("opus packet: {e}"))?;
let out =
MutSignals::try_from(&mut pcm[..]).map_err(|e| format!("opus signals: {e}"))?;
let n: usize = self
@@ -368,8 +360,8 @@ impl TranscodeState {
/// Returns f32 PCM at 48kHz. `frame_size` should be 960 for 20ms.
pub fn opus_plc(&mut self, frame_size: usize) -> Result<Vec<f32>, String> {
let mut pcm = vec![0.0f32; frame_size];
-let out = MutSignals::try_from(&mut pcm[..])
-.map_err(|e| format!("opus plc signals: {e}"))?;
+let out =
+MutSignals::try_from(&mut pcm[..]).map_err(|e| format!("opus plc signals: {e}"))?;
let n: usize = self
.opus_dec
.decode_float(None::<OpusPacket<'_>>, out, false)
@@ -425,14 +417,9 @@ impl TranscodeState {
let key = (from_rate, to_rate, canonical_chunk);
if !self.resamplers_f32.contains_key(&key) {
-let r = FftFixedIn::<f32>::new(
-from_rate as usize,
-to_rate as usize,
-canonical_chunk,
-1,
-1,
-)
-.map_err(|e| format!("resampler f32 {from_rate}->{to_rate}: {e}"))?;
+let r =
+FftFixedIn::<f32>::new(from_rate as usize, to_rate as usize, canonical_chunk, 1, 1)
+.map_err(|e| format!("resampler f32 {from_rate}->{to_rate}: {e}"))?;
self.resamplers_f32.insert(key, r);
}
let resampler = self.resamplers_f32.get_mut(&key).unwrap();
@@ -508,8 +495,10 @@ mod tests {
let encoded = mulaw_encode(sample);
let decoded = mulaw_decode(encoded);
// µ-law is lossy; verify the decoded value is close.
-assert!((sample as i32 - decoded as i32).abs() < 1000,
-"µ-law roundtrip failed for {sample}: got {decoded}");
+assert!(
+(sample as i32 - decoded as i32).abs() < 1000,
+"µ-law roundtrip failed for {sample}: got {decoded}"
+);
}
}
@@ -518,8 +507,10 @@ mod tests {
for sample in [-32768i16, -1000, -1, 0, 1, 1000, 32767] {
let encoded = alaw_encode(sample);
let decoded = alaw_decode(encoded);
-assert!((sample as i32 - decoded as i32).abs() < 1000,
-"A-law roundtrip failed for {sample}: got {decoded}");
+assert!(
+(sample as i32 - decoded as i32).abs() < 1000,
+"A-law roundtrip failed for {sample}: got {decoded}"
+);
}
}
@@ -543,7 +534,9 @@ mod tests {
fn pcmu_to_pcma_roundtrip() {
let mut st = TranscodeState::new().unwrap();
// 160 bytes = 20ms of PCMU at 8kHz
-let pcmu_data: Vec<u8> = (0..160).map(|i| mulaw_encode((i as i16 * 200) - 16000)).collect();
+let pcmu_data: Vec<u8> = (0..160)
+.map(|i| mulaw_encode((i as i16 * 200) - 16000))
+.collect();
let pcma = st.transcode(&pcmu_data, PT_PCMU, PT_PCMA, None).unwrap();
assert_eq!(pcma.len(), 160); // Same frame size
let back = st.transcode(&pcma, PT_PCMA, PT_PCMU, None).unwrap();


@@ -19,7 +19,7 @@ regex-lite = "0.1"
webrtc = "0.8"
rand = "0.8"
hound = "3.5"
-kokoro-tts = { version = "0.3", default-features = false }
+kokoro-tts = { version = "0.3", default-features = false, features = ["use-cmudict"] }
ort = { version = "=2.0.0-rc.11", default-features = false, features = [
"std", "download-binaries", "copy-dylibs", "ndarray",
"tls-native-vendored"


@@ -36,10 +36,7 @@ pub async fn play_wav_file(
// Read all samples as i16.
let samples: Vec<i16> = if spec.bits_per_sample == 16 {
-reader
-.samples::<i16>()
-.filter_map(|s| s.ok())
-.collect()
+reader.samples::<i16>().filter_map(|s| s.ok()).collect()
} else if spec.bits_per_sample == 32 && spec.sample_format == hound::SampleFormat::Float {
reader
.samples::<f32>()
@@ -199,10 +196,7 @@ pub fn load_prompt_pcm_frames(wav_path: &str) -> Result<Vec<Vec<f32>>, String> {
.map(|s| s as f32 / 32768.0)
.collect()
} else if spec.bits_per_sample == 32 && spec.sample_format == hound::SampleFormat::Float {
-reader
-.samples::<f32>()
-.filter_map(|s| s.ok())
-.collect()
+reader.samples::<f32>().filter_map(|s| s.ok()).collect()
} else {
return Err(format!(
"unsupported WAV format: {}bit {:?}",
@@ -214,14 +208,23 @@ pub fn load_prompt_pcm_frames(wav_path: &str) -> Result<Vec<Vec<f32>>, String> {
return Ok(vec![]);
}
+pcm_to_mix_frames(&samples, wav_rate)
}
+/// Convert PCM samples at an arbitrary rate into 48kHz 20ms mixer frames.
+pub fn pcm_to_mix_frames(samples: &[f32], sample_rate: u32) -> Result<Vec<Vec<f32>>, String> {
+if samples.is_empty() {
+return Ok(vec![]);
+}
// Resample to MIX_RATE (48kHz) if needed.
-let resampled = if wav_rate != MIX_RATE {
+let resampled = if sample_rate != MIX_RATE {
let mut transcoder = TranscodeState::new().map_err(|e| format!("codec init: {e}"))?;
transcoder
-.resample_f32(&samples, wav_rate, MIX_RATE)
+.resample_f32(samples, sample_rate, MIX_RATE)
.map_err(|e| format!("resample: {e}"))?
} else {
-samples
+samples.to_vec()
};
// Split into MIX_FRAME_SIZE (960) sample frames.


@@ -23,6 +23,7 @@ pub enum CallState {
Ringing,
Connected,
Voicemail,
+Ivr,
Terminated,
}
@@ -37,6 +38,7 @@ impl CallState {
Self::Ringing => "ringing",
Self::Connected => "connected",
Self::Voicemail => "voicemail",
+Self::Ivr => "ivr",
Self::Terminated => "terminated",
}
}

File diff suppressed because it is too large


@@ -4,6 +4,7 @@
//! proxy engine via the `configure` command. These types mirror the TS interfaces.
use serde::Deserialize;
use sip_proto::message::SipMessage;
use std::net::SocketAddr;
/// Network endpoint.
@@ -159,6 +160,10 @@ pub struct AppConfig {
pub providers: Vec<ProviderConfig>,
pub devices: Vec<DeviceConfig>,
pub routing: RoutingConfig,
+#[serde(default)]
+pub voiceboxes: Vec<VoiceboxConfig>,
+#[serde(default)]
+pub ivr: Option<IvrConfig>,
}
#[derive(Debug, Clone, Deserialize)]
@@ -166,12 +171,190 @@ pub struct RoutingConfig {
pub routes: Vec<Route>,
}
// ---------------------------------------------------------------------------
// Voicebox config
// ---------------------------------------------------------------------------
#[allow(dead_code)]
#[derive(Debug, Clone, Deserialize)]
pub struct VoiceboxConfig {
pub id: String,
#[serde(default)]
pub enabled: bool,
#[serde(rename = "greetingText")]
pub greeting_text: Option<String>,
#[serde(rename = "greetingVoice")]
pub greeting_voice: Option<String>,
#[serde(rename = "greetingWavPath")]
pub greeting_wav_path: Option<String>,
#[serde(rename = "maxRecordingSec")]
pub max_recording_sec: Option<u32>,
}
// ---------------------------------------------------------------------------
// IVR config
// ---------------------------------------------------------------------------
#[allow(dead_code)]
#[derive(Debug, Clone, Deserialize)]
pub struct IvrConfig {
pub enabled: bool,
pub menus: Vec<IvrMenuConfig>,
#[serde(rename = "entryMenuId")]
pub entry_menu_id: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct IvrMenuConfig {
pub id: String,
#[serde(rename = "promptText")]
pub prompt_text: String,
#[serde(rename = "promptVoice")]
pub prompt_voice: Option<String>,
pub entries: Vec<IvrMenuEntry>,
#[serde(rename = "timeoutSec")]
pub timeout_sec: Option<u32>,
}
#[allow(dead_code)]
#[derive(Debug, Clone, Deserialize)]
pub struct IvrMenuEntry {
pub digit: String,
pub action: String,
pub target: Option<String>,
}
// ---------------------------------------------------------------------------
// Pattern matching (ported from ts/config.ts)
// ---------------------------------------------------------------------------
/// Extract the URI user part and normalize phone-like identities for routing.
///
/// This keeps inbound route matching stable across provider-specific URI shapes,
/// e.g. `sip:+49 421 219694@trunk.example` and `sip:0049421219694@trunk.example`
/// both normalize to `+49421219694`.
pub fn normalize_routing_identity(value: &str) -> String {
let extracted = SipMessage::extract_uri_user(value).unwrap_or(value).trim();
if extracted.is_empty() {
return String::new();
}
let mut digits = String::new();
let mut saw_plus = false;
for (idx, ch) in extracted.chars().enumerate() {
if ch.is_ascii_digit() {
digits.push(ch);
continue;
}
if ch == '+' && idx == 0 {
saw_plus = true;
continue;
}
if matches!(ch, ' ' | '\t' | '-' | '.' | '/' | '(' | ')') {
continue;
}
return extracted.to_string();
}
if digits.is_empty() {
return extracted.to_string();
}
if saw_plus {
return format!("+{digits}");
}
if digits.starts_with("00") && digits.len() > 2 {
return format!("+{}", &digits[2..]);
}
digits
}
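A standalone sketch of the digit-normalization rules above, assuming the SIP URI user part has already been extracted (the real function first calls `SipMessage::extract_uri_user`); `normalize_identity` is an illustrative name:

```rust
// Sketch of normalize_routing_identity without the SIP URI extraction step.
fn normalize_identity(extracted: &str) -> String {
    let extracted = extracted.trim();
    if extracted.is_empty() {
        return String::new();
    }
    let mut digits = String::new();
    let mut saw_plus = false;
    for (idx, ch) in extracted.chars().enumerate() {
        if ch.is_ascii_digit() {
            digits.push(ch);
        } else if ch == '+' && idx == 0 {
            saw_plus = true;
        } else if matches!(ch, ' ' | '\t' | '-' | '.' | '/' | '(' | ')') {
            // Separator characters are dropped.
        } else {
            // Not phone-like: return the identity unchanged (e.g. "alice").
            return extracted.to_string();
        }
    }
    if digits.is_empty() {
        return extracted.to_string();
    }
    if saw_plus {
        format!("+{digits}")
    } else if digits.starts_with("00") && digits.len() > 2 {
        // International 00-prefix becomes E.164 "+".
        format!("+{}", &digits[2..])
    } else {
        digits
    }
}

fn main() {
    assert_eq!(normalize_identity("+49 421 219694"), "+49421219694");
    assert_eq!(normalize_identity("0049421219694"), "+49421219694");
    assert_eq!(normalize_identity("alice"), "alice");
}
```

This is why `sip:+49 421 219694@trunk.example` and `sip:0049421219694@trunk.example` land on the same routing key.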
fn looks_like_phone_identity(value: &str) -> bool {
let digits = value.chars().filter(|c| c.is_ascii_digit()).count();
digits >= 6 && value.chars().all(|c| c.is_ascii_digit() || c == '+')
}
/// Pick the best inbound called-number identity from common SIP headers.
///
/// Some providers deliver the DID in `To` / `P-Called-Party-ID` while the
/// request URI contains an account username. Prefer a phone-like identity when
/// present; otherwise fall back to the request URI user part.
pub fn extract_inbound_called_number(msg: &SipMessage) -> String {
let request_uri = normalize_routing_identity(msg.request_uri().unwrap_or(""));
if looks_like_phone_identity(&request_uri) {
return request_uri;
}
for header_name in [
"P-Called-Party-ID",
"X-Called-Party-ID",
"Diversion",
"History-Info",
"To",
] {
let candidate = normalize_routing_identity(msg.get_header(header_name).unwrap_or(""));
if looks_like_phone_identity(&candidate) {
return candidate;
}
}
request_uri
}
fn parse_numeric_range_value(value: &str) -> Option<(bool, &str)> {
let trimmed = value.trim();
if trimmed.is_empty() {
return None;
}
let (has_plus, digits) = if let Some(rest) = trimmed.strip_prefix('+') {
(true, rest)
} else {
(false, trimmed)
};
if digits.is_empty() || !digits.chars().all(|c| c.is_ascii_digit()) {
return None;
}
Some((has_plus, digits))
}
fn matches_numeric_range_pattern(pattern: &str, value: &str) -> bool {
let Some((start, end)) = pattern.split_once("..") else {
return false;
};
let Some((start_plus, start_digits)) = parse_numeric_range_value(start) else {
return false;
};
let Some((end_plus, end_digits)) = parse_numeric_range_value(end) else {
return false;
};
let Some((value_plus, value_digits)) = parse_numeric_range_value(value) else {
return false;
};
if start_plus != end_plus || value_plus != start_plus {
return false;
}
if start_digits.len() != end_digits.len() || value_digits.len() != start_digits.len() {
return false;
}
if start_digits > end_digits {
return false;
}
value_digits >= start_digits && value_digits <= end_digits
}
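The range check above deliberately refuses mixed `+` styles and mixed digit widths; once both bounds and the value share the same shape, a plain lexicographic comparison of the digit strings is equivalent to a numeric one. A self-contained sketch of the same check:

```rust
/// Sketch of the `start..end` range match: both bounds and the value must
/// share the '+' style and digit width; then string comparison == numeric.
fn range_matches(pattern: &str, value: &str) -> bool {
    fn parse(s: &str) -> Option<(bool, &str)> {
        let s = s.trim();
        let (plus, digits) = match s.strip_prefix('+') {
            Some(rest) => (true, rest),
            None => (false, s),
        };
        if digits.is_empty() || !digits.chars().all(|c| c.is_ascii_digit()) {
            return None;
        }
        Some((plus, digits))
    }
    let Some((lo, hi)) = pattern.split_once("..") else {
        return false;
    };
    let (Some((lp, ld)), Some((hp, hd)), Some((vp, vd))) =
        (parse(lo), parse(hi), parse(value))
    else {
        return false;
    };
    lp == hp
        && vp == lp
        && ld.len() == hd.len()
        && vd.len() == ld.len()
        && ld <= hd // reject inverted ranges
        && ld <= vd
        && vd <= hd
}
```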
/// Test a value against a pattern string.
/// - None/empty: matches everything (wildcard)
/// - `start..end`: numeric range match
/// - Trailing '*': prefix match
/// - Starts with '/': regex match
/// - Otherwise: exact match
@@ -187,6 +370,10 @@ pub fn matches_pattern(pattern: Option<&str>, value: &str) -> bool {
return value.starts_with(&pattern[..pattern.len() - 1]);
}
if matches_numeric_range_pattern(pattern, value) {
return true;
}
// Regex match: "/^\\+49/" or "/pattern/i"
if pattern.starts_with('/') {
if let Some(last_slash) = pattern[1..].rfind('/') {
@@ -218,12 +405,14 @@ pub struct OutboundRouteResult {
/// Result of resolving an inbound route.
//
// `device_ids` and `ring_browsers` are consumed by create_inbound_call.
// `device_ids`, `ring_all_devices`, and `ring_browsers` are consumed by
// create_inbound_call.
// The remaining fields (voicemail_box, ivr_menu_id, no_answer_timeout)
// are resolved but not yet acted upon — see the multi-target TODO.
#[allow(dead_code)]
pub struct InboundRouteResult {
pub device_ids: Vec<String>,
pub ring_all_devices: bool,
pub ring_browsers: bool,
pub voicemail_box: Option<String>,
pub ivr_menu_id: Option<String>,
@@ -306,7 +495,7 @@ impl AppConfig {
provider_id: &str,
called_number: &str,
caller_number: &str,
) -> InboundRouteResult {
) -> Option<InboundRouteResult> {
let mut routes: Vec<&Route> = self
.routing
.routes
@@ -330,22 +519,186 @@ impl AppConfig {
continue;
}
return InboundRouteResult {
device_ids: route.action.targets.clone().unwrap_or_default(),
let explicit_targets = route.action.targets.clone();
return Some(InboundRouteResult {
device_ids: explicit_targets.clone().unwrap_or_default(),
ring_all_devices: explicit_targets.is_none(),
ring_browsers: route.action.ring_browsers.unwrap_or(false),
voicemail_box: route.action.voicemail_box.clone(),
ivr_menu_id: route.action.ivr_menu_id.clone(),
no_answer_timeout: route.action.no_answer_timeout,
};
});
}
// Fallback: ring all devices + browsers.
InboundRouteResult {
device_ids: vec![],
ring_browsers: true,
voicemail_box: None,
ivr_menu_id: None,
no_answer_timeout: None,
}
None
}
}
#[cfg(test)]
mod tests {
use super::*;
fn test_app_config(routes: Vec<Route>) -> AppConfig {
AppConfig {
proxy: ProxyConfig {
lan_ip: "127.0.0.1".to_string(),
lan_port: 5070,
public_ip_seed: None,
rtp_port_range: RtpPortRange {
min: 20_000,
max: 20_100,
},
},
providers: vec![ProviderConfig {
id: "provider-a".to_string(),
display_name: "Provider A".to_string(),
domain: "example.com".to_string(),
outbound_proxy: Endpoint {
address: "example.com".to_string(),
port: 5060,
},
username: "user".to_string(),
password: "pass".to_string(),
register_interval_sec: 300,
codecs: vec![9],
quirks: Quirks {
early_media_silence: false,
silence_payload_type: None,
silence_max_packets: None,
},
}],
devices: vec![DeviceConfig {
id: "desk".to_string(),
display_name: "Desk".to_string(),
expected_address: "127.0.0.1".to_string(),
extension: "100".to_string(),
}],
routing: RoutingConfig { routes },
voiceboxes: vec![],
ivr: None,
}
}
#[test]
fn normalize_routing_identity_extracts_uri_user_and_phone_number() {
assert_eq!(
normalize_routing_identity("sip:0049 421 219694@voip.easybell.de"),
"+49421219694"
);
assert_eq!(
normalize_routing_identity("<tel:+49 (421) 219694>"),
"+49421219694"
);
assert_eq!(normalize_routing_identity("sip:100@pbx.local"), "100");
assert_eq!(normalize_routing_identity("sip:alice@pbx.local"), "alice");
}
#[test]
fn resolve_inbound_route_requires_explicit_match() {
let cfg = test_app_config(vec![]);
assert!(cfg
.resolve_inbound_route("provider-a", "+49421219694", "+491701234567")
.is_none());
}
#[test]
fn resolve_inbound_route_matches_per_number_on_shared_provider() {
let cfg = test_app_config(vec![
Route {
id: "main".to_string(),
name: "Main DID".to_string(),
priority: 200,
enabled: true,
match_criteria: RouteMatch {
direction: "inbound".to_string(),
number_pattern: Some("+49421219694".to_string()),
caller_pattern: None,
source_provider: Some("provider-a".to_string()),
source_device: None,
},
action: RouteAction {
targets: Some(vec!["desk".to_string()]),
ring_browsers: Some(true),
voicemail_box: None,
ivr_menu_id: None,
no_answer_timeout: None,
provider: None,
failover_providers: None,
strip_prefix: None,
prepend_prefix: None,
},
},
Route {
id: "support".to_string(),
name: "Support DID".to_string(),
priority: 100,
enabled: true,
match_criteria: RouteMatch {
direction: "inbound".to_string(),
number_pattern: Some("+49421219695".to_string()),
caller_pattern: None,
source_provider: Some("provider-a".to_string()),
source_device: None,
},
action: RouteAction {
targets: None,
ring_browsers: Some(false),
voicemail_box: Some("support-box".to_string()),
ivr_menu_id: None,
no_answer_timeout: Some(20),
provider: None,
failover_providers: None,
strip_prefix: None,
prepend_prefix: None,
},
},
]);
let main = cfg
.resolve_inbound_route("provider-a", "+49421219694", "+491701234567")
.expect("main DID should match");
assert_eq!(main.device_ids, vec!["desk".to_string()]);
assert!(main.ring_browsers);
let support = cfg
.resolve_inbound_route("provider-a", "+49421219695", "+491701234567")
.expect("support DID should match");
assert_eq!(support.voicemail_box.as_deref(), Some("support-box"));
assert_eq!(support.no_answer_timeout, Some(20));
assert!(!support.ring_browsers);
}
#[test]
fn extract_inbound_called_number_prefers_did_headers_over_username_ruri() {
let raw = b"INVITE sip:2830573e1@proxy.example SIP/2.0\r\nTo: <sip:+4942116767548@proxy.example>\r\nFrom: <sip:+491701234567@provider.example>;tag=abc\r\nCall-ID: test-1\r\nCSeq: 1 INVITE\r\nContent-Length: 0\r\n\r\n";
let msg = SipMessage::parse(raw).expect("invite should parse");
assert_eq!(extract_inbound_called_number(&msg), "+4942116767548");
}
#[test]
fn extract_inbound_called_number_keeps_phone_ruri_when_already_present() {
let raw = b"INVITE sip:042116767548@proxy.example SIP/2.0\r\nTo: <sip:2830573e1@proxy.example>\r\nFrom: <sip:+491701234567@provider.example>;tag=abc\r\nCall-ID: test-2\r\nCSeq: 1 INVITE\r\nContent-Length: 0\r\n\r\n";
let msg = SipMessage::parse(raw).expect("invite should parse");
assert_eq!(extract_inbound_called_number(&msg), "042116767548");
}
#[test]
fn matches_pattern_supports_numeric_ranges() {
assert!(matches_pattern(
Some("042116767546..042116767548"),
"042116767547"
));
assert!(!matches_pattern(
Some("042116767546..042116767548"),
"042116767549"
));
assert!(matches_pattern(
Some("+4942116767546..+4942116767548"),
"+4942116767547"
));
assert!(!matches_pattern(
Some("+4942116767546..+4942116767548"),
"042116767547"
));
}
}


@@ -19,7 +19,13 @@ pub struct Command {
}
/// Send a response to a command.
pub fn respond(tx: &OutTx, id: &str, success: bool, result: Option<serde_json::Value>, error: Option<&str>) {
pub fn respond(
tx: &OutTx,
id: &str,
success: bool,
result: Option<serde_json::Value>,
error: Option<&str>,
) {
let mut resp = serde_json::json!({ "id": id, "success": success });
if let Some(r) = result {
resp["result"] = r;


@@ -63,7 +63,8 @@ pub fn spawn_sip_inbound(
if offset + 4 > n {
continue; // Malformed: extension header truncated.
}
let ext_len = u16::from_be_bytes([buf[offset + 2], buf[offset + 3]]) as usize;
let ext_len =
u16::from_be_bytes([buf[offset + 2], buf[offset + 3]]) as usize;
offset += 4 + ext_len * 4;
}
if offset >= n {
@@ -74,7 +75,17 @@ pub fn spawn_sip_inbound(
if payload.is_empty() {
continue;
}
if inbound_tx.send(RtpPacket { payload, payload_type: pt, marker, seq, timestamp }).await.is_err() {
if inbound_tx
.send(RtpPacket {
payload,
payload_type: pt,
marker,
seq,
timestamp,
})
.await
.is_err()
{
break; // Channel closed — leg removed.
}
}

File diff suppressed because it is too large


@@ -7,7 +7,8 @@
//! All encoding/decoding happens at leg boundaries. Per-leg inbound denoising at 48kHz.
//!
//! The mixer runs a 20ms tick loop:
//! 1. Drain inbound channels, decode to f32, resample to 48kHz, denoise per-leg
//! 1. Drain inbound channels, reorder RTP, decode variable-duration packets to 48kHz,
//! and queue them in per-leg PCM buffers
//! 2. Compute total mix (sum of all **participant** legs' f32 PCM as f64)
//! 3. For each participant leg: mix-minus = total - own, resample to leg codec rate, encode, send
//! 4. For each isolated leg: play prompt frame or silence, check DTMF
@@ -16,11 +17,12 @@
use crate::ipc::{emit_event, OutTx};
use crate::jitter_buffer::{JitterBuffer, JitterResult};
use crate::rtp::{build_rtp_header, rtp_clock_increment};
use crate::rtp::{build_rtp_header, rtp_clock_increment, rtp_clock_rate};
use crate::tts::TtsStreamMessage;
use codec_lib::{codec_sample_rate, new_denoiser, TranscodeState};
use nnnoiseless::DenoiseState;
use std::collections::{HashMap, VecDeque};
use tokio::sync::{mpsc, oneshot};
use tokio::sync::{mpsc, oneshot, watch};
use tokio::task::JoinHandle;
use tokio::time::{self, Duration, MissedTickBehavior};
@@ -29,6 +31,12 @@ use tokio::time::{self, Duration, MissedTickBehavior};
const MIX_RATE: u32 = 48000;
/// Samples per 20ms frame at the mixing rate.
const MIX_FRAME_SIZE: usize = 960; // 48000 * 0.020
/// Safety cap for how much timestamp-derived gap fill we synthesize at once.
const MAX_GAP_FILL_SAMPLES: usize = MIX_FRAME_SIZE * 6; // 120ms
/// Bound how many decode / concealment steps a leg can consume in one tick.
const MAX_PACKET_STEPS_PER_TICK: usize = 24;
/// Report the first output drop immediately, then every N drops.
const DROP_REPORT_INTERVAL: u64 = 50;
/// A raw RTP payload received from a leg (no RTP header).
pub struct RtpPacket {
@@ -39,10 +47,6 @@ pub struct RtpPacket {
/// RTP sequence number for reordering.
pub seq: u16,
/// RTP timestamp from the original packet header.
///
/// Set on inbound RTP but not yet consumed downstream — reserved for
/// future jitter/sync work in the mixer.
#[allow(dead_code)]
pub timestamp: u32,
}
@@ -61,6 +65,12 @@ enum LegRole {
struct IsolationState {
/// PCM frames at MIX_RATE (960 samples each, 48kHz f32) queued for playback.
prompt_frames: VecDeque<Vec<f32>>,
/// Live TTS frames arrive here while playback is already in progress.
prompt_stream_rx: Option<mpsc::Receiver<TtsStreamMessage>>,
/// Cancels the background TTS producer when the interaction ends early.
prompt_cancel_tx: Option<watch::Sender<bool>>,
/// Whether the live prompt stream has ended.
prompt_stream_finished: bool,
/// Digits that complete the interaction (e.g., ['1', '2']).
expected_digits: Vec<char>,
/// Ticks remaining before timeout (decremented each tick after prompt ends).
@@ -109,6 +119,7 @@ struct ToolLegSlot {
#[allow(dead_code)]
tool_type: ToolType,
audio_tx: mpsc::Sender<ToolAudioBatch>,
dropped_batches: u64,
}
// ---------------------------------------------------------------------------
@@ -136,6 +147,10 @@ pub enum MixerCommand {
leg_id: String,
/// PCM frames at MIX_RATE (48kHz f32), each 960 samples.
prompt_pcm_frames: Vec<Vec<f32>>,
/// Optional live prompt stream. Frames are appended as they are synthesized.
prompt_stream_rx: Option<mpsc::Receiver<TtsStreamMessage>>,
/// Optional cancellation handle for the live prompt stream.
prompt_cancel_tx: Option<watch::Sender<bool>>,
expected_digits: Vec<char>,
timeout_ms: u32,
result_tx: oneshot::Sender<InteractionResult>,
@@ -163,8 +178,15 @@ struct MixerLegSlot {
denoiser: Box<DenoiseState<'static>>,
inbound_rx: mpsc::Receiver<RtpPacket>,
outbound_tx: mpsc::Sender<Vec<u8>>,
/// Decoded PCM waiting for playout. Variable-duration RTP packets are
/// decoded into this FIFO; the mixer consumes exactly one 20ms frame per tick.
pcm_buffer: VecDeque<f32>,
/// Last decoded+denoised PCM frame at MIX_RATE (960 samples, 48kHz f32).
last_pcm_frame: Vec<f32>,
/// Next RTP timestamp expected from the inbound stream.
expected_rtp_timestamp: Option<u32>,
/// Best-effort estimate of packet duration in RTP clock units.
estimated_packet_ts: u32,
/// Number of consecutive ticks with no inbound packet.
silent_ticks: u32,
/// Per-leg jitter buffer for packet reordering and timing.
@@ -173,15 +195,302 @@ struct MixerLegSlot {
rtp_seq: u16,
rtp_ts: u32,
rtp_ssrc: u32,
/// Dropped outbound frames for this leg (queue full / closed).
outbound_drops: u64,
/// Current role of this leg in the mixer.
role: LegRole,
}
fn mix_samples_to_rtp_ts(codec_pt: u8, mix_samples: usize) -> u32 {
let clock_rate = rtp_clock_rate(codec_pt).max(1) as u64;
(((mix_samples as u64 * clock_rate) + (MIX_RATE as u64 / 2)) / MIX_RATE as u64) as u32
}
fn rtp_ts_to_mix_samples(codec_pt: u8, rtp_ts: u32) -> usize {
let clock_rate = rtp_clock_rate(codec_pt).max(1) as u64;
(((rtp_ts as u64 * MIX_RATE as u64) + (clock_rate / 2)) / clock_rate) as usize
}
fn is_forward_rtp_delta(delta: u32) -> bool {
delta > 0 && delta < 0x8000_0000
}
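The two conversion helpers round half-up in both directions so a packet's duration survives the RTP-clock-to-mix-sample round trip. A standalone sketch with plain integer arguments (the real helpers derive the clock rate from the payload type via `rtp_clock_rate`):

```rust
const MIX_RATE: u64 = 48_000;

/// Half-up rounding from 48 kHz mix samples to RTP clock units.
fn mix_samples_to_rtp_ts(clock_rate: u64, mix_samples: u64) -> u64 {
    (mix_samples * clock_rate + MIX_RATE / 2) / MIX_RATE
}

/// Half-up rounding from RTP clock units to 48 kHz mix samples.
fn rtp_ts_to_mix_samples(clock_rate: u64, rtp_ts: u64) -> u64 {
    (rtp_ts * MIX_RATE + clock_rate / 2) / clock_rate
}
```

For a 20ms G.711 packet, 160 RTP ticks at an 8 kHz clock correspond to 960 samples at the 48 kHz mix rate, and converting back yields 160 again.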
fn should_emit_drop_event(total_drops: u64) -> bool {
total_drops == 1 || total_drops % DROP_REPORT_INTERVAL == 0
}
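`should_emit_drop_event` throttles the `mixer_output_drop` event: the first drop is reported immediately, after which only every `DROP_REPORT_INTERVAL`-th drop is. The cadence can be checked in isolation with a copy of the helper:

```rust
const DROP_REPORT_INTERVAL: u64 = 50;

/// Mirror of the reporting cadence above: emit on the first drop,
/// then on every DROP_REPORT_INTERVAL-th drop.
fn should_emit_drop_event(total_drops: u64) -> bool {
    total_drops == 1 || total_drops % DROP_REPORT_INTERVAL == 0
}
```

Out of the first 200 drops on a leg, only drops 1, 50, 100, 150, and 200 produce an event.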
fn emit_output_drop_event(
out_tx: &OutTx,
call_id: &str,
leg_id: Option<&str>,
tool_leg_id: Option<&str>,
stream: &str,
reason: &str,
total_drops: u64,
) {
if !should_emit_drop_event(total_drops) {
return;
}
emit_event(
out_tx,
"mixer_output_drop",
serde_json::json!({
"call_id": call_id,
"leg_id": leg_id,
"tool_leg_id": tool_leg_id,
"stream": stream,
"reason": reason,
"total_drops": total_drops,
}),
);
}
fn fade_concealment_from_last_frame(slot: &mut MixerLegSlot, samples: usize, decay: f32) {
let mut template = if slot.last_pcm_frame.is_empty() {
vec![0.0f32; MIX_FRAME_SIZE]
} else {
slot.last_pcm_frame.clone()
};
let mut remaining = samples;
while remaining > 0 {
for sample in &mut template {
*sample *= decay;
}
let take = remaining.min(template.len());
slot.pcm_buffer.extend(template.iter().take(take).copied());
remaining -= take;
}
}
fn append_packet_loss_concealment(slot: &mut MixerLegSlot, samples: usize) {
let mut remaining = samples.max(1);
while remaining > 0 {
let chunk = remaining.min(MIX_FRAME_SIZE);
if slot.codec_pt == codec_lib::PT_OPUS {
match slot.transcoder.opus_plc(chunk) {
Ok(mut pcm) => {
pcm.resize(chunk, 0.0);
slot.pcm_buffer.extend(pcm);
}
Err(_) => fade_concealment_from_last_frame(slot, chunk, 0.8),
}
} else {
fade_concealment_from_last_frame(slot, chunk, 0.85);
}
remaining -= chunk;
}
}
fn decode_packet_to_mix_pcm(slot: &mut MixerLegSlot, pkt: &RtpPacket) -> Option<Vec<f32>> {
let (pcm, rate) = slot
.transcoder
.decode_to_f32(&pkt.payload, pkt.payload_type)
.ok()?;
let pcm_48k = if rate == MIX_RATE {
pcm
} else {
slot.transcoder
.resample_f32(&pcm, rate, MIX_RATE)
.unwrap_or_else(|_| vec![0.0f32; MIX_FRAME_SIZE])
};
let processed = if slot.codec_pt != codec_lib::PT_OPUS {
TranscodeState::denoise_f32(&mut slot.denoiser, &pcm_48k)
} else {
pcm_48k
};
Some(processed)
}
fn queue_inbound_packet(slot: &mut MixerLegSlot, pkt: RtpPacket) {
if let Some(pcm_48k) = decode_packet_to_mix_pcm(slot, &pkt) {
if pcm_48k.is_empty() {
return;
}
if let Some(expected_ts) = slot.expected_rtp_timestamp {
let gap_ts = pkt.timestamp.wrapping_sub(expected_ts);
if is_forward_rtp_delta(gap_ts) {
let gap_samples = rtp_ts_to_mix_samples(slot.codec_pt, gap_ts);
if gap_samples <= MAX_GAP_FILL_SAMPLES {
append_packet_loss_concealment(slot, gap_samples);
} else {
slot.pcm_buffer.clear();
}
}
}
let packet_ts = mix_samples_to_rtp_ts(slot.codec_pt, pcm_48k.len());
if packet_ts > 0 {
slot.estimated_packet_ts = packet_ts;
slot.expected_rtp_timestamp = Some(pkt.timestamp.wrapping_add(packet_ts));
}
slot.pcm_buffer.extend(pcm_48k);
}
}
fn fill_leg_playout_buffer(slot: &mut MixerLegSlot) {
let mut steps = 0usize;
while slot.pcm_buffer.len() < MIX_FRAME_SIZE && steps < MAX_PACKET_STEPS_PER_TICK {
steps += 1;
match slot.jitter.consume() {
JitterResult::Packet(pkt) => queue_inbound_packet(slot, pkt),
JitterResult::Missing => {
let conceal_ts = slot
.estimated_packet_ts
.max(rtp_clock_increment(slot.codec_pt));
let conceal_samples =
rtp_ts_to_mix_samples(slot.codec_pt, conceal_ts).clamp(1, MAX_GAP_FILL_SAMPLES);
append_packet_loss_concealment(slot, conceal_samples);
if let Some(expected_ts) = slot.expected_rtp_timestamp {
slot.expected_rtp_timestamp = Some(expected_ts.wrapping_add(conceal_ts));
}
}
JitterResult::Filling => break,
}
}
}
fn take_mix_frame(slot: &mut MixerLegSlot) -> Vec<f32> {
let mut frame = Vec::with_capacity(MIX_FRAME_SIZE);
while frame.len() < MIX_FRAME_SIZE {
if let Some(sample) = slot.pcm_buffer.pop_front() {
frame.push(sample);
} else {
frame.push(0.0);
}
}
frame
}
fn soft_limit_sample(sample: f32) -> f32 {
const KNEE: f32 = 0.85;
let abs = sample.abs();
if abs <= KNEE {
sample
} else {
let excess = abs - KNEE;
let compressed = KNEE + (excess / (1.0 + (excess / (1.0 - KNEE))));
sample.signum() * compressed.min(1.0)
}
}
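`soft_limit_sample` is transparent up to the 0.85 knee and then compresses the excess asymptotically toward full scale, so the mix-minus sum never hard-clips. Reproducing the function standalone shows the behavior:

```rust
/// Copy of the soft limiter above: identity below the 0.85 knee,
/// excess compressed asymptotically toward 1.0 above it.
fn soft_limit(sample: f32) -> f32 {
    const KNEE: f32 = 0.85;
    let abs = sample.abs();
    if abs <= KNEE {
        sample
    } else {
        let excess = abs - KNEE;
        let compressed = KNEE + excess / (1.0 + excess / (1.0 - KNEE));
        sample.signum() * compressed.min(1.0)
    }
}
```

A sample of 0.5 passes through untouched; a sample of 0.9 lands between the knee and its input; even a badly clipping sum of 2.0 stays within [-1.0, 1.0].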
fn try_send_leg_output(
out_tx: &OutTx,
call_id: &str,
leg_id: &str,
slot: &mut MixerLegSlot,
rtp: Vec<u8>,
stream: &str,
) {
let reason = match slot.outbound_tx.try_send(rtp) {
Ok(()) => return,
Err(mpsc::error::TrySendError::Full(_)) => "full",
Err(mpsc::error::TrySendError::Closed(_)) => "closed",
};
slot.outbound_drops += 1;
emit_output_drop_event(
out_tx,
call_id,
Some(leg_id),
None,
stream,
reason,
slot.outbound_drops,
);
}
fn try_send_tool_output(
out_tx: &OutTx,
call_id: &str,
tool_leg_id: &str,
tool: &mut ToolLegSlot,
batch: ToolAudioBatch,
) {
let reason = match tool.audio_tx.try_send(batch) {
Ok(()) => return,
Err(mpsc::error::TrySendError::Full(_)) => "full",
Err(mpsc::error::TrySendError::Closed(_)) => "closed",
};
tool.dropped_batches += 1;
emit_output_drop_event(
out_tx,
call_id,
None,
Some(tool_leg_id),
"tool-batch",
reason,
tool.dropped_batches,
);
}
fn cancel_prompt_producer(state: &mut IsolationState) {
if let Some(cancel_tx) = state.prompt_cancel_tx.take() {
let _ = cancel_tx.send(true);
}
}
fn cancel_isolated_interaction(state: &mut IsolationState) {
cancel_prompt_producer(state);
if let Some(tx) = state.result_tx.take() {
let _ = tx.send(InteractionResult::Cancelled);
}
}
fn drain_prompt_stream(
out_tx: &OutTx,
call_id: &str,
leg_id: &str,
state: &mut IsolationState,
) {
loop {
let Some(mut stream_rx) = state.prompt_stream_rx.take() else {
return;
};
match stream_rx.try_recv() {
Ok(TtsStreamMessage::Frames(frames)) => {
state.prompt_frames.extend(frames);
state.prompt_stream_rx = Some(stream_rx);
}
Ok(TtsStreamMessage::Finished) => {
state.prompt_stream_finished = true;
return;
}
Ok(TtsStreamMessage::Failed(error)) => {
emit_event(
out_tx,
"mixer_error",
serde_json::json!({
"call_id": call_id,
"leg_id": leg_id,
"error": format!("tts stream failed: {error}"),
}),
);
state.prompt_stream_finished = true;
return;
}
Err(mpsc::error::TryRecvError::Empty) => {
state.prompt_stream_rx = Some(stream_rx);
return;
}
Err(mpsc::error::TryRecvError::Disconnected) => {
state.prompt_stream_finished = true;
return;
}
}
}
}
/// Spawn the mixer task for a call. Returns the command sender and task handle.
pub fn spawn_mixer(
call_id: String,
out_tx: OutTx,
) -> (mpsc::Sender<MixerCommand>, JoinHandle<()>) {
pub fn spawn_mixer(call_id: String, out_tx: OutTx) -> (mpsc::Sender<MixerCommand>, JoinHandle<()>) {
let (cmd_tx, cmd_rx) = mpsc::channel::<MixerCommand>(32);
let handle = tokio::spawn(async move {
@@ -192,11 +501,7 @@ pub fn spawn_mixer(
}
/// The 20ms mixing loop.
async fn mixer_loop(
call_id: String,
mut cmd_rx: mpsc::Receiver<MixerCommand>,
out_tx: OutTx,
) {
async fn mixer_loop(call_id: String, mut cmd_rx: mpsc::Receiver<MixerCommand>, out_tx: OutTx) {
let mut legs: HashMap<String, MixerLegSlot> = HashMap::new();
let mut tool_legs: HashMap<String, ToolLegSlot> = HashMap::new();
let mut interval = time::interval(Duration::from_millis(20));
@@ -237,11 +542,15 @@ async fn mixer_loop(
denoiser: new_denoiser(),
inbound_rx,
outbound_tx,
pcm_buffer: VecDeque::new(),
last_pcm_frame: vec![0.0f32; MIX_FRAME_SIZE],
expected_rtp_timestamp: None,
estimated_packet_ts: rtp_clock_increment(codec_pt),
silent_ticks: 0,
rtp_seq: 0,
rtp_ts: 0,
rtp_ssrc: rand::random(),
outbound_drops: 0,
role: LegRole::Participant,
jitter: JitterBuffer::new(),
},
@@ -251,9 +560,7 @@ async fn mixer_loop(
// If the leg is isolated, send Cancelled before dropping.
if let Some(slot) = legs.get_mut(&leg_id) {
if let LegRole::Isolated(ref mut state) = slot.role {
if let Some(tx) = state.result_tx.take() {
let _ = tx.send(InteractionResult::Cancelled);
}
cancel_isolated_interaction(state);
}
}
legs.remove(&leg_id);
@@ -263,9 +570,7 @@ async fn mixer_loop(
// Cancel all outstanding interactions before shutting down.
for slot in legs.values_mut() {
if let LegRole::Isolated(ref mut state) = slot.role {
if let Some(tx) = state.result_tx.take() {
let _ = tx.send(InteractionResult::Cancelled);
}
cancel_isolated_interaction(state);
}
}
return;
@@ -273,6 +578,8 @@ async fn mixer_loop(
Ok(MixerCommand::StartInteraction {
leg_id,
prompt_pcm_frames,
prompt_stream_rx,
prompt_cancel_tx,
expected_digits,
timeout_ms,
result_tx,
@@ -280,13 +587,14 @@ async fn mixer_loop(
if let Some(slot) = legs.get_mut(&leg_id) {
// Cancel any existing interaction first.
if let LegRole::Isolated(ref mut old_state) = slot.role {
if let Some(tx) = old_state.result_tx.take() {
let _ = tx.send(InteractionResult::Cancelled);
}
cancel_isolated_interaction(old_state);
}
let timeout_ticks = timeout_ms / 20;
slot.role = LegRole::Isolated(IsolationState {
prompt_frames: VecDeque::from(prompt_pcm_frames),
prompt_stream_rx,
prompt_cancel_tx,
prompt_stream_finished: false,
expected_digits,
timeout_ticks_remaining: timeout_ticks,
prompt_done: false,
@@ -294,6 +602,9 @@ async fn mixer_loop(
});
} else {
// Leg not found — immediately cancel.
if let Some(cancel_tx) = prompt_cancel_tx {
let _ = cancel_tx.send(true);
}
let _ = result_tx.send(InteractionResult::Cancelled);
}
}
@@ -302,7 +613,14 @@ async fn mixer_loop(
tool_type,
audio_tx,
}) => {
tool_legs.insert(leg_id, ToolLegSlot { tool_type, audio_tx });
tool_legs.insert(
leg_id,
ToolLegSlot {
tool_type,
audio_tx,
dropped_batches: 0,
},
);
}
Ok(MixerCommand::RemoveToolLeg { leg_id }) => {
tool_legs.remove(&leg_id);
@@ -343,54 +661,11 @@ async fn mixer_loop(
}
}
// Step 2b: Consume exactly one frame from the jitter buffer.
match slot.jitter.consume() {
JitterResult::Packet(pkt) => {
match slot.transcoder.decode_to_f32(&pkt.payload, pkt.payload_type) {
Ok((pcm, rate)) => {
let pcm_48k = if rate == MIX_RATE {
pcm
} else {
slot.transcoder
.resample_f32(&pcm, rate, MIX_RATE)
.unwrap_or_else(|_| vec![0.0f32; MIX_FRAME_SIZE])
};
let processed = if slot.codec_pt != codec_lib::PT_OPUS {
TranscodeState::denoise_f32(&mut slot.denoiser, &pcm_48k)
} else {
pcm_48k
};
let mut frame = processed;
frame.resize(MIX_FRAME_SIZE, 0.0);
slot.last_pcm_frame = frame;
}
Err(_) => {}
}
}
JitterResult::Missing => {
// Invoke Opus PLC or fade for non-Opus codecs.
if slot.codec_pt == codec_lib::PT_OPUS {
match slot.transcoder.opus_plc(MIX_FRAME_SIZE) {
Ok(pcm) => {
slot.last_pcm_frame = pcm;
}
Err(_) => {
for s in slot.last_pcm_frame.iter_mut() {
*s *= 0.8;
}
}
}
} else {
// Non-Opus: fade last frame toward silence.
for s in slot.last_pcm_frame.iter_mut() {
*s *= 0.85;
}
}
}
JitterResult::Filling => {
slot.last_pcm_frame = vec![0.0f32; MIX_FRAME_SIZE];
}
}
// Step 2b: Decode enough RTP to cover one 20ms playout frame.
// Variable-duration packets (10ms, 20ms, 60ms, ...) accumulate in
// the per-leg PCM FIFO; we pop exactly one 20ms frame below.
fill_leg_playout_buffer(slot);
slot.last_pcm_frame = take_mix_frame(slot);
// Run jitter adaptation + prune stale packets.
slot.jitter.adapt();
@@ -404,6 +679,9 @@ async fn mixer_loop(
}
if slot.silent_ticks > 150 {
slot.last_pcm_frame = vec![0.0f32; MIX_FRAME_SIZE];
slot.pcm_buffer.clear();
slot.expected_rtp_timestamp = None;
slot.estimated_packet_ts = rtp_clock_increment(slot.codec_pt);
}
}
@@ -426,12 +704,12 @@ async fn mixer_loop(
for (lid, slot) in legs.iter_mut() {
match &mut slot.role {
LegRole::Participant => {
// Mix-minus: total minus this leg's own contribution, clamped to [-1.0, 1.0].
// Mix-minus: total minus this leg's own contribution.
// Apply a light soft limiter instead of hard clipping the sum.
let mut mix_minus = Vec::with_capacity(MIX_FRAME_SIZE);
for i in 0..MIX_FRAME_SIZE {
let sample =
(total_mix[i] - slot.last_pcm_frame[i] as f64) as f32;
mix_minus.push(sample.clamp(-1.0, 1.0));
let sample = (total_mix[i] - slot.last_pcm_frame[i] as f64) as f32;
mix_minus.push(soft_limit_sample(sample));
}
// Resample from 48kHz to the leg's codec native rate.
@@ -445,11 +723,10 @@ async fn mixer_loop(
};
// Encode to the leg's codec (f32 → i16 → codec inside encode_from_f32).
let encoded =
match slot.transcoder.encode_from_f32(&resampled, slot.codec_pt) {
Ok(e) if !e.is_empty() => e,
_ => continue,
};
let encoded = match slot.transcoder.encode_from_f32(&resampled, slot.codec_pt) {
Ok(e) if !e.is_empty() => e,
_ => continue,
};
// Build RTP packet with header.
let header =
@@ -460,10 +737,11 @@ async fn mixer_loop(
slot.rtp_seq = slot.rtp_seq.wrapping_add(1);
slot.rtp_ts = slot.rtp_ts.wrapping_add(rtp_clock_increment(slot.codec_pt));
// Non-blocking send — drop frame if channel is full.
let _ = slot.outbound_tx.try_send(rtp);
try_send_leg_output(&out_tx, &call_id, lid, slot, rtp, "participant-audio");
}
LegRole::Isolated(state) => {
drain_prompt_stream(&out_tx, &call_id, lid, state);
// Check for DTMF digit from this leg.
let mut matched_digit: Option<char> = None;
for (src_lid, dtmf_pkt) in &dtmf_forward {
@@ -487,12 +765,14 @@ async fn mixer_loop(
if let Some(digit) = matched_digit {
// Interaction complete — digit matched.
completed_interactions
.push((lid.clone(), InteractionResult::Digit(digit)));
completed_interactions.push((lid.clone(), InteractionResult::Digit(digit)));
} else {
// Play prompt frame or silence.
// Play prompt frame, wait for live TTS, or move to timeout once the
// prompt stream has fully drained.
let pcm_frame = if let Some(frame) = state.prompt_frames.pop_front() {
frame
} else if !state.prompt_stream_finished {
vec![0.0f32; MIX_FRAME_SIZE]
} else {
state.prompt_done = true;
vec![0.0f32; MIX_FRAME_SIZE]
@@ -508,6 +788,7 @@ async fn mixer_loop(
.unwrap_or_default()
};
let mut prompt_rtp: Option<Vec<u8>> = None;
if let Ok(encoded) =
slot.transcoder.encode_from_f32(&resampled, slot.codec_pt)
{
@@ -521,10 +802,9 @@ async fn mixer_loop(
let mut rtp = header.to_vec();
rtp.extend_from_slice(&encoded);
slot.rtp_seq = slot.rtp_seq.wrapping_add(1);
slot.rtp_ts = slot
.rtp_ts
.wrapping_add(rtp_clock_increment(slot.codec_pt));
let _ = slot.outbound_tx.try_send(rtp);
slot.rtp_ts =
slot.rtp_ts.wrapping_add(rtp_clock_increment(slot.codec_pt));
prompt_rtp = Some(rtp);
}
}
@@ -537,6 +817,17 @@ async fn mixer_loop(
state.timeout_ticks_remaining -= 1;
}
}
if let Some(rtp) = prompt_rtp {
try_send_leg_output(
&out_tx,
&call_id,
lid,
slot,
rtp,
"isolated-prompt",
);
}
}
}
}
@@ -546,6 +837,7 @@ async fn mixer_loop(
for (lid, result) in completed_interactions {
if let Some(slot) = legs.get_mut(&lid) {
if let LegRole::Isolated(ref mut state) = slot.role {
cancel_prompt_producer(state);
if let Some(tx) = state.result_tx.take() {
let _ = tx.send(result);
}
@@ -566,7 +858,7 @@ async fn mixer_loop(
})
.collect();
for tool in tool_legs.values() {
for (tool_leg_id, tool) in tool_legs.iter_mut() {
let batch = ToolAudioBatch {
sources: sources
.iter()
@@ -576,8 +868,7 @@ async fn mixer_loop(
})
.collect(),
};
// Non-blocking send — drop batch if tool can't keep up.
let _ = tool.audio_tx.try_send(batch);
try_send_tool_output(&out_tx, &call_id, tool_leg_id, tool, batch);
}
}
@@ -610,7 +901,7 @@ async fn mixer_loop(
rtp_out.extend_from_slice(&dtmf_pkt.payload);
target_slot.rtp_seq = target_slot.rtp_seq.wrapping_add(1);
// Don't increment rtp_ts for DTMF — it shares timestamp context with audio.
let _ = target_slot.outbound_tx.try_send(rtp_out);
try_send_leg_output(&out_tx, &call_id, target_lid, target_slot, rtp_out, "dtmf");
}
}
}


@@ -267,11 +267,7 @@ impl ProviderManager {
/// Try to handle a SIP response as a provider registration response.
/// Returns true if consumed.
pub async fn handle_response(
&self,
msg: &SipMessage,
socket: &UdpSocket,
) -> bool {
pub async fn handle_response(&self, msg: &SipMessage, socket: &UdpSocket) -> bool {
for ps_arc in &self.providers {
let mut ps = ps_arc.lock().await;
let was_registered = ps.is_registered;
@@ -317,12 +313,32 @@ impl ProviderManager {
if ps.config.outbound_proxy.address == addr.ip().to_string() {
return Some(ps_arc.clone());
}
// Hostname-based providers (e.g. sipgate.de) often deliver inbound
// INVITEs from resolved IPs rather than the literal configured host.
// Resolve the proxy host and accept any matching IP/port variant.
use std::net::ToSocketAddrs;
if let Ok(resolved) = format!(
"{}:{}",
ps.config.outbound_proxy.address, ps.config.outbound_proxy.port
)
.to_socket_addrs()
{
for resolved_addr in resolved {
if resolved_addr == *addr || resolved_addr.ip() == addr.ip() {
return Some(ps_arc.clone());
}
}
}
}
None
}
/// Find a provider by its config ID (e.g. "easybell").
pub async fn find_by_provider_id(&self, provider_id: &str) -> Option<Arc<Mutex<ProviderState>>> {
pub async fn find_by_provider_id(
&self,
provider_id: &str,
) -> Option<Arc<Mutex<ProviderState>>> {
for ps_arc in &self.providers {
let ps = ps_arc.lock().await;
if ps.config.id == provider_id {


@@ -25,8 +25,7 @@ impl Recorder {
) -> Result<Self, String> {
// Ensure parent directory exists.
if let Some(parent) = Path::new(file_path).parent() {
std::fs::create_dir_all(parent)
.map_err(|e| format!("create dir: {e}"))?;
std::fs::create_dir_all(parent).map_err(|e| format!("create dir: {e}"))?;
}
let sample_rate = 8000u32; // Record at 8kHz (standard telephony)
@@ -57,10 +56,13 @@ impl Recorder {
/// Create a recorder that writes raw PCM at a given sample rate.
/// Used by tool legs that already have decoded PCM (no RTP processing needed).
pub fn new_pcm(file_path: &str, sample_rate: u32, max_duration_ms: Option<u64>) -> Result<Self, String> {
pub fn new_pcm(
file_path: &str,
sample_rate: u32,
max_duration_ms: Option<u64>,
) -> Result<Self, String> {
if let Some(parent) = Path::new(file_path).parent() {
std::fs::create_dir_all(parent)
.map_err(|e| format!("create dir: {e}"))?;
std::fs::create_dir_all(parent).map_err(|e| format!("create dir: {e}"))?;
}
let spec = hound::WavSpec {


@@ -60,18 +60,17 @@ impl Registrar {
/// Try to handle a SIP REGISTER from a device.
/// Returns Some(response_bytes) if handled, None if not a known device.
pub fn handle_register(
&mut self,
msg: &SipMessage,
from_addr: SocketAddr,
) -> Option<Vec<u8>> {
pub fn handle_register(&mut self, msg: &SipMessage, from_addr: SocketAddr) -> Option<Vec<u8>> {
if msg.method() != Some("REGISTER") {
return None;
}
// Find the device by matching the source IP against expectedAddress.
let from_ip = from_addr.ip().to_string();
let device = self.devices.iter().find(|d| d.expected_address == from_ip)?;
let device = self
.devices
.iter()
.find(|d| d.expected_address == from_ip)?;
let from_header = msg.get_header("From").unwrap_or("");
let aor = SipMessage::extract_uri(from_header)
@@ -79,9 +78,7 @@ impl Registrar {
.unwrap_or_else(|| format!("sip:{}@{}", device.extension, from_ip));
let expires_header = msg.get_header("Expires");
let requested: u32 = expires_header
.and_then(|s| s.parse().ok())
.unwrap_or(3600);
let requested: u32 = expires_header.and_then(|s| s.parse().ok()).unwrap_or(3600);
let expires = requested.min(MAX_EXPIRES);
let entry = RegisteredDevice {
@@ -122,10 +119,7 @@ impl Registrar {
Some(ResponseOptions {
to_tag: Some(generate_tag()),
contact: Some(contact),
extra_headers: Some(vec![(
"Expires".to_string(),
expires.to_string(),
)]),
extra_headers: Some(vec![("Expires".to_string(), expires.to_string())]),
..Default::default()
}),
);
@@ -145,8 +139,8 @@ impl Registrar {
/// Find a registered device by its source IP address.
pub fn find_by_address(&self, addr: &SocketAddr) -> Option<&RegisteredDevice> {
let ip = addr.ip().to_string();
self.registered.values().find(|e| {
e.contact_addr.ip().to_string() == ip && Instant::now() <= e.expires_at
})
self.registered
.values()
.find(|e| e.contact_addr.ip().to_string() == ip && Instant::now() <= e.expires_at)
}
}
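The registrar's Expires handling above defaults to one hour when the header is absent or unparsable and caps the requested lifetime at `MAX_EXPIRES`. A self-contained sketch of that policy (the `MAX_EXPIRES` value here is illustrative, not the crate's actual constant):

```rust
/// Illustrative cap; the real constant lives in the registrar module.
const MAX_EXPIRES: u32 = 7200;

/// Effective registration lifetime: parsed Expires header, defaulting to
/// 3600 seconds, capped at MAX_EXPIRES.
fn effective_expires(expires_header: Option<&str>) -> u32 {
    let requested: u32 = expires_header.and_then(|s| s.parse().ok()).unwrap_or(3600);
    requested.min(MAX_EXPIRES)
}

fn main() {
    println!("{}", effective_expires(Some("60")));
}
```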

View File

@@ -82,10 +82,15 @@ pub fn build_rtp_header(pt: u8, seq: u16, timestamp: u32, ssrc: u32) -> [u8; 12]
/// Get the RTP clock increment per 20ms frame for a payload type.
pub fn rtp_clock_increment(pt: u8) -> u32 {
rtp_clock_rate(pt) / 50
}
/// Get the RTP clock rate for a payload type.
pub fn rtp_clock_rate(pt: u8) -> u32 {
match pt {
9 => 160, // G.722: 8000 Hz clock rate (despite 16kHz audio) × 0.02s
0 | 8 => 160, // PCMU/PCMA: 8000 × 0.02
111 => 960, // Opus: 48000 × 0.02
_ => 160,
9 => 8000, // G.722 uses an 8kHz RTP clock despite 16kHz audio.
0 | 8 => 8000, // PCMU/PCMA
111 => 48000, // Opus
_ => 8000,
}
}
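The corrected mapping above returns true RTP clock rates (notably G.722's 8 kHz clock despite its 16 kHz audio), and the per-20 ms increment falls out as `rate / 50`. Reproducing both functions shows the increments the old code had hard-coded:

```rust
/// RTP clock rate per payload type (matches the corrected table above).
fn rtp_clock_rate(pt: u8) -> u32 {
    match pt {
        9 => 8000,     // G.722 uses an 8 kHz RTP clock despite 16 kHz audio.
        0 | 8 => 8000, // PCMU / PCMA
        111 => 48000,  // Opus
        _ => 8000,
    }
}

/// Timestamp increment for one 20 ms frame: clock rate / 50.
fn rtp_clock_increment(pt: u8) -> u32 {
    rtp_clock_rate(pt) / 50
}

fn main() {
    // PCMU advances 160 ticks per frame, Opus 960.
    println!("{} {}", rtp_clock_increment(0), rtp_clock_increment(111));
}
```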

View File

@@ -128,17 +128,24 @@ impl SipLeg {
max_forwards: Some(70),
body: Some(sdp),
content_type: Some("application/sdp".to_string()),
extra_headers: Some(vec![
("User-Agent".to_string(), "SipRouter/1.0".to_string()),
]),
extra_headers: Some(vec![(
"User-Agent".to_string(),
"SipRouter/1.0".to_string(),
)]),
},
);
self.dialog = Some(SipDialog::from_uac_invite(&invite, ip, self.config.lan_port));
self.dialog = Some(SipDialog::from_uac_invite(
&invite,
ip,
self.config.lan_port,
));
self.invite = Some(invite.clone());
self.state = LegState::Inviting;
let _ = socket.send_to(&invite.serialize(), self.config.sip_target).await;
let _ = socket
.send_to(&invite.serialize(), self.config.sip_target)
.await;
}
/// Handle an incoming SIP message routed to this leg.
@@ -443,10 +450,7 @@ pub enum SipLegAction {
/// Build an ACK for a non-2xx response (same transaction as the INVITE).
fn build_non_2xx_ack(original_invite: &SipMessage, response: &SipMessage) -> SipMessage {
let via = original_invite.get_header("Via").unwrap_or("").to_string();
let from = original_invite
.get_header("From")
.unwrap_or("")
.to_string();
let from = original_invite.get_header("From").unwrap_or("").to_string();
let to = response.get_header("To").unwrap_or("").to_string();
let call_id = original_invite.call_id().to_string();
let cseq_num: u32 = original_invite

View File

@@ -28,10 +28,8 @@ impl SipTransport {
}
/// Spawn the UDP receive loop. Calls the handler for every received packet.
pub fn spawn_receiver<F>(
&self,
handler: F,
) where
pub fn spawn_receiver<F>(&self, handler: F)
where
F: Fn(&[u8], SocketAddr) + Send + 'static,
{
let socket = self.socket.clone();

View File

@@ -51,7 +51,8 @@ pub fn spawn_recording_tool(
});
// Convert f32 [-1.0, 1.0] to i16 for WAV writing.
let pcm_i16: Vec<i16> = source.pcm_48k
let pcm_i16: Vec<i16> = source
.pcm_48k
.iter()
.map(|&s| (s * 32767.0).round().clamp(-32768.0, 32767.0) as i16)
.collect();
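The f32-to-i16 conversion above rounds and then saturates at the i16 range, so out-of-range samples clip instead of wrapping. A standalone version of the same map:

```rust
/// Convert normalized f32 samples ([-1.0, 1.0]) to i16 for WAV writing,
/// saturating at the i16 range exactly as the recording tool does.
fn f32_to_i16(samples: &[f32]) -> Vec<i16> {
    samples
        .iter()
        .map(|&s| (s * 32767.0).round().clamp(-32768.0, 32767.0) as i16)
        .collect()
}

fn main() {
    // 2.0 is out of range and clips to 32767 rather than wrapping.
    println!("{:?}", f32_to_i16(&[0.0, 1.0, -1.0, 2.0]));
}
```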

View File

@@ -1,18 +1,57 @@
//! Text-to-speech engine — synthesizes text to WAV files using Kokoro neural TTS.
//!
//! The model is loaded lazily on first use. If the model/voices files are not
//! present, the generate command returns an error and the TS side falls back
//! to espeak-ng.
//! present, the generate command returns an error and the caller skips the prompt.
//!
//! Caching is handled internally via a `.meta` sidecar file next to each WAV.
//! When `cacheable` is true, the engine checks whether the existing WAV was
//! generated from the same text+voice; if so it returns immediately (cache hit).
//! Callers never need to check for cached files — that is entirely this module's
//! responsibility.
use crate::audio_player::pcm_to_mix_frames;
use kokoro_tts::{KokoroTts, Voice};
use std::path::Path;
use std::sync::Arc;
use std::time::{SystemTime, UNIX_EPOCH};
use tokio::sync::{mpsc, watch};
pub const DEFAULT_MODEL_PATH: &str = ".nogit/tts/kokoro-v1.0.onnx";
pub const DEFAULT_VOICES_PATH: &str = ".nogit/tts/voices.bin";
const TTS_OUTPUT_RATE: u32 = 24000;
const MAX_CHUNK_CHARS: usize = 220;
const MIN_CHUNK_CHARS: usize = 80;
pub enum TtsStreamMessage {
Frames(Vec<Vec<f32>>),
Finished,
Failed(String),
}
pub struct TtsLivePrompt {
pub initial_frames: Vec<Vec<f32>>,
pub stream_rx: mpsc::Receiver<TtsStreamMessage>,
pub cancel_tx: watch::Sender<bool>,
}
#[derive(Clone)]
pub struct TtsPromptRequest {
pub model_path: String,
pub voices_path: String,
pub voice_name: String,
pub text: String,
}
/// Wraps the Kokoro TTS engine with lazy model loading.
pub struct TtsEngine {
tts: Option<KokoroTts>,
tts: Option<Arc<KokoroTts>>,
/// Path that was used to load the current model (for cache invalidation).
loaded_model_path: String,
loaded_voices_path: String,
/// On-disk TTS WAVs are cacheable only within a single engine lifetime.
/// Every restart gets a new generation token, so prior process outputs are
/// treated as stale and regenerated on first use.
cache_generation: String,
}
impl TtsEngine {
@@ -21,9 +60,76 @@ impl TtsEngine {
tts: None,
loaded_model_path: String::new(),
loaded_voices_path: String::new(),
cache_generation: SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|d| d.as_nanos().to_string())
.unwrap_or_else(|_| "0".to_string()),
}
}
async fn ensure_loaded(
&mut self,
model_path: &str,
voices_path: &str,
) -> Result<Arc<KokoroTts>, String> {
if !Path::new(model_path).exists() {
return Err(format!("model not found: {model_path}"));
}
if !Path::new(voices_path).exists() {
return Err(format!("voices not found: {voices_path}"));
}
if self.tts.is_none()
|| self.loaded_model_path != model_path
|| self.loaded_voices_path != voices_path
{
eprintln!("[tts] loading model: {model_path}");
let tts = Arc::new(
KokoroTts::new(model_path, voices_path)
.await
.map_err(|e| format!("model load failed: {e:?}"))?,
);
self.tts = Some(tts);
self.loaded_model_path = model_path.to_string();
self.loaded_voices_path = voices_path.to_string();
}
Ok(self.tts.as_ref().unwrap().clone())
}
pub async fn start_live_prompt(
&mut self,
request: TtsPromptRequest,
) -> Result<TtsLivePrompt, String> {
if request.text.trim().is_empty() {
return Err("empty text".into());
}
let tts = self
.ensure_loaded(&request.model_path, &request.voices_path)
.await?;
let voice = select_voice(&request.voice_name);
let chunks = chunk_text(&request.text);
if chunks.is_empty() {
return Err("empty text".into());
}
let initial_frames = synth_text_to_mix_frames(&tts, chunks[0].as_str(), voice).await?;
let remaining_chunks: Vec<String> = chunks.into_iter().skip(1).collect();
let (stream_tx, stream_rx) = mpsc::channel(8);
let (cancel_tx, cancel_rx) = watch::channel(false);
tokio::spawn(async move {
stream_live_prompt_chunks(tts, voice, remaining_chunks, stream_tx, cancel_rx).await;
});
Ok(TtsLivePrompt {
initial_frames,
stream_rx,
cancel_tx,
})
}
/// Generate a WAV file from text.
///
/// Params (from IPC JSON):
@@ -32,52 +138,64 @@ impl TtsEngine {
/// - `voice`: voice name (e.g. "af_bella")
/// - `text`: text to synthesize
/// - `output`: output WAV file path
pub async fn generate(&mut self, params: &serde_json::Value) -> Result<serde_json::Value, String> {
let model_path = params.get("model").and_then(|v| v.as_str())
/// - `cacheable`: if true, skip synthesis when the output WAV already
/// matches the same text+voice (checked via a `.meta` sidecar file)
pub async fn generate(
&mut self,
params: &serde_json::Value,
) -> Result<serde_json::Value, String> {
let model_path = params
.get("model")
.and_then(|v| v.as_str())
.ok_or("missing 'model' param")?;
let voices_path = params.get("voices").and_then(|v| v.as_str())
let voices_path = params
.get("voices")
.and_then(|v| v.as_str())
.ok_or("missing 'voices' param")?;
let voice_name = params.get("voice").and_then(|v| v.as_str())
let voice_name = params
.get("voice")
.and_then(|v| v.as_str())
.unwrap_or("af_bella");
let text = params.get("text").and_then(|v| v.as_str())
let text = params
.get("text")
.and_then(|v| v.as_str())
.ok_or("missing 'text' param")?;
let output_path = params.get("output").and_then(|v| v.as_str())
let output_path = params
.get("output")
.and_then(|v| v.as_str())
.ok_or("missing 'output' param")?;
let cacheable = params
.get("cacheable")
.and_then(|v| v.as_bool())
.unwrap_or(false);
if text.is_empty() {
return Err("empty text".into());
}
// Check that model/voices files exist.
if !Path::new(model_path).exists() {
return Err(format!("model not found: {model_path}"));
}
if !Path::new(voices_path).exists() {
return Err(format!("voices not found: {voices_path}"));
// Cache check: if cacheable and the sidecar matches, return immediately.
if cacheable && self.is_cache_hit(output_path, text, voice_name) {
eprintln!("[tts] cache hit: {output_path}");
return Ok(serde_json::json!({ "output": output_path }));
}
// Lazy-load or reload if paths changed.
if self.tts.is_none()
|| self.loaded_model_path != model_path
|| self.loaded_voices_path != voices_path
{
eprintln!("[tts] loading model: {model_path}");
let tts = KokoroTts::new(model_path, voices_path)
.await
.map_err(|e| format!("model load failed: {e:?}"))?;
self.tts = Some(tts);
self.loaded_model_path = model_path.to_string();
self.loaded_voices_path = voices_path.to_string();
// Ensure parent directory exists.
if let Some(parent) = Path::new(output_path).parent() {
let _ = std::fs::create_dir_all(parent);
}
let tts = self.tts.as_ref().unwrap();
let tts = self.ensure_loaded(model_path, voices_path).await?;
let voice = select_voice(voice_name);
eprintln!("[tts] synthesizing voice '{voice_name}': \"{text}\"");
let (samples, duration) = tts.synth(text, voice)
eprintln!("[tts] synthesizing WAV voice '{voice_name}' to {output_path}");
let (samples, duration) = tts
.synth(text, voice)
.await
.map_err(|e| format!("synthesis failed: {e:?}"))?;
eprintln!("[tts] synthesized {} samples in {duration:?}", samples.len());
eprintln!(
"[tts] synthesized {} samples in {duration:?}",
samples.len()
);
// Write 24kHz 16-bit mono WAV.
let spec = hound::WavSpec {
@@ -91,13 +209,149 @@ impl TtsEngine {
.map_err(|e| format!("WAV create failed: {e}"))?;
for &sample in &samples {
let s16 = (sample * 32767.0).round().clamp(-32768.0, 32767.0) as i16;
writer.write_sample(s16).map_err(|e| format!("WAV write: {e}"))?;
writer
.write_sample(s16)
.map_err(|e| format!("WAV write: {e}"))?;
}
writer
.finalize()
.map_err(|e| format!("WAV finalize: {e}"))?;
// Write sidecar for future cache checks.
if cacheable {
self.write_cache_meta(output_path, text, voice_name);
}
writer.finalize().map_err(|e| format!("WAV finalize: {e}"))?;
eprintln!("[tts] wrote {output_path}");
Ok(serde_json::json!({ "output": output_path }))
}
// -----------------------------------------------------------------------
// Cache helpers
// -----------------------------------------------------------------------
/// Check if the WAV + sidecar on disk match the given text+voice.
fn is_cache_hit(&self, output_path: &str, text: &str, voice: &str) -> bool {
let meta_path = format!("{output_path}.meta");
if !Path::new(output_path).exists() || !Path::new(&meta_path).exists() {
return false;
}
match std::fs::read_to_string(&meta_path) {
Ok(contents) => contents == self.cache_key(text, voice),
Err(_) => false,
}
}
/// Write the sidecar `.meta` file next to the WAV.
fn write_cache_meta(&self, output_path: &str, text: &str, voice: &str) {
let meta_path = format!("{output_path}.meta");
let _ = std::fs::write(&meta_path, self.cache_key(text, voice));
}
/// Build the cache key from process generation + text + voice.
fn cache_key(&self, text: &str, voice: &str) -> String {
format!("{}\0{}\0{}", self.cache_generation, text, voice)
}
}
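The cache helpers above hinge on two pieces: a NUL-separated key of generation, text, and voice, and a hit test that requires both the WAV and its `.meta` sidecar to exist with a matching key. A minimal sketch of that contract, with free functions standing in for the engine methods:

```rust
use std::fs;
use std::path::Path;

/// Cache key: process generation + text + voice, NUL-separated as above.
fn cache_key(generation: &str, text: &str, voice: &str) -> String {
    format!("{generation}\0{text}\0{voice}")
}

/// A hit requires the WAV and sidecar to exist and the sidecar contents
/// to equal the expected key exactly.
fn is_cache_hit(wav: &Path, meta: &Path, key: &str) -> bool {
    wav.exists() && matches!(fs::read_to_string(meta), Ok(contents) if contents == key)
}

fn main() {
    // Missing files are always a miss; a fresh generation token per process
    // restart invalidates every prior key the same way.
    println!(
        "{}",
        is_cache_hit(Path::new("/no-such.wav"), Path::new("/no-such.wav.meta"), "k")
    );
}
```

Because the generation token is part of the key, restarting the process changes every key and forces regeneration even when the on-disk text and voice are unchanged.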
async fn synth_text_to_mix_frames(
tts: &Arc<KokoroTts>,
text: &str,
voice: Voice,
) -> Result<Vec<Vec<f32>>, String> {
let (samples, duration) = tts
.synth(text, voice)
.await
.map_err(|e| format!("synthesis failed: {e:?}"))?;
eprintln!(
"[tts] synthesized chunk ({} chars, {} samples) in {duration:?}",
text.chars().count(),
samples.len()
);
pcm_to_mix_frames(&samples, TTS_OUTPUT_RATE)
}
async fn stream_live_prompt_chunks(
tts: Arc<KokoroTts>,
voice: Voice,
chunks: Vec<String>,
stream_tx: mpsc::Sender<TtsStreamMessage>,
mut cancel_rx: watch::Receiver<bool>,
) {
for chunk in chunks {
if *cancel_rx.borrow() {
break;
}
match synth_text_to_mix_frames(&tts, &chunk, voice).await {
Ok(frames) => {
if *cancel_rx.borrow() {
break;
}
if stream_tx.send(TtsStreamMessage::Frames(frames)).await.is_err() {
return;
}
}
Err(error) => {
let _ = stream_tx.send(TtsStreamMessage::Failed(error)).await;
return;
}
}
if cancel_rx.has_changed().unwrap_or(false) && *cancel_rx.borrow_and_update() {
break;
}
}
let _ = stream_tx.send(TtsStreamMessage::Finished).await;
}
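The streaming loop above checks a tokio `watch` cancel flag before and after each synthesis and always closes with a terminal message. A std-only analog of that shape (an `AtomicBool` standing in for the watch channel, a sync mpsc for the async one; names are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc;

enum StreamMessage {
    Chunk(String),
    Finished,
}

/// Synchronous analog of `stream_live_prompt_chunks`: emit chunks until the
/// cancel flag flips, then always report Finished so the consumer can
/// tear down its playback state.
fn stream_chunks(chunks: &[&str], cancel: &AtomicBool, tx: &mpsc::Sender<StreamMessage>) {
    for chunk in chunks {
        if cancel.load(Ordering::Relaxed) {
            break;
        }
        let _ = tx.send(StreamMessage::Chunk(chunk.to_string()));
    }
    let _ = tx.send(StreamMessage::Finished);
}

fn main() {
    let (tx, rx) = mpsc::channel();
    stream_chunks(&["a", "b"], &AtomicBool::new(false), &tx);
    drop(tx);
    // Two chunks plus the Finished marker: three messages.
    println!("{}", rx.iter().count());
}
```

The real code additionally sends a `Failed` message and returns early on synthesis errors, which this sketch omits.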
fn chunk_text(text: &str) -> Vec<String> {
let mut chunks = Vec::new();
let mut current = String::new();
for ch in text.chars() {
current.push(ch);
let len = current.chars().count();
let hard_split = len >= MAX_CHUNK_CHARS && (ch.is_whitespace() || is_soft_boundary(ch));
let natural_split = len >= MIN_CHUNK_CHARS && is_sentence_boundary(ch);
if natural_split || hard_split {
push_chunk(&mut chunks, &mut current);
}
}
push_chunk(&mut chunks, &mut current);
if chunks.len() >= 2 {
let last_len = chunks.last().unwrap().chars().count();
if last_len < (MIN_CHUNK_CHARS / 2) {
let tail = chunks.pop().unwrap();
if let Some(prev) = chunks.last_mut() {
prev.push(' ');
prev.push_str(tail.trim());
}
}
}
chunks
}
fn push_chunk(chunks: &mut Vec<String>, current: &mut String) {
let trimmed = current.trim();
if !trimmed.is_empty() {
chunks.push(trimmed.to_string());
}
current.clear();
}
fn is_sentence_boundary(ch: char) -> bool {
matches!(ch, '.' | '!' | '?' | '\n' | ';' | ':')
}
fn is_soft_boundary(ch: char) -> bool {
matches!(ch, ',' | ';' | ':' | ')' | ']' | '\n')
}
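The chunker above splits at sentence boundaries once a chunk reaches `MIN_CHUNK_CHARS`, forces a split at whitespace or soft boundaries past `MAX_CHUNK_CHARS`, and folds a very short trailing chunk into its predecessor. Reproducing it in condensed form makes the tail-merge behavior easy to exercise:

```rust
const MAX_CHUNK_CHARS: usize = 220;
const MIN_CHUNK_CHARS: usize = 80;

fn is_sentence_boundary(ch: char) -> bool {
    matches!(ch, '.' | '!' | '?' | '\n' | ';' | ':')
}

fn is_soft_boundary(ch: char) -> bool {
    matches!(ch, ',' | ';' | ':' | ')' | ']' | '\n')
}

fn push_chunk(chunks: &mut Vec<String>, current: &mut String) {
    let trimmed = current.trim();
    if !trimmed.is_empty() {
        chunks.push(trimmed.to_string());
    }
    current.clear();
}

/// Condensed copy of the chunker above: natural splits at sentence ends,
/// hard splits past MAX_CHUNK_CHARS, short tails folded into the previous chunk.
fn chunk_text(text: &str) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for ch in text.chars() {
        current.push(ch);
        let len = current.chars().count();
        let hard = len >= MAX_CHUNK_CHARS && (ch.is_whitespace() || is_soft_boundary(ch));
        let natural = len >= MIN_CHUNK_CHARS && is_sentence_boundary(ch);
        if natural || hard {
            push_chunk(&mut chunks, &mut current);
        }
    }
    push_chunk(&mut chunks, &mut current);
    if chunks.len() >= 2 && chunks.last().unwrap().chars().count() < MIN_CHUNK_CHARS / 2 {
        let tail = chunks.pop().unwrap();
        if let Some(prev) = chunks.last_mut() {
            prev.push(' ');
            prev.push_str(tail.trim());
        }
    }
    chunks
}

fn main() {
    // A 100-char sentence followed by "Bye." ends up as one chunk, because
    // the 4-char tail is folded back into its predecessor.
    let text = format!("{}. Bye.", "a".repeat(99));
    println!("{}", chunk_text(&text).len());
}
```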
/// Map voice name string to Kokoro Voice enum variant.

View File

@@ -19,6 +19,7 @@ pub async fn run_voicemail_session(
rtp_socket: Arc<UdpSocket>,
provider_media: SocketAddr,
codec_pt: u8,
voicebox_id: Option<String>,
greeting_wav: Option<String>,
recording_path: String,
max_recording_ms: u64,
@@ -33,6 +34,7 @@ pub async fn run_voicemail_session(
"voicemail_started",
serde_json::json!({
"call_id": call_id,
"voicebox_id": voicebox_id,
"caller_number": caller_number,
}),
);
@@ -102,6 +104,7 @@ pub async fn run_voicemail_session(
"recording_done",
serde_json::json!({
"call_id": call_id,
"voicebox_id": voicebox_id,
"file_path": result.file_path,
"duration_ms": result.duration_ms,
"caller_number": caller_number,
@@ -128,8 +131,8 @@ async fn record_from_socket(
break; // Max duration reached.
}
}
Ok(Err(_)) => break, // Socket error (closed).
Err(_) => break, // Timeout (max duration + grace).
Ok(Err(_)) => break, // Socket error (closed).
Err(_) => break, // Timeout (max duration + grace).
}
}

View File

@@ -58,9 +58,7 @@ impl WebRtcEngine {
.register_default_codecs()
.map_err(|e| format!("register codecs: {e}"))?;
let api = APIBuilder::new()
.with_media_engine(media_engine)
.build();
let api = APIBuilder::new().with_media_engine(media_engine).build();
let config = RTCConfiguration {
ice_servers: vec![],
@@ -91,8 +89,7 @@ impl WebRtcEngine {
.map_err(|e| format!("add track: {e}"))?;
// Shared mixer channel sender (populated when linked to a call).
let mixer_tx: Arc<Mutex<Option<mpsc::Sender<RtpPacket>>>> =
Arc::new(Mutex::new(None));
let mixer_tx: Arc<Mutex<Option<mpsc::Sender<RtpPacket>>>> = Arc::new(Mutex::new(None));
// ICE candidate handler.
let out_tx_ice = self.out_tx.clone();
@@ -256,7 +253,11 @@ impl WebRtcEngine {
pub async fn close_session(&mut self, session_id: &str) -> Result<(), String> {
if let Some(session) = self.sessions.remove(session_id) {
session.pc.close().await.map_err(|e| format!("close: {e}"))?;
session
.pc
.close()
.await
.map_err(|e| format!("close: {e}"))?;
}
Ok(())
}

View File

@@ -51,9 +51,7 @@ impl SipDialog {
.map(|s| s.to_string())
.unwrap_or_else(generate_tag),
remote_tag: None,
local_uri: SipMessage::extract_uri(from)
.unwrap_or("")
.to_string(),
local_uri: SipMessage::extract_uri(from).unwrap_or("").to_string(),
remote_uri: SipMessage::extract_uri(to).unwrap_or("").to_string(),
local_cseq,
remote_cseq: 0,
@@ -181,10 +179,7 @@ impl SipDialog {
format!("<{}>{remote_tag_str}", self.remote_uri),
),
("Call-ID".to_string(), self.call_id.clone()),
(
"CSeq".to_string(),
format!("{} {method}", self.local_cseq),
),
("CSeq".to_string(), format!("{} {method}", self.local_cseq)),
("Max-Forwards".to_string(), "70".to_string()),
];
@@ -243,10 +238,7 @@ impl SipDialog {
format!("<{}>{remote_tag_str}", self.remote_uri),
),
("Call-ID".to_string(), self.call_id.clone()),
(
"CSeq".to_string(),
format!("{} ACK", self.local_cseq),
),
("CSeq".to_string(), format!("{} ACK", self.local_cseq)),
("Max-Forwards".to_string(), "70".to_string()),
];
@@ -271,10 +263,7 @@ impl SipDialog {
("From".to_string(), from),
("To".to_string(), to),
("Call-ID".to_string(), self.call_id.clone()),
(
"CSeq".to_string(),
format!("{} CANCEL", self.local_cseq),
),
("CSeq".to_string(), format!("{} CANCEL", self.local_cseq)),
("Max-Forwards".to_string(), "70".to_string()),
("Content-Length".to_string(), "0".to_string()),
];
@@ -284,11 +273,7 @@ impl SipDialog {
.unwrap_or(&self.remote_target)
.to_string();
SipMessage::new(
format!("CANCEL {ruri} SIP/2.0"),
headers,
String::new(),
)
SipMessage::new(format!("CANCEL {ruri} SIP/2.0"), headers, String::new())
}
/// Transition the dialog to terminated state.

View File

@@ -27,7 +27,9 @@ pub fn generate_branch() -> String {
fn random_hex(bytes: usize) -> String {
let mut rng = rand::thread_rng();
(0..bytes).map(|_| format!("{:02x}", rng.gen::<u8>())).collect()
(0..bytes)
.map(|_| format!("{:02x}", rng.gen::<u8>()))
.collect()
}
// ---- Codec registry --------------------------------------------------------
@@ -142,7 +144,9 @@ pub fn parse_digest_challenge(header: &str) -> Option<DigestChallenge> {
return Some(after[1..1 + end].to_string());
}
// Unquoted value.
let end = after.find(|c: char| c == ',' || c.is_whitespace()).unwrap_or(after.len());
let end = after
.find(|c: char| c == ',' || c.is_whitespace())
.unwrap_or(after.len());
return Some(after[..end].to_string());
}
None
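The challenge parser above handles both quoted and unquoted parameter values: a leading `"` means "read to the closing quote", otherwise the value ends at the first comma or whitespace. A small sketch of that extraction rule (a hypothetical `digest_param` helper, not the crate's actual API):

```rust
/// Extract one parameter value from a Digest challenge header, handling
/// both quoted ("...") and unquoted forms as in the parser above.
fn digest_param(header: &str, key: &str) -> Option<String> {
    let needle = format!("{key}=");
    let start = header.find(&needle)? + needle.len();
    let after = &header[start..];
    if let Some(rest) = after.strip_prefix('"') {
        // Quoted value: everything up to the closing quote.
        let end = rest.find('"')?;
        return Some(rest[..end].to_string());
    }
    // Unquoted value: ends at a comma or whitespace.
    let end = after
        .find(|c: char| c == ',' || c.is_whitespace())
        .unwrap_or(after.len());
    Some(after[..end].to_string())
}

fn main() {
    let h = r#"Digest realm="sip.example.com", nonce="abc123", algorithm=MD5"#;
    println!("{:?}", digest_param(h, "algorithm"));
}
```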
@@ -241,11 +245,7 @@ pub struct MwiResult {
pub extra_headers: Vec<(String, String)>,
}
pub fn build_mwi_body(
new_messages: u32,
old_messages: u32,
account_uri: &str,
) -> MwiResult {
pub fn build_mwi_body(new_messages: u32, old_messages: u32, account_uri: &str) -> MwiResult {
let waiting = if new_messages > 0 { "yes" } else { "no" };
let body = format!(
"Messages-Waiting: {waiting}\r\n\

View File

@@ -4,9 +4,9 @@
//! SDP handling, Digest authentication, and URI rewriting.
//! Ported from the TypeScript `ts/sip/` library.
pub mod message;
pub mod dialog;
pub mod helpers;
pub mod message;
pub mod rewrite;
/// Network endpoint (address + port + optional negotiated codec).

View File

@@ -14,7 +14,11 @@ pub struct SipMessage {
impl SipMessage {
pub fn new(start_line: String, headers: Vec<(String, String)>, body: String) -> Self {
Self { start_line, headers, body }
Self {
start_line,
headers,
body,
}
}
// ---- Parsing -----------------------------------------------------------
@@ -175,7 +179,8 @@ impl SipMessage {
/// Inserts a header at the top of the header list.
pub fn prepend_header(&mut self, name: &str, value: &str) -> &mut Self {
self.headers.insert(0, (name.to_string(), value.to_string()));
self.headers
.insert(0, (name.to_string(), value.to_string()));
self
}
@@ -233,10 +238,7 @@ impl SipMessage {
.to_display_name
.map(|d| format!("\"{d}\" "))
.unwrap_or_default();
let to_tag_str = opts
.to_tag
.map(|t| format!(";tag={t}"))
.unwrap_or_default();
let to_tag_str = opts.to_tag.map(|t| format!(";tag={t}")).unwrap_or_default();
let mut headers = vec![
(
@@ -364,7 +366,43 @@ impl SipMessage {
.find(|c: char| c == ';' || c == '>')
.unwrap_or(trimmed.len());
let result = &trimmed[..end];
if result.is_empty() { None } else { Some(result) }
if result.is_empty() {
None
} else {
Some(result)
}
}
}
/// Extract the user part from a SIP/TEL URI or header value.
pub fn extract_uri_user(uri_or_header_value: &str) -> Option<&str> {
let raw = Self::extract_uri(uri_or_header_value).unwrap_or(uri_or_header_value);
let raw = raw.trim();
if raw.is_empty() {
return None;
}
let user_part = if raw
.get(..5)
.is_some_and(|prefix| prefix.eq_ignore_ascii_case("sips:"))
{
&raw[5..]
} else if raw.get(..4).is_some_and(|prefix| {
prefix.eq_ignore_ascii_case("sip:") || prefix.eq_ignore_ascii_case("tel:")
}) {
&raw[4..]
} else {
raw
};
let end = user_part
.find(|c: char| matches!(c, '@' | ';' | '?' | '>'))
.unwrap_or(user_part.len());
let result = &user_part[..end];
if result.is_empty() {
None
} else {
Some(result)
}
}
}
@@ -506,6 +544,19 @@ mod tests {
SipMessage::extract_uri("\"Name\" <sip:user@host>;tag=abc"),
Some("sip:user@host")
);
assert_eq!(
SipMessage::extract_uri_user("\"Name\" <sip:+49 421 219694@host>;tag=abc"),
Some("+49 421 219694")
);
assert_eq!(
SipMessage::extract_uri_user("sip:0049421219694@voip.easybell.de"),
Some("0049421219694")
);
assert_eq!(
SipMessage::extract_uri_user("tel:+49421219694;phone-context=example.com"),
Some("+49421219694")
);
assert_eq!(SipMessage::extract_uri_user("SIP:user@host"), Some("user"));
}
#[test]
@@ -535,7 +586,10 @@ mod tests {
);
assert_eq!(invite.method(), Some("INVITE"));
assert_eq!(invite.call_id(), "test-123");
assert!(invite.get_header("Via").unwrap().contains("192.168.1.1:5070"));
assert!(invite
.get_header("Via")
.unwrap()
.contains("192.168.1.1:5070"));
let response = SipMessage::create_response(
200,

View File

@@ -92,7 +92,11 @@ pub fn rewrite_sdp(body: &str, ip: &str, port: u16) -> (String, Option<Endpoint>
.collect();
let original = match (orig_addr, orig_port) {
(Some(a), Some(p)) => Some(Endpoint { address: a, port: p, codec_pt: None }),
(Some(a), Some(p)) => Some(Endpoint {
address: a,
port: p,
codec_pt: None,
}),
_ => None,
};
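Before substituting its own endpoint, the rewrite step above records the original media endpoint from the SDP body: the address from the `c=` line and the port from the `m=audio` line. A minimal, assumption-laden sketch of that extraction (`sdp_endpoint` is illustrative; the real `rewrite_sdp` also rewrites the body and tracks the negotiated codec):

```rust
/// Pull the media endpoint out of an SDP body: address from the `c=` line,
/// port from the `m=audio` line. Returns None if either is missing.
fn sdp_endpoint(body: &str) -> Option<(String, u16)> {
    let mut addr = None;
    let mut port: Option<u16> = None;
    for line in body.lines() {
        if let Some(rest) = line.strip_prefix("c=IN IP4 ") {
            addr = Some(rest.trim().to_string());
        } else if let Some(rest) = line.strip_prefix("m=audio ") {
            port = rest.split_whitespace().next()?.parse().ok();
        }
    }
    Some((addr?, port?))
}

fn main() {
    let sdp = "v=0\r\nc=IN IP4 10.0.0.5\r\nm=audio 4000 RTP/AVP 0 8\r\n";
    println!("{:?}", sdp_endpoint(sdp));
}
```

`str::lines` strips the `\r` of CRLF line endings, so the same parse works for wire-format SDP.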

rust/vendor/kokoro-tts/.cargo-ok vendored Normal file
View File

@@ -0,0 +1 @@
{"v":1}

View File

@@ -0,0 +1,7 @@
{
"git": {
"sha1": "dfa3eda5e8c3f23f8b4c5d504acaebd6e7a45020",
"dirty": true
},
"path_in_vcs": ""
}

View File

@@ -0,0 +1,35 @@
name: Rust
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
env:
CARGO_TERM_COLOR: always
jobs:
build:
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
# Ubuntu-specific dependency installation
- name: Setup Ubuntu dependencies
if: matrix.os == 'ubuntu-latest'
run: |
sudo apt-get update
sudo apt install libasound2-dev
# Build the project
- name: Build
run: cargo build -vv
# Run the tests
- name: Run tests
run: cargo test --workspace -vv

rust/vendor/kokoro-tts/.gitignore vendored Normal file
View File

@@ -0,0 +1,5 @@
*.bin
*.onnx
Cargo.lock
/target
.idea

rust/vendor/kokoro-tts/Cargo.toml vendored Normal file
View File

@@ -0,0 +1,116 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2024"
name = "kokoro-tts"
version = "0.3.2"
build = "build.rs"
autolib = false
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "Lightweight offline AI speech synthesizer (Kokoro TTS) for Rust; easily cross-compiled to mobile targets"
readme = "README.md"
keywords = [
"TTS",
"Offline",
"Lite",
"AI",
"Synthesizer",
]
license = "Apache-2.0"
repository = "https://github.com/mzdk100/kokoro.git"
[features]
use-cmudict = ["cmudict-fast"]
[lib]
name = "kokoro_tts"
path = "src/lib.rs"
[[example]]
name = "synth_directly_v10"
path = "examples/synth_directly_v10.rs"
[[example]]
name = "synth_directly_v11"
path = "examples/synth_directly_v11.rs"
[[example]]
name = "synth_stream"
path = "examples/synth_stream.rs"
[dependencies.bincode]
version = "2.0"
[dependencies.chinese-number]
version = "0.7.8"
features = [
"number-to-chinese",
"chinese-to-number",
]
default-features = false
[dependencies.cmudict-fast]
version = "0.8.0"
optional = true
[dependencies.futures]
version = "0.3.31"
[dependencies.jieba-rs]
version = "0.8.1"
[dependencies.log]
version = "0.4.29"
[dependencies.ndarray]
version = "0.17.2"
[dependencies.ort]
version = "2.0.0-rc.11"
[dependencies.pin-project]
version = "1.1.10"
[dependencies.pinyin]
version = "0.11.0"
[dependencies.rand]
version = "0.10.0-rc.7"
[dependencies.regex]
version = "1.12.2"
[dependencies.tokio]
version = "1.49.0"
features = [
"fs",
"rt-multi-thread",
"time",
"sync",
]
[dev-dependencies.anyhow]
version = "1.0.100"
[dev-dependencies.tokio]
version = "1.49.0"
features = ["macros"]
[dev-dependencies.voxudio]
version = "0.5.7"
features = ["device"]
[build-dependencies.cc]
version = "1.2.53"

rust/vendor/kokoro-tts/Cargo.toml.orig generated vendored Normal file
View File

@@ -0,0 +1,35 @@
[package]
name = "kokoro-tts"
description = "Lightweight offline AI speech synthesizer (Kokoro TTS) for Rust; easily cross-compiled to mobile targets"
version = "0.3.2"
edition = "2024"
keywords = ["TTS", "Offline", "Lite", "AI", "Synthesizer"]
license = "Apache-2.0"
repository = "https://github.com/mzdk100/kokoro.git"
readme = "README.md"
[features]
use-cmudict = ["cmudict-fast"]
[dependencies]
bincode = "2.0"
chinese-number = { version = "0.7.8",default-features = false,features = ["number-to-chinese", "chinese-to-number"] }
cmudict-fast = { version = "0.8.0", optional = true }
futures = "0.3.31"
jieba-rs = "0.8.1"
log = "0.4.29"
ndarray = "0.17.2"
ort = "2.0.0-rc.11"
pin-project = "1.1.10"
pinyin = "0.11.0"
rand="0.10.0-rc.7"
regex = "1.12.2"
tokio = { version = "1.49.0",features = ["fs", "rt-multi-thread","time", "sync"] }
[dev-dependencies]
anyhow = "1.0.100"
tokio = {version = "1.49.0",features = ["macros"]}
voxudio = { version = "0.5.7",features = ["device"] }
[build-dependencies]
cc = "1.2.53"

rust/vendor/kokoro-tts/LICENSE vendored Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

rust/vendor/kokoro-tts/README.md vendored Normal file

@@ -0,0 +1,59 @@
# Kokoro TTS: Rust inference implementation
[Kokoro](https://github.com/hexgrad/kokoro)
> **Kokoro** is an open TTS model with 82 million parameters.
> Despite its lightweight architecture, it delivers quality comparable to much larger models while being faster and more cost-effective. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.
## Overview
This project contains several example scripts showing how to use the Kokoro library for speech synthesis. The examples demonstrate both direct synthesis and streaming synthesis for longer texts.
## Prerequisites
- The Rust programming language
- Tokio (async runtime)
- Rodio (audio processing and playback library, optional)
- Downloaded model assets: the [1.0 model](https://github.com/mzdk100/kokoro/releases/tag/V1.0) and the [1.1 model](https://github.com/mzdk100/kokoro/releases/tag/V1.1) are available here
## Features
- Cross-platform: builds easily on Windows and macOS, and cross-compiles to Android and iOS.
- Offline inference with no network dependency.
- Lightweight: models come in several sizes; the smallest is only 88 MB.
- A diverse set of voices spanning multiple languages.
## Usage
1. Run the examples: clone or download this project, then run from the project root:
```shell
cargo run --example synth_directly_v10
cargo run --example synth_directly_v11
```
2. Integrate it into your own project:
```shell
cargo add kokoro-tts
```
3. Linux dependencies:
```shell
sudo apt install libasound2-dev
```
See the example code in the [examples](examples) folder to get started.
## License
This project is licensed under Apache-2.0. See the LICENSE file in the project for details.
## Notes
- Make sure the model and voice data are loaded correctly before running the examples.
- The synthesis parameters in the examples (voice name, text, speed, and so on) are illustrative; adjust them to your needs.
## Contributing
If you have suggestions for improvement or want to contribute code, feel free to open a Pull Request or create an Issue.
## Disclaimer
The example code in this project is for demonstration purposes only. When using it, please make sure you comply with applicable laws and regulations. The developers are not responsible for any consequences arising from the use of this code.

rust/vendor/kokoro-tts/build.rs vendored Normal file

@@ -0,0 +1,5 @@
fn main() {
// Compile the bundled C English-to-IPA transcriber and link it as library "es".
const SRC: &str = "src/transcription/en_ipa.c";
cc::Build::new().file(SRC).compile("es");
println!("cargo:rerun-if-changed={}", SRC);
}

rust/vendor/kokoro-tts/dict/cmudict.dict vendored Normal file

File diff suppressed because it is too large

rust/vendor/kokoro-tts/dict/espeak.dict vendored Normal file

Binary file not shown.

rust/vendor/kokoro-tts/dict/pinyin.dict vendored Normal file

File diff suppressed because it is too large


@@ -0,0 +1,21 @@
use {
kokoro_tts::{KokoroTts, Voice},
voxudio::AudioPlayer,
};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let tts = KokoroTts::new("kokoro-v1.0.int8.onnx", "voices.bin").await?;
let (audio, took) = tts
.synth(
"Hello, world!你好我们是一群追逐梦想的人。我正在使用qq。",
Voice::ZfXiaoxiao(1.2),
)
.await?;
println!("Synth took: {:?}", took);
let mut player = AudioPlayer::new()?;
player.play()?;
player.write::<24000>(&audio, 1).await?;
Ok(())
}


@@ -0,0 +1,21 @@
use {
kokoro_tts::{KokoroTts, Voice},
voxudio::AudioPlayer,
};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let tts = KokoroTts::new("kokoro-v1.1-zh.onnx", "voices-v1.1-zh.bin").await?;
let (audio, took) = tts
.synth(
"Hello, world!你好我们是一群追逐梦想的人。我正在使用qq。",
Voice::Zm045(1),
)
.await?;
println!("Synth took: {:?}", took);
let mut player = AudioPlayer::new()?;
player.play()?;
player.write::<24000>(&audio, 1).await?;
Ok(())
}


@@ -0,0 +1,51 @@
use {
futures::StreamExt,
kokoro_tts::{KokoroTts, Voice},
voxudio::AudioPlayer,
};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let tts = KokoroTts::new("kokoro-v1.1-zh.onnx", "voices-v1.1-zh.bin").await?;
let (mut sink, mut stream) = tts.stream(Voice::Zm098(1));
sink.synth("hello world.").await?;
sink.synth("你好,我们是一群追逐梦想的人。").await?;
sink.set_voice(Voice::Zf032(2));
sink.synth("我正在使用qq。").await?;
sink.set_voice(Voice::Zf090(3));
sink.synth("今天天气如何?").await?;
sink.set_voice(Voice::Zm045(1));
sink.synth("你在使用Rust编程语言吗").await?;
sink.set_voice(Voice::Zf039(1));
sink.synth(
"你轻轻地走过那
在风雨花丛中
每一点一滴带走
是我醒来的梦
是在那天空上
最美丽的云朵
在那彩虹 最温柔的风",
)
.await?;
sink.set_voice(Voice::Zf088(1));
sink.synth(
"你静静看着我们
最不舍的面容
像流星划过夜空
转瞬即逝的梦
是最深情的脸 在这一瞬间
在遥远天边
",
)
.await?;
drop(sink);
let mut player = AudioPlayer::new()?;
player.play()?;
while let Some((audio, took)) = stream.next().await {
player.write::<24000>(&audio, 1).await?;
println!("Synth took: {:?}", took);
}
Ok(())
}

rust/vendor/kokoro-tts/g2p.py vendored Normal file

@@ -0,0 +1,514 @@
import re
from typing import List, Optional, Tuple
from jieba import posseg, cut_for_search
from pypinyin import lazy_pinyin, load_phrases_dict, Style
from dataclasses import dataclass
@dataclass
class MToken:
tag: str
whitespace: str
phonemes: Optional[str] = None
ZH_MAP = {"b":"ㄅ","p":"ㄆ","m":"ㄇ","f":"ㄈ","d":"ㄉ","t":"ㄊ","n":"ㄋ","l":"ㄌ","g":"ㄍ","k":"ㄎ","h":"ㄏ","j":"ㄐ","q":"ㄑ","x":"ㄒ","zh":"ㄓ","ch":"ㄔ","sh":"ㄕ","r":"ㄖ","z":"ㄗ","c":"ㄘ","s":"ㄙ","a":"ㄚ","o":"ㄛ","e":"ㄜ","ie":"ㄝ","ai":"ㄞ","ei":"ㄟ","ao":"ㄠ","ou":"ㄡ","an":"ㄢ","en":"ㄣ","ang":"ㄤ","eng":"ㄥ","er":"ㄦ","i":"ㄧ","u":"ㄨ","v":"ㄩ","ii":"ㄭ","iii":"十","ve":"月","ia":"压","ian":"言","iang":"阳","iao":"要","in":"阴","ing":"应","iong":"用","iou":"又","ong":"中","ua":"穵","uai":"外","uan":"万","uang":"王","uei":"为","uen":"文","ueng":"翁","uo":"我","van":"元","vn":"云"}
for p in ';:,.!?/—…"()“” 12345R':
assert p not in ZH_MAP, p
ZH_MAP[p] = p
unk = '❓'
punc = frozenset(';:,.!?—…"()“”')
phrases_dict = {
'开户行': [['ka1i'], ['hu4'], ['hang2']],
'发卡行': [['fa4'], ['ka3'], ['hang2']],
'放款行': [['fa4ng'], ['kua3n'], ['hang2']],
'茧行': [['jia3n'], ['hang2']],
'行号': [['hang2'], ['ha4o']],
'各地': [['ge4'], ['di4']],
'借还款': [['jie4'], ['hua2n'], ['kua3n']],
'时间为': [['shi2'], ['jia1n'], ['we2i']],
'为准': [['we2i'], ['zhu3n']],
'色差': [['se4'], ['cha1']],
'': [['dia3']],
'': [['bei5']],
'': [['bu4']],
'': [['zuo5']],
'': [['lei5']],
'掺和': [['chan1'], ['huo5']]
}
must_erhua = {
"小院儿", "胡同儿", "范儿", "老汉儿", "撒欢儿", "寻老礼儿", "妥妥儿", "媳妇儿"
}
must_not_neural_tone_words = {
'男子', '女子', '分子', '原子', '量子', '莲子', '石子', '瓜子', '电子', '人人', '虎虎',
'幺幺', '干嘛', '学子', '哈哈', '数数', '袅袅', '局地', '以下', '娃哈哈', '花花草草', '留得',
'耕地', '想想', '熙熙', '攘攘', '卵子', '死死', '冉冉', '恳恳', '佼佼', '吵吵', '打打',
'考考', '整整', '莘莘', '落地', '算子', '家家户户', '青青'
}
must_neural_tone_words = {
'麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
'难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
'里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
'软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
'认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
'蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
'舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
'胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
'戏弄', '将军', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
'精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
'窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
'码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
'白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
'琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
'爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
'溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
'棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
'木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
'收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
'抬举', '护士', '折腾', '扫帚', '打量', '打算', '打扮', '打听', '打发', '扎实', '扁担',
'戒指', '懒得', '意识', '意思', '悟性', '怪物', '思量', '怎么', '念头', '念叨', '别人',
'快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', '干事',
'帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', '屁股',
'尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', '实在',
'官司', '学问', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', '姑娘', '姐夫',
'妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', '大意', '大夫',
'多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', '嘱咐', '嘟囔',
'嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', '咳嗽', '和尚',
'告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', '叫唤', '口袋',
'厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', '功夫', '力气',
'前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', '凑合', '凉快',
'冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', '佩服', '作坊',
'体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', '交情', '云彩',
'事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', '不由', '下水',
'下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', '父亲', '母亲', '咕噜',
'邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', '幸福', '熟悉', '计划',
'扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', '凤凰', '拖沓', '寒碜',
'糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', '扫把', '惦记'
}
not_erhua = {
"虐儿", "为儿", "护儿", "瞒儿", "救儿", "替儿", "有儿", "一儿", "我儿", "俺儿", "妻儿",
"拐儿", "聋儿", "乞儿", "患儿", "幼儿", "孤儿", "婴儿", "婴幼儿", "连体儿", "脑瘫儿",
"流浪儿", "体弱儿", "混血儿", "蜜雪儿", "舫儿", "祖儿", "美儿", "应采儿", "可儿", "侄儿",
"孙儿", "侄孙儿", "女儿", "男儿", "红孩儿", "花儿", "虫儿", "马儿", "鸟儿", "猪儿", "猫儿",
"狗儿", "少儿"
}
BU = '不'
YI = '一'
X_ENG = frozenset(['x', 'eng'])
# g2p
load_phrases_dict(phrases_dict)
def get_initials_finals(word: str) -> Tuple[List[str], List[str]]:
"""
Get word initial and final by pypinyin or g2pM
"""
initials = []
finals = []
orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS)
orig_finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
print(orig_initials, orig_finals)
# after pypinyin==0.44.0, '嗯' needs to map to "n2", since the initial and final cannot both be empty
en_index = [index for index, c in enumerate(word) if c == "嗯"]
for i in en_index:
orig_finals[i] = "n2"
for c, v in zip(orig_initials, orig_finals):
if re.match(r'i\d', v):
if c in ['z', 'c', 's']:
# zi, ci, si
v = re.sub('i', 'ii', v)
elif c in ['zh', 'ch', 'sh', 'r']:
# zhi, chi, shi
v = re.sub('i', 'iii', v)
initials.append(c)
finals.append(v)
return initials, finals
def merge_erhua(initials: List[str], finals: List[str], word: str, pos: str) -> Tuple[List[str], List[str]]:
"""
Do erhua (儿化 r-coloring) merging.
"""
# fix er1
for i, phn in enumerate(finals):
if i == len(finals) - 1 and word[i] == "儿" and phn == 'er1':
finals[i] = 'er2'
# words whose 儿 is pronounced as a full syllable: return unchanged
if word not in must_erhua and (word in not_erhua or pos in {"a", "j", "nr"}):
return initials, finals
# return directly for cases like "……" where lengths do not match
if len(finals) != len(word):
return initials, finals
assert len(finals) == len(word)
# otherwise 儿 is not a full syllable: fold it into the previous final as an "R" marker
new_initials = []
new_finals = []
for i, phn in enumerate(finals):
if i == len(finals) - 1 and word[i] == "儿" and phn in {"er2", "er5"} and word[-2:] not in not_erhua and new_finals:
new_finals[-1] = new_finals[-1][:-1] + "R" + new_finals[-1][-1]
else:
new_initials.append(initials[i])
new_finals.append(phn)
return new_initials, new_finals
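The merge above folds a trailing 儿 (er2/er5) into the preceding final by inserting an "R" marker before the tone digit. A minimal standalone sketch of just that rule, an editor illustration rather than part of this file, assuming the same `<final><tone-digit>` encoding:

```python
def erhua_fold(finals):
    # fold a trailing er2/er5 syllable into the previous final as an "R" marker
    if len(finals) >= 2 and finals[-1] in {"er2", "er5"}:
        prev = finals[-2]
        return finals[:-2] + [prev[:-1] + "R" + prev[-1]]
    return finals

print(erhua_fold(["ua1", "er2"]))  # ['uaR1'], e.g. 花儿
```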
# merge "不" with the word that follows it
# if we don't merge, "不" sometimes appears alone after jieba segmentation, which may cause sandhi errors
def merge_bu(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
for i, (word, pos) in enumerate(seg):
if pos not in X_ENG:
last_word = None
if i > 0:
last_word, _ = seg[i - 1]
if last_word == BU:
word = last_word + word
next_pos = None
if i + 1 < len(seg):
_, next_pos = seg[i + 1]
if word != BU or next_pos is None or next_pos in X_ENG:
new_seg.append((word, pos))
return new_seg
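The effect of the 不-merge can be seen on a toy segmentation. This simplified sketch (an editor illustration; it drops the part-of-speech guards of the real function) attaches a lone 不 to the following word:

```python
def merge_bu_simple(seg):
    # attach a lone 不 token to the word that follows it
    out = []
    for word, pos in seg:
        if out and out[-1][0] == "不":
            out[-1] = ("不" + word, pos)
        else:
            out.append((word, pos))
    return out

print(merge_bu_simple([("不", "d"), ("好看", "a")]))  # [('不好看', 'a')]
```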
# function 1: merge "一" with identical reduplicated words on its left and right, e.g. "听","一","听" -> "听一听"
# function 2: merge a lone "一" with the word that follows it
# if we don't merge, "一" sometimes appears alone after jieba segmentation, which may cause sandhi errors
# e.g.
# input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
# output seg: [['听一听', 'v']]
def merge_yi(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
skip_next = False
# function 1
for i, (word, pos) in enumerate(seg):
if skip_next:
skip_next = False
continue
if i - 1 >= 0 and word == YI and i + 1 < len(seg) and seg[i - 1][0] == seg[i + 1][0] and seg[i - 1][1] == "v" and seg[i + 1][1] not in X_ENG:
new_seg[-1] = (new_seg[-1][0] + YI + seg[i + 1][0], new_seg[-1][1])
skip_next = True
else:
new_seg.append((word, pos))
seg = new_seg
new_seg = []
# function 2
for i, (word, pos) in enumerate(seg):
if new_seg and new_seg[-1][0] == YI and pos not in X_ENG:
new_seg[-1] = (new_seg[-1][0] + word, new_seg[-1][1])
else:
new_seg.append((word, pos))
return new_seg
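Function 1 above handles the reduplication pattern. A simplified standalone sketch (editor illustration only, without the part-of-speech checks):

```python
def merge_yi_simple(seg):
    # merge "X 一 X" reduplication into one token, e.g. 听 一 听 -> 听一听
    out, skip = [], False
    for i, (word, pos) in enumerate(seg):
        if skip:
            skip = False
            continue
        if word == "一" and 0 < i < len(seg) - 1 and seg[i - 1][0] == seg[i + 1][0]:
            out[-1] = (out[-1][0] + "一" + seg[i + 1][0], out[-1][1])
            skip = True
        else:
            out.append((word, pos))
    return out

print(merge_yi_simple([("听", "v"), ("一", "m"), ("听", "v")]))  # [('听一听', 'v')]
```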
def merge_reduplication(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
for i, (word, pos) in enumerate(seg):
if new_seg and word == new_seg[-1][0] and pos not in X_ENG:
new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
else:
new_seg.append([word, pos])
return new_seg
def is_reduplication(word: str) -> bool:
return len(word) == 2 and word[0] == word[1]
# the first and the second words are all_tone_three
def merge_continuous_three_tones(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
sub_finals_list = []
for (word, pos) in seg:
if pos in X_ENG:
sub_finals_list.append(['0'])
continue
orig_finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
# after pypinyin==0.44.0, '嗯' needs to map to "n2", since the initial and final cannot both be empty
en_index = [index for index, c in enumerate(word) if c == "嗯"]
for i in en_index:
orig_finals[i] = "n2"
sub_finals_list.append(orig_finals)
assert len(sub_finals_list) == len(seg)
merge_last = [False] * len(seg)
for i, (word, pos) in enumerate(seg):
if pos not in X_ENG and i - 1 >= 0 and all_tone_three(sub_finals_list[i - 1]) and all_tone_three(sub_finals_list[i]) and not merge_last[i - 1]:
# if the last word is a reduplication, do not merge, because reduplications need neutral-tone sandhi
if not is_reduplication(seg[i - 1][0]) and len(seg[i - 1][0]) + len(seg[i][0]) <= 3:
new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
merge_last[i] = True
else:
new_seg.append([word, pos])
else:
new_seg.append([word, pos])
return new_seg
# the last char of first word and the first char of second word is tone_three
def merge_continuous_three_tones_2(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
sub_finals_list = []
for (word, pos) in seg:
if pos in X_ENG:
sub_finals_list.append(['0'])
continue
orig_finals = lazy_pinyin(
word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
# after pypinyin==0.44.0, '嗯' needs to map to "n2", since the initial and final cannot both be empty
en_index = [index for index, c in enumerate(word) if c == "嗯"]
for i in en_index:
orig_finals[i] = "n2"
sub_finals_list.append(orig_finals)
assert len(sub_finals_list) == len(seg)
merge_last = [False] * len(seg)
for i, (word, pos) in enumerate(seg):
if pos not in X_ENG and i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not merge_last[i - 1]:
# if the last word is a reduplication, do not merge, because reduplications need neutral-tone sandhi
if not is_reduplication(seg[i - 1][0]) and len(seg[i - 1][0]) + len(seg[i][0]) <= 3:
new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
merge_last[i] = True
else:
new_seg.append([word, pos])
else:
new_seg.append([word, pos])
return new_seg
def merge_er(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
new_seg = []
for i, (word, pos) in enumerate(seg):
if i - 1 >= 0 and word == "儿" and new_seg[-1][1] not in X_ENG:
new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
else:
new_seg.append([word, pos])
return new_seg
def pre_merge_for_modify(seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
"""
seg: [(word, pos), ...]
"""
seg = merge_bu(seg)
seg = merge_yi(seg)
seg = merge_reduplication(seg)
seg = merge_continuous_three_tones(seg)
seg = merge_continuous_three_tones_2(seg)
return merge_er(seg)
def bu_sandhi(word: str, finals: List[str]) -> List[str]:
# e.g. 看不懂
if len(word) == 3 and word[1] == BU:
finals[1] = finals[1][:-1] + "5"
else:
for i, char in enumerate(word):
# "不" before tone4 should be bu2, e.g. 不怕
if char == BU and i + 1 < len(word) and finals[i + 1][-1] == "4":
finals[i] = finals[i][:-1] + "2"
return finals
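The tone-4 rule for 不 in isolation, as an editor sketch using the same `<final><tone-digit>` encoding as above:

```python
def bu_before_tone4(word, finals):
    # 不 reads bu2 when the next syllable carries tone 4, e.g. 不怕 -> bu2 pa4
    out = list(finals)
    for i, ch in enumerate(word):
        if ch == "不" and i + 1 < len(word) and out[i + 1][-1] == "4":
            out[i] = out[i][:-1] + "2"
    return out

print(bu_before_tone4("不怕", ["u4", "a4"]))  # ['u2', 'a4']
```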
def yi_sandhi(word: str, finals: List[str]) -> List[str]:
# "一" in number sequences, e.g. 一零零, 二一零
if word.find(YI) != -1 and all(
[item.isnumeric() for item in word if item != YI]):
return finals
# "一" between reduplicated words should be yi5, e.g. 看一看
elif len(word) == 3 and word[1] == YI and word[0] == word[-1]:
finals[1] = finals[1][:-1] + "5"
# when "一" is ordinal word, it should be yi1
elif word.startswith("第一"):
finals[1] = finals[1][:-1] + "1"
else:
for i, char in enumerate(word):
if char == YI and i + 1 < len(word):
# "一" before tone4 should be yi2, e.g. 一段
if finals[i + 1][-1] in {'4', '5'}:
finals[i] = finals[i][:-1] + "2"
# "一" before non-tone4 should be yi4, e.g. 一天
else:
# if "一" is followed by punctuation, it still reads with tone 1
if word[i + 1] not in punc:
finals[i] = finals[i][:-1] + "4"
return finals
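The core 一 sandhi in the else branch above can be sketched standalone (editor illustration; the punctuation and ordinal special cases are omitted):

```python
def yi_tone(word, finals):
    # 一 reads yi2 before tone 4/5 (一段) and yi4 before other tones (一天)
    out = list(finals)
    for i, ch in enumerate(word):
        if ch == "一" and i + 1 < len(word):
            out[i] = out[i][:-1] + ("2" if out[i + 1][-1] in {"4", "5"} else "4")
    return out

print(yi_tone("一段", ["i1", "uan4"]))  # ['i2', 'uan4']
print(yi_tone("一天", ["i1", "ian1"]))  # ['i4', 'ian1']
```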
def split_word(word: str) -> List[str]:
word_list = cut_for_search(word)
word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
first_subword = word_list[0]
first_begin_idx = word.find(first_subword)
if first_begin_idx == 0:
second_subword = word[len(first_subword):]
new_word_list = [first_subword, second_subword]
else:
second_subword = word[:-len(first_subword)]
new_word_list = [second_subword, first_subword]
return new_word_list
# the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
# e.g.
# word: "家里"
# pos: "s"
# finals: ['ia1', 'i3']
def neural_sandhi(word: str, pos: str, finals: List[str]) -> List[str]:
if word in must_not_neural_tone_words:
return finals
# reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
for j, item in enumerate(word):
if j - 1 >= 0 and item == word[j - 1] and pos[0] in {"n", "v", "a"}:
finals[j] = finals[j][:-1] + "5"
ge_idx = word.find("个")
if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒滴哩哟喽啰耶喔诶":
finals[-1] = finals[-1][:-1] + "5"
elif len(word) >= 1 and word[-1] in "的地得":
finals[-1] = finals[-1][:-1] + "5"
# e.g. 走了, 看着, 去过
elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
finals[-1] = finals[-1][:-1] + "5"
elif len(word) > 1 and word[-1] in "们子" and pos in {"r", "n"}:
finals[-1] = finals[-1][:-1] + "5"
# e.g. 桌上, 地下
elif len(word) > 1 and word[-1] in "上下" and pos in {"s", "l", "f"}:
finals[-1] = finals[-1][:-1] + "5"
# e.g. 上来, 下去
elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
finals[-1] = finals[-1][:-1] + "5"
# 个 as a measure word takes the neutral tone
elif (ge_idx >= 1 and (word[ge_idx - 1].isnumeric() or word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
finals[ge_idx] = finals[ge_idx][:-1] + "5"
else:
if word in must_neural_tone_words or word[-2:] in must_neural_tone_words:
finals[-1] = finals[-1][:-1] + "5"
word_list = split_word(word)
finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
for i, word in enumerate(word_list):
# conventional neural in Chinese
if word in must_neural_tone_words or word[-2:] in must_neural_tone_words:
finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
finals = sum(finals_list, [])
return finals
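One of the rules above, the neutral tone on common suffixes, isolated as a standalone editor sketch:

```python
def neutral_suffix(word, finals):
    # suffixes such as 子 / 们 take the neutral tone (5), e.g. 桌子 -> zhuo1 zi5
    out = list(finals)
    if len(word) > 1 and word[-1] in "们子":
        out[-1] = out[-1][:-1] + "5"
    return out

print(neutral_suffix("桌子", ["uo1", "iii3"]))  # ['uo1', 'iii5']
```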
def all_tone_three(finals: List[str]) -> bool:
return all(x[-1] == "3" for x in finals)
def three_sandhi(word: str, finals: List[str]) -> List[str]:
if len(word) == 2 and all_tone_three(finals):
finals[0] = finals[0][:-1] + "2"
elif len(word) == 3:
word_list = split_word(word)
if all_tone_three(finals):
# disyllabic + monosyllabic, e.g. 蒙古/包
if len(word_list[0]) == 2:
finals[0] = finals[0][:-1] + "2"
finals[1] = finals[1][:-1] + "2"
# monosyllabic + disyllabic, e.g. 纸/老虎
elif len(word_list[0]) == 1:
finals[1] = finals[1][:-1] + "2"
else:
finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
if len(finals_list) == 2:
for i, sub in enumerate(finals_list):
# e.g. 所有/人
if all_tone_three(sub) and len(sub) == 2:
finals_list[i][0] = finals_list[i][0][:-1] + "2"
# e.g. 好/喜欢
elif i == 1 and not all_tone_three(sub) and finals_list[i][0][-1] == "3" and finals_list[0][-1][-1] == "3":
finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
finals = sum(finals_list, [])
# split a four-character idiom into two two-character words
elif len(word) == 4:
finals_list = [finals[:2], finals[2:]]
finals = []
for sub in finals_list:
if all_tone_three(sub):
sub[0] = sub[0][:-1] + "2"
finals += sub
return finals
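The two-syllable base case of third-tone sandhi, shown standalone (editor sketch; the longer-word cases above build on this):

```python
def third_tone_pair(finals):
    # two consecutive third tones: the first shifts to the rising tone (2), e.g. 你好 -> ni2 hao3
    if len(finals) == 2 and all(f[-1] == "3" for f in finals):
        return [finals[0][:-1] + "2", finals[1]]
    return finals

print(third_tone_pair(["i3", "ao3"]))  # ['i2', 'ao3']
```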
def modified_tone(word: str, pos: str, finals: List[str]) -> List[str]:
"""
word: the segmented word
pos: its part-of-speech tag
finals: tone-annotated finals, [final1, ..., finaln]
"""
finals = bu_sandhi(word, finals)
finals = yi_sandhi(word, finals)
finals = neural_sandhi(word, pos, finals)
return three_sandhi(word, finals)
def g2p(text: str, with_erhua: bool = True) -> str:
"""
Return: string of phonemes.
'ㄋㄧ2ㄏㄠ3/ㄕ十4ㄐㄝ4'
"""
tokens = []
seg_cut = posseg.lcut(text)
# fix wordseg bad case for sandhi
seg_cut = pre_merge_for_modify(seg_cut)
# whole-sentence prediction is used here for better results on polyphonic words
initials = []
finals = []
# pypinyin, g2pM
for word, pos in seg_cut:
if pos == 'x' and '\u4E00' <= min(word) and max(word) <= '\u9FFF':
pos = 'X'
elif pos != 'x' and word in punc:
pos = 'x'
tk = MToken(tag=pos, whitespace='')
if pos in X_ENG:
if not word.isspace():
if pos == 'x' and word in punc:
tk.phonemes = word
tokens.append(tk)
elif tokens:
tokens[-1].whitespace += word
continue
elif tokens and tokens[-1].tag not in X_ENG and not tokens[-1].whitespace:
tokens[-1].whitespace = '/'
# g2p
sub_initials, sub_finals = get_initials_finals(word)
# tone sandhi
sub_finals = modified_tone(word, pos, sub_finals)
# er hua
if with_erhua:
sub_initials, sub_finals = merge_erhua(sub_initials, sub_finals, word, pos)
initials.append(sub_initials)
finals.append(sub_finals)
# assert len(sub_initials) == len(sub_finals) == len(word)
# sum(iterable[, start])
# initials = sum(initials, [])
# finals = sum(finals, [])
phones = []
for c, v in zip(sub_initials, sub_finals):
# NOTE: post process for pypinyin outputs
# we discriminate i, ii and iii
if c:
phones.append(c)
# replace punctuation by ` `
# if c and c in punc:
# phones.append(c)
if v and (v not in punc or v != c):# and v not in rhy_phns:
phones.append(v)
phones = '_'.join(phones).replace('_eR', '_er').replace('R', '_R')
phones = re.sub(r'(?=\d)', '_', phones).split('_')
print(phones)
tk.phonemes = ''.join(ZH_MAP.get(p, unk) for p in phones)
tokens.append(tk)
return ''.join((unk if tk.phonemes is None else tk.phonemes) + tk.whitespace for tk in tokens)
print(g2p('时间为。Hello, world!你好我们是一群追逐梦想的人。我正在使用qq。忽略卢驴'))
seg = posseg.lcut('不好看', True)
print(seg, merge_bu(seg))
seg = merge_bu(posseg.lcut('听一听一个', True))
print(seg, merge_yi(seg))
seg = merge_bu(posseg.lcut('谢谢谢谢', True))
print(seg, merge_reduplication(seg))
seg = merge_bu(posseg.lcut('小美好', True))
print(seg, merge_continuous_three_tones(seg))
seg = merge_bu(posseg.lcut('风景好', True))
print(seg, merge_continuous_three_tones_2(seg))

rust/vendor/kokoro-tts/run.bat vendored Normal file

@@ -0,0 +1,3 @@
set PATH=%PATH%;D:\msys64\mingw64\bin
cargo run --example synth_directly_v11
pause

rust/vendor/kokoro-tts/src/error.rs vendored Normal file

@@ -0,0 +1,80 @@
use crate::G2PError;
use bincode::error::DecodeError;
use ndarray::ShapeError;
use ort::Error as OrtError;
use std::{
error::Error,
fmt::{Debug, Display, Formatter, Result as FmtResult},
io::Error as IoError,
time::SystemTimeError,
};
#[derive(Debug)]
pub enum KokoroError {
Decode(DecodeError),
G2P(G2PError),
Io(IoError),
ModelReleased,
Ort(OrtError),
Send(String),
Shape(ShapeError),
SystemTime(SystemTimeError),
VoiceNotFound(String),
VoiceVersionInvalid(String),
}
impl Display for KokoroError {
fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {
write!(f, "KokoroError: ")?;
match self {
Self::Decode(e) => Display::fmt(e, f),
Self::G2P(e) => Display::fmt(e, f),
Self::Io(e) => Display::fmt(e, f),
Self::Ort(e) => Display::fmt(e, f),
Self::ModelReleased => write!(f, "ModelReleased"),
Self::Send(e) => Display::fmt(e, f),
Self::Shape(e) => Display::fmt(e, f),
Self::SystemTime(e) => Display::fmt(e, f),
Self::VoiceNotFound(name) => write!(f, "VoiceNotFound({})", name),
Self::VoiceVersionInvalid(msg) => write!(f, "VoiceVersionInvalid({})", msg),
}
}
}
impl Error for KokoroError {}
impl From<IoError> for KokoroError {
fn from(value: IoError) -> Self {
Self::Io(value)
}
}
impl From<DecodeError> for KokoroError {
fn from(value: DecodeError) -> Self {
Self::Decode(value)
}
}
impl From<OrtError> for KokoroError {
fn from(value: OrtError) -> Self {
Self::Ort(value)
}
}
impl From<G2PError> for KokoroError {
fn from(value: G2PError) -> Self {
Self::G2P(value)
}
}
impl From<ShapeError> for KokoroError {
fn from(value: ShapeError) -> Self {
Self::Shape(value)
}
}
impl From<SystemTimeError> for KokoroError {
fn from(value: SystemTimeError) -> Self {
Self::SystemTime(value)
}
}

rust/vendor/kokoro-tts/src/g2p.rs vendored Normal file

@@ -0,0 +1,321 @@
/// Text-to-IPA (International Phonetic Alphabet) conversion.
mod v10;
mod v11;
use super::PinyinError;
use chinese_number::{ChineseCase, ChineseCountMethod, ChineseVariant, NumberToChinese};
#[cfg(feature = "use-cmudict")]
use cmudict_fast::{Cmudict, Error as CmudictError};
use pinyin::ToPinyin;
use regex::{Captures, Error as RegexError, Regex};
use std::{
error::Error,
fmt::{Display, Formatter, Result as FmtResult},
};
#[derive(Debug)]
pub enum G2PError {
#[cfg(feature = "use-cmudict")]
CmudictError(CmudictError),
EnptyData,
#[cfg(not(feature = "use-cmudict"))]
Nul(std::ffi::NulError),
Pinyin(PinyinError),
Regex(RegexError),
#[cfg(not(feature = "use-cmudict"))]
Utf8(std::str::Utf8Error),
}
impl Display for G2PError {
fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {
write!(f, "G2PError: ")?;
match self {
#[cfg(feature = "use-cmudict")]
Self::CmudictError(e) => Display::fmt(e, f),
Self::EnptyData => Display::fmt("EmptyData", f),
#[cfg(not(feature = "use-cmudict"))]
Self::Nul(e) => Display::fmt(e, f),
Self::Pinyin(e) => Display::fmt(e, f),
Self::Regex(e) => Display::fmt(e, f),
#[cfg(not(feature = "use-cmudict"))]
Self::Utf8(e) => Display::fmt(e, f),
}
}
}
impl Error for G2PError {}
impl From<PinyinError> for G2PError {
fn from(value: PinyinError) -> Self {
Self::Pinyin(value)
}
}
impl From<RegexError> for G2PError {
fn from(value: RegexError) -> Self {
Self::Regex(value)
}
}
#[cfg(feature = "use-cmudict")]
impl From<CmudictError> for G2PError {
fn from(value: CmudictError) -> Self {
Self::CmudictError(value)
}
}
#[cfg(not(feature = "use-cmudict"))]
impl From<std::ffi::NulError> for G2PError {
fn from(value: std::ffi::NulError) -> Self {
Self::Nul(value)
}
}
#[cfg(not(feature = "use-cmudict"))]
impl From<std::str::Utf8Error> for G2PError {
fn from(value: std::str::Utf8Error) -> Self {
Self::Utf8(value)
}
}
fn word2ipa_zh(word: &str) -> Result<String, G2PError> {
let iter = word.chars().map(|i| match i.to_pinyin() {
None => Ok(i.to_string()),
Some(p) => v10::py2ipa(p.with_tone_num_end()),
});
let mut result = String::new();
for i in iter {
result.push_str(&i?);
}
Ok(result)
}
#[cfg(feature = "use-cmudict")]
fn word2ipa_en(word: &str) -> Result<String, G2PError> {
use super::{arpa_to_ipa, letters_to_ipa};
use std::{
io::{Error as IoError, ErrorKind},
str::FromStr,
sync::LazyLock,
};
fn get_cmudict<'a>() -> Result<&'a Cmudict, CmudictError> {
static CMUDICT: LazyLock<Result<Cmudict, CmudictError>> =
LazyLock::new(|| Cmudict::from_str(include_str!("../dict/cmudict.dict")));
CMUDICT.as_ref().map_err(|i| match i {
CmudictError::IoErr(e) => CmudictError::IoErr(IoError::new(ErrorKind::Other, e)),
CmudictError::InvalidLine(e) => CmudictError::InvalidLine(*e),
CmudictError::RuleParseError(e) => CmudictError::RuleParseError(e.clone()),
})
}
if word.chars().count() < 4 && word.chars().all(|c| c.is_ascii_uppercase()) {
return Ok(letters_to_ipa(word));
}
let dict = get_cmudict()?;
let upper = word.to_ascii_uppercase();
let lower = word.to_ascii_lowercase();
let Some(rules) = dict
.get(word)
.or_else(|| dict.get(&upper))
.or_else(|| dict.get(&lower))
else {
return Ok(letters_to_ipa(word));
};
if rules.is_empty() {
return Ok(word.to_owned());
}
let i = rand::random_range(0..rules.len());
let result = rules[i]
.pronunciation()
.iter()
.map(|i| arpa_to_ipa(&i.to_string()).unwrap_or_default())
.collect::<String>();
Ok(result)
}
#[cfg(not(feature = "use-cmudict"))]
fn word2ipa_en(word: &str) -> Result<String, G2PError> {
use super::letters_to_ipa;
use std::{
ffi::{CStr, CString, c_char},
sync::Once,
};
if word.chars().count() < 4 && word.chars().all(|c| c.is_ascii_uppercase()) {
return Ok(letters_to_ipa(word));
}
unsafe extern "C" {
fn TextToPhonemes(text: *const c_char) -> *const ::std::os::raw::c_char;
fn Initialize(data_dictlist: *const c_char);
}
unsafe {
static INIT: Once = Once::new();
INIT.call_once(|| {
static DATA: &[u8] = include_bytes!("../dict/espeak.dict");
Initialize(DATA.as_ptr() as _);
});
let word = CString::new(word.to_lowercase())?.into_raw() as *const c_char;
let res = TextToPhonemes(word);
Ok(CStr::from_ptr(res).to_str()?.to_string())
}
}
fn to_half_shape(text: &str) -> String {
let mut result = String::with_capacity(text.len() * 2); // pre-allocate a reasonable capacity
for c in text.chars() {
match c {
// quotation and bracket pairs
'«' | '《' => result.push('“'),
'»' | '》' => result.push('”'),
'（' => result.push('('),
'）' => result.push(')'),
// simple full-width to half-width replacements
'、' | '，' => result.push(','),
'。' => result.push('.'),
'！' => result.push('!'),
'：' => result.push(':'),
'；' => result.push(';'),
'？' => result.push('?'),
// all other characters pass through unchanged
_ => result.push(c),
}
}
result
}
fn num_repr(text: &str) -> Result<String, G2PError> {
let regex = Regex::new(r#"\d+(\.\d+)?"#)?;
Ok(regex
.replace_all(text, |caps: &Captures| {
let text = &caps[0];
// `\d+(\.\d+)?` always parses as f64, so a separate i64 branch is unnecessary
if let Ok(num) = text.parse::<f64>() {
num.to_chinese(
ChineseVariant::Traditional,
ChineseCase::Lower,
ChineseCountMethod::Low,
)
.map_or(text.to_owned(), |i| i)
} else {
text.to_owned()
}
})
.to_string())
}
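The scanning role the regex plays in `num_repr` (locate each digit run, hand it to a converter) can be sketched without the `regex` or `chinese_number` crates. This is a std-only illustration with a stub converter; unlike the real pattern it ignores decimal points, and the function names are hypothetical:

```rust
// Sketch: walk a string and rewrite each run of ASCII digits via a converter
// callback — the scanning that the regex does in `num_repr`. Stub converter.
fn rewrite_numbers(text: &str, convert: impl Fn(&str) -> String) -> String {
    let mut out = String::new();
    let mut num = String::new();
    for c in text.chars() {
        if c.is_ascii_digit() {
            num.push(c);
        } else {
            if !num.is_empty() {
                out.push_str(&convert(&num));
                num.clear();
            }
            out.push(c);
        }
    }
    // flush a trailing digit run
    if !num.is_empty() {
        out.push_str(&convert(&num));
    }
    out
}

fn main() {
    println!("{}", rewrite_numbers("共3人", |n| format!("<{n}>")));
}
```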
pub fn g2p(text: &str, use_v11: bool) -> Result<String, G2PError> {
let text = num_repr(text)?;
let sentence_pattern = Regex::new(
r#"([\u4E00-\u9FFF]+)|([,。:·?、!《》()【】〖〗〔〕“”‘’〈〉…— ]+)|([\u0000-\u00FF]+)+"#,
)?;
let en_word_pattern = Regex::new("\\w+|\\W+")?;
let jieba = jieba_rs::Jieba::new();
let mut result = String::new();
for i in sentence_pattern.captures_iter(&text) {
match (i.get(1), i.get(2), i.get(3)) {
(Some(text), _, _) => {
let text = to_half_shape(text.as_str());
if use_v11 {
if !result.is_empty() && !result.ends_with(' ') {
result.push(' ');
}
result.push_str(&v11::g2p(&text, true));
result.push(' ');
} else {
for i in jieba.cut(&text, true) {
result.push_str(&word2ipa_zh(i)?);
result.push(' ');
}
}
}
(_, Some(text), _) => {
let text = to_half_shape(text.as_str());
result = result.trim_end().to_string();
result.push_str(&text);
result.push(' ');
}
(_, _, Some(text)) => {
for i in en_word_pattern.captures_iter(text.as_str()) {
let c = (i[0]).chars().next().unwrap_or_default();
if c == '\''
|| c == '_'
|| c == '-'
|| c.is_ascii_lowercase()
|| c.is_ascii_uppercase()
{
let i = &i[0];
if result.trim_end().ends_with(['.', ',', '!', '?'])
&& !result.ends_with(' ')
{
result.push(' ');
}
result.push_str(&word2ipa_en(i)?);
} else if c == ' ' && result.ends_with(' ') {
result.push_str((i[0]).trim_start());
} else {
result.push_str(&i[0]);
}
}
}
_ => (),
};
}
Ok(result.trim().to_string())
}
#[cfg(test)]
mod tests {
#[cfg(not(feature = "use-cmudict"))]
#[test]
fn test_word2ipa_en() -> Result<(), super::G2PError> {
use super::word2ipa_en;
// println!("{:?}", espeak_rs::text_to_phonemes("days", "en", None, true, false));
assert_eq!("kjˌuːkjˈuː", word2ipa_en("qq")?);
assert_eq!("həlˈəʊ", word2ipa_en("hello")?);
assert_eq!("wˈɜːld", word2ipa_en("world")?);
assert_eq!("ˈapəl", word2ipa_en("apple")?);
assert_eq!("ˈɪldɹɛn", word2ipa_en("children")?);
assert_eq!("ˈaʊə", word2ipa_en("hour")?);
assert_eq!("dˈeɪz", word2ipa_en("days")?);
Ok(())
}
#[cfg(feature = "use-cmudict")]
#[test]
fn test_word2ipa_en_is_case_insensitive_for_dictionary_words() -> Result<(), super::G2PError> {
use super::word2ipa_en;
assert_eq!(word2ipa_en("Welcome")?, word2ipa_en("welcome")?);
Ok(())
}
#[test]
fn test_g2p() -> Result<(), super::G2PError> {
use super::g2p;
assert_eq!("ni↓xau↓ ʂɻ↘ʨje↘", g2p("你好世界", false)?);
assert_eq!("ㄋㄧ2ㄏㄠ3/ㄕ十4ㄐㄝ4", g2p("你好世界", true)?);
Ok(())
}
}
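Before dispatching to the Chinese or English path, `g2p` splits mixed input into CJK, punctuation, and ASCII runs via `sentence_pattern`. A std-only sketch of that classification (the regex and jieba segmentation are simplified away; names here are illustrative):

```rust
// Sketch: classify a mixed-language string into contiguous runs, mirroring
// the role of `sentence_pattern` in `g2p`. Illustrative only.
#[derive(Debug, PartialEq)]
enum Run {
    Cjk(String),
    Ascii(String),
}

fn split_runs(text: &str) -> Vec<Run> {
    let mut runs: Vec<Run> = Vec::new();
    for c in text.chars() {
        // the same CJK Unified Ideographs range the regex uses
        let is_cjk = ('\u{4E00}'..='\u{9FFF}').contains(&c);
        let extended = match runs.last_mut() {
            Some(Run::Cjk(s)) if is_cjk => {
                s.push(c);
                true
            }
            Some(Run::Ascii(s)) if !is_cjk => {
                s.push(c);
                true
            }
            _ => false,
        };
        if !extended {
            runs.push(if is_cjk {
                Run::Cjk(c.to_string())
            } else {
                Run::Ascii(c.to_string())
            });
        }
    }
    runs
}

fn main() {
    println!("{:?}", split_runs("你好abc世界"));
}
```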

rust/vendor/kokoro-tts/src/g2p/v10.rs vendored Normal file

@@ -0,0 +1,62 @@
use crate::{G2PError, pinyin_to_ipa};
fn retone(p: &str) -> String {
let chars: Vec<char> = p.chars().collect();
let mut result = String::with_capacity(p.len());
let mut i = 0;
while i < chars.len() {
match () {
// third tone (handled first)
_ if i + 2 < chars.len()
&& chars[i] == '˧'
&& chars[i + 1] == '˩'
&& chars[i + 2] == '˧' =>
{
result.push('↓');
i += 3;
}
// second tone
_ if i + 1 < chars.len() && chars[i] == '˧' && chars[i + 1] == '˥' => {
result.push('↗');
i += 2;
}
// fourth tone
_ if i + 1 < chars.len() && chars[i] == '˥' && chars[i + 1] == '˩' => {
result.push('↘');
i += 2;
}
// first tone
_ if chars[i] == '˥' => {
result.push('→');
i += 1;
}
// combining-character replacement (syllabic ɻ̩ and ɱ̩, i.e. U+027B/U+0271 followed by U+0329)
_ if i + 1 < chars.len() && chars[i + 1] == '\u{0329}' && (chars[i] == '\u{027B}' || chars[i] == '\u{0271}') =>
{
result.push('ɨ');
i += 2;
}
// default case
_ => {
result.push(chars[i]);
i += 1;
}
}
}
assert!(
!result.contains('\u{0329}'),
"Unexpected combining mark in: {}",
result
);
result
}
pub(super) fn py2ipa(py: &str) -> Result<String, G2PError> {
pinyin_to_ipa(py)?
.first()
.map_or(Err(G2PError::EnptyData), |i| {
Ok(i.iter().map(|i| retone(i)).collect::<String>())
})
}
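`retone` rewrites Chao tone-letter contours into the arrow marks the v1.0 vocabulary expects, matching the longest contour first. A std-only sketch of that contour mapping (function name is illustrative):

```rust
// Sketch: map Chao tone-letter contours to the arrow marks used by `retone`.
// The longest contour is matched first so "˧˩˧" is not consumed piecemeal.
fn retone_sketch(p: &str) -> String {
    let mut out = String::new();
    let mut rest = p;
    while !rest.is_empty() {
        if let Some(r) = rest.strip_prefix("˧˩˧") {
            out.push('↓'); // third tone
            rest = r;
        } else if let Some(r) = rest.strip_prefix("˧˥") {
            out.push('↗'); // second tone
            rest = r;
        } else if let Some(r) = rest.strip_prefix("˥˩") {
            out.push('↘'); // fourth tone
            rest = r;
        } else if let Some(r) = rest.strip_prefix('˥') {
            out.push('→'); // first tone
            rest = r;
        } else {
            // any other character passes through unchanged
            let c = rest.chars().next().unwrap();
            out.push(c);
            rest = &rest[c.len_utf8()..];
        }
    }
    out
}

fn main() {
    println!("{}", retone_sketch("ni˧˩˧xau˧˩˧"));
}
```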

rust/vendor/kokoro-tts/src/g2p/v11.rs vendored Normal file

File diff suppressed because it is too large

rust/vendor/kokoro-tts/src/lib.rs vendored Normal file

@@ -0,0 +1,83 @@
mod error;
mod g2p;
mod stream;
mod synthesizer;
mod tokenizer;
mod transcription;
mod voice;
use {
bincode::{config::standard, decode_from_slice},
ort::{execution_providers::CUDAExecutionProvider, session::Session},
std::{collections::HashMap, path::Path, sync::Arc, time::Duration},
tokio::{fs::read, sync::Mutex},
};
pub use {error::*, g2p::*, stream::*, tokenizer::*, transcription::*, voice::*};
pub struct KokoroTts {
model: Arc<Mutex<Session>>,
voices: Arc<HashMap<String, Vec<Vec<Vec<f32>>>>>,
}
impl KokoroTts {
pub async fn new<P: AsRef<Path>>(model_path: P, voices_path: P) -> Result<Self, KokoroError> {
let voices = read(voices_path).await?;
let (voices, _) = decode_from_slice(&voices, standard())?;
let model = Session::builder()?
.with_execution_providers([CUDAExecutionProvider::default().build()])?
.commit_from_file(model_path)?;
Ok(Self {
model: Arc::new(model.into()),
voices,
})
}
pub async fn new_from_bytes<B>(model: B, voices: B) -> Result<Self, KokoroError>
where
B: AsRef<[u8]>,
{
let (voices, _) = decode_from_slice(voices.as_ref(), standard())?;
let model = Session::builder()?
.with_execution_providers([CUDAExecutionProvider::default().build()])?
.commit_from_memory(model.as_ref())?;
Ok(Self {
model: Arc::new(model.into()),
voices,
})
}
pub async fn synth<S>(&self, text: S, voice: Voice) -> Result<(Vec<f32>, Duration), KokoroError>
where
S: AsRef<str>,
{
let name = voice.get_name();
let pack = self
.voices
.get(name)
.ok_or(KokoroError::VoiceNotFound(name.to_owned()))?;
synthesizer::synth(Arc::downgrade(&self.model), text, pack, voice).await
}
pub fn stream<S>(&self, voice: Voice) -> (SynthSink<S>, SynthStream)
where
S: AsRef<str> + Send + 'static,
{
let voices = Arc::downgrade(&self.voices);
let model = Arc::downgrade(&self.model);
start_synth_session(voice, move |text, voice| {
let voices = voices.clone();
let model = model.clone();
async move {
let name = voice.get_name();
let voices = voices.upgrade().ok_or(KokoroError::ModelReleased)?;
let pack = voices
.get(name)
.ok_or(KokoroError::VoiceNotFound(name.to_owned()))?;
synthesizer::synth(model, text, pack, voice).await
}
})
}
}

rust/vendor/kokoro-tts/src/stream.rs vendored Normal file

@@ -0,0 +1,157 @@
use {
crate::{KokoroError, Voice},
futures::{Sink, SinkExt, Stream},
pin_project::pin_project,
std::{
pin::Pin,
task::{Context, Poll},
time::Duration,
},
tokio::sync::mpsc::{UnboundedReceiver, UnboundedSender, unbounded_channel},
};
struct Request<S> {
voice: Voice,
text: S,
}
struct Response {
data: Vec<f32>,
took: Duration,
}
/// Speech synthesis stream
///
/// Used to synthesize longer text as a stream. Implements the `Stream` trait, so the synthesized audio chunks can be consumed asynchronously.
#[pin_project]
pub struct SynthStream {
#[pin]
rx: UnboundedReceiver<Response>,
}
impl Stream for SynthStream {
type Item = (Vec<f32>, Duration);
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Pin::new(&mut self.project().rx)
.poll_recv(cx)
.map(|i| i.map(|Response { data, took }| (data, took)))
}
}
/// Speech synthesis sender
///
/// Used to submit speech synthesis requests. Implements the `Sink` trait, so requests can be sent asynchronously.
#[pin_project]
pub struct SynthSink<S> {
tx: UnboundedSender<Request<S>>,
voice: Voice,
}
impl<S> SynthSink<S> {
/// Set the voice
///
/// Selects the voice used for subsequent synthesis requests.
///
/// # Arguments
///
/// * `voice` - The voice to synthesize with.
///
/// # Example
///
/// ```rust
/// use kokoro_tts::{KokoroTts, Voice};
///
/// #[tokio::main]
/// async fn main() {
/// let Ok(tts) = KokoroTts::new("../kokoro-v1.0.int8.onnx", "../voices.bin").await else {
/// return;
/// };
/// // speed: 1.0
/// let (mut sink, _) = tts.stream::<&str>(Voice::ZfXiaoxiao(1.0));
/// // speed: 1.8
/// sink.set_voice(Voice::ZmYunxi(1.8));
/// }
/// ```
///
pub fn set_voice(&mut self, voice: Voice) {
self.voice = voice
}
/// Send a synthesis request
///
/// Submits the given text for synthesis.
///
/// # Arguments
///
/// * `text` - The text to synthesize.
///
/// # Returns
///
/// `Ok(())` if the request was sent, or a `KokoroError` if sending failed.
///
/// # Example
///
/// ```rust
/// use kokoro_tts::{KokoroTts, Voice};
///
/// #[tokio::main]
/// async fn main() {
/// let Ok(tts) = KokoroTts::new("../kokoro-v1.1-zh.onnx", "../voices-v1.1-zh.bin").await else {
/// return;
/// };
/// let (mut sink, _) = tts.stream(Voice::Zf003(2));
/// let _ = sink.synth("hello world.").await;
/// }
/// ```
///
pub async fn synth(&mut self, text: S) -> Result<(), KokoroError> {
self.send((self.voice, text)).await
}
}
impl<S> Sink<(Voice, S)> for SynthSink<S> {
type Error = KokoroError;
fn poll_ready(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn start_send(self: Pin<&mut Self>, (voice, text): (Voice, S)) -> Result<(), Self::Error> {
self.tx
.send(Request { voice, text })
.map_err(|e| KokoroError::Send(e.to_string()))
}
fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn poll_close(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
}
pub(super) fn start_synth_session<F, R, S>(
voice: Voice,
synth_request_callback: F,
) -> (SynthSink<S>, SynthStream)
where
F: Fn(S, Voice) -> R + Send + 'static,
R: Future<Output = Result<(Vec<f32>, Duration), KokoroError>> + Send,
S: AsRef<str> + Send + 'static,
{
let (tx, mut rx) = unbounded_channel::<Request<S>>();
let (tx2, rx2) = unbounded_channel();
tokio::spawn(async move {
while let Some(req) = rx.recv().await {
let (data, took) = synth_request_callback(req.text, req.voice).await?;
tx2.send(Response { data, took })
.map_err(|e| KokoroError::Send(e.to_string()))?;
}
Ok::<_, KokoroError>(())
});
(SynthSink { tx, voice }, SynthStream { rx: rx2 })
}


@@ -0,0 +1,123 @@
use {
crate::{KokoroError, Voice, g2p, get_token_ids},
ndarray::Array,
ort::{
inputs,
session::{RunOptions, Session},
value::TensorRef,
},
std::{
cmp::min,
sync::Weak,
time::{Duration, SystemTime},
},
tokio::sync::Mutex,
};
async fn synth_v10<P, S>(
model: Weak<Mutex<Session>>,
phonemes: S,
pack: P,
speed: f32,
) -> Result<(Vec<f32>, Duration), KokoroError>
where
P: AsRef<Vec<Vec<Vec<f32>>>>,
S: AsRef<str>,
{
let model = model.upgrade().ok_or(KokoroError::ModelReleased)?;
let phonemes = get_token_ids(phonemes.as_ref(), false);
let phonemes = Array::from_shape_vec((1, phonemes.len()), phonemes)?;
let ref_s = pack.as_ref()[phonemes.len() - 1]
.first()
.cloned()
.unwrap_or_default();
let style = Array::from_shape_vec((1, ref_s.len()), ref_s)?;
let speed = Array::from_vec(vec![speed]);
let options = RunOptions::new()?;
let mut model = model.lock().await;
let t = SystemTime::now();
let kokoro_output = model
.run_async(
inputs![
"tokens" => TensorRef::from_array_view(&phonemes)?,
"style" => TensorRef::from_array_view(&style)?,
"speed" => TensorRef::from_array_view(&speed)?,
],
&options,
)?
.await?;
let elapsed = t.elapsed()?;
let (_, audio) = kokoro_output["audio"].try_extract_tensor::<f32>()?;
Ok((audio.to_owned(), elapsed))
}
async fn synth_v11<P, S>(
model: Weak<Mutex<Session>>,
phonemes: S,
pack: P,
speed: i32,
) -> Result<(Vec<f32>, Duration), KokoroError>
where
P: AsRef<Vec<Vec<Vec<f32>>>>,
S: AsRef<str>,
{
let model = model.upgrade().ok_or(KokoroError::ModelReleased)?;
let mut phonemes = get_token_ids(phonemes.as_ref(), true);
let mut ret = Vec::new();
let mut elapsed = Duration::ZERO;
while !phonemes.is_empty() {
let take = min(pack.as_ref().len(), phonemes.len());
let p = phonemes.drain(..take).collect::<Vec<_>>();
let phonemes = Array::from_shape_vec((1, p.len()), p)?;
let ref_s = pack.as_ref()[phonemes.len() - 1]
.first()
.cloned()
.unwrap_or(vec![0.; 256]);
let style = Array::from_shape_vec((1, ref_s.len()), ref_s)?;
let speed = Array::from_vec(vec![speed]);
let options = RunOptions::new()?;
let mut model = model.lock().await;
let t = SystemTime::now();
let kokoro_output = model
.run_async(
inputs![
"input_ids" => TensorRef::from_array_view(&phonemes)?,
"style" => TensorRef::from_array_view(&style)?,
"speed" => TensorRef::from_array_view(&speed)?,
],
&options,
)?
.await?;
elapsed = t.elapsed()?;
let (_, audio) = kokoro_output["waveform"].try_extract_tensor::<f32>()?;
let (_, _duration) = kokoro_output["duration"].try_extract_tensor::<i64>()?;
// let _ = dbg!(duration.len());
ret.extend_from_slice(audio);
}
Ok((ret, elapsed))
}
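The loop in `synth_v11` drains the token buffer in chunks no larger than the style pack, so each chunk can index a style vector of matching length. That chunking can be isolated as a std-only sketch (function name is illustrative):

```rust
use std::cmp::min;

// Sketch: drain a token buffer in chunks bounded by the style-pack length,
// mirroring the chunking loop in `synth_v11`.
fn chunk_tokens(mut tokens: Vec<i64>, pack_len: usize) -> Vec<Vec<i64>> {
    let mut chunks = Vec::new();
    while !tokens.is_empty() {
        let take = min(pack_len, tokens.len());
        chunks.push(tokens.drain(..take).collect());
    }
    chunks
}

fn main() {
    println!("{:?}", chunk_tokens((0..7).collect(), 3));
}
```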
pub(super) async fn synth<P, S>(
model: Weak<Mutex<Session>>,
text: S,
pack: P,
voice: Voice,
) -> Result<(Vec<f32>, Duration), KokoroError>
where
P: AsRef<Vec<Vec<Vec<f32>>>>,
S: AsRef<str>,
{
let phonemes = g2p(text.as_ref(), voice.is_v11_supported())?;
// #[cfg(debug_assertions)]
// println!("{}", phonemes);
match voice {
v if v.is_v11_supported() => synth_v11(model, phonemes, pack, v.get_speed_v11()?).await,
v if v.is_v10_supported() => synth_v10(model, phonemes, pack, v.get_speed_v10()?).await,
v => Err(KokoroError::VoiceVersionInvalid(v.get_name().to_owned())),
}
}

rust/vendor/kokoro-tts/src/tokenizer.rs vendored Normal file

@@ -0,0 +1,324 @@
use {
log::warn,
std::{collections::HashMap, sync::LazyLock},
};
static VOCAB_V10: LazyLock<HashMap<char, u8>> = LazyLock::new(|| {
let mut map = HashMap::new();
map.insert(';', 1);
map.insert(':', 2);
map.insert(',', 3);
map.insert('.', 4);
map.insert('!', 5);
map.insert('?', 6);
map.insert('—', 9);
map.insert('…', 10);
map.insert('"', 11);
map.insert('(', 12);
map.insert(')', 13);
map.insert('“', 14);
map.insert('”', 15);
map.insert(' ', 16);
map.insert('\u{0303}', 17); // Unicode escape for combining tilde
map.insert('ʣ', 18);
map.insert('ʥ', 19);
map.insert('ʦ', 20);
map.insert('ʨ', 21);
map.insert('ᵝ', 22);
map.insert('\u{AB67}', 23); // Unicode escape
map.insert('A', 24);
map.insert('I', 25);
map.insert('O', 31);
map.insert('Q', 33);
map.insert('S', 35);
map.insert('T', 36);
map.insert('W', 39);
map.insert('Y', 41);
map.insert('ᵊ', 42);
map.insert('a', 43);
map.insert('b', 44);
map.insert('c', 45);
map.insert('d', 46);
map.insert('e', 47);
map.insert('f', 48);
map.insert('h', 50);
map.insert('i', 51);
map.insert('j', 52);
map.insert('k', 53);
map.insert('l', 54);
map.insert('m', 55);
map.insert('n', 56);
map.insert('o', 57);
map.insert('p', 58);
map.insert('q', 59);
map.insert('r', 60);
map.insert('s', 61);
map.insert('t', 62);
map.insert('u', 63);
map.insert('v', 64);
map.insert('w', 65);
map.insert('x', 66);
map.insert('y', 67);
map.insert('z', 68);
map.insert('ɑ', 69);
map.insert('ɐ', 70);
map.insert('ɒ', 71);
map.insert('æ', 72);
map.insert('β', 75);
map.insert('ɔ', 76);
map.insert('ɕ', 77);
map.insert('ç', 78);
map.insert('ɖ', 80);
map.insert('ð', 81);
map.insert('ʤ', 82);
map.insert('ə', 83);
map.insert('ɚ', 85);
map.insert('ɛ', 86);
map.insert('ɜ', 87);
map.insert('ɟ', 90);
map.insert('ɡ', 92);
map.insert('ɥ', 99);
map.insert('ɨ', 101);
map.insert('ɪ', 102);
map.insert('ʝ', 103);
map.insert('ɯ', 110);
map.insert('ɰ', 111);
map.insert('ŋ', 112);
map.insert('ɳ', 113);
map.insert('ɲ', 114);
map.insert('ɴ', 115);
map.insert('ø', 116);
map.insert('ɸ', 118);
map.insert('θ', 119);
map.insert('œ', 120);
map.insert('ɹ', 123);
map.insert('ɾ', 125);
map.insert('ɻ', 126);
map.insert('ʁ', 128);
map.insert('ɽ', 129);
map.insert('ʂ', 130);
map.insert('ʃ', 131);
map.insert('ʈ', 132);
map.insert('ʧ', 133);
map.insert('ʊ', 135);
map.insert('ʋ', 136);
map.insert('ʌ', 138);
map.insert('ɣ', 139);
map.insert('ɤ', 140);
map.insert('χ', 142);
map.insert('ʎ', 143);
map.insert('ʒ', 147);
map.insert('ʔ', 148);
map.insert('ˈ', 156);
map.insert('ˌ', 157);
map.insert('ː', 158);
map.insert('ʰ', 162);
map.insert('ʲ', 164);
map.insert('↓', 169);
map.insert('→', 171);
map.insert('↗', 172);
map.insert('↘', 173);
map.insert('ᵻ', 177);
map
});
static VOCAB_V11: LazyLock<HashMap<char, u8>> = LazyLock::new(|| {
let mut map = HashMap::new();
map.insert(';', 1);
map.insert(':', 2);
map.insert(',', 3);
map.insert('.', 4);
map.insert('!', 5);
map.insert('?', 6);
map.insert('/', 7);
map.insert('—', 9);
map.insert('…', 10);
map.insert('"', 11);
map.insert('(', 12);
map.insert(')', 13);
map.insert('“', 14);
map.insert('”', 15);
map.insert(' ', 16);
map.insert('\u{0303}', 17); // Unicode escape for combining tilde
map.insert('ʣ', 18);
map.insert('ʥ', 19);
map.insert('ʦ', 20);
map.insert('ʨ', 21);
map.insert('ᵝ', 22);
map.insert('ㄓ', 23);
map.insert('A', 24);
map.insert('I', 25);
map.insert('ㄅ', 30);
map.insert('O', 31);
map.insert('ㄆ', 32);
map.insert('Q', 33);
map.insert('R', 34);
map.insert('S', 35);
map.insert('T', 36);
map.insert('ㄇ', 37);
map.insert('ㄈ', 38);
map.insert('W', 39);
map.insert('ㄉ', 40);
map.insert('Y', 41);
map.insert('ᵊ', 42);
map.insert('a', 43);
map.insert('b', 44);
map.insert('c', 45);
map.insert('d', 46);
map.insert('e', 47);
map.insert('f', 48);
map.insert('ㄊ', 49);
map.insert('h', 50);
map.insert('i', 51);
map.insert('j', 52);
map.insert('k', 53);
map.insert('l', 54);
map.insert('m', 55);
map.insert('n', 56);
map.insert('o', 57);
map.insert('p', 58);
map.insert('q', 59);
map.insert('r', 60);
map.insert('s', 61);
map.insert('t', 62);
map.insert('u', 63);
map.insert('v', 64);
map.insert('w', 65);
map.insert('x', 66);
map.insert('y', 67);
map.insert('z', 68);
map.insert('ɑ', 69);
map.insert('ɐ', 70);
map.insert('ɒ', 71);
map.insert('æ', 72);
map.insert('ㄋ', 73);
map.insert('ㄌ', 74);
map.insert('β', 75);
map.insert('ɔ', 76);
map.insert('ɕ', 77);
map.insert('ç', 78);
map.insert('ㄍ', 79);
map.insert('ɖ', 80);
map.insert('ð', 81);
map.insert('ʤ', 82);
map.insert('ə', 83);
map.insert('ㄎ', 84);
map.insert('ㄦ', 85);
map.insert('ɛ', 86);
map.insert('ɜ', 87);
map.insert('ㄏ', 88);
map.insert('ㄐ', 89);
map.insert('ɟ', 90);
map.insert('ㄑ', 91);
map.insert('ɡ', 92);
map.insert('ㄒ', 93);
map.insert('ㄔ', 94);
map.insert('ㄕ', 95);
map.insert('ㄗ', 96);
map.insert('ㄘ', 97);
map.insert('ㄙ', 98);
map.insert('月', 99);
map.insert('ㄚ', 100);
map.insert('ɨ', 101);
map.insert('ɪ', 102);
map.insert('ʝ', 103);
map.insert('ㄛ', 104);
map.insert('ㄝ', 105);
map.insert('ㄞ', 106);
map.insert('ㄟ', 107);
map.insert('ㄠ', 108);
map.insert('ㄡ', 109);
map.insert('ɯ', 110);
map.insert('ɰ', 111);
map.insert('ŋ', 112);
map.insert('ɳ', 113);
map.insert('ɲ', 114);
map.insert('ɴ', 115);
map.insert('ø', 116);
map.insert('ㄢ', 117);
map.insert('ɸ', 118);
map.insert('θ', 119);
map.insert('œ', 120);
map.insert('ㄣ', 121);
map.insert('ㄤ', 122);
map.insert('ɹ', 123);
map.insert('ㄥ', 124);
map.insert('ɾ', 125);
map.insert('ㄖ', 126);
map.insert('ㄧ', 127);
map.insert('ʁ', 128);
map.insert('ɽ', 129);
map.insert('ʂ', 130);
map.insert('ʃ', 131);
map.insert('ʈ', 132);
map.insert('ʧ', 133);
map.insert('ㄨ', 134);
map.insert('ʊ', 135);
map.insert('ʋ', 136);
map.insert('ㄩ', 137);
map.insert('ʌ', 138);
map.insert('ɣ', 139);
map.insert('ㄜ', 140);
map.insert('ㄭ', 141);
map.insert('χ', 142);
map.insert('ʎ', 143);
map.insert('十', 144);
map.insert('压', 145);
map.insert('言', 146);
map.insert('ʒ', 147);
map.insert('ʔ', 148);
map.insert('阳', 149);
map.insert('要', 150);
map.insert('阴', 151);
map.insert('应', 152);
map.insert('用', 153);
map.insert('又', 154);
map.insert('中', 155);
map.insert('ˈ', 156);
map.insert('ˌ', 157);
map.insert('ː', 158);
map.insert('穵', 159);
map.insert('外', 160);
map.insert('万', 161);
map.insert('ʰ', 162);
map.insert('王', 163);
map.insert('ʲ', 164);
map.insert('为', 165);
map.insert('文', 166);
map.insert('瓮', 167);
map.insert('我', 168);
map.insert('3', 169);
map.insert('5', 170);
map.insert('1', 171);
map.insert('2', 172);
map.insert('4', 173);
map.insert('元', 175);
map.insert('云', 176);
map.insert('ᵻ', 177);
map
});
pub fn get_token_ids(phonemes: &str, v11: bool) -> Vec<i64> {
let mut tokens = Vec::with_capacity(phonemes.len() + 2);
tokens.push(0);
for i in phonemes.chars() {
let v = if v11 {
VOCAB_V11.get(&i).copied()
} else {
VOCAB_V10.get(&i).copied()
};
match v {
Some(t) => {
tokens.push(t as _);
}
_ => {
warn!("Unknown phone {}, skipped.", i);
}
}
}
tokens.push(0);
tokens
}
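`get_token_ids` pads the token sequence with a 0 on both ends and silently skips characters missing from the vocabulary. A minimal sketch of that shape with a toy vocabulary (the real maps are the `VOCAB_V10`/`VOCAB_V11` tables above; this function name is illustrative):

```rust
use std::collections::HashMap;

// Sketch: the shape of `get_token_ids` with a toy vocabulary — a 0 pad token
// on both ends, unknown characters skipped.
fn tokenize(phonemes: &str, vocab: &HashMap<char, u8>) -> Vec<i64> {
    let mut tokens = vec![0i64];
    for c in phonemes.chars() {
        if let Some(&t) = vocab.get(&c) {
            tokens.push(t as i64);
        } // unknown characters are silently skipped here
    }
    tokens.push(0);
    tokens
}

fn main() {
    let vocab = HashMap::from([('a', 43), ('b', 44)]);
    println!("{:?}", tokenize("ab?", &vocab));
}
```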


@@ -0,0 +1,4 @@
mod en;
mod zh;
pub use {en::*, zh::*};


@@ -0,0 +1,147 @@
use regex::Regex;
use std::{collections::HashMap, sync::LazyLock};
static LETTERS_IPA_MAP: LazyLock<HashMap<char, &'static str>> = LazyLock::new(|| {
let mut map = HashMap::new();
map.insert('a', "ɐ");
map.insert('b', "bˈi");
map.insert('c', "sˈi");
map.insert('d', "dˈi");
map.insert('e', "ˈi");
map.insert('f', "ˈɛf");
map.insert('g', "ʤˈi");
map.insert('h', "ˈAʧ");
map.insert('i', "ˈI");
map.insert('j', "ʤˈA");
map.insert('k', "kˈA");
map.insert('l', "ˈɛl");
map.insert('m', "ˈɛm");
map.insert('n', "ˈɛn");
map.insert('o', "ˈO");
map.insert('p', "pˈi");
map.insert('q', "kjˈu");
map.insert('r', "ˈɑɹ");
map.insert('s', "ˈɛs");
map.insert('t', "tˈi");
map.insert('u', "jˈu");
map.insert('v', "vˈi");
map.insert('w', "dˈʌbᵊlju");
map.insert('x', "ˈɛks");
map.insert('y', "wˈI");
map.insert('z', "zˈi");
map.insert('A', "ˈA");
map.insert('B', "bˈi");
map.insert('C', "sˈi");
map.insert('D', "dˈi");
map.insert('E', "ˈi");
map.insert('F', "ˈɛf");
map.insert('G', "ʤˈi");
map.insert('H', "ˈAʧ");
map.insert('I', "ˈI");
map.insert('J', "ʤˈA");
map.insert('K', "kˈA");
map.insert('L', "ˈɛl");
map.insert('M', "ˈɛm");
map.insert('N', "ˈɛn");
map.insert('O', "ˈO");
map.insert('P', "pˈi");
map.insert('Q', "kjˈu");
map.insert('R', "ˈɑɹ");
map.insert('S', "ˈɛs");
map.insert('T', "tˈi");
map.insert('U', "jˈu");
map.insert('V', "vˈi");
map.insert('W', "dˈʌbᵊlju");
map.insert('X', "ˈɛks");
map.insert('Y', "wˈI");
map.insert('Z', "zˈi");
map
});
static ARPA_IPA_MAP: LazyLock<HashMap<&'static str, &'static str>> = LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("AA", "ɑ");
map.insert("AE", "æ");
map.insert("AH", "ə");
map.insert("AO", "ɔ");
map.insert("AW", "aʊ");
map.insert("AY", "aɪ");
map.insert("B", "b");
map.insert("CH", "ʧ");
map.insert("D", "d");
map.insert("DH", "ð");
map.insert("EH", "ɛ");
map.insert("ER", "ɝ");
map.insert("EY", "eɪ");
map.insert("F", "f");
map.insert("G", "ɡ");
map.insert("HH", "h");
map.insert("IH", "ɪ");
map.insert("IY", "i");
map.insert("JH", "ʤ");
map.insert("K", "k");
map.insert("L", "l");
map.insert("M", "m");
map.insert("N", "n");
map.insert("NG", "ŋ");
map.insert("OW", "oʊ");
map.insert("OY", "ɔɪ");
map.insert("P", "p");
map.insert("R", "ɹ");
map.insert("S", "s");
map.insert("SH", "ʃ");
map.insert("T", "t");
map.insert("TH", "θ");
map.insert("UH", "ʊ");
map.insert("UW", "u");
map.insert("V", "v");
map.insert("W", "w");
map.insert("Y", "j");
map.insert("Z", "z");
map.insert("ZH", "ʒ");
map.insert("SIL", "");
map
});
/// Special cases for the click symbols added in 2025 (e.g. the bilabial click ʘ)
const SPECIAL_CASES: [(&str, &str); 3] = [("CLICK!", "ʘ"), ("TSK!", "ǀ"), ("TUT!", "ǁ")];
pub fn arpa_to_ipa(arpa: &str) -> Result<String, regex::Error> {
let re = Regex::new(r"([A-Z!]+)(\d*)")?;
let Some(caps) = re.captures(arpa) else {
return Ok(Default::default());
};
// handle the special click symbols (added in 2025)
if let Some(sc) = SPECIAL_CASES.iter().find(|&&(s, _)| s == &caps[1]) {
return Ok(sc.1.to_string());
}
// look up the IPA mapping
let phoneme = ARPA_IPA_MAP
.get(&caps[1])
.map_or_else(|| letters_to_ipa(arpa), |i| i.to_string());
let mut result = String::with_capacity(arpa.len() * 2);
// add the stress mark (three stress levels supported)
match &caps[2] {
"1" => result.push('ˈ'),
"2" => result.push('ˌ'),
"3" => result.push('˧'), // mid-level stress, added in 2025
_ => (), // no stress mark (avoid pushing a NUL character)
}
result.push_str(&phoneme);
Ok(result)
}
pub fn letters_to_ipa(letters: &str) -> String {
let mut res = String::with_capacity(letters.len());
for i in letters.chars() {
if let Some(p) = LETTERS_IPA_MAP.get(&i) {
res.push_str(p);
}
}
res
}
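Before mapping a symbol, `arpa_to_ipa` splits a trailing stress digit off the ARPAbet token with a regex. The same split can be sketched without `regex`, since ARPAbet stress is always a single trailing ASCII digit (function name is illustrative):

```rust
// Sketch: split an ARPAbet symbol like "EY1" into its phone and stress digit
// without a regex, mirroring what `arpa_to_ipa`'s capture groups extract.
fn split_stress(arpa: &str) -> (&str, Option<u8>) {
    match arpa.chars().last().and_then(|c| c.to_digit(10)) {
        // the digit is one byte, so the slice stays on a char boundary
        Some(d) => (&arpa[..arpa.len() - 1], Some(d as u8)),
        None => (arpa, None),
    }
}

fn main() {
    println!("{:?}", split_stress("EY1"));
    println!("{:?}", split_stress("NG"));
}
```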

File diff suppressed because it is too large


@@ -0,0 +1,364 @@
/// Conversion from Hanyu Pinyin to the International Phonetic Alphabet.
/// Based on zh.py from the Python misaki library.
use std::{collections::HashMap, error::Error, fmt, sync::LazyLock};
const VALID_FINALS: [&str; 37] = [
"i", "u", "ü", "a", "ia", "ua", "o", "uo", "e", "ie", "üe", "ai", "uai", "ei", "uei", "ao",
"iao", "ou", "iou", "an", "ian", "uan", "üan", "en", "in", "uen", "ün", "ang", "iang", "uang",
"eng", "ing", "ueng", "ong", "iong", "er", "ê",
];
const INITIALS: [&str; 21] = [
"zh", "ch", "sh", "b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s",
"t", "x", "z",
];
// error type definition
#[derive(Debug)]
pub enum PinyinError {
FinalNotFound(String),
}
impl fmt::Display for PinyinError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
PinyinError::FinalNotFound(tip) => write!(f, "Final not found: {}", tip),
}
}
}
impl Error for PinyinError {}
static INITIAL_MAPPING: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("b", vec![vec!["p"]]);
map.insert("c", vec![vec!["ʦʰ"]]);
map.insert("ch", vec![vec!["ꭧʰ"]]);
map.insert("d", vec![vec!["t"]]);
map.insert("f", vec![vec!["f"]]);
map.insert("g", vec![vec!["k"]]);
map.insert("h", vec![vec!["x"], vec!["h"]]);
map.insert("j", vec![vec!["ʨ"]]);
map.insert("k", vec![vec!["kʰ"]]);
map.insert("l", vec![vec!["l"]]);
map.insert("m", vec![vec!["m"]]);
map.insert("n", vec![vec!["n"]]);
map.insert("p", vec![vec!["pʰ"]]);
map.insert("q", vec![vec!["ʨʰ"]]);
map.insert("r", vec![vec!["ɻ"], vec!["ʐ"]]);
map.insert("s", vec![vec!["s"]]);
map.insert("sh", vec![vec!["ʂ"]]);
map.insert("t", vec![vec!["tʰ"]]);
map.insert("x", vec![vec!["ɕ"]]);
map.insert("z", vec![vec!["ʦ"]]);
map.insert("zh", vec![vec!["ꭧ"]]);
map
});
static SYLLABIC_CONSONANT_MAPPINGS: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("hm", vec![vec!["h", "m0"]]);
map.insert("hng", vec![vec!["h", "ŋ0"]]);
map.insert("m", vec![vec!["m0"]]);
map.insert("n", vec![vec!["n0"]]);
map.insert("ng", vec![vec!["ŋ0"]]);
map
});
static INTERJECTION_MAPPINGS: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("io", vec![vec!["j", "ɔ0"]]);
map.insert("ê", vec![vec!["ɛ0"]]);
map.insert("er", vec![vec!["ɚ0"], vec!["aɚ̯0"]]);
map.insert("o", vec![vec!["ɔ0"]]);
map
});
/// Duanmu (2000, p. 37) and Lin (2007, p. 68f)
/// Diphtongs from Duanmu (2007, p. 40): au, əu, əi, ai
/// Diphthongs from Lin (2007, p. 68f): au̯, ou̯, ei̯, ai̯
static FINAL_MAPPING: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("a", vec![vec!["a0"]]);
map.insert("ai", vec![vec!["ai0"]]);
map.insert("an", vec![vec!["a0", "n"]]);
map.insert("ang", vec![vec!["a0", "ŋ"]]);
map.insert("ao", vec![vec!["au0"]]);
map.insert("e", vec![vec!["ɤ0"]]);
map.insert("ei", vec![vec!["ei0"]]);
map.insert("en", vec![vec!["ə0", "n"]]);
map.insert("eng", vec![vec!["ə0", "ŋ"]]);
map.insert("i", vec![vec!["i0"]]);
map.insert("ia", vec![vec!["j", "a0"]]);
map.insert("ian", vec![vec!["j", "ɛ0", "n"]]);
map.insert("iang", vec![vec!["j", "a0", "ŋ"]]);
map.insert("iao", vec![vec!["j", "au0"]]);
map.insert("ie", vec![vec!["j", "e0"]]);
map.insert("in", vec![vec!["i0", "n"]]);
map.insert("iou", vec![vec!["j", "ou0"]]);
map.insert("ing", vec![vec!["i0", "ŋ"]]);
map.insert("iong", vec![vec!["j", "ʊ0", "ŋ"]]);
map.insert("ong", vec![vec!["ʊ0", "ŋ"]]);
map.insert("ou", vec![vec!["ou0"]]);
map.insert("u", vec![vec!["u0"]]);
map.insert("uei", vec![vec!["w", "ei0"]]);
map.insert("ua", vec![vec!["w", "a0"]]);
map.insert("uai", vec![vec!["w", "ai0"]]);
map.insert("uan", vec![vec!["w", "a0", "n"]]);
map.insert("uen", vec![vec!["w", "ə0", "n"]]);
map.insert("uang", vec![vec!["w", "a0", "ŋ"]]);
map.insert("ueng", vec![vec!["w", "ə0", "ŋ"]]);
map.insert("ui", vec![vec!["w", "ei0"]]);
map.insert("un", vec![vec!["w", "ə0", "n"]]);
map.insert("uo", vec![vec!["w", "o0"]]);
map.insert("o", vec![vec!["w", "o0"]]); // note: this mapping for 'o' may not match expectations and may need special handling
map.insert("ü", vec![vec!["y0"]]);
map.insert("üe", vec![vec!["ɥ", "e0"]]);
map.insert("üan", vec![vec!["ɥ", "ɛ0", "n"]]);
map.insert("ün", vec![vec!["y0", "n"]]);
map
});
static FINAL_MAPPING_AFTER_ZH_CH_SH_R: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("i", vec![vec!["ɻ0"], vec!["ʐ0"]]);
map
});
static FINAL_MAPPING_AFTER_Z_C_S: LazyLock<HashMap<&'static str, Vec<Vec<&'static str>>>> =
LazyLock::new(|| {
let mut map = HashMap::new();
map.insert("i", vec![vec!["ɹ0"], vec!["z0"]]);
map
});
static TONE_MAPPING: LazyLock<HashMap<u8, &'static str>> = LazyLock::new(|| {
let mut map = HashMap::new();
map.insert(1u8, "˥");
map.insert(2u8, "˧˥");
map.insert(3u8, "˧˩˧");
map.insert(4u8, "˥˩");
map.insert(5u8, "");
map
});
pub(crate) fn split_tone(pinyin: &str) -> (&str, u8) {
if let Some(t) = pinyin
.chars()
.last()
.and_then(|c| c.to_digit(10).map(|n| n as u8))
{
return (&pinyin[..pinyin.len() - 1], t);
}
(pinyin, 5)
}
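`split_tone` peels a trailing tone digit off a numbered pinyin syllable, defaulting to the neutral tone 5. Re-stated standalone so its behavior can be checked without the crate:

```rust
// Standalone copy of `split_tone`: a trailing digit is the tone number,
// and a syllable without one is treated as neutral tone 5.
fn split_tone(pinyin: &str) -> (&str, u8) {
    if let Some(t) = pinyin
        .chars()
        .last()
        .and_then(|c| c.to_digit(10).map(|n| n as u8))
    {
        // the digit is one byte, so the slice stays on a char boundary
        return (&pinyin[..pinyin.len() - 1], t);
    }
    (pinyin, 5)
}

fn main() {
    println!("{:?}", split_tone("hao3"));
    println!("{:?}", split_tone("ma"));
}
```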
/// uen conversion: restore the original final.
/// iou, uei, and uen are written as iu, ui, and un when preceded by an initial,
/// e.g. niu (牛), gui (归), lun (论).
fn convert_uen(s: &str) -> String {
match s.strip_suffix('n') {
Some(stem) if stem.ends_with(['u', 'ū', 'ú', 'ǔ', 'ù']) => {
format!("{}en", stem)
}
_ => s.to_string(),
}
}
/// ü conversion: restore the original final.
/// Finals starting with ü are written as ju (居), qu (区), xu (虚) after the initials j, q, x, with the umlaut dots omitted;
/// after the initials n and l they are still written with ü: nü (女), lü (吕).
fn convert_uv(pinyin: &str) -> String {
let chars = pinyin.chars().collect::<Vec<_>>();
match chars.as_slice() {
[
c @ ('j' | 'q' | 'x'),
tone @ ('u' | 'ū' | 'ú' | 'ǔ' | 'ù'),
rest @ ..,
] => {
let new_tone = match tone {
'u' => 'ü',
'ū' => 'ǖ',
'ú' => 'ǘ',
'ǔ' => 'ǚ',
'ù' => 'ǜ',
_ => unreachable!(),
};
format!("{}{}{}", c, new_tone, rest.iter().collect::<String>())
}
_ => pinyin.to_string(),
}
}
/// iou conversion: restore the original final.
/// iou, uei, and uen are written as iu, ui, and un when preceded by an initial,
/// e.g. niu (牛), gui (归), lun (论).
fn convert_iou(pinyin: &str) -> String {
let chars = pinyin.chars().collect::<Vec<_>>();
match chars.as_slice() {
// handle the iu series (slice by the last char's UTF-8 length, since toned vowels are multi-byte)
[.., 'i', u @ ('u' | 'ū' | 'ú' | 'ǔ' | 'ù')] => {
format!("{}o{}", &pinyin[..pinyin.len() - u.len_utf8()], u)
}
// otherwise keep as-is
_ => pinyin.to_string(),
}
}
/// uei conversion: restore the original final.
/// iou, uei, and uen are written as iu, ui, and un when preceded by an initial,
/// e.g. niu (牛), gui (归), lun (论).
fn convert_uei(pinyin: &str) -> String {
let chars = pinyin.chars().collect::<Vec<_>>();
match chars.as_slice() {
// handle the ui series (slice by the last char's UTF-8 length, since toned vowels are multi-byte)
[.., 'u', i @ ('i' | 'ī' | 'í' | 'ǐ' | 'ì')] => {
format!("{}e{}", &pinyin[..pinyin.len() - i.len_utf8()], i)
}
// otherwise keep as-is
_ => pinyin.to_string(),
}
}
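Taken together, the three restorations above undo the pinyin orthographic contractions iu → iou, ui → uei, and un → uen. A std-only sketch for plain number-toned (ASCII) input — it does not handle accented tone marks or the jun/qun/xun cases, which the real code covers via `convert_uv` first (function name is illustrative):

```rust
// Sketch: restore the contracted finals iu → iou, ui → uei, un → uen for
// number-toned ASCII pinyin, mirroring convert_iou/convert_uei/convert_uen.
fn restore_final(s: &str) -> String {
    if let Some(stem) = s.strip_suffix("iu") {
        format!("{stem}iou")
    } else if let Some(stem) = s.strip_suffix("ui") {
        format!("{stem}uei")
    } else if s.ends_with("un") && !s.ends_with("uen") {
        // drop the trailing 'n', then append "en": lun -> luen
        format!("{}en", &s[..s.len() - 1])
    } else {
        s.to_string()
    }
}

fn main() {
    println!(
        "{} {} {}",
        restore_final("niu"),
        restore_final("gui"),
        restore_final("lun")
    );
}
```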
/// Zero-initial conversion: restore the original final.
/// Finals starting with i are written with y when there is no initial: yi (衣), ya (呀), ye (耶), yao (腰), you (忧), yan (烟), yin (因), yang (央), ying (英), yong (雍).
/// Finals starting with u are written with w when there is no initial: wu (乌), wa (蛙), wo (窝), wai (歪), wei (威), wan (弯), wen (温), wang (汪), weng (翁).
/// Finals starting with ü are written as yu (迂), yue (约), yuan (冤), yun (晕) when there is no initial, with the umlaut dots omitted.
pub(crate) fn convert_zero_consonant(pinyin: &str) -> String {
let mut buffer = String::with_capacity(pinyin.len() + 2);
let chars: Vec<char> = pinyin.chars().collect();
match chars.as_slice() {
// Handle y-series conversions
['y', 'u', rest @ ..] => {
buffer.push('ü');
buffer.extend(rest.iter());
}
['y', u @ ('ū' | 'ú' | 'ǔ' | 'ù'), rest @ ..] => {
buffer.push(match u {
'ū' => 'ǖ', // ü, first tone
'ú' => 'ǘ', // ü, second tone
'ǔ' => 'ǚ', // ü, third tone
'ù' => 'ǜ', // ü, fourth tone
_ => unreachable!(),
});
buffer.extend(rest.iter());
}
['y', i @ ('i' | 'ī' | 'í' | 'ǐ' | 'ì'), rest @ ..] => {
buffer.push(*i);
buffer.extend(rest.iter());
}
['y', rest @ ..] => {
buffer.push('i');
buffer.extend(rest);
}
// Handle w-series conversions
['w', u @ ('u' | 'ū' | 'ú' | 'ǔ' | 'ù'), rest @ ..] => {
buffer.push(*u);
buffer.extend(rest.iter());
}
['w', rest @ ..] => {
buffer.push('u');
buffer.extend(rest);
}
// No conversion needed
_ => return pinyin.to_string(),
}
// Validate the restored final
if VALID_FINALS.contains(&buffer.as_str()) {
buffer
} else {
pinyin.to_string()
}
}
pub(crate) fn split_initial(pinyin: &str) -> (&'static str, &str) {
for &initial in &INITIALS {
if let Some(stripped) = pinyin.strip_prefix(initial) {
return (initial, stripped);
}
}
("", pinyin)
}
fn apply_tone(variants: &[Vec<&str>], tone: u8) -> Vec<Vec<String>> {
let tone_str = TONE_MAPPING.get(&tone).unwrap_or(&"");
variants
.iter()
.map(|v| v.iter().map(|s| s.replace("0", tone_str)).collect())
.collect()
}
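The substitution `apply_tone` performs can be shown in isolation: each IPA variant carries a `"0"` placeholder that gets replaced by the Chao tone letters from `TONE_MAPPING` (e.g. "˥" for tone 1).

```rust
// Standalone sketch of the per-variant tone substitution in `apply_tone`:
// the "0" placeholder in each IPA segment becomes the tone letters.
fn apply_tone_sketch(variant: &[&str], tone_letters: &str) -> Vec<String> {
    variant
        .iter()
        .map(|s| s.replace('0', tone_letters))
        .collect()
}

fn main() {
    assert_eq!(apply_tone_sketch(&["m", "a0"], "˥"), vec!["m", "a˥"]);
}
```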
pub fn pinyin_to_ipa(pinyin: &str) -> Result<Vec<Vec<String>>, PinyinError> {
let (pinyin, tone) = split_tone(pinyin);
let pinyin = convert_zero_consonant(pinyin);
let pinyin = convert_uv(&pinyin);
let pinyin = convert_iou(&pinyin);
let pinyin = convert_uei(&pinyin);
let pinyin = convert_uen(&pinyin);
// Handle syllabic consonants and interjections
if let Some(ipa) = SYLLABIC_CONSONANT_MAPPINGS.get(pinyin.as_str()) {
return Ok(apply_tone(ipa, tone)
.into_iter()
.map(|i| i.into_iter().collect())
.collect());
}
if let Some(ipa) = INTERJECTION_MAPPINGS.get(pinyin.as_str()) {
return Ok(apply_tone(ipa, tone)
.into_iter()
.map(|i| i.into_iter().collect())
.collect());
}
// Split into initial and final
let (initial_part, final_part) = split_initial(pinyin.as_str());
// Look up the final's IPA
let final_ipa = match initial_part {
"zh" | "ch" | "sh" | "r" if FINAL_MAPPING_AFTER_ZH_CH_SH_R.contains_key(final_part) => {
FINAL_MAPPING_AFTER_ZH_CH_SH_R.get(final_part)
}
"z" | "c" | "s" if FINAL_MAPPING_AFTER_Z_C_S.contains_key(final_part) => {
FINAL_MAPPING_AFTER_Z_C_S.get(final_part)
}
_ => FINAL_MAPPING.get(final_part),
}
.ok_or(PinyinError::FinalNotFound(final_part.to_owned()))?;
// Combine all possibilities
let mut result = Vec::<Vec<String>>::new();
let initials = INITIAL_MAPPING
.get(initial_part)
.map_or(vec![vec![Default::default()]], |i| {
i.iter()
.map(|i| i.iter().map(|i| i.to_string()).collect())
.collect()
});
for i in initials.into_iter() {
for j in apply_tone(final_ipa, tone).into_iter() {
result.push(
i.iter()
.chain(j.iter())
.map(|i| i.to_owned())
.collect::<Vec<_>>(),
)
}
}
Ok(result)
}
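The nested loops at the end of `pinyin_to_ipa` form a cartesian product: every initial-IPA variant is paired with every tone-applied final-IPA variant, so the result holds `initials.len() * finals.len()` pronunciations. A minimal sketch of that step:

```rust
// Sketch of the combination step: cartesian product of initial variants
// and final variants, concatenating each pair into one segment list.
fn combine(initials: &[Vec<String>], finals: &[Vec<String>]) -> Vec<Vec<String>> {
    let mut result = Vec::new();
    for i in initials {
        for f in finals {
            result.push(i.iter().chain(f.iter()).cloned().collect());
        }
    }
    result
}

fn main() {
    let initials = vec![vec!["m".to_string()]];
    let finals = vec![vec!["a˥".to_string()], vec!["ä˥".to_string()]];
    // 1 initial variant x 2 final variants -> 2 combined pronunciations.
    assert_eq!(combine(&initials, &finals).len(), 2);
}
```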

rust/vendor/kokoro-tts/src/voice.rs vendored Normal file

@@ -0,0 +1,673 @@
use crate::KokoroError;
//noinspection SpellCheckingInspection
#[derive(Copy, Clone, Debug)]
pub enum Voice {
// v1.0
ZmYunyang(f32),
ZfXiaoni(f32),
AfJessica(f32),
BfLily(f32),
ZfXiaobei(f32),
ZmYunxia(f32),
AfHeart(f32),
BfEmma(f32),
AmPuck(f32),
BfAlice(f32),
HfAlpha(f32),
BfIsabella(f32),
AfNova(f32),
AmFenrir(f32),
EmAlex(f32),
ImNicola(f32),
PmAlex(f32),
AfAlloy(f32),
ZmYunxi(f32),
AfSarah(f32),
JfNezumi(f32),
BmDaniel(f32),
JfTebukuro(f32),
JfAlpha(f32),
JmKumo(f32),
EmSanta(f32),
AmLiam(f32),
AmSanta(f32),
AmEric(f32),
BmFable(f32),
AfBella(f32),
BmLewis(f32),
PfDora(f32),
AfNicole(f32),
BmGeorge(f32),
AmOnyx(f32),
HmPsi(f32),
HfBeta(f32),
HmOmega(f32),
ZfXiaoxiao(f32),
FfSiwis(f32),
EfDora(f32),
AfAoede(f32),
AmEcho(f32),
AmMichael(f32),
AfKore(f32),
ZfXiaoyi(f32),
JfGongitsune(f32),
AmAdam(f32),
IfSara(f32),
AfSky(f32),
PmSanta(f32),
AfRiver(f32),
ZmYunjian(f32),
// v1.1
Zm029(i32),
Zf048(i32),
Zf008(i32),
Zm014(i32),
Zf003(i32),
Zf047(i32),
Zm080(i32),
Zf094(i32),
Zf046(i32),
Zm054(i32),
Zf001(i32),
Zm062(i32),
BfVale(i32),
Zf044(i32),
Zf005(i32),
Zf028(i32),
Zf059(i32),
Zm030(i32),
Zf074(i32),
Zm009(i32),
Zf004(i32),
Zf021(i32),
Zm095(i32),
Zm041(i32),
Zf087(i32),
Zf039(i32),
Zm031(i32),
Zf007(i32),
Zf038(i32),
Zf092(i32),
Zm056(i32),
Zf099(i32),
Zm010(i32),
Zm069(i32),
Zm016(i32),
Zm068(i32),
Zf083(i32),
Zf093(i32),
Zf006(i32),
Zf026(i32),
Zm053(i32),
Zm064(i32),
AfSol(i32),
Zf042(i32),
Zf084(i32),
Zf073(i32),
Zf067(i32),
Zm025(i32),
Zm020(i32),
Zm050(i32),
Zf070(i32),
Zf002(i32),
Zf032(i32),
Zm091(i32),
Zm066(i32),
Zm089(i32),
Zm034(i32),
Zm100(i32),
Zf086(i32),
Zf040(i32),
Zm011(i32),
Zm098(i32),
Zm015(i32),
Zf051(i32),
Zm065(i32),
Zf076(i32),
Zf036(i32),
Zm033(i32),
Zf018(i32),
Zf017(i32),
Zf049(i32),
AfMaple(i32),
Zm082(i32),
Zm057(i32),
Zf079(i32),
Zf022(i32),
Zm063(i32),
Zf060(i32),
Zf019(i32),
Zm097(i32),
Zm096(i32),
Zf023(i32),
Zf027(i32),
Zf085(i32),
Zf077(i32),
Zm035(i32),
Zf088(i32),
Zf024(i32),
Zf072(i32),
Zm055(i32),
Zm052(i32),
Zf071(i32),
Zm061(i32),
Zf078(i32),
Zm013(i32),
Zm081(i32),
Zm037(i32),
Zf090(i32),
Zf043(i32),
Zm058(i32),
Zm012(i32),
Zm045(i32),
Zf075(i32),
}
impl Voice {
//noinspection SpellCheckingInspection
pub(super) fn get_name(&self) -> &str {
match self {
Self::ZmYunyang(_) => "zm_yunyang",
Self::ZfXiaoni(_) => "zf_xiaoni",
Self::AfJessica(_) => "af_jessica",
Self::BfLily(_) => "bf_lily",
Self::ZfXiaobei(_) => "zf_xiaobei",
Self::ZmYunxia(_) => "zm_yunxia",
Self::AfHeart(_) => "af_heart",
Self::BfEmma(_) => "bf_emma",
Self::AmPuck(_) => "am_puck",
Self::BfAlice(_) => "bf_alice",
Self::HfAlpha(_) => "hf_alpha",
Self::BfIsabella(_) => "bf_isabella",
Self::AfNova(_) => "af_nova",
Self::AmFenrir(_) => "am_fenrir",
Self::EmAlex(_) => "em_alex",
Self::ImNicola(_) => "im_nicola",
Self::PmAlex(_) => "pm_alex",
Self::AfAlloy(_) => "af_alloy",
Self::ZmYunxi(_) => "zm_yunxi",
Self::AfSarah(_) => "af_sarah",
Self::JfNezumi(_) => "jf_nezumi",
Self::BmDaniel(_) => "bm_daniel",
Self::JfTebukuro(_) => "jf_tebukuro",
Self::JfAlpha(_) => "jf_alpha",
Self::JmKumo(_) => "jm_kumo",
Self::EmSanta(_) => "em_santa",
Self::AmLiam(_) => "am_liam",
Self::AmSanta(_) => "am_santa",
Self::AmEric(_) => "am_eric",
Self::BmFable(_) => "bm_fable",
Self::AfBella(_) => "af_bella",
Self::BmLewis(_) => "bm_lewis",
Self::PfDora(_) => "pf_dora",
Self::AfNicole(_) => "af_nicole",
Self::BmGeorge(_) => "bm_george",
Self::AmOnyx(_) => "am_onyx",
Self::HmPsi(_) => "hm_psi",
Self::HfBeta(_) => "hf_beta",
Self::HmOmega(_) => "hm_omega",
Self::ZfXiaoxiao(_) => "zf_xiaoxiao",
Self::FfSiwis(_) => "ff_siwis",
Self::EfDora(_) => "ef_dora",
Self::AfAoede(_) => "af_aoede",
Self::AmEcho(_) => "am_echo",
Self::AmMichael(_) => "am_michael",
Self::AfKore(_) => "af_kore",
Self::ZfXiaoyi(_) => "zf_xiaoyi",
Self::JfGongitsune(_) => "jf_gongitsune",
Self::AmAdam(_) => "am_adam",
Self::IfSara(_) => "if_sara",
Self::AfSky(_) => "af_sky",
Self::PmSanta(_) => "pm_santa",
Self::AfRiver(_) => "af_river",
Self::ZmYunjian(_) => "zm_yunjian",
Self::Zm029(_) => "zm_029",
Self::Zf048(_) => "zf_048",
Self::Zf008(_) => "zf_008",
Self::Zm014(_) => "zm_014",
Self::Zf003(_) => "zf_003",
Self::Zf047(_) => "zf_047",
Self::Zm080(_) => "zm_080",
Self::Zf094(_) => "zf_094",
Self::Zf046(_) => "zf_046",
Self::Zm054(_) => "zm_054",
Self::Zf001(_) => "zf_001",
Self::Zm062(_) => "zm_062",
Self::BfVale(_) => "bf_vale",
Self::Zf044(_) => "zf_044",
Self::Zf005(_) => "zf_005",
Self::Zf028(_) => "zf_028",
Self::Zf059(_) => "zf_059",
Self::Zm030(_) => "zm_030",
Self::Zf074(_) => "zf_074",
Self::Zm009(_) => "zm_009",
Self::Zf004(_) => "zf_004",
Self::Zf021(_) => "zf_021",
Self::Zm095(_) => "zm_095",
Self::Zm041(_) => "zm_041",
Self::Zf087(_) => "zf_087",
Self::Zf039(_) => "zf_039",
Self::Zm031(_) => "zm_031",
Self::Zf007(_) => "zf_007",
Self::Zf038(_) => "zf_038",
Self::Zf092(_) => "zf_092",
Self::Zm056(_) => "zm_056",
Self::Zf099(_) => "zf_099",
Self::Zm010(_) => "zm_010",
Self::Zm069(_) => "zm_069",
Self::Zm016(_) => "zm_016",
Self::Zm068(_) => "zm_068",
Self::Zf083(_) => "zf_083",
Self::Zf093(_) => "zf_093",
Self::Zf006(_) => "zf_006",
Self::Zf026(_) => "zf_026",
Self::Zm053(_) => "zm_053",
Self::Zm064(_) => "zm_064",
Self::AfSol(_) => "af_sol",
Self::Zf042(_) => "zf_042",
Self::Zf084(_) => "zf_084",
Self::Zf073(_) => "zf_073",
Self::Zf067(_) => "zf_067",
Self::Zm025(_) => "zm_025",
Self::Zm020(_) => "zm_020",
Self::Zm050(_) => "zm_050",
Self::Zf070(_) => "zf_070",
Self::Zf002(_) => "zf_002",
Self::Zf032(_) => "zf_032",
Self::Zm091(_) => "zm_091",
Self::Zm066(_) => "zm_066",
Self::Zm089(_) => "zm_089",
Self::Zm034(_) => "zm_034",
Self::Zm100(_) => "zm_100",
Self::Zf086(_) => "zf_086",
Self::Zf040(_) => "zf_040",
Self::Zm011(_) => "zm_011",
Self::Zm098(_) => "zm_098",
Self::Zm015(_) => "zm_015",
Self::Zf051(_) => "zf_051",
Self::Zm065(_) => "zm_065",
Self::Zf076(_) => "zf_076",
Self::Zf036(_) => "zf_036",
Self::Zm033(_) => "zm_033",
Self::Zf018(_) => "zf_018",
Self::Zf017(_) => "zf_017",
Self::Zf049(_) => "zf_049",
Self::AfMaple(_) => "af_maple",
Self::Zm082(_) => "zm_082",
Self::Zm057(_) => "zm_057",
Self::Zf079(_) => "zf_079",
Self::Zf022(_) => "zf_022",
Self::Zm063(_) => "zm_063",
Self::Zf060(_) => "zf_060",
Self::Zf019(_) => "zf_019",
Self::Zm097(_) => "zm_097",
Self::Zm096(_) => "zm_096",
Self::Zf023(_) => "zf_023",
Self::Zf027(_) => "zf_027",
Self::Zf085(_) => "zf_085",
Self::Zf077(_) => "zf_077",
Self::Zm035(_) => "zm_035",
Self::Zf088(_) => "zf_088",
Self::Zf024(_) => "zf_024",
Self::Zf072(_) => "zf_072",
Self::Zm055(_) => "zm_055",
Self::Zm052(_) => "zm_052",
Self::Zf071(_) => "zf_071",
Self::Zm061(_) => "zm_061",
Self::Zf078(_) => "zf_078",
Self::Zm013(_) => "zm_013",
Self::Zm081(_) => "zm_081",
Self::Zm037(_) => "zm_037",
Self::Zf090(_) => "zf_090",
Self::Zf043(_) => "zf_043",
Self::Zm058(_) => "zm_058",
Self::Zm012(_) => "zm_012",
Self::Zm045(_) => "zm_045",
Self::Zf075(_) => "zf_075",
}
}
pub(super) fn is_v10_supported(&self) -> bool {
matches!(
self,
Self::ZmYunyang(_)
| Self::ZfXiaoni(_)
| Self::AfJessica(_)
| Self::BfLily(_)
| Self::ZfXiaobei(_)
| Self::ZmYunxia(_)
| Self::AfHeart(_)
| Self::BfEmma(_)
| Self::AmPuck(_)
| Self::BfAlice(_)
| Self::HfAlpha(_)
| Self::BfIsabella(_)
| Self::AfNova(_)
| Self::AmFenrir(_)
| Self::EmAlex(_)
| Self::ImNicola(_)
| Self::PmAlex(_)
| Self::AfAlloy(_)
| Self::ZmYunxi(_)
| Self::AfSarah(_)
| Self::JfNezumi(_)
| Self::BmDaniel(_)
| Self::JfTebukuro(_)
| Self::JfAlpha(_)
| Self::JmKumo(_)
| Self::EmSanta(_)
| Self::AmLiam(_)
| Self::AmSanta(_)
| Self::AmEric(_)
| Self::BmFable(_)
| Self::AfBella(_)
| Self::BmLewis(_)
| Self::PfDora(_)
| Self::AfNicole(_)
| Self::BmGeorge(_)
| Self::AmOnyx(_)
| Self::HmPsi(_)
| Self::HfBeta(_)
| Self::HmOmega(_)
| Self::ZfXiaoxiao(_)
| Self::FfSiwis(_)
| Self::EfDora(_)
| Self::AfAoede(_)
| Self::AmEcho(_)
| Self::AmMichael(_)
| Self::AfKore(_)
| Self::ZfXiaoyi(_)
| Self::JfGongitsune(_)
| Self::AmAdam(_)
| Self::IfSara(_)
| Self::AfSky(_)
| Self::PmSanta(_)
| Self::AfRiver(_)
| Self::ZmYunjian(_)
)
}
pub(super) fn is_v11_supported(&self) -> bool {
matches!(
self,
Self::Zm029(_)
| Self::Zf048(_)
| Self::Zf008(_)
| Self::Zm014(_)
| Self::Zf003(_)
| Self::Zf047(_)
| Self::Zm080(_)
| Self::Zf094(_)
| Self::Zf046(_)
| Self::Zm054(_)
| Self::Zf001(_)
| Self::Zm062(_)
| Self::BfVale(_)
| Self::Zf044(_)
| Self::Zf005(_)
| Self::Zf028(_)
| Self::Zf059(_)
| Self::Zm030(_)
| Self::Zf074(_)
| Self::Zm009(_)
| Self::Zf004(_)
| Self::Zf021(_)
| Self::Zm095(_)
| Self::Zm041(_)
| Self::Zf087(_)
| Self::Zf039(_)
| Self::Zm031(_)
| Self::Zf007(_)
| Self::Zf038(_)
| Self::Zf092(_)
| Self::Zm056(_)
| Self::Zf099(_)
| Self::Zm010(_)
| Self::Zm069(_)
| Self::Zm016(_)
| Self::Zm068(_)
| Self::Zf083(_)
| Self::Zf093(_)
| Self::Zf006(_)
| Self::Zf026(_)
| Self::Zm053(_)
| Self::Zm064(_)
| Self::AfSol(_)
| Self::Zf042(_)
| Self::Zf084(_)
| Self::Zf073(_)
| Self::Zf067(_)
| Self::Zm025(_)
| Self::Zm020(_)
| Self::Zm050(_)
| Self::Zf070(_)
| Self::Zf002(_)
| Self::Zf032(_)
| Self::Zm091(_)
| Self::Zm066(_)
| Self::Zm089(_)
| Self::Zm034(_)
| Self::Zm100(_)
| Self::Zf086(_)
| Self::Zf040(_)
| Self::Zm011(_)
| Self::Zm098(_)
| Self::Zm015(_)
| Self::Zf051(_)
| Self::Zm065(_)
| Self::Zf076(_)
| Self::Zf036(_)
| Self::Zm033(_)
| Self::Zf018(_)
| Self::Zf017(_)
| Self::Zf049(_)
| Self::AfMaple(_)
| Self::Zm082(_)
| Self::Zm057(_)
| Self::Zf079(_)
| Self::Zf022(_)
| Self::Zm063(_)
| Self::Zf060(_)
| Self::Zf019(_)
| Self::Zm097(_)
| Self::Zm096(_)
| Self::Zf023(_)
| Self::Zf027(_)
| Self::Zf085(_)
| Self::Zf077(_)
| Self::Zm035(_)
| Self::Zf088(_)
| Self::Zf024(_)
| Self::Zf072(_)
| Self::Zm055(_)
| Self::Zm052(_)
| Self::Zf071(_)
| Self::Zm061(_)
| Self::Zf078(_)
| Self::Zm013(_)
| Self::Zm081(_)
| Self::Zm037(_)
| Self::Zf090(_)
| Self::Zf043(_)
| Self::Zm058(_)
| Self::Zm012(_)
| Self::Zm045(_)
| Self::Zf075(_)
)
}
pub(super) fn get_speed_v10(&self) -> Result<f32, KokoroError> {
match self {
Self::ZmYunyang(v)
| Self::ZfXiaoni(v)
| Self::AfJessica(v)
| Self::BfLily(v)
| Self::ZfXiaobei(v)
| Self::ZmYunxia(v)
| Self::AfHeart(v)
| Self::BfEmma(v)
| Self::AmPuck(v)
| Self::BfAlice(v)
| Self::HfAlpha(v)
| Self::BfIsabella(v)
| Self::AfNova(v)
| Self::AmFenrir(v)
| Self::EmAlex(v)
| Self::ImNicola(v)
| Self::PmAlex(v)
| Self::AfAlloy(v)
| Self::ZmYunxi(v)
| Self::AfSarah(v)
| Self::JfNezumi(v)
| Self::BmDaniel(v)
| Self::JfTebukuro(v)
| Self::JfAlpha(v)
| Self::JmKumo(v)
| Self::EmSanta(v)
| Self::AmLiam(v)
| Self::AmSanta(v)
| Self::AmEric(v)
| Self::BmFable(v)
| Self::AfBella(v)
| Self::BmLewis(v)
| Self::PfDora(v)
| Self::AfNicole(v)
| Self::BmGeorge(v)
| Self::AmOnyx(v)
| Self::HmPsi(v)
| Self::HfBeta(v)
| Self::HmOmega(v)
| Self::ZfXiaoxiao(v)
| Self::FfSiwis(v)
| Self::EfDora(v)
| Self::AfAoede(v)
| Self::AmEcho(v)
| Self::AmMichael(v)
| Self::AfKore(v)
| Self::ZfXiaoyi(v)
| Self::JfGongitsune(v)
| Self::AmAdam(v)
| Self::IfSara(v)
| Self::AfSky(v)
| Self::PmSanta(v)
| Self::AfRiver(v)
| Self::ZmYunjian(v) => Ok(*v),
_ => Err(KokoroError::VoiceVersionInvalid(
"Expect version 1.0".to_owned(),
)),
}
}
pub(super) fn get_speed_v11(&self) -> Result<i32, KokoroError> {
match self {
Self::Zm029(v)
| Self::Zf048(v)
| Self::Zf008(v)
| Self::Zm014(v)
| Self::Zf003(v)
| Self::Zf047(v)
| Self::Zm080(v)
| Self::Zf094(v)
| Self::Zf046(v)
| Self::Zm054(v)
| Self::Zf001(v)
| Self::Zm062(v)
| Self::BfVale(v)
| Self::Zf044(v)
| Self::Zf005(v)
| Self::Zf028(v)
| Self::Zf059(v)
| Self::Zm030(v)
| Self::Zf074(v)
| Self::Zm009(v)
| Self::Zf004(v)
| Self::Zf021(v)
| Self::Zm095(v)
| Self::Zm041(v)
| Self::Zf087(v)
| Self::Zf039(v)
| Self::Zm031(v)
| Self::Zf007(v)
| Self::Zf038(v)
| Self::Zf092(v)
| Self::Zm056(v)
| Self::Zf099(v)
| Self::Zm010(v)
| Self::Zm069(v)
| Self::Zm016(v)
| Self::Zm068(v)
| Self::Zf083(v)
| Self::Zf093(v)
| Self::Zf006(v)
| Self::Zf026(v)
| Self::Zm053(v)
| Self::Zm064(v)
| Self::AfSol(v)
| Self::Zf042(v)
| Self::Zf084(v)
| Self::Zf073(v)
| Self::Zf067(v)
| Self::Zm025(v)
| Self::Zm020(v)
| Self::Zm050(v)
| Self::Zf070(v)
| Self::Zf002(v)
| Self::Zf032(v)
| Self::Zm091(v)
| Self::Zm066(v)
| Self::Zm089(v)
| Self::Zm034(v)
| Self::Zm100(v)
| Self::Zf086(v)
| Self::Zf040(v)
| Self::Zm011(v)
| Self::Zm098(v)
| Self::Zm015(v)
| Self::Zf051(v)
| Self::Zm065(v)
| Self::Zf076(v)
| Self::Zf036(v)
| Self::Zm033(v)
| Self::Zf018(v)
| Self::Zf017(v)
| Self::Zf049(v)
| Self::AfMaple(v)
| Self::Zm082(v)
| Self::Zm057(v)
| Self::Zf079(v)
| Self::Zf022(v)
| Self::Zm063(v)
| Self::Zf060(v)
| Self::Zf019(v)
| Self::Zm097(v)
| Self::Zm096(v)
| Self::Zf023(v)
| Self::Zf027(v)
| Self::Zf085(v)
| Self::Zf077(v)
| Self::Zm035(v)
| Self::Zf088(v)
| Self::Zf024(v)
| Self::Zf072(v)
| Self::Zm055(v)
| Self::Zm052(v)
| Self::Zf071(v)
| Self::Zm061(v)
| Self::Zf078(v)
| Self::Zm013(v)
| Self::Zm081(v)
| Self::Zm037(v)
| Self::Zf090(v)
| Self::Zf043(v)
| Self::Zm058(v)
| Self::Zm012(v)
| Self::Zm045(v)
| Self::Zf075(v) => Ok(*v),
_ => Err(KokoroError::VoiceVersionInvalid(
"Expect version 1.1".to_owned(),
)),
}
}
}


@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: 'siprouter',
version: '1.20.3',
version: '1.25.2',
description: 'undefined'
}


@@ -1,137 +0,0 @@
/**
* TTS announcement module — generates announcement WAV files at startup.
*
* Engine priority: espeak-ng (formant TTS, fast) → Kokoro neural TTS via
* proxy-engine → disabled.
*
* The generated WAV is left on disk for Rust's audio_player / start_interaction
* to play during calls. No encoding or RTP playback happens in TypeScript.
*/
import { execSync } from 'node:child_process';
import fs from 'node:fs';
import path from 'node:path';
import { sendProxyCommand, isProxyReady } from './proxybridge.ts';
// ---------------------------------------------------------------------------
// State
// ---------------------------------------------------------------------------
const TTS_DIR = path.join(process.cwd(), '.nogit', 'tts');
const ANNOUNCEMENT_TEXT = "Hello. I'm connecting your call now.";
const CACHE_WAV = path.join(TTS_DIR, 'announcement.wav');
// Kokoro fallback constants.
const KOKORO_MODEL = 'kokoro-v1.0.onnx';
const KOKORO_VOICES = 'voices.bin';
const KOKORO_VOICE = 'af_bella';
// ---------------------------------------------------------------------------
// TTS generators
// ---------------------------------------------------------------------------
/** Check if espeak-ng is available on the system. */
function isEspeakAvailable(): boolean {
try {
execSync('which espeak-ng', { stdio: 'pipe' });
return true;
} catch {
return false;
}
}
/** Generate announcement WAV via espeak-ng (primary engine). */
function generateViaEspeak(wavPath: string, text: string, log: (msg: string) => void): boolean {
log('[tts] generating announcement audio via espeak-ng...');
try {
execSync(
`espeak-ng -v en-us -s 150 -w "${wavPath}" "${text}"`,
{ timeout: 10000, stdio: 'pipe' },
);
log('[tts] espeak-ng WAV generated');
return true;
} catch (e: any) {
log(`[tts] espeak-ng failed: ${e.message}`);
return false;
}
}
/** Generate announcement WAV via Kokoro TTS (fallback, runs inside proxy-engine). */
async function generateViaKokoro(wavPath: string, text: string, log: (msg: string) => void): Promise<boolean> {
const modelPath = path.join(TTS_DIR, KOKORO_MODEL);
const voicesPath = path.join(TTS_DIR, KOKORO_VOICES);
if (!fs.existsSync(modelPath) || !fs.existsSync(voicesPath)) {
log('[tts] Kokoro model/voices not found — Kokoro fallback unavailable');
return false;
}
if (!isProxyReady()) {
log('[tts] proxy-engine not ready — Kokoro fallback unavailable');
return false;
}
log('[tts] generating announcement audio via Kokoro TTS (fallback)...');
try {
await sendProxyCommand('generate_tts', {
model: modelPath,
voices: voicesPath,
voice: KOKORO_VOICE,
text,
output: wavPath,
});
log('[tts] Kokoro WAV generated (via proxy-engine)');
return true;
} catch (e: any) {
log(`[tts] Kokoro failed: ${e.message}`);
return false;
}
}
// ---------------------------------------------------------------------------
// Initialization
// ---------------------------------------------------------------------------
/**
* Pre-generate the announcement WAV file.
* Must be called after the proxy engine is initialized.
*
* Engine priority: espeak-ng → Kokoro → disabled.
*/
export async function initAnnouncement(log: (msg: string) => void): Promise<boolean> {
fs.mkdirSync(TTS_DIR, { recursive: true });
try {
if (!fs.existsSync(CACHE_WAV)) {
let generated = false;
// Try espeak-ng first.
if (isEspeakAvailable()) {
generated = generateViaEspeak(CACHE_WAV, ANNOUNCEMENT_TEXT, log);
} else {
log('[tts] espeak-ng not installed — trying Kokoro fallback');
}
// Fall back to Kokoro (via proxy-engine).
if (!generated) {
generated = await generateViaKokoro(CACHE_WAV, ANNOUNCEMENT_TEXT, log);
}
if (!generated) {
log('[tts] no TTS engine available — announcements disabled');
return false;
}
}
log('[tts] announcement WAV ready');
return true;
} catch (e: any) {
log(`[tts] init error: ${e.message}`);
return false;
}
}
/** Get the path to the cached announcement WAV, or null if not generated. */
export function getAnnouncementWavPath(): string | null {
return fs.existsSync(CACHE_WAV) ? CACHE_WAV : null;
}


@@ -1,275 +0,0 @@
/**
* PromptCache — manages named audio prompt WAV files for IVR and voicemail.
*
* Generates WAV files via espeak-ng (primary) or Kokoro TTS through the
* proxy-engine (fallback). Also supports loading pre-existing WAV files
* and programmatic tone generation.
*
* All audio playback happens in Rust (audio_player / start_interaction).
* This module only manages WAV files on disk.
*/
import { execSync } from 'node:child_process';
import fs from 'node:fs';
import path from 'node:path';
import { Buffer } from 'node:buffer';
import { sendProxyCommand, isProxyReady } from '../proxybridge.ts';
// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------
/** A cached prompt — just a WAV file path and metadata. */
export interface ICachedPrompt {
/** Unique prompt identifier. */
id: string;
/** Path to the WAV file on disk. */
wavPath: string;
/** Total duration in milliseconds (approximate, from WAV header). */
durationMs: number;
}
// ---------------------------------------------------------------------------
// TTS helpers
// ---------------------------------------------------------------------------
const TTS_DIR = path.join(process.cwd(), '.nogit', 'tts');
/** Check if espeak-ng is available. */
function isEspeakAvailable(): boolean {
try {
execSync('which espeak-ng', { stdio: 'pipe' });
return true;
} catch {
return false;
}
}
/** Generate WAV via espeak-ng. */
function generateViaEspeak(wavPath: string, text: string): boolean {
try {
execSync(
`espeak-ng -v en-us -s 150 -w "${wavPath}" "${text}"`,
{ timeout: 10000, stdio: 'pipe' },
);
return true;
} catch {
return false;
}
}
/** Generate WAV via Kokoro TTS (runs inside proxy-engine). */
async function generateViaKokoro(wavPath: string, text: string, voice: string): Promise<boolean> {
const modelPath = path.join(TTS_DIR, 'kokoro-v1.0.onnx');
const voicesPath = path.join(TTS_DIR, 'voices.bin');
if (!fs.existsSync(modelPath) || !fs.existsSync(voicesPath)) return false;
if (!isProxyReady()) return false;
try {
await sendProxyCommand('generate_tts', {
model: modelPath,
voices: voicesPath,
voice,
text,
output: wavPath,
});
return true;
} catch {
return false;
}
}
/** Read a WAV file's duration from its header. */
function getWavDurationMs(wavPath: string): number {
try {
const wav = fs.readFileSync(wavPath);
if (wav.length < 44) return 0;
if (wav.toString('ascii', 0, 4) !== 'RIFF') return 0;
let sampleRate = 16000;
let dataSize = 0;
let bitsPerSample = 16;
let channels = 1;
let offset = 12;
while (offset < wav.length - 8) {
const chunkId = wav.toString('ascii', offset, offset + 4);
const chunkSize = wav.readUInt32LE(offset + 4);
if (chunkId === 'fmt ') {
channels = wav.readUInt16LE(offset + 10);
sampleRate = wav.readUInt32LE(offset + 12);
bitsPerSample = wav.readUInt16LE(offset + 22);
}
if (chunkId === 'data') {
dataSize = chunkSize;
}
offset += 8 + chunkSize;
if (offset % 2 !== 0) offset++;
}
const bytesPerSample = (bitsPerSample / 8) * channels;
const totalSamples = bytesPerSample > 0 ? dataSize / bytesPerSample : 0;
return sampleRate > 0 ? Math.round((totalSamples / sampleRate) * 1000) : 0;
} catch {
return 0;
}
}
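The duration arithmetic above reduces to: data bytes divided by bytes-per-sample-frame gives the sample count, and sample count over sample rate gives seconds. A standalone sketch with hypothetical example values:

```typescript
// Standalone sketch of the duration math used by getWavDurationMs:
// durationMs = dataSize / ((bitsPerSample / 8) * channels) / sampleRate * 1000.
function pcmDurationMs(
  dataSize: number,
  sampleRate: number,
  bitsPerSample: number,
  channels: number,
): number {
  const bytesPerSample = (bitsPerSample / 8) * channels;
  const totalSamples = bytesPerSample > 0 ? dataSize / bytesPerSample : 0;
  return sampleRate > 0 ? Math.round((totalSamples / sampleRate) * 1000) : 0;
}

// 32000 bytes of 16 kHz, 16-bit mono PCM -> 16000 samples -> 1000 ms.
console.log(pcmDurationMs(32000, 16000, 16, 1));
```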
// ---------------------------------------------------------------------------
// PromptCache
// ---------------------------------------------------------------------------
export class PromptCache {
private prompts = new Map<string, ICachedPrompt>();
private log: (msg: string) => void;
private espeakAvailable: boolean | null = null;
constructor(log: (msg: string) => void) {
this.log = log;
}
// -------------------------------------------------------------------------
// Public API
// -------------------------------------------------------------------------
/** Get a cached prompt by ID. */
get(id: string): ICachedPrompt | null {
return this.prompts.get(id) ?? null;
}
/** Check if a prompt is cached. */
has(id: string): boolean {
return this.prompts.has(id);
}
/** List all cached prompt IDs. */
listIds(): string[] {
return [...this.prompts.keys()];
}
/**
* Generate a TTS prompt WAV and cache its path.
* Uses espeak-ng (primary) or Kokoro (fallback).
*/
async generatePrompt(id: string, text: string, voice = 'af_bella'): Promise<ICachedPrompt | null> {
fs.mkdirSync(TTS_DIR, { recursive: true });
const wavPath = path.join(TTS_DIR, `prompt-${id}.wav`);
// Check espeak availability once.
if (this.espeakAvailable === null) {
this.espeakAvailable = isEspeakAvailable();
}
// Generate WAV if not already on disk.
if (!fs.existsSync(wavPath)) {
let generated = false;
if (this.espeakAvailable) {
generated = generateViaEspeak(wavPath, text);
}
if (!generated) {
generated = await generateViaKokoro(wavPath, text, voice);
}
if (!generated) {
this.log(`[prompt-cache] failed to generate TTS for "${id}"`);
return null;
}
this.log(`[prompt-cache] generated WAV for "${id}"`);
}
return this.registerWav(id, wavPath);
}
/**
* Load a pre-existing WAV file as a prompt.
*/
async loadWavPrompt(id: string, wavPath: string): Promise<ICachedPrompt | null> {
if (!fs.existsSync(wavPath)) {
this.log(`[prompt-cache] WAV not found: ${wavPath}`);
return null;
}
return this.registerWav(id, wavPath);
}
/**
* Generate a beep tone WAV and cache it.
*/
async generateBeep(
id: string,
freqHz = 1000,
durationMs = 500,
amplitude = 8000,
): Promise<ICachedPrompt | null> {
fs.mkdirSync(TTS_DIR, { recursive: true });
const wavPath = path.join(TTS_DIR, `prompt-${id}.wav`);
if (!fs.existsSync(wavPath)) {
// Generate 16kHz 16-bit mono sine wave WAV.
const sampleRate = 16000;
const totalSamples = Math.floor((sampleRate * durationMs) / 1000);
const pcm = Buffer.alloc(totalSamples * 2);
for (let i = 0; i < totalSamples; i++) {
const t = i / sampleRate;
const fadeLen = Math.floor(sampleRate * 0.01); // 10ms fade
let envelope = 1.0;
if (i < fadeLen) envelope = i / fadeLen;
else if (i > totalSamples - fadeLen) envelope = (totalSamples - i) / fadeLen;
const sample = Math.round(Math.sin(2 * Math.PI * freqHz * t) * amplitude * envelope);
pcm.writeInt16LE(Math.max(-32768, Math.min(32767, sample)), i * 2);
}
// Write WAV file.
const headerSize = 44;
const dataSize = pcm.length;
const wav = Buffer.alloc(headerSize + dataSize);
// RIFF header
wav.write('RIFF', 0);
wav.writeUInt32LE(36 + dataSize, 4);
wav.write('WAVE', 8);
// fmt chunk
wav.write('fmt ', 12);
wav.writeUInt32LE(16, 16); // chunk size
wav.writeUInt16LE(1, 20); // PCM format
wav.writeUInt16LE(1, 22); // mono
wav.writeUInt32LE(sampleRate, 24);
wav.writeUInt32LE(sampleRate * 2, 28); // byte rate
wav.writeUInt16LE(2, 32); // block align
wav.writeUInt16LE(16, 34); // bits per sample
// data chunk
wav.write('data', 36);
wav.writeUInt32LE(dataSize, 40);
pcm.copy(wav, 44);
fs.writeFileSync(wavPath, wav);
this.log(`[prompt-cache] beep WAV generated for "${id}"`);
}
return this.registerWav(id, wavPath);
}
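The size math behind `generateBeep` with its default parameters works out as follows (values shown are just the defaults, not required by the format): a 500 ms tone at 16 kHz, 16-bit mono yields 8000 samples, i.e. 16000 data bytes, plus the fixed 44-byte RIFF/fmt/data header.

```typescript
// Sketch of the WAV sizing used by generateBeep above (default parameters).
const sampleRate = 16000;
const durationMs = 500;
const totalSamples = Math.floor((sampleRate * durationMs) / 1000);
const dataSize = totalSamples * 2; // 2 bytes per 16-bit mono sample
const fileSize = 44 + dataSize;    // 44-byte header + PCM payload
console.log(totalSamples, dataSize, fileSize); // 8000 16000 16044
```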
/** Remove a prompt from the cache. */
remove(id: string): void {
this.prompts.delete(id);
}
/** Clear all cached prompts. */
clear(): void {
this.prompts.clear();
}
// -------------------------------------------------------------------------
// Internal
// -------------------------------------------------------------------------
private registerWav(id: string, wavPath: string): ICachedPrompt {
const durationMs = getWavDurationMs(wavPath);
const prompt: ICachedPrompt = { id, wavPath, durationMs };
this.prompts.set(id, prompt);
this.log(`[prompt-cache] cached "${id}": ${wavPath} (${(durationMs / 1000).toFixed(1)}s)`);
return prompt;
}
}


@@ -48,6 +48,24 @@ export interface IDeviceConfig {
extension: string;
}
export type TIncomingNumberMode = 'single' | 'range' | 'regex';
export interface IIncomingNumberConfig {
id: string;
label: string;
providerId?: string;
mode: TIncomingNumberMode;
countryCode?: string;
areaCode?: string;
localNumber?: string;
rangeEnd?: string;
pattern?: string;
// Legacy persisted fields kept for migration compatibility.
number?: string;
rangeStart?: string;
}
// ---------------------------------------------------------------------------
// Match/Action routing model
// ---------------------------------------------------------------------------
@@ -62,8 +80,11 @@ export interface ISipRouteMatch {
direction: 'inbound' | 'outbound';
/**
* Match the dialed/called number (To/Request-URI for inbound DID, dialed digits for outbound).
* Supports: exact string, prefix with trailing '*' (e.g. "+4930*"), or regex ("/^\\+49/").
* Match the normalized called number.
*
* Inbound: matches the provider-delivered DID / Request-URI user part.
* Outbound: matches the normalized dialed digits.
* Supports: exact string, numeric range `start..end`, prefix with trailing '*' (e.g. "+4930*"), or regex ("/^\\+49/").
*/
numberPattern?: string;
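A hypothetical matcher (not the project's actual routing code) illustrating the four documented `numberPattern` forms: exact string, numeric range `start..end`, trailing-`'*'` prefix, and `"/regex/"`.

```typescript
// Illustrative-only matcher for the documented pattern forms; the real
// routing implementation lives elsewhere in the proxy-engine.
function matchesNumberPattern(pattern: string, num: string): boolean {
  // "/regex/" form
  if (pattern.length > 1 && pattern.startsWith('/') && pattern.endsWith('/')) {
    return new RegExp(pattern.slice(1, -1)).test(num);
  }
  // "start..end" numeric range form (compares the digit portions)
  if (pattern.includes('..')) {
    const digits = (s: string) => Number(s.replace(/\D/g, ''));
    const [start, end] = pattern.split('..');
    const n = digits(num);
    return n >= digits(start) && n <= digits(end);
  }
  // trailing-'*' prefix form
  if (pattern.endsWith('*')) return num.startsWith(pattern.slice(0, -1));
  // exact match
  return pattern === num;
}

console.log(matchesNumberPattern('+4930*', '+493012345')); // true
console.log(matchesNumberPattern('100..199', '150'));      // true
```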
@@ -89,13 +110,13 @@ export interface ISipRouteAction {
// --- Inbound actions (IVR / voicemail) ---
/** Route directly to a voicemail box (skip ringing devices). */
/** Voicemail fallback for matched inbound routes. */
voicemailBox?: string;
/** Route to an IVR menu by menu ID (skip ringing devices). */
ivrMenuId?: string;
/** Override no-answer timeout (seconds) before routing to voicemail. */
/** Reserved for future no-answer handling. */
noAnswerTimeout?: number;
// --- Outbound actions (provider selection) ---
@@ -231,6 +252,7 @@ export interface IAppConfig {
proxy: IProxyConfig;
providers: IProviderConfig[];
devices: IDeviceConfig[];
incomingNumbers?: IIncomingNumberConfig[];
routing: IRoutingConfig;
contacts: IContact[];
voiceboxes?: IVoiceboxConfig[];
@@ -285,6 +307,14 @@ export function loadConfig(): IAppConfig {
d.extension ??= '100';
}
cfg.incomingNumbers ??= [];
for (const incoming of cfg.incomingNumbers) {
if (!incoming.id) incoming.id = `incoming-${Date.now()}`;
incoming.label ??= incoming.id;
incoming.mode ??= incoming.pattern ? 'regex' : incoming.rangeStart || incoming.rangeEnd ? 'range' : 'single';
incoming.countryCode ??= incoming.mode === 'regex' ? undefined : '+49';
}
cfg.routing ??= { routes: [] };
cfg.routing.routes ??= [];


@@ -14,12 +14,36 @@ import { WebSocketServer, WebSocket } from 'ws';
import { handleWebRtcSignaling } from './webrtcbridge.ts';
import type { VoiceboxManager } from './voicebox.ts';
// CallManager was previously used for WebRTC call handling. Now replaced by Rust proxy-engine.
// Kept as `any` type for backward compat with the function signature until full WebRTC port.
type CallManager = any;
const CONFIG_PATH = path.join(process.cwd(), '.nogit', 'config.json');
interface IHandleRequestContext {
getStatus: () => unknown;
log: (msg: string) => void;
onStartCall: (number: string, deviceId?: string, providerId?: string) => { id: string } | null;
onHangupCall: (callId: string) => boolean;
onConfigSaved?: () => void | Promise<void>;
voiceboxManager?: VoiceboxManager;
}
interface IWebUiOptions extends IHandleRequestContext {
port: number;
onWebRtcOffer?: (sessionId: string, sdp: string, ws: WebSocket) => Promise<void>;
onWebRtcIce?: (sessionId: string, candidate: unknown) => Promise<void>;
onWebRtcClose?: (sessionId: string) => Promise<void>;
onWebRtcAccept?: (callId: string, sessionId: string) => void;
}
interface IWebRtcSocketMessage {
type?: string;
sessionId?: string;
callId?: string;
sdp?: string;
candidate?: unknown;
userAgent?: string;
_remoteIp?: string | null;
[key: string]: unknown;
}
// ---------------------------------------------------------------------------
// WebSocket broadcast
// ---------------------------------------------------------------------------
@@ -82,14 +106,9 @@ function loadStaticFiles(): void {
async function handleRequest(
req: http.IncomingMessage,
res: http.ServerResponse,
getStatus: () => unknown,
log: (msg: string) => void,
onStartCall: (number: string, deviceId?: string, providerId?: string) => { id: string } | null,
onHangupCall: (callId: string) => boolean,
onConfigSaved?: () => void,
callManager?: CallManager,
voiceboxManager?: VoiceboxManager,
context: IHandleRequestContext,
): Promise<void> {
const { getStatus, log, onStartCall, onHangupCall, onConfigSaved, voiceboxManager } = context;
const url = new URL(req.url || '/', `http://${req.headers.host || 'localhost'}`);
const method = req.method || 'GET';
@@ -247,6 +266,7 @@ async function handleRequest(
if (existing && ud.displayName !== undefined) existing.displayName = ud.displayName;
}
}
if (updates.incomingNumbers !== undefined) cfg.incomingNumbers = updates.incomingNumbers;
if (updates.routing) {
if (updates.routing.routes) {
cfg.routing.routes = updates.routing.routes;
@@ -258,7 +278,7 @@ async function handleRequest(
fs.writeFileSync(CONFIG_PATH, JSON.stringify(cfg, null, 2) + '\n');
log('[config] updated config.json');
onConfigSaved?.();
await onConfigSaved?.();
return sendJson(res, { ok: true });
} catch (e: any) {
return sendJson(res, { ok: false, error: e.message }, 400);
@@ -339,21 +359,21 @@ async function handleRequest(
// ---------------------------------------------------------------------------
export function initWebUi(
getStatus: () => unknown,
log: (msg: string) => void,
onStartCall: (number: string, deviceId?: string, providerId?: string) => { id: string } | null,
onHangupCall: (callId: string) => boolean,
onConfigSaved?: () => void,
callManager?: CallManager,
voiceboxManager?: VoiceboxManager,
/** WebRTC signaling handlers — forwarded to Rust proxy-engine. */
onWebRtcOffer?: (sessionId: string, sdp: string, ws: WebSocket) => Promise<void>,
onWebRtcIce?: (sessionId: string, candidate: any) => Promise<void>,
onWebRtcClose?: (sessionId: string) => Promise<void>,
/** Called when browser sends webrtc-accept (callId + sessionId linking). */
onWebRtcAccept?: (callId: string, sessionId: string) => void,
options: IWebUiOptions,
): void {
const WEB_PORT = 3060;
const {
port,
getStatus,
log,
onStartCall,
onHangupCall,
onConfigSaved,
voiceboxManager,
onWebRtcOffer,
onWebRtcIce,
onWebRtcClose,
onWebRtcAccept,
} = options;
loadStaticFiles();
@@ -367,12 +387,12 @@ export function initWebUi(
const cert = fs.readFileSync(certPath, 'utf8');
const key = fs.readFileSync(keyPath, 'utf8');
server = https.createServer({ cert, key }, (req, res) =>
handleRequest(req, res, getStatus, log, onStartCall, onHangupCall, onConfigSaved, callManager, voiceboxManager).catch(() => { res.writeHead(500); res.end(); }),
handleRequest(req, res, { getStatus, log, onStartCall, onHangupCall, onConfigSaved, voiceboxManager }).catch(() => { res.writeHead(500); res.end(); }),
);
useTls = true;
} catch {
server = http.createServer((req, res) =>
handleRequest(req, res, getStatus, log, onStartCall, onHangupCall, onConfigSaved, callManager, voiceboxManager).catch(() => { res.writeHead(500); res.end(); }),
handleRequest(req, res, { getStatus, log, onStartCall, onHangupCall, onConfigSaved, voiceboxManager }).catch(() => { res.writeHead(500); res.end(); }),
);
}
@@ -386,12 +406,12 @@ export function initWebUi(
socket.on('message', (raw) => {
try {
const msg = JSON.parse(raw.toString());
const msg = JSON.parse(raw.toString()) as IWebRtcSocketMessage;
if (msg.type === 'webrtc-offer' && msg.sessionId) {
// Forward to Rust proxy-engine for WebRTC handling.
if (onWebRtcOffer) {
if (onWebRtcOffer && typeof msg.sdp === 'string') {
log(`[webrtc-ws] offer msg keys: ${Object.keys(msg).join(',')}, sdp type: ${typeof msg.sdp}, sdp len: ${msg.sdp?.length || 0}`);
onWebRtcOffer(msg.sessionId, msg.sdp, socket as any).catch((e: any) =>
onWebRtcOffer(msg.sessionId, msg.sdp, socket).catch((e: any) =>
log(`[webrtc] offer error: ${e.message}`));
}
} else if (msg.type === 'webrtc-ice' && msg.sessionId) {
@@ -409,7 +429,7 @@ export function initWebUi(
}
} else if (msg.type?.startsWith('webrtc-')) {
msg._remoteIp = remoteIp;
handleWebRtcSignaling(socket as any, msg);
handleWebRtcSignaling(socket, msg);
}
} catch { /* ignore */ }
});
@@ -418,8 +438,8 @@ export function initWebUi(
socket.on('error', () => wsClients.delete(socket));
});
server.listen(WEB_PORT, '0.0.0.0', () => {
log(`web ui listening on ${useTls ? 'https' : 'http'}://0.0.0.0:${WEB_PORT}`);
server.listen(port, '0.0.0.0', () => {
log(`web ui listening on ${useTls ? 'https' : 'http'}://0.0.0.0:${port}`);
});
setInterval(() => broadcastWs('status', getStatus()), 1000);
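The hunks above collapse `initWebUi`'s long positional parameter list into a single options object and replace the hard-coded `WEB_PORT` with a caller-supplied `port`. A trimmed, hypothetical sketch of that refactor shape (interface and function names here are illustrative, not the real module's):

```typescript
// Hypothetical reduction of the refactor: positional parameters become one
// destructured options interface, and the port is caller-configurable.
interface IWebUiOptionsSketch {
  port: number;
  log: (msg: string) => void;
  getStatus: () => unknown;
}

function initWebUiSketch(options: IWebUiOptionsSketch): string {
  const { port, log, getStatus } = options;
  const banner = `web ui listening on http://0.0.0.0:${port}`;
  log(banner);
  void getStatus(); // in the real code this feeds the periodic status broadcast
  return banner;
}
```

The options-object form lets optional WebRTC handlers be added later without growing an already fragile positional signature.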


@@ -4,13 +4,36 @@
* The proxy-engine handles ALL SIP protocol mechanics. TypeScript only:
* - Sends configuration
* - Receives high-level events (incoming_call, call_ended, etc.)
* - Sends high-level commands (hangup, make_call, play_audio)
* - Sends high-level commands (hangup, make_call, add_leg, webrtc_offer)
*
* No raw SIP ever touches TypeScript.
*/
import path from 'node:path';
import { RustBridge } from '@push.rocks/smartrust';
import type { TProxyEventMap } from './shared/proxy-events.ts';
export type {
ICallAnsweredEvent,
ICallEndedEvent,
ICallRingingEvent,
IDeviceRegisteredEvent,
IIncomingCallEvent,
ILegAddedEvent,
ILegRemovedEvent,
ILegStateChangedEvent,
IOutboundCallEvent,
IOutboundCallStartedEvent,
IProviderRegisteredEvent,
IRecordingDoneEvent,
ISipUnhandledEvent,
IVoicemailErrorEvent,
IVoicemailStartedEvent,
IWebRtcAudioRxEvent,
IWebRtcIceCandidateEvent,
IWebRtcStateEvent,
IWebRtcTrackEvent,
TProxyEventMap,
} from './shared/proxy-events.ts';
// ---------------------------------------------------------------------------
// Command type map for smartrust
@@ -29,18 +52,6 @@ type TProxyCommands = {
params: { number: string; device_id?: string; provider_id?: string };
result: { call_id: string };
};
play_audio: {
params: { call_id: string; leg_id?: string; file_path: string; codec?: number };
result: Record<string, never>;
};
start_recording: {
params: { call_id: string; file_path: string; max_duration_ms?: number };
result: Record<string, never>;
};
stop_recording: {
params: { call_id: string };
result: { file_path: string; duration_ms: number };
};
add_leg: {
params: { call_id: string; number: string; provider_id?: string };
result: { leg_id: string };
@@ -71,6 +82,19 @@ type TProxyCommands = {
};
result: { result: 'digit' | 'timeout' | 'cancelled'; digit?: string };
};
start_tts_interaction: {
params: {
call_id: string;
leg_id: string;
text: string;
voice?: string;
model?: string;
voices?: string;
expected_digits: string;
timeout_ms: number;
};
result: { result: 'digit' | 'timeout' | 'cancelled'; digit?: string };
};
add_tool_leg: {
params: {
call_id: string;
@@ -88,7 +112,7 @@ type TProxyCommands = {
result: Record<string, never>;
};
generate_tts: {
params: { model: string; voices: string; voice: string; text: string; output: string };
params: { model: string; voices: string; voice: string; text: string; output: string; cacheable?: boolean };
result: { output: string };
};
// WebRTC signaling — bridged from the browser via the TS control plane.
@@ -121,50 +145,6 @@ type TProxyCommands = {
};
};
// ---------------------------------------------------------------------------
// Event types from Rust
// ---------------------------------------------------------------------------
export interface IIncomingCallEvent {
call_id: string;
from_uri: string;
to_number: string;
provider_id: string;
/** Whether registered browsers should see a `webrtc-incoming` toast for
* this call. Set by the Rust engine from the matched inbound route's
* `ringBrowsers` flag (defaults to `true` when no route matches, so
* deployments without explicit routes preserve pre-routing behavior). */
ring_browsers?: boolean;
}
export interface IOutboundCallEvent {
call_id: string;
from_device: string | null;
to_number: string;
}
export interface ICallEndedEvent {
call_id: string;
reason: string;
duration: number;
from_side?: string;
}
export interface IProviderRegisteredEvent {
provider_id: string;
registered: boolean;
public_ip: string | null;
}
export interface IDeviceRegisteredEvent {
device_id: string;
display_name: string;
address: string;
port: number;
aor: string;
expires: number;
}
// ---------------------------------------------------------------------------
// Bridge singleton
// ---------------------------------------------------------------------------
@@ -173,6 +153,16 @@ let bridge: RustBridge<TProxyCommands> | null = null;
let initialized = false;
let logFn: ((msg: string) => void) | undefined;
type TWebRtcIceCandidate = {
candidate?: string;
sdpMid?: string;
sdpMLineIndex?: number;
} | string;
function errorMessage(error: unknown): string {
return error instanceof Error ? error.message : String(error);
}
function buildLocalPaths(): string[] {
const root = process.cwd();
// Map Node's process.arch to tsrust's friendly target name.
@@ -231,8 +221,8 @@ export async function initProxyEngine(log?: (msg: string) => void): Promise<bool
initialized = true;
log?.('[proxy-engine] spawned and ready');
return true;
} catch (e: any) {
log?.(`[proxy-engine] init error: ${e.message}`);
} catch (error: unknown) {
log?.(`[proxy-engine] init error: ${errorMessage(error)}`);
bridge = null;
return false;
}
@@ -242,14 +232,14 @@ export async function initProxyEngine(log?: (msg: string) => void): Promise<bool
* Send the full app config to the proxy engine.
* This binds the SIP socket, starts provider registrations, etc.
*/
export async function configureProxyEngine(config: Record<string, unknown>): Promise<boolean> {
export async function configureProxyEngine(config: TProxyCommands['configure']['params']): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
const result = await bridge.sendCommand('configure', config as any);
logFn?.(`[proxy-engine] configured, SIP bound on ${(result as any)?.bound || '?'}`);
const result = await sendProxyCommand('configure', config);
logFn?.(`[proxy-engine] configured, SIP bound on ${result.bound || '?'}`);
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] configure error: ${e.message}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] configure error: ${errorMessage(error)}`);
return false;
}
}
@@ -260,14 +250,14 @@ export async function configureProxyEngine(config: Record<string, unknown>): Pro
export async function makeCall(number: string, deviceId?: string, providerId?: string): Promise<string | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('make_call', {
const result = await sendProxyCommand('make_call', {
number,
device_id: deviceId,
provider_id: providerId,
} as any);
return (result as any)?.call_id || null;
} catch (e: any) {
logFn?.(`[proxy-engine] make_call error: ${e?.message || e}`);
});
return result.call_id || null;
} catch (error: unknown) {
logFn?.(`[proxy-engine] make_call error: ${errorMessage(error)}`);
return null;
}
}
@@ -278,7 +268,7 @@ export async function makeCall(number: string, deviceId?: string, providerId?: s
export async function hangupCall(callId: string): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('hangup', { call_id: callId } as any);
await sendProxyCommand('hangup', { call_id: callId });
return true;
} catch {
return false;
@@ -291,10 +281,9 @@ export async function hangupCall(callId: string): Promise<boolean> {
export async function webrtcOffer(sessionId: string, sdp: string): Promise<{ sdp: string } | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('webrtc_offer', { session_id: sessionId, sdp } as any);
return result as any;
} catch (e: any) {
logFn?.(`[proxy-engine] webrtc_offer error: ${e?.message || e}`);
return await sendProxyCommand('webrtc_offer', { session_id: sessionId, sdp });
} catch (error: unknown) {
logFn?.(`[proxy-engine] webrtc_offer error: ${errorMessage(error)}`);
return null;
}
}
@@ -302,15 +291,15 @@ export async function webrtcOffer(sessionId: string, sdp: string): Promise<{ sdp
/**
* Forward an ICE candidate to the proxy engine.
*/
export async function webrtcIce(sessionId: string, candidate: any): Promise<void> {
export async function webrtcIce(sessionId: string, candidate: TWebRtcIceCandidate): Promise<void> {
if (!bridge || !initialized) return;
try {
await bridge.sendCommand('webrtc_ice', {
await sendProxyCommand('webrtc_ice', {
session_id: sessionId,
candidate: candidate?.candidate || candidate,
sdp_mid: candidate?.sdpMid,
sdp_mline_index: candidate?.sdpMLineIndex,
} as any);
candidate: typeof candidate === 'string' ? candidate : candidate.candidate || '',
sdp_mid: typeof candidate === 'string' ? undefined : candidate.sdpMid,
sdp_mline_index: typeof candidate === 'string' ? undefined : candidate.sdpMLineIndex,
});
} catch { /* ignore */ }
}
@@ -321,16 +310,16 @@ export async function webrtcIce(sessionId: string, candidate: any): Promise<void
export async function webrtcLink(sessionId: string, callId: string, providerMediaAddr: string, providerMediaPort: number, sipPt: number = 9): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('webrtc_link', {
await sendProxyCommand('webrtc_link', {
session_id: sessionId,
call_id: callId,
provider_media_addr: providerMediaAddr,
provider_media_port: providerMediaPort,
sip_pt: sipPt,
} as any);
});
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] webrtc_link error: ${e?.message || e}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] webrtc_link error: ${errorMessage(error)}`);
return false;
}
}
@@ -341,14 +330,14 @@ export async function webrtcLink(sessionId: string, callId: string, providerMedi
export async function addLeg(callId: string, number: string, providerId?: string): Promise<string | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('add_leg', {
const result = await sendProxyCommand('add_leg', {
call_id: callId,
number,
provider_id: providerId,
} as any);
return (result as any)?.leg_id || null;
} catch (e: any) {
logFn?.(`[proxy-engine] add_leg error: ${e?.message || e}`);
});
return result.leg_id || null;
} catch (error: unknown) {
logFn?.(`[proxy-engine] add_leg error: ${errorMessage(error)}`);
return null;
}
}
@@ -359,10 +348,10 @@ export async function addLeg(callId: string, number: string, providerId?: string
export async function removeLeg(callId: string, legId: string): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('remove_leg', { call_id: callId, leg_id: legId } as any);
await sendProxyCommand('remove_leg', { call_id: callId, leg_id: legId });
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] remove_leg error: ${e?.message || e}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] remove_leg error: ${errorMessage(error)}`);
return false;
}
}
@@ -373,7 +362,7 @@ export async function removeLeg(callId: string, legId: string): Promise<boolean>
export async function webrtcClose(sessionId: string): Promise<void> {
if (!bridge || !initialized) return;
try {
await bridge.sendCommand('webrtc_close', { session_id: sessionId } as any);
await sendProxyCommand('webrtc_close', { session_id: sessionId });
} catch { /* ignore */ }
}
@@ -387,13 +376,13 @@ export async function webrtcClose(sessionId: string): Promise<void> {
export async function addDeviceLeg(callId: string, deviceId: string): Promise<string | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('add_device_leg', {
const result = await sendProxyCommand('add_device_leg', {
call_id: callId,
device_id: deviceId,
} as any);
return (result as any)?.leg_id || null;
} catch (e: any) {
logFn?.(`[proxy-engine] add_device_leg error: ${e?.message || e}`);
});
return result.leg_id || null;
} catch (error: unknown) {
logFn?.(`[proxy-engine] add_device_leg error: ${errorMessage(error)}`);
return null;
}
}
@@ -408,14 +397,14 @@ export async function transferLeg(
): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('transfer_leg', {
await sendProxyCommand('transfer_leg', {
source_call_id: sourceCallId,
leg_id: legId,
target_call_id: targetCallId,
} as any);
});
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] transfer_leg error: ${e?.message || e}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] transfer_leg error: ${errorMessage(error)}`);
return false;
}
}
@@ -431,15 +420,15 @@ export async function replaceLeg(
): Promise<string | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('replace_leg', {
const result = await sendProxyCommand('replace_leg', {
call_id: callId,
old_leg_id: oldLegId,
number,
provider_id: providerId,
} as any);
return (result as any)?.new_leg_id || null;
} catch (e: any) {
logFn?.(`[proxy-engine] replace_leg error: ${e?.message || e}`);
});
return result.new_leg_id || null;
} catch (error: unknown) {
logFn?.(`[proxy-engine] replace_leg error: ${errorMessage(error)}`);
return null;
}
}
@@ -457,16 +446,49 @@ export async function startInteraction(
): Promise<{ result: 'digit' | 'timeout' | 'cancelled'; digit?: string } | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('start_interaction', {
return await sendProxyCommand('start_interaction', {
call_id: callId,
leg_id: legId,
prompt_wav: promptWav,
expected_digits: expectedDigits,
timeout_ms: timeoutMs,
} as any);
return result as any;
} catch (e: any) {
logFn?.(`[proxy-engine] start_interaction error: ${e?.message || e}`);
});
} catch (error: unknown) {
logFn?.(`[proxy-engine] start_interaction error: ${errorMessage(error)}`);
return null;
}
}
/**
* Start a live TTS interaction on a specific leg. The first chunk is rendered
* up front and the rest streams into the mixer while playback is already live.
*/
export async function startTtsInteraction(
callId: string,
legId: string,
text: string,
expectedDigits: string,
timeoutMs: number,
options?: {
voice?: string;
model?: string;
voices?: string;
},
): Promise<{ result: 'digit' | 'timeout' | 'cancelled'; digit?: string } | null> {
if (!bridge || !initialized) return null;
try {
return await sendProxyCommand('start_tts_interaction', {
call_id: callId,
leg_id: legId,
text,
expected_digits: expectedDigits,
timeout_ms: timeoutMs,
voice: options?.voice,
model: options?.model,
voices: options?.voices,
});
} catch (error: unknown) {
logFn?.(`[proxy-engine] start_tts_interaction error: ${errorMessage(error)}`);
return null;
}
}
@@ -482,14 +504,14 @@ export async function addToolLeg(
): Promise<string | null> {
if (!bridge || !initialized) return null;
try {
const result = await bridge.sendCommand('add_tool_leg', {
const result = await sendProxyCommand('add_tool_leg', {
call_id: callId,
tool_type: toolType,
config,
} as any);
return (result as any)?.tool_leg_id || null;
} catch (e: any) {
logFn?.(`[proxy-engine] add_tool_leg error: ${e?.message || e}`);
});
return result.tool_leg_id || null;
} catch (error: unknown) {
logFn?.(`[proxy-engine] add_tool_leg error: ${errorMessage(error)}`);
return null;
}
}
@@ -500,13 +522,13 @@ export async function addToolLeg(
export async function removeToolLeg(callId: string, toolLegId: string): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('remove_tool_leg', {
await sendProxyCommand('remove_tool_leg', {
call_id: callId,
tool_leg_id: toolLegId,
} as any);
});
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] remove_tool_leg error: ${e?.message || e}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] remove_tool_leg error: ${errorMessage(error)}`);
return false;
}
}
@@ -522,15 +544,15 @@ export async function setLegMetadata(
): Promise<boolean> {
if (!bridge || !initialized) return false;
try {
await bridge.sendCommand('set_leg_metadata', {
await sendProxyCommand('set_leg_metadata', {
call_id: callId,
leg_id: legId,
key,
value,
} as any);
});
return true;
} catch (e: any) {
logFn?.(`[proxy-engine] set_leg_metadata error: ${e?.message || e}`);
} catch (error: unknown) {
logFn?.(`[proxy-engine] set_leg_metadata error: ${errorMessage(error)}`);
return false;
}
}
@@ -542,7 +564,7 @@ export async function setLegMetadata(
* dtmf_digit, recording_done, tool_recording_done, tool_transcription_done,
* leg_added, leg_removed, sip_unhandled
*/
export function onProxyEvent(event: string, handler: (data: any) => void): void {
export function onProxyEvent<K extends keyof TProxyEventMap>(event: K, handler: (data: TProxyEventMap[K]) => void): void {
if (!bridge) throw new Error('proxy engine not initialized');
bridge.on(`management:${event}`, handler);
}
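The new `onProxyEvent` signature keys handlers off `TProxyEventMap`, so each event name carries its own payload type. An illustrative, self-contained sketch of that keyed event-map pattern (the map entries here are assumed stand-ins, not the real `TProxyEventMap`):

```typescript
// Each event name maps to its payload shape; handler parameters are then
// checked at compile time instead of being `any`.
type TEventMapSketch = {
  call_ended: { call_id: string; reason: string; duration: number };
  device_registered: { device_id: string; address: string; port: number };
};

class TypedEmitter {
  private handlers = new Map<string, Array<(data: unknown) => void>>();

  on<K extends keyof TEventMapSketch & string>(event: K, handler: (data: TEventMapSketch[K]) => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler as (data: unknown) => void);
    this.handlers.set(event, list);
  }

  emit<K extends keyof TEventMapSketch & string>(event: K, data: TEventMapSketch[K]): void {
    for (const handler of this.handlers.get(event) ?? []) handler(data);
  }
}
```

With this shape, a handler registered for `call_ended` can read `data.reason` directly, and passing the wrong payload type fails at compile time.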

ts/runtime/proxy-events.ts (new file)

@@ -0,0 +1,210 @@
import { onProxyEvent } from '../proxybridge.ts';
import type { VoiceboxManager } from '../voicebox.ts';
import type { StatusStore } from './status-store.ts';
import type { IProviderMediaInfo, WebRtcLinkManager } from './webrtc-linking.ts';
export interface IRegisterProxyEventHandlersOptions {
log: (msg: string) => void;
statusStore: StatusStore;
voiceboxManager: VoiceboxManager;
webRtcLinks: WebRtcLinkManager;
getBrowserDeviceIds: () => string[];
sendToBrowserDevice: (deviceId: string, data: unknown) => boolean;
broadcast: (type: string, data: unknown) => void;
onLinkWebRtcSession: (callId: string, sessionId: string, media: IProviderMediaInfo) => void;
onCloseWebRtcSession: (sessionId: string) => void;
}
export function registerProxyEventHandlers(options: IRegisterProxyEventHandlersOptions): void {
const {
log,
statusStore,
voiceboxManager,
webRtcLinks,
getBrowserDeviceIds,
sendToBrowserDevice,
broadcast,
onLinkWebRtcSession,
onCloseWebRtcSession,
} = options;
const legMediaDetails = (data: {
codec?: string | null;
remoteMedia?: string | null;
rtpPort?: number | null;
}): string => {
const parts: string[] = [];
if (data.codec) {
parts.push(`codec=${data.codec}`);
}
if (data.remoteMedia) {
parts.push(`remote=${data.remoteMedia}`);
}
if (data.rtpPort !== undefined && data.rtpPort !== null) {
parts.push(`rtp=${data.rtpPort}`);
}
return parts.length ? ` ${parts.join(' ')}` : '';
};
onProxyEvent('provider_registered', (data) => {
const previous = statusStore.noteProviderRegistered(data);
if (previous) {
if (data.registered && !previous.wasRegistered) {
log(`[provider:${data.provider_id}] registered (publicIp=${data.public_ip})`);
} else if (!data.registered && previous.wasRegistered) {
log(`[provider:${data.provider_id}] registration lost`);
}
}
broadcast('registration', { providerId: data.provider_id, registered: data.registered });
});
onProxyEvent('device_registered', (data) => {
if (statusStore.noteDeviceRegistered(data)) {
log(`[registrar] ${data.display_name} registered from ${data.address}:${data.port}`);
}
});
onProxyEvent('incoming_call', (data) => {
log(`[call] incoming: ${data.from_uri} -> ${data.to_number} via ${data.provider_id} (${data.call_id})`);
statusStore.noteIncomingCall(data);
if (data.ring_browsers === false) {
return;
}
for (const deviceId of getBrowserDeviceIds()) {
sendToBrowserDevice(deviceId, {
type: 'webrtc-incoming',
callId: data.call_id,
from: data.from_uri,
deviceId,
});
}
});
onProxyEvent('outbound_device_call', (data) => {
log(`[call] outbound: device ${data.from_device} -> ${data.to_number} (${data.call_id})`);
statusStore.noteOutboundDeviceCall(data);
});
onProxyEvent('outbound_call_started', (data) => {
log(`[call] outbound started: ${data.call_id} -> ${data.number} via ${data.provider_id}`);
statusStore.noteOutboundCallStarted(data);
for (const deviceId of getBrowserDeviceIds()) {
sendToBrowserDevice(deviceId, {
type: 'webrtc-incoming',
callId: data.call_id,
from: data.number,
deviceId,
});
}
});
onProxyEvent('call_ringing', (data) => {
statusStore.noteCallRinging(data);
});
onProxyEvent('call_answered', (data) => {
if (statusStore.noteCallAnswered(data)) {
log(`[call] ${data.call_id} connected`);
}
if (!data.provider_media_addr || !data.provider_media_port) {
return;
}
const target = webRtcLinks.noteCallAnswered(data.call_id, {
addr: data.provider_media_addr,
port: data.provider_media_port,
sipPt: data.sip_pt ?? 9,
});
if (!target) {
log(`[webrtc] media info cached for call=${data.call_id}, waiting for session accept`);
return;
}
onLinkWebRtcSession(data.call_id, target.sessionId, target.media);
});
onProxyEvent('call_ended', (data) => {
if (statusStore.noteCallEnded(data)) {
log(`[call] ${data.call_id} ended: ${data.reason} (${data.duration}s)`);
}
broadcast('webrtc-call-ended', { callId: data.call_id });
const sessionId = webRtcLinks.cleanupCall(data.call_id);
if (sessionId) {
onCloseWebRtcSession(sessionId);
}
});
onProxyEvent('sip_unhandled', (data) => {
log(`[sip] unhandled ${data.method_or_status} Call-ID=${data.call_id?.slice(0, 20)} from=${data.from_addr}:${data.from_port}`);
});
onProxyEvent('leg_added', (data) => {
log(
`[leg] added: call=${data.call_id} leg=${data.leg_id} kind=${data.kind} state=${data.state}${legMediaDetails(data)}`,
);
statusStore.noteLegAdded(data);
});
onProxyEvent('leg_removed', (data) => {
log(`[leg] removed: call=${data.call_id} leg=${data.leg_id}`);
statusStore.noteLegRemoved(data);
});
onProxyEvent('leg_state_changed', (data) => {
log(
`[leg] state: call=${data.call_id} leg=${data.leg_id} -> ${data.state}${legMediaDetails(data)}`,
);
statusStore.noteLegStateChanged(data);
});
onProxyEvent('webrtc_ice_candidate', (data) => {
broadcast('webrtc-ice', {
sessionId: data.session_id,
candidate: {
candidate: data.candidate,
sdpMid: data.sdp_mid,
sdpMLineIndex: data.sdp_mline_index,
},
});
});
onProxyEvent('webrtc_state', (data) => {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} state=${data.state}`);
});
onProxyEvent('webrtc_track', (data) => {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} track=${data.kind} codec=${data.codec}`);
});
onProxyEvent('webrtc_audio_rx', (data) => {
if (data.packet_count === 1 || data.packet_count === 50) {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} browser audio rx #${data.packet_count}`);
}
});
onProxyEvent('voicemail_started', (data) => {
log(`[voicemail] started for call ${data.call_id} box=${data.voicebox_id || 'default'} caller=${data.caller_number}`);
});
onProxyEvent('recording_done', (data) => {
const boxId = data.voicebox_id || 'default';
log(`[voicemail] recording done: ${data.file_path} (${data.duration_ms}ms) box=${boxId} caller=${data.caller_number}`);
voiceboxManager.addMessage(boxId, {
callerNumber: data.caller_number || 'Unknown',
callerName: null,
fileName: data.file_path,
durationMs: data.duration_ms,
});
});
onProxyEvent('voicemail_error', (data) => {
log(`[voicemail] error: ${data.error} call=${data.call_id}`);
});
}
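The `incoming_call` handler above only fans out `webrtc-incoming` toasts when the matched route allows it: an explicit `ring_browsers: false` suppresses them, while an undefined flag (no matching route) keeps the pre-routing ring-all behavior. A standalone sketch of that gating rule (function name assumed for illustration):

```typescript
// Returns how many browser devices were notified. Only an explicit `false`
// suppresses the fan-out; undefined preserves the legacy default of ringing
// every registered browser.
function fanOutIncomingCall(
  ringBrowsers: boolean | undefined,
  browserDeviceIds: string[],
  send: (deviceId: string) => void,
): number {
  if (ringBrowsers === false) return 0;
  for (const deviceId of browserDeviceIds) send(deviceId);
  return browserDeviceIds.length;
}
```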

ts/runtime/status-store.ts (new file)

@@ -0,0 +1,326 @@
import type { IAppConfig } from '../config.ts';
import type {
ICallAnsweredEvent,
ICallEndedEvent,
ICallRingingEvent,
IDeviceRegisteredEvent,
IIncomingCallEvent,
ILegAddedEvent,
ILegRemovedEvent,
ILegStateChangedEvent,
IOutboundCallEvent,
IOutboundCallStartedEvent,
IProviderRegisteredEvent,
} from '../shared/proxy-events.ts';
import type {
IActiveCall,
ICallHistoryEntry,
IDeviceStatus,
IProviderStatus,
IStatusSnapshot,
TLegType,
} from '../shared/status.ts';
const MAX_HISTORY = 100;
const CODEC_NAMES: Record<number, string> = {
0: 'PCMU',
8: 'PCMA',
9: 'G.722',
111: 'Opus',
};
export class StatusStore {
private appConfig: IAppConfig;
private providerStatuses = new Map<string, IProviderStatus>();
private deviceStatuses = new Map<string, IDeviceStatus>();
private activeCalls = new Map<string, IActiveCall>();
private callHistory: ICallHistoryEntry[] = [];
constructor(appConfig: IAppConfig) {
this.appConfig = appConfig;
this.rebuildConfigState();
}
updateConfig(appConfig: IAppConfig): void {
this.appConfig = appConfig;
this.rebuildConfigState();
}
buildStatusSnapshot(
instanceId: string,
startTime: number,
browserDeviceIds: string[],
voicemailCounts: Record<string, number>,
): IStatusSnapshot {
const devices = [...this.deviceStatuses.values()];
for (const deviceId of browserDeviceIds) {
devices.push({
id: deviceId,
displayName: 'Browser',
address: null,
port: 0,
aor: null,
connected: true,
isBrowser: true,
});
}
return {
instanceId,
uptime: Math.floor((Date.now() - startTime) / 1000),
lanIp: this.appConfig.proxy.lanIp,
providers: [...this.providerStatuses.values()],
devices,
calls: [...this.activeCalls.values()].map((call) => ({
...call,
duration: Math.floor((Date.now() - call.startedAt) / 1000),
legs: [...call.legs.values()].map((leg) => ({
...leg,
pktSent: 0,
pktReceived: 0,
transcoding: false,
})),
})),
callHistory: this.callHistory,
contacts: this.appConfig.contacts || [],
voicemailCounts,
};
}
noteDashboardCallStarted(callId: string, number: string, providerId?: string): void {
this.activeCalls.set(callId, {
id: callId,
direction: 'outbound',
callerNumber: null,
calleeNumber: number,
providerUsed: providerId || null,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
}
noteProviderRegistered(data: IProviderRegisteredEvent): { wasRegistered: boolean } | null {
const provider = this.providerStatuses.get(data.provider_id);
if (!provider) {
return null;
}
const wasRegistered = provider.registered;
provider.registered = data.registered;
provider.publicIp = data.public_ip;
return { wasRegistered };
}
noteDeviceRegistered(data: IDeviceRegisteredEvent): boolean {
const device = this.deviceStatuses.get(data.device_id);
if (!device) {
return false;
}
device.address = data.address;
device.port = data.port;
device.aor = data.aor;
device.connected = true;
return true;
}
noteIncomingCall(data: IIncomingCallEvent): void {
this.activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'inbound',
callerNumber: data.from_uri,
calleeNumber: data.to_number,
providerUsed: data.provider_id,
state: 'ringing',
startedAt: Date.now(),
legs: new Map(),
});
}
noteOutboundDeviceCall(data: IOutboundCallEvent): void {
this.activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'outbound',
callerNumber: data.from_device,
calleeNumber: data.to_number,
providerUsed: null,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
}
noteOutboundCallStarted(data: IOutboundCallStartedEvent): void {
this.activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'outbound',
callerNumber: null,
calleeNumber: data.number,
providerUsed: data.provider_id,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
}
noteCallRinging(data: ICallRingingEvent): void {
const call = this.activeCalls.get(data.call_id);
if (call) {
call.state = 'ringing';
}
}
noteCallAnswered(data: ICallAnsweredEvent): boolean {
const call = this.activeCalls.get(data.call_id);
if (!call) {
return false;
}
call.state = 'connected';
if (data.provider_media_addr && data.provider_media_port) {
for (const leg of call.legs.values()) {
if (leg.type !== 'sip-provider') {
continue;
}
leg.remoteMedia = `${data.provider_media_addr}:${data.provider_media_port}`;
if (data.sip_pt !== undefined) {
leg.codec = CODEC_NAMES[data.sip_pt] || `PT${data.sip_pt}`;
}
break;
}
}
return true;
}
noteCallEnded(data: ICallEndedEvent): boolean {
const call = this.activeCalls.get(data.call_id);
if (!call) {
return false;
}
this.callHistory.unshift({
id: call.id,
direction: call.direction,
callerNumber: call.callerNumber,
calleeNumber: call.calleeNumber,
providerUsed: call.providerUsed,
startedAt: call.startedAt,
duration: data.duration,
legs: [...call.legs.values()].map((leg) => ({
id: leg.id,
type: leg.type,
state: leg.state,
codec: leg.codec,
rtpPort: leg.rtpPort,
remoteMedia: leg.remoteMedia,
metadata: leg.metadata || {},
})),
});
if (this.callHistory.length > MAX_HISTORY) {
this.callHistory.pop();
}
this.activeCalls.delete(data.call_id);
return true;
}
noteLegAdded(data: ILegAddedEvent): void {
const call = this.activeCalls.get(data.call_id);
if (!call) {
return;
}
call.legs.set(data.leg_id, {
id: data.leg_id,
type: data.kind,
state: data.state,
codec: data.codec ?? null,
rtpPort: data.rtpPort ?? null,
remoteMedia: data.remoteMedia ?? null,
metadata: data.metadata || {},
});
}
noteLegRemoved(data: ILegRemovedEvent): void {
this.activeCalls.get(data.call_id)?.legs.delete(data.leg_id);
}
noteLegStateChanged(data: ILegStateChangedEvent): void {
const call = this.activeCalls.get(data.call_id);
if (!call) {
return;
}
const existingLeg = call.legs.get(data.leg_id);
if (existingLeg) {
existingLeg.state = data.state;
if (data.codec !== undefined) {
existingLeg.codec = data.codec;
}
if (data.rtpPort !== undefined) {
existingLeg.rtpPort = data.rtpPort;
}
if (data.remoteMedia !== undefined) {
existingLeg.remoteMedia = data.remoteMedia;
}
if (data.metadata) {
existingLeg.metadata = data.metadata;
}
return;
}
call.legs.set(data.leg_id, {
id: data.leg_id,
type: this.inferLegType(data.leg_id),
state: data.state,
codec: data.codec ?? null,
rtpPort: data.rtpPort ?? null,
remoteMedia: data.remoteMedia ?? null,
metadata: data.metadata || {},
});
}
private rebuildConfigState(): void {
const nextProviderStatuses = new Map<string, IProviderStatus>();
for (const provider of this.appConfig.providers) {
const previous = this.providerStatuses.get(provider.id);
nextProviderStatuses.set(provider.id, {
id: provider.id,
displayName: provider.displayName,
registered: previous?.registered ?? false,
publicIp: previous?.publicIp ?? null,
});
}
this.providerStatuses = nextProviderStatuses;
const nextDeviceStatuses = new Map<string, IDeviceStatus>();
for (const device of this.appConfig.devices) {
const previous = this.deviceStatuses.get(device.id);
nextDeviceStatuses.set(device.id, {
id: device.id,
displayName: device.displayName,
address: previous?.address ?? null,
port: previous?.port ?? 0,
aor: previous?.aor ?? null,
connected: previous?.connected ?? false,
isBrowser: false,
});
}
this.deviceStatuses = nextDeviceStatuses;
}
private inferLegType(legId: string): TLegType {
if (legId.includes('-prov')) {
return 'sip-provider';
}
if (legId.includes('-dev')) {
return 'sip-device';
}
return 'webrtc';
}
}
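`noteCallEnded` caps the history list by pushing the newest entry to the front and evicting from the back once `MAX_HISTORY` is exceeded. A minimal sketch of that bounded-buffer pattern (the cap value here is illustrative; the store's constant is defined elsewhere in the module):

```typescript
// Sketch of the bounded call-history buffer: newest entry first, oldest evicted.
const MAX_HISTORY = 3; // illustrative cap; the store uses its own constant

const callHistory: string[] = [];

function recordEnded(callId: string): void {
  callHistory.unshift(callId); // newest goes to the front
  if (callHistory.length > MAX_HISTORY) {
    callHistory.pop(); // evict the oldest from the back
  }
}

for (const id of ['c1', 'c2', 'c3', 'c4']) {
  recordEnded(id);
}
console.log(callHistory); // [ 'c4', 'c3', 'c2' ]
```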


ts/runtime/webrtc-linking.ts Normal file

@@ -0,0 +1,66 @@
export interface IProviderMediaInfo {
addr: string;
port: number;
sipPt: number;
}
export interface IWebRtcLinkTarget {
sessionId: string;
media: IProviderMediaInfo;
}
export class WebRtcLinkManager {
private sessionToCall = new Map<string, string>();
private callToSession = new Map<string, string>();
private pendingCallMedia = new Map<string, IProviderMediaInfo>();
acceptCall(callId: string, sessionId: string): IProviderMediaInfo | null {
const previousCallId = this.sessionToCall.get(sessionId);
if (previousCallId && previousCallId !== callId) {
this.callToSession.delete(previousCallId);
}
const previousSessionId = this.callToSession.get(callId);
if (previousSessionId && previousSessionId !== sessionId) {
this.sessionToCall.delete(previousSessionId);
}
this.sessionToCall.set(sessionId, callId);
this.callToSession.set(callId, sessionId);
const pendingMedia = this.pendingCallMedia.get(callId) ?? null;
if (pendingMedia) {
this.pendingCallMedia.delete(callId);
}
return pendingMedia;
}
noteCallAnswered(callId: string, media: IProviderMediaInfo): IWebRtcLinkTarget | null {
const sessionId = this.callToSession.get(callId);
if (!sessionId) {
this.pendingCallMedia.set(callId, media);
return null;
}
return { sessionId, media };
}
removeSession(sessionId: string): string | null {
const callId = this.sessionToCall.get(sessionId) ?? null;
this.sessionToCall.delete(sessionId);
if (callId) {
this.callToSession.delete(callId);
}
return callId;
}
cleanupCall(callId: string): string | null {
const sessionId = this.callToSession.get(callId) ?? null;
this.callToSession.delete(callId);
this.pendingCallMedia.delete(callId);
if (sessionId) {
this.sessionToCall.delete(sessionId);
}
return sessionId;
}
}
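`WebRtcLinkManager` resolves a race: the browser's accept and the provider's `call_answered` media info can arrive in either order, and whichever side arrives second completes the link. A stripped-down sketch of that handshake using plain Maps (function and field names here are illustrative, not the manager's API):

```typescript
// Sketch of the accept/answer race: linking fires once both sides have
// arrived, regardless of order. Names are illustrative.
interface MediaInfo {
  addr: string;
  port: number;
}

const callToSession = new Map<string, string>();
const pendingMedia = new Map<string, MediaInfo>();

// Browser accepted the call: returns parked media if the answer came first.
function onAccept(callId: string, sessionId: string): MediaInfo | null {
  callToSession.set(callId, sessionId);
  const media = pendingMedia.get(callId) ?? null;
  if (media) {
    pendingMedia.delete(callId);
  }
  return media; // non-null means: link immediately
}

// Provider answered: returns the session to link, or parks the media.
function onAnswered(callId: string, media: MediaInfo): string | null {
  const sessionId = callToSession.get(callId);
  if (!sessionId) {
    pendingMedia.set(callId, media); // answer arrived first
    return null;
  }
  return sessionId;
}

// Answer-first ordering: media is parked, then released on accept.
console.log(onAnswered('c1', { addr: '10.0.0.1', port: 4000 })); // null
console.log(onAccept('c1', 's1')); // { addr: '10.0.0.1', port: 4000 }
```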

ts/shared/proxy-events.ts Normal file

@@ -0,0 +1,151 @@
import type { TLegType } from './status.ts';
export interface IIncomingCallEvent {
call_id: string;
from_uri: string;
to_number: string;
provider_id: string;
ring_browsers?: boolean;
}
export interface IOutboundCallEvent {
call_id: string;
from_device: string | null;
to_number: string;
}
export interface IOutboundCallStartedEvent {
call_id: string;
number: string;
provider_id: string;
}
export interface ICallRingingEvent {
call_id: string;
}
export interface ICallAnsweredEvent {
call_id: string;
provider_media_addr?: string;
provider_media_port?: number;
sip_pt?: number;
}
export interface ICallEndedEvent {
call_id: string;
reason: string;
duration: number;
from_side?: string;
}
export interface IProviderRegisteredEvent {
provider_id: string;
registered: boolean;
public_ip: string | null;
}
export interface IDeviceRegisteredEvent {
device_id: string;
display_name: string;
address: string;
port: number;
aor: string;
expires: number;
}
export interface ISipUnhandledEvent {
method_or_status: string;
call_id?: string;
from_addr: string;
from_port: number;
}
export interface ILegAddedEvent {
call_id: string;
leg_id: string;
kind: TLegType;
state: string;
codec?: string | null;
rtpPort?: number | null;
remoteMedia?: string | null;
metadata?: Record<string, unknown>;
}
export interface ILegRemovedEvent {
call_id: string;
leg_id: string;
}
export interface ILegStateChangedEvent {
call_id: string;
leg_id: string;
state: string;
codec?: string | null;
rtpPort?: number | null;
remoteMedia?: string | null;
metadata?: Record<string, unknown>;
}
export interface IWebRtcIceCandidateEvent {
session_id: string;
candidate: string;
sdp_mid?: string;
sdp_mline_index?: number;
}
export interface IWebRtcStateEvent {
session_id?: string;
state: string;
}
export interface IWebRtcTrackEvent {
session_id?: string;
kind: string;
codec: string;
}
export interface IWebRtcAudioRxEvent {
session_id?: string;
packet_count: number;
}
export interface IVoicemailStartedEvent {
call_id: string;
voicebox_id?: string;
caller_number?: string;
}
export interface IRecordingDoneEvent {
call_id?: string;
voicebox_id?: string;
file_path: string;
duration_ms: number;
caller_number?: string;
}
export interface IVoicemailErrorEvent {
call_id: string;
error: string;
}
export type TProxyEventMap = {
provider_registered: IProviderRegisteredEvent;
device_registered: IDeviceRegisteredEvent;
incoming_call: IIncomingCallEvent;
outbound_device_call: IOutboundCallEvent;
outbound_call_started: IOutboundCallStartedEvent;
call_ringing: ICallRingingEvent;
call_answered: ICallAnsweredEvent;
call_ended: ICallEndedEvent;
sip_unhandled: ISipUnhandledEvent;
leg_added: ILegAddedEvent;
leg_removed: ILegRemovedEvent;
leg_state_changed: ILegStateChangedEvent;
webrtc_ice_candidate: IWebRtcIceCandidateEvent;
webrtc_state: IWebRtcStateEvent;
webrtc_track: IWebRtcTrackEvent;
webrtc_audio_rx: IWebRtcAudioRxEvent;
voicemail_started: IVoicemailStartedEvent;
recording_done: IRecordingDoneEvent;
voicemail_error: IVoicemailErrorEvent;
};
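`TProxyEventMap` exists so a generic subscriber can infer the payload type from the event name alone. A hedged sketch of such a typed dispatcher (the real `onProxyEvent` lives in proxybridge.ts and may be shaped differently; the two-event map below is a reduced stand-in):

```typescript
// Illustrative typed event bus keyed by an event map.
type EventMap = {
  call_ringing: { call_id: string };
  call_ended: { call_id: string; reason: string; duration: number };
};

type Handler = (data: unknown) => void;
const handlers = new Map<keyof EventMap, Handler[]>();

function onProxyEvent<K extends keyof EventMap>(
  name: K,
  handler: (data: EventMap[K]) => void,
): void {
  const list = handlers.get(name) ?? [];
  list.push(handler as Handler); // erased internally, fully typed at call sites
  handlers.set(name, list);
}

function emit<K extends keyof EventMap>(name: K, data: EventMap[K]): void {
  for (const handler of handlers.get(name) ?? []) {
    handler(data);
  }
}

let seen = '';
onProxyEvent('call_ended', (data) => {
  // `data` is inferred as the call_ended payload here; no casts needed.
  seen = `${data.call_id}:${data.duration}s`;
});
emit('call_ended', { call_id: 'c1', reason: 'bye', duration: 12 });
console.log(seen); // c1:12s
```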

ts/shared/status.ts Normal file

@@ -0,0 +1,93 @@
import type { IContact } from '../config.ts';
export type TLegType = 'sip-device' | 'sip-provider' | 'webrtc' | 'tool';
export type TCallDirection = 'inbound' | 'outbound';
export interface IProviderStatus {
id: string;
displayName: string;
registered: boolean;
publicIp: string | null;
}
export interface IDeviceStatus {
id: string;
displayName: string;
address: string | null;
port: number;
aor: string | null;
connected: boolean;
isBrowser: boolean;
}
export interface IActiveLeg {
id: string;
type: TLegType;
state: string;
codec: string | null;
rtpPort: number | null;
remoteMedia: string | null;
metadata: Record<string, unknown>;
}
export interface IActiveCall {
id: string;
direction: TCallDirection;
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
state: string;
startedAt: number;
legs: Map<string, IActiveLeg>;
}
export interface IHistoryLeg {
id: string;
type: TLegType;
state: string;
codec: string | null;
rtpPort: number | null;
remoteMedia: string | null;
metadata: Record<string, unknown>;
}
export interface ICallHistoryEntry {
id: string;
direction: TCallDirection;
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
startedAt: number;
duration: number;
legs: IHistoryLeg[];
}
export interface ILegStatus extends IActiveLeg {
pktSent: number;
pktReceived: number;
transcoding: boolean;
}
export interface ICallStatus {
id: string;
direction: TCallDirection;
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
state: string;
startedAt: number;
duration: number;
legs: ILegStatus[];
}
export interface IStatusSnapshot {
instanceId: string;
uptime: number;
lanIp: string;
providers: IProviderStatus[];
devices: IDeviceStatus[];
calls: ICallStatus[];
callHistory: ICallHistoryEntry[];
contacts: IContact[];
voicemailCounts: Record<string, number>;
}
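`ICallStatus` reports `duration` as a derived value rather than storing it; the snapshot builder computes it from `startedAt` at read time. A minimal sketch of that derivation (injecting the clock as a parameter is an assumption for testability):

```typescript
// Sketch: derive a call's live duration for a status snapshot.
// `startedAt` is epoch milliseconds, matching IActiveCall above.
interface ActiveCallLike {
  id: string;
  startedAt: number;
}

function withDuration(call: ActiveCallLike, nowMs: number): { id: string; duration: number } {
  return {
    id: call.id,
    duration: Math.floor((nowMs - call.startedAt) / 1000), // whole seconds elapsed
  };
}

const snapshot = withDuration({ id: 'c1', startedAt: 0 }, 12_500);
console.log(snapshot.duration); // 12
```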


@@ -1,36 +1,20 @@
/**
* SIP proxy — entry point.
* SIP proxy bootstrap.
*
* Spawns the Rust proxy-engine which handles ALL SIP protocol mechanics.
* TypeScript is the control plane:
* - Loads config and pushes it to Rust
* - Receives high-level events (incoming calls, registration, etc.)
* - Drives the web dashboard
* - Manages IVR, voicemail, announcements
* - Handles WebRTC browser signaling (forwarded to Rust in Phase 2)
*
* No raw SIP ever touches TypeScript.
* Spawns the Rust proxy-engine, wires runtime state/event handling,
* and starts the web dashboard plus browser signaling layer.
*/
import fs from 'node:fs';
import path from 'node:path';
import { loadConfig } from './config.ts';
import type { IAppConfig } from './config.ts';
import { loadConfig, type IAppConfig } from './config.ts';
import { broadcastWs, initWebUi } from './frontend.ts';
import {
initWebRtcSignaling,
sendToBrowserDevice,
getAllBrowserDeviceIds,
getBrowserDeviceWs,
} from './webrtcbridge.ts';
import { initAnnouncement } from './announcement.ts';
import { PromptCache } from './call/prompt-cache.ts';
import { initWebRtcSignaling, getAllBrowserDeviceIds, sendToBrowserDevice } from './webrtcbridge.ts';
import { VoiceboxManager } from './voicebox.ts';
import {
initProxyEngine,
configureProxyEngine,
onProxyEvent,
hangupCall,
makeCall,
shutdownProxyEngine,
@@ -38,648 +22,200 @@ import {
webrtcIce,
webrtcLink,
webrtcClose,
addLeg,
removeLeg,
} from './proxybridge.ts';
import type {
IIncomingCallEvent,
IOutboundCallEvent,
ICallEndedEvent,
IProviderRegisteredEvent,
IDeviceRegisteredEvent,
} from './proxybridge.ts';
// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------
import { registerProxyEventHandlers } from './runtime/proxy-events.ts';
import { StatusStore } from './runtime/status-store.ts';
import { WebRtcLinkManager, type IProviderMediaInfo } from './runtime/webrtc-linking.ts';
let appConfig: IAppConfig = loadConfig();
const LOG_PATH = path.join(process.cwd(), 'sip_trace.log');
// ---------------------------------------------------------------------------
// Logging
// ---------------------------------------------------------------------------
const startTime = Date.now();
const instanceId = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
const statusStore = new StatusStore(appConfig);
const webRtcLinks = new WebRtcLinkManager();
const voiceboxManager = new VoiceboxManager(log);
voiceboxManager.init(appConfig.voiceboxes ?? []);
initWebRtcSignaling({ log });
function now(): string {
return new Date().toISOString().replace('T', ' ').slice(0, 19);
}
function log(msg: string): void {
const line = `${now()} ${msg}\n`;
function log(message: string): void {
const line = `${now()} ${message}\n`;
fs.appendFileSync(LOG_PATH, line);
process.stdout.write(line);
broadcastWs('log', { message: msg });
broadcastWs('log', { message });
}
// ---------------------------------------------------------------------------
// Shadow state — maintained from Rust events for the dashboard
// ---------------------------------------------------------------------------
interface IProviderStatus {
id: string;
displayName: string;
registered: boolean;
publicIp: string | null;
function errorMessage(error: unknown): string {
return error instanceof Error ? error.message : String(error);
}
interface IDeviceStatus {
id: string;
displayName: string;
address: string | null;
port: number;
connected: boolean;
isBrowser: boolean;
}
interface IActiveLeg {
id: string;
type: 'sip-device' | 'sip-provider' | 'webrtc' | 'tool';
state: string;
codec: string | null;
rtpPort: number | null;
remoteMedia: string | null;
metadata: Record<string, unknown>;
}
interface IActiveCall {
id: string;
direction: string;
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
state: string;
startedAt: number;
legs: Map<string, IActiveLeg>;
}
interface IHistoryLeg {
id: string;
type: string;
metadata: Record<string, unknown>;
}
interface ICallHistoryEntry {
id: string;
direction: string;
callerNumber: string | null;
calleeNumber: string | null;
startedAt: number;
duration: number;
legs: IHistoryLeg[];
}
const providerStatuses = new Map<string, IProviderStatus>();
const deviceStatuses = new Map<string, IDeviceStatus>();
const activeCalls = new Map<string, IActiveCall>();
const callHistory: ICallHistoryEntry[] = [];
const MAX_HISTORY = 100;
// WebRTC session ↔ call linking state.
// Both pieces (session accept + call media info) can arrive in any order.
const webrtcSessionToCall = new Map<string, string>(); // sessionId → callId
const webrtcCallToSession = new Map<string, string>(); // callId → sessionId
const pendingCallMedia = new Map<string, { addr: string; port: number; sipPt: number }>(); // callId → provider media info
// Initialize provider statuses from config (all start as unregistered).
for (const p of appConfig.providers) {
providerStatuses.set(p.id, {
id: p.id,
displayName: p.displayName,
registered: false,
publicIp: null,
});
}
// Initialize device statuses from config.
for (const d of appConfig.devices) {
deviceStatuses.set(d.id, {
id: d.id,
displayName: d.displayName,
address: null,
port: 0,
connected: false,
isBrowser: false,
});
}
// ---------------------------------------------------------------------------
// Initialize subsystems
// ---------------------------------------------------------------------------
const promptCache = new PromptCache(log);
const voiceboxManager = new VoiceboxManager(log);
voiceboxManager.init(appConfig.voiceboxes ?? []);
// WebRTC signaling (browser device registration).
initWebRtcSignaling({ log });
// ---------------------------------------------------------------------------
// Status snapshot (fed to web dashboard)
// ---------------------------------------------------------------------------
function getStatus() {
// Merge SIP devices (from Rust) + browser devices (from TS WebSocket).
const devices = [...deviceStatuses.values()];
for (const bid of getAllBrowserDeviceIds()) {
devices.push({
id: bid,
displayName: 'Browser',
address: null,
port: 0,
connected: true,
isBrowser: true,
});
}
function buildProxyConfig(config: IAppConfig): Record<string, unknown> {
return {
instanceId,
uptime: Math.floor((Date.now() - startTime) / 1000),
lanIp: appConfig.proxy.lanIp,
providers: [...providerStatuses.values()],
devices,
calls: [...activeCalls.values()].map((c) => ({
...c,
duration: Math.floor((Date.now() - c.startedAt) / 1000),
legs: [...c.legs.values()].map((l) => ({
id: l.id,
type: l.type,
state: l.state,
codec: l.codec,
rtpPort: l.rtpPort,
remoteMedia: l.remoteMedia,
metadata: l.metadata || {},
pktSent: 0,
pktReceived: 0,
transcoding: false,
})),
})),
callHistory,
contacts: appConfig.contacts || [],
voicemailCounts: voiceboxManager.getAllUnheardCounts(),
proxy: config.proxy,
providers: config.providers,
devices: config.devices,
routing: config.routing,
voiceboxes: config.voiceboxes ?? [],
ivr: config.ivr,
};
}
// ---------------------------------------------------------------------------
// Start Rust proxy engine
// ---------------------------------------------------------------------------
function getStatus() {
return statusStore.buildStatusSnapshot(
instanceId,
startTime,
getAllBrowserDeviceIds(),
voiceboxManager.getAllUnheardCounts(),
);
}
function requestWebRtcLink(callId: string, sessionId: string, media: IProviderMediaInfo): void {
log(`[webrtc] linking session=${sessionId.slice(0, 8)} to call=${callId} media=${media.addr}:${media.port} pt=${media.sipPt}`);
void webrtcLink(sessionId, callId, media.addr, media.port, media.sipPt).then((ok) => {
log(`[webrtc] link result: ${ok}`);
});
}
async function configureRuntime(config: IAppConfig): Promise<boolean> {
return configureProxyEngine(buildProxyConfig(config));
}
async function reloadConfig(): Promise<void> {
try {
const previousConfig = appConfig;
const nextConfig = loadConfig();
appConfig = nextConfig;
statusStore.updateConfig(nextConfig);
voiceboxManager.init(nextConfig.voiceboxes ?? []);
if (nextConfig.proxy.lanPort !== previousConfig.proxy.lanPort) {
log('[config] proxy.lanPort changed; restart required for SIP socket rebinding');
}
if (nextConfig.proxy.webUiPort !== previousConfig.proxy.webUiPort) {
log('[config] proxy.webUiPort changed; restart required for web UI rebinding');
}
const configured = await configureRuntime(nextConfig);
if (configured) {
log('[config] reloaded - proxy engine reconfigured');
} else {
log('[config] reload failed - proxy engine rejected config');
}
} catch (error: unknown) {
log(`[config] reload failed: ${errorMessage(error)}`);
}
}
async function startProxyEngine(): Promise<void> {
const ok = await initProxyEngine(log);
if (!ok) {
const started = await initProxyEngine(log);
if (!started) {
log('[FATAL] failed to start proxy engine');
process.exit(1);
}
// Subscribe to events from Rust BEFORE sending configure.
onProxyEvent('provider_registered', (data: IProviderRegisteredEvent) => {
const ps = providerStatuses.get(data.provider_id);
if (ps) {
const wasRegistered = ps.registered;
ps.registered = data.registered;
ps.publicIp = data.public_ip;
if (data.registered && !wasRegistered) {
log(`[provider:${data.provider_id}] registered (publicIp=${data.public_ip})`);
} else if (!data.registered && wasRegistered) {
log(`[provider:${data.provider_id}] registration lost`);
}
broadcastWs('registration', { providerId: data.provider_id, registered: data.registered });
}
});
onProxyEvent('device_registered', (data: IDeviceRegisteredEvent) => {
const ds = deviceStatuses.get(data.device_id);
if (ds) {
ds.address = data.address;
ds.port = data.port;
ds.connected = true;
log(`[registrar] ${data.display_name} registered from ${data.address}:${data.port}`);
}
});
onProxyEvent('incoming_call', (data: IIncomingCallEvent) => {
log(`[call] incoming: ${data.from_uri} → ${data.to_number} via ${data.provider_id} (${data.call_id})`);
activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'inbound',
callerNumber: data.from_uri,
calleeNumber: data.to_number,
providerUsed: data.provider_id,
state: 'ringing',
startedAt: Date.now(),
legs: new Map(),
});
// Notify browsers of the incoming call, but only if the matched inbound
// route asked for it. `ring_browsers !== false` preserves today's
// ring-by-default behavior for any Rust release that predates this
// field or for the fallback "no route matched" case (where Rust still
// sends `true`). Note: this is an informational toast — browsers do
// NOT race the SIP device to answer. First-to-answer-wins requires
// a multi-leg fork which is not yet implemented.
if (data.ring_browsers !== false) {
const browserIds = getAllBrowserDeviceIds();
for (const bid of browserIds) {
sendToBrowserDevice(bid, {
type: 'webrtc-incoming',
callId: data.call_id,
from: data.from_uri,
deviceId: bid,
});
}
}
});
onProxyEvent('outbound_device_call', (data: IOutboundCallEvent) => {
log(`[call] outbound: device ${data.from_device} → ${data.to_number} (${data.call_id})`);
activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'outbound',
callerNumber: data.from_device,
calleeNumber: data.to_number,
providerUsed: null,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
});
onProxyEvent('outbound_call_started', (data: any) => {
log(`[call] outbound started: ${data.call_id} → ${data.number} via ${data.provider_id}`);
activeCalls.set(data.call_id, {
id: data.call_id,
direction: 'outbound',
callerNumber: null,
calleeNumber: data.number,
providerUsed: data.provider_id,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
// Notify all browser devices — they can connect via WebRTC to listen/talk.
const browserIds = getAllBrowserDeviceIds();
for (const bid of browserIds) {
sendToBrowserDevice(bid, {
type: 'webrtc-incoming',
callId: data.call_id,
from: data.number,
deviceId: bid,
});
}
});
onProxyEvent('call_ringing', (data: { call_id: string }) => {
const call = activeCalls.get(data.call_id);
if (call) call.state = 'ringing';
});
onProxyEvent('call_answered', (data: { call_id: string; provider_media_addr?: string; provider_media_port?: number; sip_pt?: number }) => {
const call = activeCalls.get(data.call_id);
if (call) {
call.state = 'connected';
log(`[call] ${data.call_id} connected`);
// Enrich provider leg with media info from the answered event.
if (data.provider_media_addr && data.provider_media_port) {
for (const leg of call.legs.values()) {
if (leg.type === 'sip-provider') {
leg.remoteMedia = `${data.provider_media_addr}:${data.provider_media_port}`;
if (data.sip_pt !== undefined) {
const codecNames: Record<number, string> = { 0: 'PCMU', 8: 'PCMA', 9: 'G.722', 111: 'Opus' };
leg.codec = codecNames[data.sip_pt] || `PT${data.sip_pt}`;
}
break;
}
}
}
}
// Try to link WebRTC session to this call for audio bridging.
if (data.provider_media_addr && data.provider_media_port) {
const sessionId = webrtcCallToSession.get(data.call_id);
if (sessionId) {
// Both session and media info available — link now.
const sipPt = data.sip_pt ?? 9;
log(`[webrtc] linking session=${sessionId.slice(0, 8)} to call=${data.call_id} media=${data.provider_media_addr}:${data.provider_media_port} pt=${sipPt}`);
webrtcLink(sessionId, data.call_id, data.provider_media_addr, data.provider_media_port, sipPt).then((ok) => {
log(`[webrtc] link result: ${ok}`);
});
} else {
// Session not yet accepted — store media info for when it arrives.
pendingCallMedia.set(data.call_id, {
addr: data.provider_media_addr,
port: data.provider_media_port,
sipPt: data.sip_pt ?? 9,
});
log(`[webrtc] media info cached for call=${data.call_id}, waiting for session accept`);
}
}
});
onProxyEvent('call_ended', (data: ICallEndedEvent) => {
const call = activeCalls.get(data.call_id);
if (call) {
log(`[call] ${data.call_id} ended: ${data.reason} (${data.duration}s)`);
// Snapshot legs with metadata for history.
const historyLegs: IHistoryLeg[] = [];
for (const [, leg] of call.legs) {
historyLegs.push({
id: leg.id,
type: leg.type,
metadata: leg.metadata || {},
});
}
// Move to history.
callHistory.unshift({
id: call.id,
direction: call.direction,
callerNumber: call.callerNumber,
calleeNumber: call.calleeNumber,
startedAt: call.startedAt,
duration: data.duration,
legs: historyLegs,
});
if (callHistory.length > MAX_HISTORY) callHistory.pop();
activeCalls.delete(data.call_id);
// Notify browser(s) that the call ended.
broadcastWs('webrtc-call-ended', { callId: data.call_id });
// Clean up WebRTC session mappings.
const sessionId = webrtcCallToSession.get(data.call_id);
if (sessionId) {
webrtcCallToSession.delete(data.call_id);
webrtcSessionToCall.delete(sessionId);
webrtcClose(sessionId).catch(() => {});
}
pendingCallMedia.delete(data.call_id);
}
});
onProxyEvent('sip_unhandled', (data: any) => {
log(`[sip] unhandled ${data.method_or_status} Call-ID=${data.call_id?.slice(0, 20)} from=${data.from_addr}:${data.from_port}`);
});
// Leg events (multiparty) — update shadow state so the dashboard shows legs.
onProxyEvent('leg_added', (data: any) => {
log(`[leg] added: call=${data.call_id} leg=${data.leg_id} kind=${data.kind} state=${data.state}`);
const call = activeCalls.get(data.call_id);
if (call) {
call.legs.set(data.leg_id, {
id: data.leg_id,
type: data.kind,
state: data.state,
codec: data.codec ?? null,
rtpPort: data.rtpPort ?? null,
remoteMedia: data.remoteMedia ?? null,
metadata: data.metadata || {},
});
}
});
onProxyEvent('leg_removed', (data: any) => {
log(`[leg] removed: call=${data.call_id} leg=${data.leg_id}`);
activeCalls.get(data.call_id)?.legs.delete(data.leg_id);
});
onProxyEvent('leg_state_changed', (data: any) => {
log(`[leg] state: call=${data.call_id} leg=${data.leg_id} → ${data.state}`);
const call = activeCalls.get(data.call_id);
if (!call) return;
const leg = call.legs.get(data.leg_id);
if (leg) {
leg.state = data.state;
if (data.metadata) leg.metadata = data.metadata;
} else {
// Initial legs (provider/device) don't emit leg_added — create on first state change.
const legId: string = data.leg_id;
const type = legId.includes('-prov') ? 'sip-provider' : legId.includes('-dev') ? 'sip-device' : 'webrtc';
call.legs.set(data.leg_id, {
id: data.leg_id,
type,
state: data.state,
codec: null,
rtpPort: null,
remoteMedia: null,
metadata: data.metadata || {},
});
}
});
// WebRTC events from Rust — forward ICE candidates to browser via WebSocket.
onProxyEvent('webrtc_ice_candidate', (data: any) => {
// Find the browser's WebSocket by session ID and send the ICE candidate.
broadcastWs('webrtc-ice', {
sessionId: data.session_id,
candidate: { candidate: data.candidate, sdpMid: data.sdp_mid, sdpMLineIndex: data.sdp_mline_index },
});
});
onProxyEvent('webrtc_state', (data: any) => {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} state=${data.state}`);
});
onProxyEvent('webrtc_track', (data: any) => {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} track=${data.kind} codec=${data.codec}`);
});
onProxyEvent('webrtc_audio_rx', (data: any) => {
if (data.packet_count === 1 || data.packet_count === 50) {
log(`[webrtc] session=${data.session_id?.slice(0, 8)} browser audio rx #${data.packet_count}`);
}
});
// Voicemail events.
onProxyEvent('voicemail_started', (data: any) => {
log(`[voicemail] started for call ${data.call_id} caller=${data.caller_number}`);
});
onProxyEvent('recording_done', (data: any) => {
log(`[voicemail] recording done: ${data.file_path} (${data.duration_ms}ms) caller=${data.caller_number}`);
// Save voicemail metadata via VoiceboxManager.
voiceboxManager.addMessage('default', {
callerNumber: data.caller_number || 'Unknown',
callerName: null,
fileName: data.file_path,
durationMs: data.duration_ms,
});
});
onProxyEvent('voicemail_error', (data: any) => {
log(`[voicemail] error: ${data.error} call=${data.call_id}`);
});
// Send full config to Rust — this binds the SIP socket and starts registrations.
const configured = await configureProxyEngine({
proxy: appConfig.proxy,
providers: appConfig.providers,
devices: appConfig.devices,
routing: appConfig.routing,
registerProxyEventHandlers({
log,
statusStore,
voiceboxManager,
webRtcLinks,
getBrowserDeviceIds: getAllBrowserDeviceIds,
sendToBrowserDevice,
broadcast: broadcastWs,
onLinkWebRtcSession: requestWebRtcLink,
onCloseWebRtcSession: (sessionId) => {
void webrtcClose(sessionId);
},
});
const configured = await configureRuntime(appConfig);
if (!configured) {
log('[FATAL] failed to configure proxy engine');
process.exit(1);
}
const providerList = appConfig.providers.map((p) => p.displayName).join(', ');
const deviceList = appConfig.devices.map((d) => d.displayName).join(', ');
const providerList = appConfig.providers.map((provider) => provider.displayName).join(', ');
const deviceList = appConfig.devices.map((device) => device.displayName).join(', ');
log(`proxy engine started | LAN ${appConfig.proxy.lanIp}:${appConfig.proxy.lanPort} | providers: ${providerList} | devices: ${deviceList}`);
// Generate TTS audio (WAV files on disk, played by Rust audio_player).
try {
await initAnnouncement(log);
// Pre-generate prompts.
await promptCache.generateBeep('voicemail-beep', 1000, 500, 8000);
for (const vb of appConfig.voiceboxes ?? []) {
if (!vb.enabled) continue;
const promptId = `voicemail-greeting-${vb.id}`;
if (vb.greetingWavPath) {
await promptCache.loadWavPrompt(promptId, vb.greetingWavPath);
} else {
const text = vb.greetingText || 'The person you are trying to reach is not available. Please leave a message after the tone.';
await promptCache.generatePrompt(promptId, text, vb.greetingVoice || 'af_bella');
}
}
if (appConfig.ivr?.enabled) {
for (const menu of appConfig.ivr.menus) {
await promptCache.generatePrompt(`ivr-menu-${menu.id}`, menu.promptText, menu.promptVoice || 'af_bella');
}
}
log(`[startup] prompts cached: ${promptCache.listIds().join(', ') || 'none'}`);
} catch (e) {
log(`[tts] init failed: ${e}`);
}
}
// ---------------------------------------------------------------------------
// Web UI
// ---------------------------------------------------------------------------
initWebUi(
initWebUi({
port: appConfig.proxy.webUiPort,
getStatus,
log,
(number, deviceId, providerId) => {
// Outbound calls from dashboard — send make_call command to Rust.
onStartCall: (number, deviceId, providerId) => {
log(`[dashboard] start call: ${number} device=${deviceId || 'any'} provider=${providerId || 'auto'}`);
// Fire-and-forget — the async result comes via events.
makeCall(number, deviceId, providerId).then((callId) => {
void makeCall(number, deviceId, providerId).then((callId) => {
if (callId) {
log(`[dashboard] call started: ${callId}`);
activeCalls.set(callId, {
id: callId,
direction: 'outbound',
callerNumber: null,
calleeNumber: number,
providerUsed: providerId || null,
state: 'setting-up',
startedAt: Date.now(),
legs: new Map(),
});
statusStore.noteDashboardCallStarted(callId, number, providerId);
} else {
log(`[dashboard] call failed for ${number}`);
}
});
// Return a temporary ID so the frontend doesn't show "failed" immediately.
return { id: `pending-${Date.now()}` };
},
(callId) => {
hangupCall(callId);
onHangupCall: (callId) => {
void hangupCall(callId);
return true;
},
() => {
// Config saved — reconfigure Rust engine.
try {
const fresh = loadConfig();
Object.assign(appConfig, fresh);
// Update shadow state.
for (const p of fresh.providers) {
if (!providerStatuses.has(p.id)) {
providerStatuses.set(p.id, {
id: p.id, displayName: p.displayName, registered: false, publicIp: null,
});
}
}
for (const d of fresh.devices) {
if (!deviceStatuses.has(d.id)) {
deviceStatuses.set(d.id, {
id: d.id, displayName: d.displayName, address: null, port: 0, connected: false, isBrowser: false,
});
}
}
// Re-send config to Rust.
configureProxyEngine({
proxy: fresh.proxy,
providers: fresh.providers,
devices: fresh.devices,
routing: fresh.routing,
}).then((ok) => {
if (ok) log('[config] reloaded — proxy engine reconfigured');
else log('[config] reload failed — proxy engine rejected config');
});
} catch (e: any) {
log(`[config] reload failed: ${e.message}`);
}
},
undefined, // callManager — legacy, replaced by Rust proxy-engine
voiceboxManager, // voiceboxManager
// WebRTC signaling → forwarded to Rust proxy-engine.
async (sessionId, sdp, ws) => {
onConfigSaved: reloadConfig,
voiceboxManager,
onWebRtcOffer: async (sessionId, sdp, ws) => {
log(`[webrtc] offer from browser session=${sessionId.slice(0, 8)} sdp_type=${typeof sdp} sdp_len=${sdp?.length || 0}`);
if (!sdp || typeof sdp !== 'string' || sdp.length < 10) {
log(`[webrtc] WARNING: invalid SDP (type=${typeof sdp}), skipping offer`);
return;
}
log(`[webrtc] sending offer to Rust (${sdp.length}b)...`);
const result = await webrtcOffer(sessionId, sdp);
log(`[webrtc] Rust result: ${JSON.stringify(result)?.slice(0, 200)}`);
if (result?.sdp) {
ws.send(JSON.stringify({ type: 'webrtc-answer', sessionId, sdp: result.sdp }));
log(`[webrtc] answer sent to browser session=${sessionId.slice(0, 8)}`);
} else {
log(`[webrtc] ERROR: no answer SDP from Rust`);
return;
}
log('[webrtc] ERROR: no answer SDP from Rust');
},
async (sessionId, candidate) => {
onWebRtcIce: async (sessionId, candidate) => {
await webrtcIce(sessionId, candidate);
},
async (sessionId) => {
onWebRtcClose: async (sessionId) => {
webRtcLinks.removeSession(sessionId);
await webrtcClose(sessionId);
},
// onWebRtcAccept — browser has accepted a call, linking session to call.
onWebRtcAccept: (callId, sessionId) => {
log(`[webrtc] accept: callId=${callId} sessionId=${sessionId.slice(0, 8)}`);
const pendingMedia = webRtcLinks.acceptCall(callId, sessionId);
if (pendingMedia) {
requestWebRtcLink(callId, sessionId, pendingMedia);
return;
}
log(`[webrtc] session ${sessionId.slice(0, 8)} accepted, waiting for call_answered media info`);
},
});
// ---------------------------------------------------------------------------
// Start
// ---------------------------------------------------------------------------
void startProxyEngine();
process.on('SIGINT', () => {
log('SIGINT, exiting');
shutdownProxyEngine();
process.exit(0);
});
process.on('SIGTERM', () => {
log('SIGTERM, exiting');
shutdownProxyEngine();
process.exit(0);
});
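The offer handler above rejects obviously malformed SDP before forwarding it to the Rust proxy-engine. That guard can be factored into a small predicate (a sketch; `isPlausibleSdp` is a hypothetical helper name, not part of the codebase):

```typescript
// Hypothetical guard mirroring the inline check in onWebRtcOffer:
// an SDP payload must be a non-trivial string before it is forwarded
// to the Rust proxy-engine.
function isPlausibleSdp(sdp: unknown): sdp is string {
  return typeof sdp === 'string' && sdp.length >= 10;
}
```

Using a type-guard return (`sdp is string`) lets the caller drop the `typeof` repetition and narrows `sdp` for the subsequent `sdp.length` log.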

View File

@@ -5,8 +5,8 @@
* - Browser device registration/unregistration via WebSocket
* - WS → deviceId mapping
*
* All WebRTC media logic (PeerConnection, RTP, transcoding) lives in
* ts/call/webrtc-leg.ts and is managed by the CallManager.
* All WebRTC media logic (PeerConnection, RTP, transcoding, mixer wiring)
* lives in the Rust proxy-engine. This module only tracks browser sessions.
*/
import { WebSocket } from 'ws';
@@ -39,7 +39,7 @@ export function initWebRtcSignaling(cfg: IWebRtcSignalingConfig): void {
/**
* Handle a WebRTC signaling message from a browser client.
* Only handles registration; offer/ice/hangup are routed through CallManager.
* Only handles registration; offer/ice/hangup are routed through frontend.ts.
*/
export function handleWebRtcSignaling(
ws: WebSocket,
@@ -51,7 +51,7 @@ export function handleWebRtcSignaling(
handleRegister(ws, message.sessionId!, message.userAgent, message._remoteIp);
}
// Other webrtc-* types (offer, ice, hangup, accept) are handled
// by the CallManager via frontend.ts WebSocket handler.
// by the frontend.ts WebSocket handler and forwarded to Rust.
}
/**
@@ -64,13 +64,6 @@ export function sendToBrowserDevice(deviceId: string, data: unknown): boolean {
return true;
}
/**
* Get the WebSocket for a browser device (used by CallManager to create WebRtcLegs).
*/
export function getBrowserDeviceWs(deviceId: string): WebSocket | null {
return deviceIdToWs.get(deviceId) ?? null;
}
/**
* Get all registered browser device IDs.
*/

View File

@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: 'siprouter',
version: '1.20.3',
version: '1.25.2',
description: 'undefined'
}

View File

@@ -41,11 +41,10 @@ export class SipproxyDevices extends DeesElement {
},
},
{
key: 'contact',
key: 'address',
header: 'Contact',
renderer: (_val: any, row: any) => {
const c = row.contact;
const text = c ? (c.port ? `${c.address}:${c.port}` : c.address) : '--';
const text = row.address ? (row.port ? `${row.address}:${row.port}` : row.address) : '--';
return html`<span style="font-family:'JetBrains Mono',monospace;font-size:.75rem">${text}</span>`;
},
},
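The renderer above collapses the flattened `address`/`port` fields into one display string. Isolated as a standalone sketch (`formatEndpoint` is a hypothetical helper, not in the diff):

```typescript
// Hypothetical helper mirroring the column renderer: show "addr:port",
// fall back to the bare address when port is 0/absent, and "--" when
// the device has never registered a contact.
function formatEndpoint(address: string | null, port: number): string {
  if (!address) return '--';
  return port ? `${address}:${port}` : address;
}
```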

View File

@@ -32,8 +32,32 @@ const LEG_TYPE_LABELS: Record<string, string> = {
'sip-device': 'SIP Device',
'sip-provider': 'SIP Provider',
'webrtc': 'WebRTC',
'tool': 'Tool',
};
function renderHistoryLegs(legs: ICallHistoryEntry['legs']): TemplateResult {
if (!legs.length) {
return html`<span style="color:#64748b">-</span>`;
}
return html`
<div style="display:flex;flex-direction:column;gap:6px;font-size:.72rem;line-height:1.35;">
${legs.map(
(leg) => html`
<div>
<span class="badge" style="${legTypeBadgeStyle(leg.type)}">${LEG_TYPE_LABELS[leg.type] || leg.type}</span>
<span style="margin-left:6px;font-family:'JetBrains Mono',monospace;">${leg.codec || '--'}</span>
<span style="margin-left:6px;color:#94a3b8;">${STATE_LABELS[leg.state] || leg.state}</span>
${leg.remoteMedia
? html`<span style="display:block;color:#64748b;font-family:'JetBrains Mono',monospace;">${leg.remoteMedia}</span>`
: ''}
</div>
`,
)}
</div>
`;
}
function directionIcon(dir: string): string {
if (dir === 'inbound') return '\u2199';
if (dir === 'outbound') return '\u2197';
@@ -226,8 +250,8 @@ export class SipproxyViewCalls extends DeesElement {
`,
];
connectedCallback() {
super.connectedCallback();
async connectedCallback(): Promise<void> {
await super.connectedCallback();
this.rxSubscriptions.push({
unsubscribe: appState.subscribe((s) => {
this.appData = s;
@@ -490,6 +514,11 @@ export class SipproxyViewCalls extends DeesElement {
renderer: (val: number) =>
html`<span style="font-family:'JetBrains Mono',monospace;font-size:.75rem">${fmtDuration(val)}</span>`,
},
{
key: 'legs',
header: 'Legs',
renderer: (val: ICallHistoryEntry['legs']) => renderHistoryLegs(val),
},
];
}
@@ -551,9 +580,7 @@ export class SipproxyViewCalls extends DeesElement {
</span>
</td>
<td>
${leg.remoteMedia
? `${leg.remoteMedia.address}:${leg.remoteMedia.port}`
: '--'}
${leg.remoteMedia || '--'}
</td>
<td>${leg.rtpPort ?? '--'}</td>
<td>

View File

@@ -186,11 +186,10 @@ export class SipproxyViewOverview extends DeesElement {
},
},
{
key: 'contact',
key: 'address',
header: 'Contact',
renderer: (_val: any, row: any) => {
const c = row.contact;
const text = c ? (c.port ? `${c.address}:${c.port}` : c.address) : '--';
const text = row.address ? (row.port ? `${row.address}:${row.port}` : row.address) : '--';
return html`<span style="font-family:'JetBrains Mono',monospace;font-size:.75rem">${text}</span>`;
},
},

View File

@@ -164,173 +164,346 @@ export class SipproxyViewProviders extends DeesElement {
iconName: 'lucide:plus',
type: ['header'] as any,
actionFunc: async () => {
await this.openAddModal();
},
},
{
name: 'Add Sipgate',
iconName: 'lucide:phone',
type: ['header'] as any,
actionFunc: async () => {
await this.openAddModal(PROVIDER_TEMPLATES.sipgate, 'Sipgate');
},
},
{
name: 'Add O2/Alice',
iconName: 'lucide:phone',
type: ['header'] as any,
actionFunc: async () => {
await this.openAddModal(PROVIDER_TEMPLATES.o2, 'O2/Alice');
await this.openAddStepper();
},
},
];
}
// ---- add provider modal --------------------------------------------------
// ---- add provider stepper ------------------------------------------------
private async openAddModal(
template?: typeof PROVIDER_TEMPLATES.sipgate,
templateName?: string,
) {
const { DeesModal } = await import('@design.estate/dees-catalog');
private async openAddStepper() {
const { DeesStepper } = await import('@design.estate/dees-catalog');
type TDeesStepper = InstanceType<typeof DeesStepper>;
// IStep / menuOptions types: we keep content typing loose (`any[]`) to
// avoid having to import tsclass IMenuItem just for one parameter annotation.
const formData = {
displayName: templateName || '',
domain: template?.domain || '',
outboundProxyAddress: template?.outboundProxy?.address || '',
outboundProxyPort: String(template?.outboundProxy?.port ?? 5060),
type TProviderType = 'Custom' | 'Sipgate' | 'O2/Alice';
interface IAccumulator {
providerType: TProviderType;
displayName: string;
domain: string;
outboundProxyAddress: string;
outboundProxyPort: string;
username: string;
password: string;
// Advanced — exposed in step 4
registerIntervalSec: string;
codecs: string;
earlyMediaSilence: boolean;
}
const accumulator: IAccumulator = {
providerType: 'Custom',
displayName: '',
domain: '',
outboundProxyAddress: '',
outboundProxyPort: '5060',
username: '',
password: '',
registerIntervalSec: String(template?.registerIntervalSec ?? 300),
codecs: template?.codecs ? template.codecs.join(', ') : '9, 0, 8, 101',
earlyMediaSilence: template?.quirks?.earlyMediaSilence ?? false,
registerIntervalSec: '300',
codecs: '9, 0, 8, 101',
earlyMediaSilence: false,
};
const heading = template
? `Add ${templateName} Provider`
: 'Add Provider';
// Snapshot the currently-selected step's form (if any) into accumulator.
const snapshotActiveForm = async (stepper: TDeesStepper) => {
const form = stepper.activeForm;
if (!form) return;
const data: Record<string, any> = await form.collectFormData();
Object.assign(accumulator, data);
};
await DeesModal.createAndShow({
heading,
width: 'small',
showCloseButton: true,
// Overwrite template-owned fields. Keep user-owned fields (username,
// password) untouched. displayName is replaced only when empty or still
// holds a branded auto-fill.
const applyTemplate = (type: TProviderType) => {
const tpl =
type === 'Sipgate' ? PROVIDER_TEMPLATES.sipgate
: type === 'O2/Alice' ? PROVIDER_TEMPLATES.o2
: null;
if (!tpl) return;
accumulator.domain = tpl.domain;
accumulator.outboundProxyAddress = tpl.outboundProxy.address;
accumulator.outboundProxyPort = String(tpl.outboundProxy.port);
accumulator.registerIntervalSec = String(tpl.registerIntervalSec);
accumulator.codecs = tpl.codecs.join(', ');
accumulator.earlyMediaSilence = tpl.quirks.earlyMediaSilence;
if (
!accumulator.displayName ||
accumulator.displayName === 'Sipgate' ||
accumulator.displayName === 'O2/Alice'
) {
accumulator.displayName = type;
}
};
// --- Step builders (called after step 1 so accumulator is populated) ---
const buildConnectionStep = (): any => ({
title: 'Connection',
content: html`
<div style="display:flex;flex-direction:column;gap:12px;padding:4px 0;">
<dees-form>
<dees-input-text
.key=${'displayName'}
.label=${'Display Name'}
.value=${formData.displayName}
@input=${(e: Event) => { formData.displayName = (e.target as any).value; }}
.value=${accumulator.displayName}
.required=${true}
></dees-input-text>
<dees-input-text
.key=${'domain'}
.label=${'Domain'}
.value=${formData.domain}
@input=${(e: Event) => { formData.domain = (e.target as any).value; }}
.value=${accumulator.domain}
.required=${true}
></dees-input-text>
<dees-input-text
.key=${'outboundProxyAddress'}
.label=${'Outbound Proxy Address'}
.value=${formData.outboundProxyAddress}
@input=${(e: Event) => { formData.outboundProxyAddress = (e.target as any).value; }}
.value=${accumulator.outboundProxyAddress}
></dees-input-text>
<dees-input-text
.key=${'outboundProxyPort'}
.label=${'Outbound Proxy Port'}
.value=${formData.outboundProxyPort}
@input=${(e: Event) => { formData.outboundProxyPort = (e.target as any).value; }}
.value=${accumulator.outboundProxyPort}
></dees-input-text>
</dees-form>
`,
menuOptions: [
{
name: 'Continue',
iconName: 'lucide:arrow-right',
action: async (stepper: TDeesStepper) => {
await snapshotActiveForm(stepper);
stepper.goNext();
},
},
],
});
const buildCredentialsStep = (): any => ({
title: 'Credentials',
content: html`
<dees-form>
<dees-input-text
.key=${'username'}
.label=${'Username / Auth ID'}
.value=${formData.username}
@input=${(e: Event) => { formData.username = (e.target as any).value; }}
.value=${accumulator.username}
.required=${true}
></dees-input-text>
<dees-input-text
.key=${'password'}
.label=${'Password'}
.isPasswordBool=${true}
.value=${formData.password}
@input=${(e: Event) => { formData.password = (e.target as any).value; }}
.value=${accumulator.password}
.required=${true}
></dees-input-text>
</dees-form>
`,
menuOptions: [
{
name: 'Continue',
iconName: 'lucide:arrow-right',
action: async (stepper: TDeesStepper) => {
await snapshotActiveForm(stepper);
stepper.goNext();
},
},
],
});
const buildAdvancedStep = (): any => ({
title: 'Advanced',
content: html`
<dees-form>
<dees-input-text
.key=${'registerIntervalSec'}
.label=${'Register Interval (sec)'}
.value=${formData.registerIntervalSec}
@input=${(e: Event) => { formData.registerIntervalSec = (e.target as any).value; }}
.value=${accumulator.registerIntervalSec}
></dees-input-text>
<dees-input-text
.key=${'codecs'}
.label=${'Codecs (comma-separated payload types)'}
.value=${formData.codecs}
@input=${(e: Event) => { formData.codecs = (e.target as any).value; }}
.value=${accumulator.codecs}
></dees-input-text>
<dees-input-checkbox
.key=${'earlyMediaSilence'}
.label=${'Early Media Silence (quirk)'}
.value=${formData.earlyMediaSilence}
@newValue=${(e: CustomEvent) => { formData.earlyMediaSilence = e.detail; }}
.value=${accumulator.earlyMediaSilence}
></dees-input-checkbox>
</div>
</dees-form>
`,
menuOptions: [
{
name: 'Cancel',
iconName: 'lucide:x',
action: async (modalRef: any) => {
modalRef.destroy();
name: 'Continue',
iconName: 'lucide:arrow-right',
action: async (stepper: TDeesStepper) => {
await snapshotActiveForm(stepper);
// Rebuild the review step so its rendering reflects the latest
// accumulator values (the review step captures values at build time).
stepper.steps = [...stepper.steps.slice(0, 4), buildReviewStep()];
await (stepper as any).updateComplete;
stepper.goNext();
},
},
{
name: 'Create',
iconName: 'lucide:check',
action: async (modalRef: any) => {
if (!formData.displayName.trim() || !formData.domain.trim()) {
deesCatalog.DeesToast.error('Display name and domain are required');
return;
}
try {
const providerId = slugify(formData.displayName);
const codecs = formData.codecs
],
});
const buildReviewStep = (): any => {
const resolvedId = slugify(accumulator.displayName);
return {
title: 'Review & Create',
content: html`
<dees-panel>
<div
style="display:grid;grid-template-columns:auto 1fr;gap:6px 16px;font-size:.85rem;padding:8px 4px;"
>
<div style="color:#94a3b8;">Type</div>
<div>${accumulator.providerType}</div>
<div style="color:#94a3b8;">Display Name</div>
<div>${accumulator.displayName}</div>
<div style="color:#94a3b8;">ID</div>
<div style="font-family:'JetBrains Mono',monospace;">${resolvedId}</div>
<div style="color:#94a3b8;">Domain</div>
<div>${accumulator.domain}</div>
<div style="color:#94a3b8;">Outbound Proxy</div>
<div>
${accumulator.outboundProxyAddress || accumulator.domain}:${accumulator.outboundProxyPort}
</div>
<div style="color:#94a3b8;">Username</div>
<div>${accumulator.username}</div>
<div style="color:#94a3b8;">Password</div>
<div>${'*'.repeat(Math.min(accumulator.password.length, 12))}</div>
<div style="color:#94a3b8;">Register Interval</div>
<div>${accumulator.registerIntervalSec}s</div>
<div style="color:#94a3b8;">Codecs</div>
<div>${accumulator.codecs}</div>
<div style="color:#94a3b8;">Early-Media Silence</div>
<div>${accumulator.earlyMediaSilence ? 'yes' : 'no'}</div>
</div>
</dees-panel>
`,
menuOptions: [
{
name: 'Create Provider',
iconName: 'lucide:check',
action: async (stepper: TDeesStepper) => {
// Collision-resolve id against current state.
const existing = (this.appData.providers || []).map((p) => p.id);
let uniqueId = resolvedId;
let suffix = 2;
while (existing.includes(uniqueId)) {
uniqueId = `${resolvedId}-${suffix++}`;
}
const parsedCodecs = accumulator.codecs
.split(',')
.map((s: string) => parseInt(s.trim(), 10))
.filter((n: number) => !isNaN(n));
const newProvider: any = {
id: providerId,
displayName: formData.displayName.trim(),
domain: formData.domain.trim(),
id: uniqueId,
displayName: accumulator.displayName.trim(),
domain: accumulator.domain.trim(),
outboundProxy: {
address: formData.outboundProxyAddress.trim() || formData.domain.trim(),
port: parseInt(formData.outboundProxyPort, 10) || 5060,
address:
accumulator.outboundProxyAddress.trim() || accumulator.domain.trim(),
port: parseInt(accumulator.outboundProxyPort, 10) || 5060,
},
username: formData.username.trim(),
password: formData.password,
registerIntervalSec: parseInt(formData.registerIntervalSec, 10) || 300,
codecs,
username: accumulator.username.trim(),
password: accumulator.password,
registerIntervalSec: parseInt(accumulator.registerIntervalSec, 10) || 300,
codecs: parsedCodecs.length ? parsedCodecs : [9, 0, 8, 101],
quirks: {
earlyMediaSilence: formData.earlyMediaSilence,
earlyMediaSilence: accumulator.earlyMediaSilence,
},
};
const result = await appState.apiSaveConfig({
addProvider: newProvider,
});
if (result.ok) {
modalRef.destroy();
deesCatalog.DeesToast.success(`Provider "${formData.displayName}" created`);
} else {
deesCatalog.DeesToast.error('Failed to save provider');
try {
const result = await appState.apiSaveConfig({
addProvider: newProvider,
});
if (result.ok) {
await stepper.destroy();
deesCatalog.DeesToast.success(
`Provider "${newProvider.displayName}" created`,
);
} else {
deesCatalog.DeesToast.error('Failed to save provider');
}
} catch (err: any) {
console.error('Failed to create provider:', err);
deesCatalog.DeesToast.error('Failed to create provider');
}
},
},
],
};
};
// --- Step 1: Provider Type ------------------------------------------------
//
// Note: `DeesStepper.createAndShow()` dismisses on backdrop click; a user
// mid-form could lose work. Acceptable for v1 — revisit if users complain.
const typeOptions: { option: string; key: TProviderType }[] = [
{ option: 'Custom', key: 'Custom' },
{ option: 'Sipgate', key: 'Sipgate' },
{ option: 'O2 / Alice', key: 'O2/Alice' },
];
const currentTypeOption =
typeOptions.find((o) => o.key === accumulator.providerType) || null;
const typeStep: any = {
title: 'Choose provider type',
content: html`
<dees-form>
<dees-input-dropdown
.key=${'providerType'}
.label=${'Provider Type'}
.options=${typeOptions}
.selectedOption=${currentTypeOption}
.enableSearch=${false}
.required=${true}
></dees-input-dropdown>
</dees-form>
`,
menuOptions: [
{
name: 'Continue',
iconName: 'lucide:arrow-right',
action: async (stepper: TDeesStepper) => {
// `dees-input-dropdown.value` is an object `{option, key, payload?}`,
// not a plain string — extract the `key` directly instead of using
// the generic `snapshotActiveForm` helper (which would clobber
// `accumulator.providerType`'s string type via Object.assign).
const form = stepper.activeForm;
if (form) {
const data: Record<string, any> = await form.collectFormData();
const selected = data.providerType;
if (selected && typeof selected === 'object' && selected.key) {
accumulator.providerType = selected.key as TProviderType;
}
} catch (err: any) {
console.error('Failed to create provider:', err);
deesCatalog.DeesToast.error('Failed to create provider');
}
if (!accumulator.providerType) {
accumulator.providerType = 'Custom';
}
applyTemplate(accumulator.providerType);
// (Re)build steps 2-5 with current accumulator values.
stepper.steps = [
typeStep,
buildConnectionStep(),
buildCredentialsStep(),
buildAdvancedStep(),
buildReviewStep(),
];
await (stepper as any).updateComplete;
stepper.goNext();
},
},
],
});
};
await DeesStepper.createAndShow({ steps: [typeStep] });
}
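The review step resolves provider-id collisions by appending a numeric suffix to the slugified display name. That rule can be sketched as a pure function (with a simplified `slugify`; the real implementation may differ):

```typescript
// Simplified slugify: lowercase, runs of non-alphanumerics collapsed
// to "-", leading/trailing dashes trimmed.
function slugify(name: string): string {
  return name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '');
}

// Mirrors the collision loop in the review step: append "-2", "-3", ...
// until the id does not clash with an existing provider.
function resolveUniqueId(displayName: string, existing: string[]): string {
  const base = slugify(displayName);
  let uniqueId = base;
  let suffix = 2;
  while (existing.includes(uniqueId)) {
    uniqueId = `${base}-${suffix++}`;
  }
  return uniqueId;
}
```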
// ---- edit provider modal -------------------------------------------------

File diff suppressed because it is too large

View File

@@ -18,6 +18,12 @@ interface IVoicemailMessage {
heard: boolean;
}
interface IVoiceboxRow {
id: string;
unheardCount: number;
selected: boolean;
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
@@ -61,19 +67,6 @@ export class SipproxyViewVoicemail extends DeesElement {
.view-section {
margin-bottom: 24px;
}
.box-selector {
display: flex;
align-items: center;
gap: 12px;
margin-bottom: 24px;
}
.box-selector label {
font-size: 0.85rem;
font-weight: 600;
color: #94a3b8;
text-transform: uppercase;
letter-spacing: 0.04em;
}
.audio-player {
display: flex;
align-items: center;
@@ -135,10 +128,11 @@ export class SipproxyViewVoicemail extends DeesElement {
const cfg = await appState.apiGetConfig();
const boxes: { id: string }[] = cfg.voiceboxes || [];
this.voiceboxIds = boxes.map((b) => b.id);
if (this.voiceboxIds.length > 0 && !this.selectedBoxId) {
this.selectedBoxId = this.voiceboxIds[0];
await this.loadMessages();
}
const nextSelectedBoxId = this.voiceboxIds.includes(this.selectedBoxId)
? this.selectedBoxId
: (this.voiceboxIds[0] || '');
this.selectedBoxId = nextSelectedBoxId;
await this.loadMessages();
} catch {
// Config unavailable.
}
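The replacement logic above keeps the current selection when the box still exists after a reload and otherwise falls back to the first configured box, or none. As a pure-function sketch (hypothetical name):

```typescript
// Mirrors the loadVoiceboxes selection rule: preserve the currently
// selected box if it survived the reload, otherwise pick the first
// available box, or '' when no voiceboxes are configured.
function resolveSelectedBox(voiceboxIds: string[], current: string): string {
  return voiceboxIds.includes(current) ? current : (voiceboxIds[0] || '');
}
```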
@@ -161,11 +155,22 @@ export class SipproxyViewVoicemail extends DeesElement {
}
private async selectBox(boxId: string) {
if (boxId === this.selectedBoxId) {
return;
}
this.selectedBoxId = boxId;
this.stopAudio();
await this.loadMessages();
}
private getVoiceboxRows(): IVoiceboxRow[] {
return this.voiceboxIds.map((id) => ({
id,
unheardCount: this.appData.voicemailCounts[id] || 0,
selected: id === this.selectedBoxId,
}));
}
// ---- audio playback ------------------------------------------------------
private playMessage(msg: IVoicemailMessage) {
@@ -341,6 +346,43 @@ export class SipproxyViewVoicemail extends DeesElement {
];
}
private getVoiceboxColumns() {
return [
{
key: 'id',
header: 'Voicebox',
sortable: true,
renderer: (val: string, row: IVoiceboxRow) => html`
<div style="display:flex;align-items:center;gap:10px;">
<span style="font-family:'JetBrains Mono',monospace;font-size:.85rem;">${val}</span>
${row.selected ? html`
<span style="display:inline-block;padding:2px 8px;border-radius:4px;font-size:.7rem;font-weight:600;text-transform:uppercase;background:#1e3a5f;color:#60a5fa">Viewing</span>
` : ''}
</div>
`,
},
{
key: 'unheardCount',
header: 'Unheard',
sortable: true,
renderer: (val: number) => {
const hasUnheard = val > 0;
return html`
<span style="display:inline-block;padding:2px 8px;border-radius:4px;font-size:.75rem;font-weight:600;background:${hasUnheard ? '#422006' : '#1f2937'};color:${hasUnheard ? '#f59e0b' : '#94a3b8'}">${val}</span>
`;
},
},
{
key: 'selected',
header: 'Status',
value: (row: IVoiceboxRow) => (row.selected ? 'Open' : 'Available'),
renderer: (val: string, row: IVoiceboxRow) => html`
<span style="color:${row.selected ? '#60a5fa' : '#94a3b8'};font-size:.8rem;">${val}</span>
`,
},
];
}
// ---- table actions -------------------------------------------------------
private getDataActions() {
@@ -390,21 +432,43 @@ export class SipproxyViewVoicemail extends DeesElement {
];
}
private getVoiceboxActions() {
return [
{
name: 'View Messages',
iconName: 'lucide:folder-open',
type: ['inRow'] as any,
actionFunc: async ({ item }: { item: IVoiceboxRow }) => {
await this.selectBox(item.id);
},
},
{
name: 'Refresh Boxes',
iconName: 'lucide:refreshCw',
type: ['header'] as any,
actionFunc: async () => {
await this.loadVoiceboxes();
deesCatalog.DeesToast.success('Voiceboxes refreshed');
},
},
];
}
// ---- render --------------------------------------------------------------
public render(): TemplateResult {
return html`
${this.voiceboxIds.length > 1 ? html`
<div class="box-selector">
<label>Voicebox</label>
<dees-input-dropdown
.key=${'voicebox'}
.selectedOption=${{ option: this.selectedBoxId, key: this.selectedBoxId }}
.options=${this.voiceboxIds.map((id) => ({ option: id, key: id }))}
@selectedOption=${(e: CustomEvent) => { this.selectBox(e.detail.key); }}
></dees-input-dropdown>
</div>
` : ''}
<div class="view-section">
<dees-table
heading1="Voiceboxes"
heading2="${this.voiceboxIds.length} configured"
dataName="voiceboxes"
.data=${this.getVoiceboxRows()}
.rowKey=${'id'}
.columns=${this.getVoiceboxColumns()}
.dataActions=${this.getVoiceboxActions()}
></dees-table>
</div>
<div class="view-section">
<dees-statsgrid

View File

@@ -2,72 +2,12 @@
* Application state — receives live updates from the proxy via WebSocket.
*/
export interface IProviderStatus {
id: string;
displayName: string;
registered: boolean;
publicIp: string | null;
}
import type { IContact } from '../../ts/config.ts';
import type { ICallHistoryEntry, ICallStatus, IDeviceStatus, IProviderStatus } from '../../ts/shared/status.ts';
export interface IDeviceStatus {
id: string;
displayName: string;
contact: { address: string; port: number } | null;
aor: string;
connected: boolean;
isBrowser: boolean;
}
export interface ILegStatus {
id: string;
type: 'sip-device' | 'sip-provider' | 'webrtc' | 'tool';
state: string;
remoteMedia: { address: string; port: number } | null;
rtpPort: number | null;
pktSent: number;
pktReceived: number;
codec: string | null;
transcoding: boolean;
metadata?: Record<string, unknown>;
}
export interface ICallStatus {
id: string;
state: string;
direction: 'inbound' | 'outbound' | 'internal';
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
createdAt: number;
duration: number;
legs: ILegStatus[];
}
export interface IHistoryLeg {
id: string;
type: string;
metadata: Record<string, unknown>;
}
export interface ICallHistoryEntry {
id: string;
direction: 'inbound' | 'outbound' | 'internal';
callerNumber: string | null;
calleeNumber: string | null;
providerUsed: string | null;
startedAt: number;
duration: number;
legs?: IHistoryLeg[];
}
export interface IContact {
id: string;
name: string;
number: string;
company?: string;
notes?: string;
starred?: boolean;
}
export type { IContact };
export type { ICallHistoryEntry, ICallStatus, IDeviceStatus, IProviderStatus };
export type { ILegStatus } from '../../ts/shared/status.ts';
export interface IAppState {
connected: boolean;