docs(readme): fix vLLM config example fence to jsonc
CI / Type Check & Lint (push) Successful in 6s
CI / Build Test (Current Platform) (push) Successful in 6s
CI / Build All Platforms (push) Successful in 39s

Switch the vLLM config example from a bash code fence to jsonc and
convert its inline `#` comments to `//` so the snippet is valid JSONC.
2026-04-21 08:23:10 +00:00
parent 02bb3d2d8d
commit cec102e54e
+3 -3
@@ -318,15 +318,15 @@ modelgrid cluster activate NODE # Mark a node active again
 High-performance inference with PagedAttention and continuous batching.
-```bash
+```jsonc
 {
   "id": "vllm-1",
   "type": "vllm",
   "name": "vLLM Server",
-  "gpuIds": ["nvidia-0", "nvidia-1"], # Tensor parallelism
+  "gpuIds": ["nvidia-0", "nvidia-1"], // Tensor parallelism
   "port": 8000,
   "env": {
-    "HF_TOKEN": "your-huggingface-token" # For gated models
+    "HF_TOKEN": "your-huggingface-token" // For gated models
   }
 }
 ```
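For context on why the `//` comment style matters: strict JSON parsers reject comments entirely, so tools that load a JSONC config usually strip `//` line comments first. A minimal sketch (not part of this repo; `strip_jsonc_comments` is a hypothetical helper, assuming only `//` line comments and no block comments):

```python
import json

def strip_jsonc_comments(text: str) -> str:
    """Remove // line comments from JSONC, ignoring // inside strings."""
    out = []
    for line in text.splitlines():
        in_string = False
        escaped = False
        cut = len(line)
        for i, ch in enumerate(line):
            if escaped:
                escaped = False        # previous char was a backslash
            elif ch == "\\" and in_string:
                escaped = True         # next char is escaped
            elif ch == '"':
                in_string = not in_string
            elif not in_string and line[i:i + 2] == "//":
                cut = i                # comment starts here; drop the rest
                break
        out.append(line[:cut])
    return "\n".join(out)

config = strip_jsonc_comments("""
{
  "id": "vllm-1",
  "gpuIds": ["nvidia-0", "nvidia-1"], // Tensor parallelism
  "port": 8000
}
""")
print(json.loads(config)["port"])  # 8000
```

Tracking string state per character is what keeps a `//` inside a value such as `"http://host"` from being mistaken for a comment; with `#` comments that ambiguity never arises, but no JSON tooling recognizes them, which is what this commit fixes.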