feat(OllamaProvider): add model options, streaming support, and thinking tokens
- Add IOllamaModelOptions interface for runtime options (num_ctx, temperature, etc.)
- Extend IOllamaProviderOptions with defaultOptions and defaultTimeout
- Add IOllamaChatOptions for per-request overrides
- Add IOllamaStreamChunk and IOllamaChatResponse interfaces
- Add chatStreamResponse() for async iteration with options
- Add collectStreamResponse() for streaming with progress callback
- Add chatWithOptions() for non-streaming with full options
- Update chat() to use defaultOptions and defaultTimeout
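A minimal caller-side sketch in TypeScript of the API this message describes. Only the names above (IOllamaModelOptions, IOllamaChatOptions, IOllamaStreamChunk, chatStreamResponse(), num_ctx, temperature) come from the commit; the chunk fields and the per-request timeout key are assumptions for illustration.

```typescript
// Sketch only: field shapes beyond num_ctx/temperature are assumptions.
interface IOllamaModelOptions {
  num_ctx?: number;      // context window size passed through to Ollama
  temperature?: number;  // sampling temperature
}

interface IOllamaChatOptions {
  options?: IOllamaModelOptions; // per-request override of provider defaultOptions
  timeout?: number;              // per-request override of defaultTimeout (assumed name)
}

// Assumed shape for IOllamaStreamChunk: message text plus optional
// thinking tokens, since the commit adds thinking-token support.
interface IOllamaStreamChunk {
  message?: string;
  thinking?: string;
}

// chatStreamResponse() is described as supporting async iteration,
// so a caller would consume it with for-await:
async function printStreamed(provider: {
  chatStreamResponse(prompt: string, opts?: IOllamaChatOptions): AsyncIterable<IOllamaStreamChunk>;
}): Promise<void> {
  const stream = provider.chatStreamResponse('Why is the sky blue?', {
    options: { num_ctx: 8192, temperature: 0.2 },
    timeout: 60_000,
  });
  for await (const chunk of stream) {
    if (chunk.thinking) process.stdout.write(`[thinking] ${chunk.thinking}`);
    if (chunk.message) process.stdout.write(chunk.message);
  }
}
```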
@@ -3,9 +3,10 @@

## Dependencies

- Uses `@git.zone/tstest` v3.x for testing (import from `@git.zone/tstest/tapbundle`)
- `@push.rocks/smartfile` is kept at v11 to avoid migration to factory pattern
- `@push.rocks/smartfs` v1.x for file system operations (replaced smartfile)
- `@anthropic-ai/sdk` v0.71.x with extended thinking support
- `@mistralai/mistralai` v1.x for Mistral OCR and chat capabilities
- `openai` v6.x for OpenAI API integration
- `@push.rocks/smartrequest` v5.x - uses `response.stream()` + `Readable.fromWeb()` for streaming

## Important Notes
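The `@git.zone/tstest` bullet above notes that tests import from `@git.zone/tstest/tapbundle`. A minimal test file in that style might look like this sketch; the exact matcher set and the `tap.start()` convention are assumptions based on common tapbundle usage, not confirmed by this diff.

```typescript
import { tap, expect } from '@git.zone/tstest/tapbundle';

tap.test('placeholder test', async () => {
  // Placeholder assertion; real tests would construct the provider under test.
  expect(1 + 1).toEqual(2);
});

export default tap.start();
```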
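The `@push.rocks/smartrequest` bullet describes bridging the response's web stream into a Node.js `Readable`. A self-contained sketch of that pattern, assuming only that the response object exposes a `.stream()` method returning a WHATWG `ReadableStream` (the `IStreamingResponse` shape is hypothetical):

```typescript
import { Readable } from 'node:stream';
import type { ReadableStream as WebReadableStream } from 'node:stream/web';

// Hypothetical response shape: only .stream() is taken from the note above.
interface IStreamingResponse {
  stream(): WebReadableStream<Uint8Array>;
}

async function collectBody(response: IStreamingResponse): Promise<string> {
  // Readable.fromWeb() (node:stream, Node 17+) converts the WHATWG stream
  // into a Node.js Readable that supports async iteration.
  const nodeStream = Readable.fromWeb(response.stream());
  const chunks: Buffer[] = [];
  for await (const chunk of nodeStream) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks).toString('utf8');
}
```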