# SmartAI Project Hints
## Dependencies

- Uses `@git.zone/tstest` v3.x for testing (import from `@git.zone/tstest/tapbundle`)
- `@push.rocks/smartfs` v1.x for file system operations
- `@anthropic-ai/sdk` v0.71.x with extended thinking support
- `@mistralai/mistralai` v1.x for Mistral OCR and chat capabilities
- `openai` v6.x for OpenAI API integration
- `@push.rocks/smartrequest` v5.x - uses `response.stream()` + `Readable.fromWeb()` for streaming
## Important Notes

- When extended thinking is enabled, the temperature parameter must NOT be set (or set to 1)
- The `streamNode()` method was removed in smartrequest v5; use `response.stream()` with `Readable.fromWeb()` instead (see the sketch below)
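A small sketch of that replacement follows. Only `response.stream()` and `Readable.fromWeb()` come from the notes above; the request-builder call shape is an assumption and should be checked against the smartrequest v5 docs.

```typescript
import { Readable } from 'node:stream';
import type { ReadableStream as WebReadableStream } from 'node:stream/web';
import * as smartrequest from '@push.rocks/smartrequest';

// Sketch: adapt a smartrequest v5 web stream to a classic Node.js Readable.
async function streamBody(url: string): Promise<Readable> {
  // Hypothetical builder call -- the exact smartrequest v5 API may differ.
  const response = await smartrequest.SmartRequest.create().url(url).get();

  // response.stream() is assumed to return a WHATWG ReadableStream<Uint8Array>.
  // Readable.fromWeb() (Node.js >= 17) converts it, replacing the removed streamNode().
  const webStream = response.stream() as WebReadableStream<Uint8Array>;
  return Readable.fromWeb(webStream);
}

// Usage: streamBody('https://example.com/file').then((s) => s.pipe(process.stdout));
```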
## Provider Capabilities Summary
| Provider | Chat | Stream | TTS | Vision | Documents | Research | Images |
|---|---|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ |
| Mistral | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ |
| ElevenLabs | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ |
| XAI | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Perplexity | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Groq | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Exo | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
## Mistral Provider Integration

### Overview
The Mistral provider supports:
- Document AI via Mistral OCR (December 2025) - native PDF processing without image conversion
- Chat capabilities using Mistral's chat models (`mistral-large-latest`, etc.)
### Key Advantage: Native PDF Support
Unlike other providers that require converting PDFs to images (using SmartPdf), Mistral OCR natively accepts PDF documents as base64-encoded data. This makes document processing potentially faster and more accurate for text extraction.
### Configuration
```typescript
import * as smartai from '@push.rocks/smartai';

const provider = new smartai.MistralProvider({
  mistralToken: 'your-token-here',
  chatModel: 'mistral-large-latest', // default
  ocrModel: 'mistral-ocr-latest', // default
  tableFormat: 'markdown', // 'markdown' or 'html'
});

await provider.start();
```
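A usage sketch for document processing follows. The `document()` option names below are assumptions based on SmartAI's multi-modal provider interface and may differ from the actual signature; the point is that the raw PDF bytes go in directly, with no SmartPdf image-conversion step.

```typescript
import { promises as fs } from 'node:fs';

// Assumes `provider` has been constructed and started as shown above.
const pdfBuffer = await fs.readFile('./invoice.pdf');

const result = await provider.document({
  systemMessage: 'Extract the key fields from this document.',
  userMessage: 'List the invoice number, date, and total amount.',
  pdfDocuments: [new Uint8Array(pdfBuffer)], // raw PDF bytes, passed through to Mistral OCR
  messageHistory: [],
});

console.log(result.message); // the response field name is an assumption as well
```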
### API Key

Tests require `MISTRAL_API_KEY` in `.nogit/env.json`.
## Anthropic Extended Thinking Feature

### Configuration
Extended thinking is configured at the provider level during instantiation:
```typescript
import * as smartai from '@push.rocks/smartai';

const provider = new smartai.AnthropicProvider({
  anthropicToken: 'your-token-here',
  extendedThinking: 'normal', // Options: 'quick' | 'normal' | 'deep' | 'off'
});
```
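For context, a chat call against this provider might look like the sketch below. The `chat()` option names are assumptions based on SmartAI's common provider interface; the key point is that extended thinking is controlled entirely by the `extendedThinking` option at construction time, not per call.

```typescript
// Assumes `provider` was constructed as shown above.
await provider.start();

const answer = await provider.chat({
  systemMessage: 'You are a careful reasoning assistant.',
  userMessage: 'Compare the two cache eviction strategies and recommend one.',
  messageHistory: [],
});

console.log(answer.message); // the response field name is an assumption
```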
### Thinking Modes
| Mode | Budget Tokens | Use Case |
|---|---|---|
| `'quick'` | 2,048 | Lightweight reasoning for simple queries |
| `'normal'` | 8,000 | Default - balanced reasoning for most tasks |
| `'deep'` | 16,000 | Complex reasoning for difficult problems |
| `'off'` | 0 | Disable extended thinking |
### Implementation Details

- Extended thinking is implemented via the `getThinkingConfig()` private method (see the illustrative sketch below)
- When thinking is enabled, temperature must NOT be set
- Uses the `claude-sonnet-4-5-20250929` model
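For illustration only, a mode-to-budget mapping consistent with the table above might look like the following. The real `getThinkingConfig()` is private and not reproduced here; the `thinking` payload shape (`type` plus `budget_tokens`) follows Anthropic's extended thinking API.

```typescript
type ThinkingMode = 'quick' | 'normal' | 'deep' | 'off';

// Illustrative sketch, not the actual private implementation.
function getThinkingConfig(mode: ThinkingMode) {
  const budgets: Record<ThinkingMode, number> = {
    quick: 2_048,
    normal: 8_000,
    deep: 16_000,
    off: 0,
  };

  if (mode === 'off') {
    return undefined; // no thinking block sent; temperature may be set normally
  }

  return {
    type: 'enabled' as const,
    budget_tokens: budgets[mode], // must stay below max_tokens; temperature must be left unset
  };
}
```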
## Testing

Run tests with:

```bash
pnpm test
```

Run specific tests:

```bash
npx tstest test/test.something.ts --verbose
```