feat(provider.anthropic): Add support for vision and document processing in Anthropic provider

2025-02-03 17:48:36 +01:00
parent 1c49af74ac
commit ad5dd4799b
4 changed files with 121 additions and 6 deletions


@@ -26,7 +26,7 @@ This command installs the package and adds it to your project's dependencies.
 ### Anthropic
 - Models: Claude-3-opus-20240229
-- Features: Chat, Streaming
+- Features: Chat, Streaming, Vision, Document Processing
 - Configuration:
 ```typescript
 anthropicToken: 'your-anthropic-token'
@@ -148,7 +148,7 @@ const audioStream = await smartAi.openaiProvider.audio({
 ### Document Processing
-For providers that support document processing (OpenAI and Ollama):
+For providers that support document processing (OpenAI, Ollama, and Anthropic):
 ```typescript
 // Using OpenAI
@@ -166,6 +166,14 @@ const analysis = await smartAi.ollamaProvider.document({
   messageHistory: [],
   pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
 });
+
+// Using Anthropic with Claude 3
+const anthropicAnalysis = await smartAi.anthropicProvider.document({
+  systemMessage: 'You are a document analysis assistant',
+  userMessage: 'Please analyze this document and extract key information',
+  messageHistory: [],
+  pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
+});
 ```
 Both providers will:
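For context on the Anthropic code added above: the `document()` call takes raw PDF bytes as a `Uint8Array`. Anthropic's Messages API expects PDFs as base64-encoded `document` content blocks, so a provider has to wrap the bytes before sending. A minimal sketch of that wrapping — the helper name `buildDocumentBlock` is hypothetical, not part of this library:

```typescript
// Hypothetical sketch of how raw PDF bytes could be wrapped into an
// Anthropic Messages API "document" content block. Not the library's
// actual internals.
function buildDocumentBlock(pdf: Uint8Array) {
  return {
    type: 'document',
    source: {
      type: 'base64',
      media_type: 'application/pdf',
      // Base64-encode the raw bytes for transport in the JSON request body
      data: Buffer.from(pdf).toString('base64'),
    },
  };
}

// "%PDF" magic bytes stand in for a real document here
const pdfBuffer = new Uint8Array([0x25, 0x50, 0x44, 0x46]);
const block = buildDocumentBlock(pdfBuffer);
```

The resulting block would then be placed in the `content` array of a user message alongside the text prompt.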
@@ -175,7 +183,7 @@ Both providers will:
 ### Vision Processing
-For providers that support vision tasks (OpenAI and Ollama):
+For providers that support vision tasks (OpenAI, Ollama, and Anthropic):
 ```typescript
 // Using OpenAI's GPT-4 Vision
@@ -189,6 +197,12 @@ const analysis = await smartAi.ollamaProvider.vision({
   image: imageBuffer,
   prompt: 'Analyze this image in detail'
 });
+
+// Using Anthropic's Claude 3
+const anthropicAnalysis = await smartAi.anthropicProvider.vision({
+  image: imageBuffer,
+  prompt: 'Please analyze this image and describe what you see'
+});
 ```
 ## Error Handling
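The vision path added in this commit is analogous to the document path: Anthropic's Messages API takes images as base64-encoded `image` content blocks with an explicit media type. A rough sketch under that assumption — `buildImageBlock` is an illustrative name, not a function exposed by this library:

```typescript
// Hypothetical sketch of wrapping raw image bytes into an Anthropic
// Messages API "image" content block. Not the library's actual internals.
function buildImageBlock(image: Uint8Array, mediaType: string = 'image/jpeg') {
  return {
    type: 'image',
    source: {
      type: 'base64',
      media_type: mediaType,
      // Base64-encode the raw bytes for transport in the JSON request body
      data: Buffer.from(image).toString('base64'),
    },
  };
}

// JPEG magic bytes stand in for a real image here
const imageBuffer = new Uint8Array([0xff, 0xd8, 0xff]);
const imageBlock = buildImageBlock(imageBuffer);
```

A design note: passing the media type explicitly matters, because the API rejects mismatched or unsupported image formats rather than sniffing the bytes.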