|
|
|
|
|
|
|
|
# @push.rocks/smartai
|
|
|
|
|
**One API to rule them all** 🚀
|
|
|
|
|
|
|
|
|
|
SmartAi is a powerful TypeScript library that provides a unified interface for integrating with multiple AI providers including OpenAI, Anthropic, Perplexity, Ollama, Groq, XAI, and Exo. It offers comprehensive support for chat interactions, streaming conversations, text-to-speech, document analysis, and vision processing.
|
|
|
|
|
[](https://www.npmjs.com/package/@push.rocks/smartai)
|
|
|
|
|
[](https://www.typescriptlang.org/)
|
|
|
|
|
[](https://opensource.org/licenses/MIT)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## 🎯 Why SmartAI?
|
|
|
|
|
|
|
|
|
|
- **🔌 Universal Interface**: Write once, run with any AI provider. Switch between GPT-4, Claude, Llama, or Grok with a single line change.
|
|
|
|
|
- **🛡️ Type-Safe**: Full TypeScript support with comprehensive type definitions for all operations
|
|
|
|
|
- **🌊 Streaming First**: Built for real-time applications with native streaming support
|
|
|
|
|
- **🎨 Multi-Modal**: Seamlessly work with text, images, audio, and documents
|
|
|
|
|
- **🏠 Local & Cloud**: Support for both cloud providers and local models via Ollama
|
|
|
|
|
- **⚡ Zero Lock-In**: Your code remains portable across all AI providers
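
As a concrete sketch of that single-line switch (`ask` is a hypothetical helper; every provider exposes the same `chat()` shape):

```typescript
type ChatProvider = {
  chat: (opts: {
    systemMessage: string;
    userMessage: string;
    messageHistory: { role: string; content: string }[];
  }) => Promise<{ message: string }>;
};

// Hypothetical helper: works with any SmartAi provider unchanged.
async function ask(provider: ChatProvider, question: string): Promise<string> {
  const res = await provider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: question,
    messageHistory: []
  });
  return res.message;
}

// Switching providers is a one-line change:
// await ask(smartAi.openaiProvider, 'Hello!');
// await ask(smartAi.anthropicProvider, 'Hello!');
```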
|
|
|
|
|
|
|
|
|
|
## 🚀 Quick Start
|
|
|
|
|
|
|
|
|
|
```bash
|
|
|
|
|
# Using pnpm (recommended)
pnpm install @push.rocks/smartai

# Or using npm
npm install @push.rocks/smartai
|
|
|
|
|
```
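
Then initialize with the providers you need and start the instance (the Usage section below shows the full set of options):

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Initialize with your favorite providers
const ai = new SmartAi({
  openaiToken: 'sk-...',
  anthropicToken: 'sk-ant-...'
});

await ai.start();
```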
|
|
|
|
|
|
|
|
|
|
## Usage
|
|
|
|
|
|
|
|
|
|
SmartAi provides a clean, consistent API across all supported AI providers. This documentation covers all features with practical examples for each provider and capability.
|
|
|
|
|
|
|
|
|
|
### Initialization
|
|
|
|
|
|
|
|
|
|
First, initialize SmartAi with the API tokens and configuration for the providers you want to use:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
import { SmartAi } from '@push.rocks/smartai';
|
|
|
|
|
|
|
|
|
|
const smartAi = new SmartAi({
|
|
|
|
|
// OpenAI - for GPT models, DALL-E, and TTS
|
|
|
|
|
openaiToken: 'your-openai-api-key',
|
|
|
|
|
|
|
|
|
|
// Anthropic - for Claude models
|
|
|
|
|
anthropicToken: 'your-anthropic-api-key',
|
|
|
|
|
|
|
|
|
|
// Perplexity - for research-focused AI
|
|
|
|
|
perplexityToken: 'your-perplexity-api-key',
|
|
|
|
|
|
|
|
|
|
// Groq - for fast inference
|
|
|
|
|
groqToken: 'your-groq-api-key',
|
|
|
|
|
|
|
|
|
|
// XAI - for Grok models
|
|
|
|
|
xaiToken: 'your-xai-api-key',
|
|
|
|
|
|
|
|
|
|
// Ollama - for local models
|
|
|
|
|
ollama: {
|
|
|
|
|
baseUrl: 'http://localhost:11434',
|
|
|
|
|
model: 'llama2', // default model for chat
|
|
|
|
|
visionModel: 'llava' // default model for vision
|
|
|
|
|
},
|
|
|
|
|
|
|
|
|
|
// Exo - for distributed inference
|
|
|
|
|
exo: {
|
|
|
|
|
baseUrl: 'http://localhost:8080/v1',
|
|
|
|
|
apiKey: 'your-exo-api-key'
|
|
|
|
|
  }
});
|
|
|
|
|
|
|
|
|
|
// Start the SmartAi instance
|
|
|
|
|
await smartAi.start();
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## 📊 Provider Capabilities Matrix

Choose the right provider for your use case:

| Provider | Chat | Streaming | TTS | Vision | Documents | Highlights |
|----------|:----:|:---------:|:---:|:------:|:---------:|------------|
| **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | • GPT-4, DALL-E 3<br>• Industry standard<br>• Most features |
| **Anthropic** | ✅ | ✅ | ❌ | ✅ | ✅ | • Claude 3 Opus<br>• Superior reasoning<br>• 200k context |
| **Ollama** | ✅ | ✅ | ❌ | ✅ | ✅ | • 100% local<br>• Privacy-first<br>• No API costs |
| **XAI** | ✅ | ✅ | ❌ | ❌ | ✅ | • Grok models<br>• Real-time data<br>• Uncensored |
| **Perplexity** | ✅ | ✅ | ❌ | ❌ | ❌ | • Web-aware<br>• Research-focused<br>• Citations |
| **Groq** | ✅ | ✅ | ❌ | ❌ | ❌ | • 10x faster<br>• LPU inference<br>• Low latency |
| **Exo** | ✅ | ✅ | ❌ | ❌ | ❌ | • Distributed<br>• P2P compute<br>• Decentralized |
|
|
|
|
|
|
|
|
|
|
## 🎮 Core Features
|
|
|
|
|
|
|
|
|
|
### 💬 Universal Chat Interface
|
|
|
|
|
|
|
|
|
|
SmartAi provides both synchronous and streaming chat capabilities across all supported providers.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Simple request-response interactions with any provider:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
// OpenAI Example
|
|
|
|
|
const openAiResponse = await smartAi.openaiProvider.chat({
|
|
|
|
|
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
console.log(openAiResponse.message); // "The capital of France is Paris."
|
|
|
|
|
|
|
|
|
|
// Anthropic Example
|
|
|
|
|
const anthropicResponse = await smartAi.anthropicProvider.chat({
|
|
|
|
|
systemMessage: 'You are a knowledgeable historian.',
|
|
|
|
|
userMessage: 'Tell me about the French Revolution',
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
console.log(anthropicResponse.message);
|
|
|
|
|
|
|
|
|
|
// Using message history for context
|
|
|
|
|
const contextualResponse = await smartAi.openaiProvider.chat({
|
|
|
|
|
systemMessage: 'You are a math tutor.',
|
|
|
|
|
userMessage: 'What about multiplication?',
|
|
|
|
|
messageHistory: [
|
|
|
|
|
{ role: 'user', content: 'Can you teach me math?' },
|
|
|
|
|
{ role: 'assistant', content: 'Of course! What would you like to learn?' }
|
|
|
|
|
]
|
|
|
|
|
});
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
The same API works identically across all providers:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
|
|
|
|
|
// Use GPT-4 for complex reasoning
|
|
|
|
|
const gptResponse = await ai.openaiProvider.chat({
|
|
|
|
|
  systemMessage: 'You are an expert physicist.',
|
|
|
|
|
userMessage: 'Explain the implications of quantum entanglement',
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
// Use Claude for safety-critical applications
|
|
|
|
|
const claudeResponse = await ai.anthropicProvider.chat({
|
|
|
|
|
systemMessage: 'You are a medical advisor.',
|
|
|
|
|
userMessage: 'Review this patient data for concerns',
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
// Use Groq for lightning-fast responses
|
|
|
|
|
const groqResponse = await ai.groqProvider.chat({
|
|
|
|
|
systemMessage: 'You are a code reviewer.',
|
|
|
|
|
userMessage: 'Quick! Find the bug in this code: ...',
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### 🌊 Real-Time Streaming
|
|
|
|
|
|
|
|
|
|
Build responsive chat interfaces with token-by-token streaming:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
// Create an input stream carrying your message
const { readable, writable } = new TransformStream();
const writer = writable.getWriter();

const encoder = new TextEncoder();
await writer.write(encoder.encode(JSON.stringify({
  role: 'user',
  content: 'Write a haiku about programming'
})));
await writer.close();

// Create a chat stream
const stream = await ai.openaiProvider.chatStream(readable);
const reader = stream.getReader();
|
|
|
|
|
|
|
|
|
|
// Display responses as they arrive
|
|
|
|
|
while (true) {
|
|
|
|
|
const { done, value } = await reader.read();
|
|
|
|
|
if (done) break;
|
|
|
|
|
  process.stdout.write(value); // print each chunk as it arrives / update the UI in real time
|
|
|
|
|
}
|
|
|
|
|
```
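
If you also want the complete reply as one string (for logging or post-processing), here is a small helper sketch, assuming the stream yields text chunks as in the loop above:

```typescript
async function collectStream(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;
  }
  return text;
}
```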
|
|
|
|
|
|
|
|
|
|
### 🎙️ Text-to-Speech

Convert text to natural-sounding speech (currently supported by OpenAI):
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
import * as fs from 'fs';
|
|
|
|
|
|
|
|
|
|
// Generate speech from text
|
|
|
|
|
const audioStream = await smartAi.openaiProvider.audio({
|
|
|
|
|
message: 'Hello world! This is a test of the text-to-speech system.'
|
|
|
|
|
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Save to file
|
|
|
|
|
const writeStream = fs.createWriteStream('output.mp3');
|
|
|
|
|
audioStream.pipe(writeStream);
|
|
|
|
|
// Or process audio chunks directly in your application
audioStream.on('data', (chunk) => {
  // process each chunk as it arrives, e.g. stream it to speakers
});
|
|
|
|
|
```
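
The audio stream behaves like a regular Node.js readable stream (it supports `pipe` and `'data'` events above), so you can wait for the write to finish — a small sketch using Node's `events.once`:

```typescript
import * as fs from 'fs';
import { once } from 'events';

const out = fs.createWriteStream('output.mp3');
audioStream.pipe(out);
await once(out, 'finish'); // resolves once the file is fully written
```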
|
|
|
|
|
|
|
|
|
|
### 👁️ Vision Analysis
|
|
|
|
|
|
|
|
|
|
Understand images with multiple providers:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
import * as fs from 'fs';

const image = fs.readFileSync('product-photo.jpg');
|
|
|
|
|
|
|
|
|
|
// OpenAI: General purpose vision
|
|
|
|
|
const gptVision = await ai.openaiProvider.vision({
|
|
|
|
|
image,
|
|
|
|
|
prompt: 'Describe this product and suggest marketing angles'
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Anthropic: Detailed analysis
|
|
|
|
|
const claudeVision = await ai.anthropicProvider.vision({
|
|
|
|
|
image,
|
|
|
|
|
prompt: 'Identify any safety concerns or defects'
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Ollama: Private, local analysis
|
|
|
|
|
const ollamaVision = await ai.ollamaProvider.vision({
|
|
|
|
|
image,
|
|
|
|
|
prompt: 'Extract all text and categorize the content'
|
|
|
|
|
});
|
|
|
|
|
```
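
Since `vision()` is just an async call, batching over many images is straightforward — a sketch assuming a hypothetical local `./images` folder:

```typescript
import * as fs from 'fs';
import * as path from 'path';

const files = fs.readdirSync('./images').filter((f) => f.endsWith('.jpg'));

const descriptions = await Promise.all(
  files.map((file) =>
    ai.openaiProvider.vision({
      image: fs.readFileSync(path.join('./images', file)),
      prompt: 'Describe this image in one sentence'
    })
  )
);
```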
|
|
|
|
|
|
|
|
|
|
### 📄 Document Intelligence

Process and analyze PDF documents with AI:

```typescript
import * as fs from 'fs';

// Read a PDF document
const pdfBuffer = fs.readFileSync('document.pdf');

// Analyze with OpenAI
const openAiAnalysis = await smartAi.openaiProvider.document({
  systemMessage: 'You are a document analyst. Extract key information.',
  userMessage: 'Summarize this document and list the main points.',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
console.log('OpenAI Analysis:', openAiAnalysis.message);

// Analyze with Anthropic
const anthropicAnalysis = await smartAi.anthropicProvider.document({
  systemMessage: 'You are a legal expert.',
  userMessage: 'Identify any legal terms or implications in this document.',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
console.log('Anthropic Analysis:', anthropicAnalysis.message);

// Process multiple documents at once
const contract = fs.readFileSync('contract.pdf');
const invoice = fs.readFileSync('invoice.pdf');

const comparison = await smartAi.openaiProvider.document({
  systemMessage: 'You are a contract analyst.',
  userMessage: 'Compare these documents and highlight the key differences.',
  messageHistory: [],
  pdfDocuments: [contract, invoice]
});
console.log('Comparison:', comparison.message);
```
|
|
|
|
|
|
|
|
|
|
### 🔄 Persistent Conversations

Create persistent conversation sessions that maintain context across interactions:

```typescript
// Create a conversation with OpenAI
const conversation = smartAi.createConversation('openai');

// Set the system message
await conversation.setSystemMessage('You are a helpful coding assistant.');

// Get input and output streams
const inputWriter = conversation.getInputStreamWriter();
const outputStream = conversation.getOutputStream();

// Set up an output reader
const reader = outputStream.getReader();
const decoder = new TextDecoder();

// Send a message
await inputWriter.write('How do I create a REST API in Node.js?');

// Read responses
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('Assistant:', decoder.decode(value));
}

// Continue the conversation - the assistant remembers the entire context
await inputWriter.write('Can you show me an example with Express?');

// Conversations work with any provider
const anthropicConversation = smartAi.createConversation('anthropic');
const groqConversation = smartAi.createConversation('groq');
```
|
|
|
|
|
|
|
|
|
|
## 🚀 Real-World Examples
|
|
|
|
|
### Build a Customer Support Bot
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
|
|
|
|
|
const supportBot = new SmartAi({
|
|
|
|
|
  openaiToken: process.env.OPENAI_KEY, // fallback provider
  anthropicToken: process.env.ANTHROPIC_KEY // Claude for empathetic responses
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
async function handleCustomerQuery(query: string, history: ChatMessage[]) {
|
|
|
|
|
try {
|
|
|
|
|
const response = await supportBot.anthropicProvider.chat({
|
|
|
|
|
systemMessage: `You are a helpful customer support agent.
|
|
|
|
|
Be empathetic, professional, and solution-oriented.`,
|
|
|
|
|
userMessage: query,
|
|
|
|
|
messageHistory: history
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
return response.message;
|
|
|
|
|
} catch (error) {
|
|
|
|
|
// Fallback to another provider if needed
|
|
|
|
|
return await supportBot.openaiProvider.chat({...});
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Create a Code Review Assistant
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
const codeReviewer = new SmartAi({
|
|
|
|
|
groqToken: process.env.GROQ_KEY // Groq for speed
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
async function reviewCode(code: string, language: string) {
|
|
|
|
|
const startTime = Date.now();
|
|
|
|
|
|
|
|
|
|
const review = await codeReviewer.groqProvider.chat({
|
|
|
|
|
systemMessage: `You are a ${language} expert. Review code for:
|
|
|
|
|
- Security vulnerabilities
|
|
|
|
|
- Performance issues
|
|
|
|
|
- Best practices
|
|
|
|
|
- Potential bugs`,
|
|
|
|
|
userMessage: `Review this code:\n\n${code}`,
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
console.log(`Review completed in ${Date.now() - startTime}ms`);
|
|
|
|
|
return review.message;
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Build a Research Assistant
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
const researcher = new SmartAi({
|
|
|
|
|
perplexityToken: process.env.PERPLEXITY_KEY
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
async function research(topic: string) {
|
|
|
|
|
// Perplexity excels at web-aware research
|
|
|
|
|
const findings = await researcher.perplexityProvider.chat({
|
|
|
|
|
systemMessage: 'You are a research assistant. Provide factual, cited information.',
|
|
|
|
|
userMessage: `Research the latest developments in ${topic}`,
|
|
|
|
|
messageHistory: []
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
return findings.message;
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Local AI for Sensitive Data
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
const localAI = new SmartAi({
|
|
|
|
|
ollama: {
|
|
|
|
|
baseUrl: 'http://localhost:11434',
|
|
|
|
|
model: 'llama2',
|
|
|
|
|
visionModel: 'llava'
|
|
|
|
|
}
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Process sensitive documents without leaving your infrastructure
|
|
|
|
|
async function analyzeSensitiveDoc(pdfBuffer: Buffer) {
|
|
|
|
|
const analysis = await localAI.ollamaProvider.document({
|
|
|
|
|
systemMessage: 'Extract and summarize key information.',
|
|
|
|
|
userMessage: 'Analyze this confidential document',
|
|
|
|
|
messageHistory: [],
|
|
|
|
|
pdfDocuments: [pdfBuffer]
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
// Data never leaves your servers
|
|
|
|
|
return analysis.message;
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
## ⚡ Performance Tips
|
|
|
|
|
|
|
|
|
|
### 1. Provider Selection Strategy
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
class SmartAIRouter {
|
|
|
|
|
constructor(private ai: SmartAi) {}
|
|
|
|
|
|
|
|
|
|
async query(message: string, requirements: {
|
|
|
|
|
speed?: boolean;
|
|
|
|
|
accuracy?: boolean;
|
|
|
|
|
cost?: boolean;
|
|
|
|
|
privacy?: boolean;
|
|
|
|
|
}) {
|
|
|
|
|
if (requirements.privacy) {
|
|
|
|
|
return this.ai.ollamaProvider.chat({...}); // Local only
|
|
|
|
|
}
|
|
|
|
|
if (requirements.speed) {
|
|
|
|
|
return this.ai.groqProvider.chat({...}); // 10x faster
|
|
|
|
|
}
|
|
|
|
|
if (requirements.accuracy) {
|
|
|
|
|
return this.ai.anthropicProvider.chat({...}); // Best reasoning
|
|
|
|
|
}
|
|
|
|
|
// Default fallback
|
|
|
|
|
return this.ai.openaiProvider.chat({...});
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
```
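
Usage is then a single call — a sketch reusing the `smartAi` instance from the initialization section:

```typescript
const router = new SmartAIRouter(smartAi);

// Routed to the local Ollama provider because privacy is requested
const answer = await router.query('Summarize this internal memo', { privacy: true });
```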
|
|
|
|
|
|
|
|
|
|
### 2. Streaming for Large Responses

```typescript
// Don't wait for the entire response
// (createInputStream, updateUI, and processChunk are app-specific helpers)
async function streamResponse(userQuery: string) {
  const stream = await ai.openaiProvider.chatStream(createInputStream(userQuery));

  // Process tokens as they arrive
  for await (const chunk of stream) {
    updateUI(chunk); // immediate feedback
    await processChunk(chunk); // parallel processing
  }
}
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### 3. Parallel Multi-Provider Queries
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
|
|
|
|
|
// Get the best answer from multiple AIs
|
|
|
|
|
async function consensusQuery(question: string) {
|
|
|
|
|
const providers = [
|
|
|
|
|
ai.openaiProvider.chat({...}),
|
|
|
|
|
ai.anthropicProvider.chat({...}),
|
|
|
|
|
ai.perplexityProvider.chat({...})
|
|
|
|
|
];
|
|
|
|
|
|
|
|
|
|
const responses = await Promise.all(providers);
|
|
|
|
|
return synthesizeResponses(responses);
|
|
|
|
|
}
|
|
|
|
|
```
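
### 4. Reuse and Clean Up Instances

```typescript
// Reuse providers instead of creating new instances
const smartAi = new SmartAi({ /* config */ });
await smartAi.start(); // initialize once

// ... reuse smartAi across your application ...

// Clean up resources when done
await smartAi.stop();
```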
|
|
|
|
|
|
|
|
|
|
## 🛠️ Advanced Features
|
|
|
|
|
### Custom Streaming Transformations

Transform streaming responses on the fly by piping them through a `TransformStream`:
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
// Example: add timestamps to each chunk
const timestampTransform = new TransformStream({
  transform(chunk, controller) {
    const timestamp = new Date().toISOString();
    controller.enqueue(`[${timestamp}] ${chunk}`);
  }
});
|
|
|
|
|
// Add real-time translation
|
|
|
|
|
const translationStream = new TransformStream({
|
|
|
|
|
async transform(chunk, controller) {
|
|
|
|
|
const translated = await translateChunk(chunk);
|
|
|
|
|
controller.enqueue(translated);
|
|
|
|
|
}
|
|
|
|
|
});
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
const responseStream = await ai.openaiProvider.chatStream(input);
|
|
|
|
|
const translatedStream = responseStream.pipeThrough(translationStream);
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Error Handling & Fallbacks
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
class ResilientAI {
|
|
|
|
|
  private providers = ['openai', 'anthropic', 'groq'];

  constructor(private ai: SmartAi) {}
|
|
|
|
|
|
|
|
|
|
async query(opts: ChatOptions): Promise<ChatResponse> {
|
|
|
|
|
for (const provider of this.providers) {
|
|
|
|
|
try {
|
|
|
|
|
return await this.ai[`${provider}Provider`].chat(opts);
|
|
|
|
|
} catch (error) {
|
|
|
|
|
console.warn(`${provider} failed, trying next...`);
|
|
|
|
|
continue;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
throw new Error('All providers failed');
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
### Token Counting & Cost Management
|
|
|
|
|
|
|
|
|
|
```typescript
|
|
|
|
|
// Track usage across providers
|
|
|
|
|
class UsageTracker {
|
|
|
|
|
async trackedChat(provider: string, options: ChatOptions) {
|
|
|
|
|
const start = Date.now();
|
|
|
|
|
const response = await ai[`${provider}Provider`].chat(options);
|
|
|
|
|
|
|
|
|
|
const usage = {
|
|
|
|
|
provider,
|
|
|
|
|
duration: Date.now() - start,
|
|
|
|
|
inputTokens: estimateTokens(options),
|
|
|
|
|
outputTokens: estimateTokens(response.message)
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
await this.logUsage(usage);
|
|
|
|
|
return response;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
```
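
`estimateTokens` and `logUsage` in the sketch above are placeholders. A rough character-based estimate (roughly four characters per token for English text) can stand in until you wire up a real tokenizer:

```typescript
// Hypothetical helper: a crude token estimate (~4 characters per token).
function estimateTokens(input: string | { systemMessage?: string; userMessage?: string }): number {
  const text = typeof input === 'string'
    ? input
    : `${input.systemMessage ?? ''} ${input.userMessage ?? ''}`;
  return Math.ceil(text.length / 4);
}
```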
|
|
|
|
|
|
|
|
|
|
## 📦 Installation & Setup
|
|
|
|
|
|
|
|
|
|
### Prerequisites
|
|
|
|
|
|
|
|
|
|
- Node.js 16+
|
|
|
|
|
- TypeScript 4.5+
|
|
|
|
|
- API keys for your chosen providers
|
|
|
|
|
|
|
|
|
|
### Environment Setup
|
|
|
|
|
|
|
|
|
|
```bash
|
|
|
|
|
# Install
|
|
|
|
|
npm install @push.rocks/smartai
|
|
|
|
|
|
|
|
|
|
# Set up environment variables
|
|
|
|
|
export OPENAI_API_KEY=sk-...
|
|
|
|
|
export ANTHROPIC_API_KEY=sk-ant-...
|
|
|
|
|
export PERPLEXITY_API_KEY=pplx-...
|
|
|
|
|
# ... etc
|
|
|
|
|
```
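
A minimal sketch wiring those variables into SmartAi (option names as in the Initialization section above):

```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: process.env.OPENAI_API_KEY,
  anthropicToken: process.env.ANTHROPIC_API_KEY,
  perplexityToken: process.env.PERPLEXITY_API_KEY
});

await smartAi.start();
```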
|
|
|
|
|
|
|
|
|
|
### TypeScript Configuration
|
|
|
|
|
|
|
|
|
|
```json
|
|
|
|
|
{
|
|
|
|
|
"compilerOptions": {
|
|
|
|
|
"target": "ES2022",
|
|
|
|
|
"module": "NodeNext",
|
|
|
|
|
"lib": ["ES2022"],
|
|
|
|
|
"strict": true,
|
|
|
|
|
"esModuleInterop": true,
|
|
|
|
|
"skipLibCheck": true
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
## 🎯 Choosing the Right Provider
|
|
|
|
|
|
|
|
|
|
| Use Case | Recommended Provider | Why |
|
|
|
|
|
|----------|---------------------|-----|
|
|
|
|
|
| **General Purpose** | OpenAI | Most features, stable, well-documented |
|
|
|
|
|
| **Complex Reasoning** | Anthropic | Superior logical thinking, safer outputs |
|
|
|
|
|
| **Research & Facts** | Perplexity | Web-aware, provides citations |
|
|
|
|
|
| **Speed Critical** | Groq | 10x faster inference, sub-second responses |
|
|
|
|
|
| **Privacy Critical** | Ollama | 100% local, no data leaves your servers |
|
|
|
|
|
| **Real-time Data** | XAI | Access to current information |
|
|
|
|
|
| **Cost Sensitive** | Ollama/Exo | Free (local) or distributed compute |
|
|
|
|
|
|
|
|
|
|
## 🤝 Contributing
|
|
|
|
|
|
|
|
|
|
SmartAI is open source and welcomes contributions! Visit our [GitHub repository](https://code.foss.global/push.rocks/smartai) to:
|
|
|
|
|
|
|
|
|
|
- Report issues
|
|
|
|
|
- Submit pull requests
|
|
|
|
|
- Request features
|
|
|
|
|
- Join discussions
|
|
|
|
|
|
|
|
|
|
## 📈 Roadmap
|
|
|
|
|
|
|
|
|
|
- [ ] Streaming function calls
|
|
|
|
|
- [ ] Image generation support
|
|
|
|
|
- [ ] Voice input processing
|
|
|
|
|
- [ ] Fine-tuning integration
|
|
|
|
|
- [ ] Embedding support
|
|
|
|
|
- [ ] Agent framework
|
|
|
|
|
- [ ] More providers (Cohere, AI21, etc.)
|
|
|
|
|
|
|
|
|
|
## License and Legal Information
|
|
|
|
|
|
|
|
|
|
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
|
|
|
|
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
|
|
|
|
|
|
|
|
|
|
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
|
|
|
|
|
|
|
|
|
|
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
|
|
|
|
|
|