# @push.rocks/smartai

**One API to rule them all** 🚀

[npm](https://www.npmjs.com/package/@push.rocks/smartai)
[TypeScript](https://www.typescriptlang.org/)
[MIT License](https://opensource.org/licenses/MIT)

SmartAI unifies the world's leading AI providers (OpenAI, Anthropic, Perplexity, Ollama, Groq, XAI, and Exo) under a single, elegant TypeScript interface. Build AI applications at lightning speed without vendor lock-in.

## 🎯 Why SmartAI?
- **🔌 Universal Interface**: Write once, run with any AI provider. Switch between GPT-4, Claude, Llama, or Grok with a single line change.
- **🛡️ Type-Safe**: Full TypeScript support with comprehensive type definitions for all operations
- **🌊 Streaming First**: Built for real-time applications with native streaming support
- **🎨 Multi-Modal**: Seamlessly work with text, images, audio, and documents
- **🏠 Local & Cloud**: Support for both cloud providers and local models via Ollama
- **⚡ Zero Lock-In**: Your code remains portable across all AI providers

## 🚀 Quick Start

```bash
npm install @push.rocks/smartai
```
```typescript
import { SmartAi } from '@push.rocks/smartai';

// Initialize with your favorite providers
const ai = new SmartAi({
  openaiToken: 'sk-...',
  anthropicToken: 'sk-ant-...'
});

await ai.start();

// Same API, multiple providers
const response = await ai.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Explain quantum computing in simple terms',
  messageHistory: []
});
```
## 📊 Provider Capabilities Matrix

Choose the right provider for your use case:

| Provider | Chat | Streaming | TTS | Vision | Documents | Highlights |
|----------|:----:|:---------:|:---:|:------:|:---------:|------------|
| **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | • GPT-4, DALL-E 3<br>• Industry standard<br>• Most features |
| **Anthropic** | ✅ | ✅ | ❌ | ✅ | ✅ | • Claude 3 Opus<br>• Superior reasoning<br>• 200k context |
| **Ollama** | ✅ | ✅ | ❌ | ✅ | ✅ | • 100% local<br>• Privacy-first<br>• No API costs |
| **XAI** | ✅ | ✅ | ❌ | ❌ | ✅ | • Grok models<br>• Real-time data<br>• Uncensored |
| **Perplexity** | ✅ | ✅ | ❌ | ❌ | ❌ | • Web-aware<br>• Research-focused<br>• Citations |
| **Groq** | ✅ | ✅ | ❌ | ❌ | ❌ | • 10x faster<br>• LPU inference<br>• Low latency |
| **Exo** | ✅ | ✅ | ❌ | ❌ | ❌ | • Distributed<br>• P2P compute<br>• Decentralized |
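The matrix above can also be encoded as a small capability map, handy for guarding feature calls at runtime. The provider keys and feature names below simply mirror the table; they are illustrative and not an official SmartAI API:

```typescript
type Feature = 'chat' | 'streaming' | 'tts' | 'vision' | 'documents';

// Capabilities as listed in the matrix above
const capabilities: Record<string, Feature[]> = {
  openai: ['chat', 'streaming', 'tts', 'vision', 'documents'],
  anthropic: ['chat', 'streaming', 'vision', 'documents'],
  ollama: ['chat', 'streaming', 'vision', 'documents'],
  xai: ['chat', 'streaming', 'documents'],
  perplexity: ['chat', 'streaming'],
  groq: ['chat', 'streaming'],
  exo: ['chat', 'streaming'],
};

// Returns false for unknown providers or unsupported features
function supports(provider: string, feature: Feature): boolean {
  return capabilities[provider]?.includes(feature) ?? false;
}
```

With a check like `supports('anthropic', 'tts')` you can fail fast or route to a different provider before making a call that would error.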
## 🎮 Core Features

### 💬 Universal Chat Interface

Works identically across all providers:

```typescript
// Use GPT-4 for complex reasoning
const gptResponse = await ai.openaiProvider.chat({
  systemMessage: 'You are an expert physicist.',
  userMessage: 'Explain the implications of quantum entanglement',
  messageHistory: []
});

// Use Claude for safety-critical applications
const claudeResponse = await ai.anthropicProvider.chat({
  systemMessage: 'You are a medical advisor.',
  userMessage: 'Review this patient data for concerns',
  messageHistory: []
});

// Use Groq for lightning-fast responses
const groqResponse = await ai.groqProvider.chat({
  systemMessage: 'You are a code reviewer.',
  userMessage: 'Quick! Find the bug in this code: ...',
  messageHistory: []
});
```
### 🌊 Real-Time Streaming

Build responsive chat interfaces with token-by-token streaming:

```typescript
// Create a chat stream
const stream = await ai.openaiProvider.chatStream(inputStream);
const reader = stream.getReader();

// Display responses as they arrive
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Update UI in real-time
  process.stdout.write(value);
}
```
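The `inputStream` above has to come from somewhere. Here is a minimal sketch using the standard Web Streams API, assuming `chatStream` accepts a `ReadableStream` of message strings (check the SmartAI typings for the exact shape it expects):

```typescript
// Wrap a single user message in a ReadableStream of strings
function createInputStream(message: string): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      controller.enqueue(message);
      controller.close();
    },
  });
}

const inputStream = createInputStream('Tell me a story about a brave robot');
```

For multi-turn input you would enqueue additional messages instead of closing immediately, or use the conversation API shown further below.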
### 🎙️ Text-to-Speech

Generate natural voices with OpenAI:

```typescript
const audioStream = await ai.openaiProvider.audio({
  message: 'Welcome to the future of AI development!'
});

// Stream directly to speakers
audioStream.pipe(speakerOutput);

// Or save to file
audioStream.pipe(fs.createWriteStream('welcome.mp3'));
```
### 👁️ Vision Analysis

Understand images with multiple providers:

```typescript
const image = fs.readFileSync('product-photo.jpg');

// OpenAI: General purpose vision
const gptVision = await ai.openaiProvider.vision({
  image,
  prompt: 'Describe this product and suggest marketing angles'
});

// Anthropic: Detailed analysis
const claudeVision = await ai.anthropicProvider.vision({
  image,
  prompt: 'Identify any safety concerns or defects'
});

// Ollama: Private, local analysis
const ollamaVision = await ai.ollamaProvider.vision({
  image,
  prompt: 'Extract all text and categorize the content'
});
```
### 📄 Document Intelligence

Extract insights from PDFs with AI:

```typescript
const contract = fs.readFileSync('contract.pdf');
const invoice = fs.readFileSync('invoice.pdf');

// Analyze documents
const analysis = await ai.openaiProvider.document({
  systemMessage: 'You are a legal expert.',
  userMessage: 'Compare these documents and highlight key differences',
  messageHistory: [],
  pdfDocuments: [contract, invoice]
});

// Multi-document analysis
const taxDocs = [
  fs.readFileSync('form-1099.pdf'),
  fs.readFileSync('w2.pdf'),
  fs.readFileSync('receipts.pdf')
];
const taxAnalysis = await ai.anthropicProvider.document({
  systemMessage: 'You are a tax advisor.',
  userMessage: 'Prepare a tax summary from these documents',
  messageHistory: [],
  pdfDocuments: taxDocs
});
```
### 🔄 Persistent Conversations

Maintain context across interactions:

```typescript
// Create a coding assistant conversation
const assistant = ai.createConversation('openai');
await assistant.setSystemMessage('You are an expert TypeScript developer.');

// First question
const inputWriter = assistant.getInputStreamWriter();
await inputWriter.write('How do I implement a singleton pattern?');

// Continue the conversation
await inputWriter.write('Now show me how to make it thread-safe');

// The assistant remembers the entire context
```
## 🚀 Real-World Examples

### Build a Customer Support Bot

```typescript
const supportBot = new SmartAi({
  anthropicToken: process.env.ANTHROPIC_KEY // Claude for empathetic responses
});

async function handleCustomerQuery(query: string, history: ChatMessage[]) {
  try {
    const response = await supportBot.anthropicProvider.chat({
      systemMessage: `You are a helpful customer support agent.
        Be empathetic, professional, and solution-oriented.`,
      userMessage: query,
      messageHistory: history
    });
    return response.message;
  } catch (error) {
    // Fallback to another provider if needed
    const fallback = await supportBot.openaiProvider.chat({...});
    return fallback.message;
  }
}
```
### Create a Code Review Assistant

```typescript
const codeReviewer = new SmartAi({
  groqToken: process.env.GROQ_KEY // Groq for speed
});

async function reviewCode(code: string, language: string) {
  const startTime = Date.now();
  const review = await codeReviewer.groqProvider.chat({
    systemMessage: `You are a ${language} expert. Review code for:
      - Security vulnerabilities
      - Performance issues
      - Best practices
      - Potential bugs`,
    userMessage: `Review this code:\n\n${code}`,
    messageHistory: []
  });
  console.log(`Review completed in ${Date.now() - startTime}ms`);
  return review.message;
}
```
### Build a Research Assistant

```typescript
const researcher = new SmartAi({
  perplexityToken: process.env.PERPLEXITY_KEY
});

async function research(topic: string) {
  // Perplexity excels at web-aware research
  const findings = await researcher.perplexityProvider.chat({
    systemMessage: 'You are a research assistant. Provide factual, cited information.',
    userMessage: `Research the latest developments in ${topic}`,
    messageHistory: []
  });
  return findings.message;
}
```
### Local AI for Sensitive Data

```typescript
const localAI = new SmartAi({
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  }
});

// Process sensitive documents without leaving your infrastructure
async function analyzeSensitiveDoc(pdfBuffer: Buffer) {
  const analysis = await localAI.ollamaProvider.document({
    systemMessage: 'Extract and summarize key information.',
    userMessage: 'Analyze this confidential document',
    messageHistory: [],
    pdfDocuments: [pdfBuffer]
  });
  // Data never leaves your servers
  return analysis.message;
}
```
## ⚡ Performance Tips

### 1. Provider Selection Strategy

```typescript
class SmartAIRouter {
  constructor(private ai: SmartAi) {}

  async query(message: string, requirements: {
    speed?: boolean;
    accuracy?: boolean;
    cost?: boolean;
    privacy?: boolean;
  }) {
    if (requirements.privacy) {
      return this.ai.ollamaProvider.chat({...}); // Local only
    }
    if (requirements.speed) {
      return this.ai.groqProvider.chat({...}); // 10x faster
    }
    if (requirements.accuracy) {
      return this.ai.anthropicProvider.chat({...}); // Best reasoning
    }
    // Default fallback
    return this.ai.openaiProvider.chat({...});
  }
}
```
### 2. Streaming for Large Responses

```typescript
// Don't wait for the entire response
async function streamResponse(userQuery: string) {
  const stream = await ai.openaiProvider.chatStream(createInputStream(userQuery));

  // Process tokens as they arrive
  for await (const chunk of stream) {
    updateUI(chunk);           // Immediate feedback
    await processChunk(chunk); // Further per-chunk processing
  }
}
```
### 3. Parallel Multi-Provider Queries

```typescript
// Get the best answer from multiple AIs
async function consensusQuery(question: string) {
  const providers = [
    ai.openaiProvider.chat({...}),
    ai.anthropicProvider.chat({...}),
    ai.perplexityProvider.chat({...})
  ];

  const responses = await Promise.all(providers);
  return synthesizeResponses(responses);
}
```
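`synthesizeResponses` is left to you. One simple strategy is a crude consensus: prefer the reply that overlaps most with the others. The sketch below is purely illustrative and assumes each response exposes a `message` string, as in the chat examples above:

```typescript
// Pick the reply that shares the most words with the others (rough consensus)
function synthesizeResponses(responses: { message: string }[]): string {
  const texts = responses.map((r) => r.message);
  const wordSets = texts.map((t) => new Set(t.toLowerCase().split(/\s+/)));

  let best = 0;
  let bestScore = -1;
  for (let i = 0; i < texts.length; i++) {
    let score = 0;
    for (let j = 0; j < texts.length; j++) {
      if (i === j) continue;
      for (const word of wordSets[i]) {
        if (wordSets[j].has(word)) score++;
      }
    }
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  }
  return texts[best];
}
```

For production use you would likely replace word overlap with embedding similarity or ask a judge model to rank the candidates.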
## 🛠️ Advanced Features

### Custom Streaming Transformations

```typescript
// Add real-time translation
const translationStream = new TransformStream({
  async transform(chunk, controller) {
    const translated = await translateChunk(chunk);
    controller.enqueue(translated);
  }
});

const responseStream = await ai.openaiProvider.chatStream(input);
const translatedStream = responseStream.pipeThrough(translationStream);
```
### Error Handling & Fallbacks

```typescript
class ResilientAI {
  private providers = ['openai', 'anthropic', 'groq'];

  constructor(private ai: SmartAi) {}

  async query(opts: ChatOptions): Promise<ChatResponse> {
    for (const provider of this.providers) {
      try {
        return await this.ai[`${provider}Provider`].chat(opts);
      } catch (error) {
        console.warn(`${provider} failed, trying next...`);
        continue;
      }
    }
    throw new Error('All providers failed');
  }
}
```
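Transient failures such as rate limits often succeed on retry, so a small backoff wrapper composes well with provider fallback. This helper is generic and not part of SmartAI:

```typescript
// Retry a flaky async call with exponential backoff before giving up
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 250ms, 500ms, 1000ms, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Wrap each provider call (`withRetry(() => ai.openaiProvider.chat(opts))`) so a provider only counts as failed after its retries are exhausted.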
### Token Counting & Cost Management

```typescript
// Track usage across providers
class UsageTracker {
  async trackedChat(provider: string, options: ChatOptions) {
    const start = Date.now();
    const response = await ai[`${provider}Provider`].chat(options);

    const usage = {
      provider,
      duration: Date.now() - start,
      inputTokens: estimateTokens(options),
      outputTokens: estimateTokens(response.message)
    };

    await this.logUsage(usage);
    return response;
  }
}
```
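`estimateTokens` above is a placeholder. Without a provider-specific tokenizer, a common rough heuristic is about four characters per token for English text. This is only an approximation; use the provider's real tokenizer when you need billing-accurate counts:

```typescript
// Rough heuristic: ~4 characters per token for English text.
// Accepts either a raw string or a chat-options-like object.
function estimateTokens(
  input: string | { systemMessage: string; userMessage: string }
): number {
  const text =
    typeof input === 'string'
      ? input
      : `${input.systemMessage} ${input.userMessage}`;
  return Math.ceil(text.length / 4);
}
```

Expect real counts to deviate noticeably for code, non-English text, and long words.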
## 📦 Installation & Setup

### Prerequisites

- Node.js 16+
- TypeScript 4.5+
- API keys for your chosen providers

### Environment Setup

```bash
# Install
npm install @push.rocks/smartai

# Set up environment variables
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export PERPLEXITY_API_KEY=pplx-...
# ... etc
```
### TypeScript Configuration

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "lib": ["ES2022"],
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```
## 🎯 Choosing the Right Provider

| Use Case | Recommended Provider | Why |
|----------|---------------------|-----|
| **General Purpose** | OpenAI | Most features, stable, well-documented |
| **Complex Reasoning** | Anthropic | Superior logical thinking, safer outputs |
| **Research & Facts** | Perplexity | Web-aware, provides citations |
| **Speed Critical** | Groq | 10x faster inference, sub-second responses |
| **Privacy Critical** | Ollama | 100% local, no data leaves your servers |
| **Real-time Data** | XAI | Access to current information |
| **Cost Sensitive** | Ollama/Exo | Free (local) or distributed compute |
## 🤝 Contributing

SmartAI is open source and welcomes contributions! Visit our [GitHub repository](https://code.foss.global/push.rocks/smartai) to:

- Report issues
- Submit pull requests
- Request features
- Join discussions
## 📈 Roadmap

- [ ] Streaming function calls
- [ ] Image generation support
- [ ] Voice input processing
- [ ] Fine-tuning integration
- [ ] Embedding support
- [ ] Agent framework
- [ ] More providers (Cohere, AI21, etc.)
## License and Legal Information

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.