# @push.rocks/smartai

**One API to rule them all** 🚀

[![npm version](https://img.shields.io/npm/v/@push.rocks/smartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
[![TypeScript](https://img.shields.io/badge/TypeScript-5.x-blue.svg)](https://www.typescriptlang.org/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

SmartAI unifies the world's leading AI providers - OpenAI, Anthropic, Perplexity, Ollama, Groq, XAI, and Exo - under a single, elegant TypeScript interface. Build AI applications at lightning speed without vendor lock-in.

## 🎯 Why SmartAI?

- **🔌 Universal Interface**: Write once, run with any AI provider. Switch between GPT-4, Claude, Llama, or Grok with a single line change.
- **🛡️ Type-Safe**: Full TypeScript support with comprehensive type definitions for all operations
- **🌊 Streaming First**: Built for real-time applications with native streaming support
- **🎨 Multi-Modal**: Seamlessly work with text, images, audio, and documents
- **🏠 Local & Cloud**: Support for both cloud providers and local models via Ollama
- **⚡ Zero Lock-In**: Your code remains portable across all AI providers

## 🚀 Quick Start

```bash
npm install @push.rocks/smartai
```

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Initialize with your favorite providers
const ai = new SmartAi({
  openaiToken: 'sk-...',
  anthropicToken: 'sk-ant-...'
});

await ai.start();

// Same API, multiple providers
const response = await ai.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Explain quantum computing in simple terms',
  messageHistory: []
});
```
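Because every provider exposes the same `chat()` call, switching models really is a one-line change. The sketch below sends the exact same request to Anthropic instead of OpenAI, reusing the `ai` instance from the Quick Start:

```typescript
// Same request as above - only the provider property changes.
const claudeResponse = await ai.anthropicProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Explain quantum computing in simple terms',
  messageHistory: []
});
```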
## 📊 Provider Capabilities Matrix

Choose the right provider for your use case:

| Provider | Chat | Streaming | TTS | Vision | Documents | Highlights |
|----------|:----:|:---------:|:---:|:------:|:---------:|------------|
| **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | • GPT-4, DALL-E 3<br>• Industry standard<br>• Most features |
| **Anthropic** | ✅ | ✅ | ❌ | ✅ | ✅ | • Claude 3 Opus<br>• Superior reasoning<br>• 200k context |
| **Ollama** | ✅ | ✅ | ❌ | ✅ | ✅ | • 100% local<br>• Privacy-first<br>• No API costs |
| **XAI** | ✅ | ✅ | ❌ | ❌ | ✅ | • Grok models<br>• Real-time data<br>• Uncensored |
| **Perplexity** | ✅ | ✅ | ❌ | ❌ | ❌ | • Web-aware<br>• Research-focused<br>• Citations |
| **Groq** | ✅ | ✅ | ❌ | ❌ | ❌ | • 10x faster<br>• LPU inference<br>• Low latency |
| **Exo** | ✅ | ✅ | ❌ | ❌ | ❌ | • Distributed<br>• P2P compute<br>• Decentralized |
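The matrix maps directly onto the constructor options, so you can wire up several providers in one instance and pick per request. The sketch below uses only option names that appear elsewhere in this README; treat the exact configuration shape as an assumption and check the package typings:

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Mix cloud and local providers in a single instance (sketch).
const ai = new SmartAi({
  openaiToken: process.env.OPENAI_API_KEY,         // chat, TTS, vision, documents
  anthropicToken: process.env.ANTHROPIC_API_KEY,   // chat, vision, documents
  perplexityToken: process.env.PERPLEXITY_API_KEY, // web-aware research
  groqToken: process.env.GROQ_KEY,                 // low-latency chat
  ollama: {                                        // 100% local models
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  }
});

await ai.start();
```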
## 🎮 Core Features

### 💬 Universal Chat Interface

Works identically across all providers:

```typescript
// Use GPT-4 for complex reasoning
const gptResponse = await ai.openaiProvider.chat({
  systemMessage: 'You are an expert physicist.',
  userMessage: 'Explain the implications of quantum entanglement',
  messageHistory: []
});

// Use Claude for safety-critical applications
const claudeResponse = await ai.anthropicProvider.chat({
  systemMessage: 'You are a medical advisor.',
  userMessage: 'Review this patient data for concerns',
  messageHistory: []
});

// Use Groq for lightning-fast responses
const groqResponse = await ai.groqProvider.chat({
  systemMessage: 'You are a code reviewer.',
  userMessage: 'Quick! Find the bug in this code: ...',
  messageHistory: []
});
```

### 🌊 Real-Time Streaming

Build responsive chat interfaces with token-by-token streaming:

```typescript
// Create a chat stream
const stream = await ai.openaiProvider.chatStream(inputStream);
const reader = stream.getReader();

// Display responses as they arrive
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Update UI in real-time
  process.stdout.write(value);
}
```
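When you also want the complete text once streaming has finished, the same reader loop can accumulate the chunks. A minimal sketch reusing the `inputStream` from the example above (its exact shape depends on how you feed messages into `chatStream`):

```typescript
// Accumulate the streamed chunks into one string (sketch).
async function collectChatStream(): Promise<string> {
  const stream = await ai.openaiProvider.chatStream(inputStream);
  const reader = stream.getReader();

  let fullText = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value) fullText += value; // each chunk is a token/segment of the reply
  }
  return fullText;
}
```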
### 🎙️ Text-to-Speech

Generate natural voices with OpenAI:

```typescript
const audioStream = await ai.openaiProvider.audio({
  message: 'Welcome to the future of AI development!'
});

// Stream directly to speakers
audioStream.pipe(speakerOutput);

// Or save to file
audioStream.pipe(fs.createWriteStream('welcome.mp3'));
```

### 👁️ Vision Analysis

Understand images with multiple providers:

```typescript
const image = fs.readFileSync('product-photo.jpg');

// OpenAI: General purpose vision
const gptVision = await ai.openaiProvider.vision({
  image,
  prompt: 'Describe this product and suggest marketing angles'
});

// Anthropic: Detailed analysis
const claudeVision = await ai.anthropicProvider.vision({
  image,
  prompt: 'Identify any safety concerns or defects'
});

// Ollama: Private, local analysis
const ollamaVision = await ai.ollamaProvider.vision({
  image,
  prompt: 'Extract all text and categorize the content'
});
```

### 📄 Document Intelligence

Extract insights from PDFs with AI:

```typescript
const contract = fs.readFileSync('contract.pdf');
const invoice = fs.readFileSync('invoice.pdf');

// Analyze documents
const analysis = await ai.openaiProvider.document({
  systemMessage: 'You are a legal expert.',
  userMessage: 'Compare these documents and highlight key differences',
  messageHistory: [],
  pdfDocuments: [contract, invoice]
});

// Multi-document analysis
const taxDocs = [form1099, w2, receipts];
const taxAnalysis = await ai.anthropicProvider.document({
  systemMessage: 'You are a tax advisor.',
  userMessage: 'Prepare a tax summary from these documents',
  messageHistory: [],
  pdfDocuments: taxDocs
});
```

### 🔄 Persistent Conversations

Maintain context across interactions:

```typescript
// Create a coding assistant conversation
const assistant = ai.createConversation('openai');
await assistant.setSystemMessage('You are an expert TypeScript developer.');

// First question
const inputWriter = assistant.getInputStreamWriter();
await inputWriter.write('How do I implement a singleton pattern?');

// Continue the conversation
await inputWriter.write('Now show me how to make it thread-safe');

// The assistant remembers the entire context
```

## 🚀 Real-World Examples

### Build a Customer Support Bot

```typescript
const supportBot = new SmartAi({
  anthropicToken: process.env.ANTHROPIC_KEY // Claude for empathetic responses
});

async function handleCustomerQuery(query: string, history: ChatMessage[]) {
  try {
    const response = await supportBot.anthropicProvider.chat({
      systemMessage: `You are a helpful customer support agent.
                      Be empathetic, professional, and solution-oriented.`,
      userMessage: query,
      messageHistory: history
    });

    return response.message;
  } catch (error) {
    // Fallback to another provider if needed
    return await supportBot.openaiProvider.chat({...});
  }
}
```

### Create a Code Review Assistant

```typescript
const codeReviewer = new SmartAi({
  groqToken: process.env.GROQ_KEY // Groq for speed
});

async function reviewCode(code: string, language: string) {
  const startTime = Date.now();

  const review = await codeReviewer.groqProvider.chat({
    systemMessage: `You are a ${language} expert. Review code for:
                    - Security vulnerabilities
                    - Performance issues
                    - Best practices
                    - Potential bugs`,
    userMessage: `Review this code:\n\n${code}`,
    messageHistory: []
  });

  console.log(`Review completed in ${Date.now() - startTime}ms`);
  return review.message;
}
```

### Build a Research Assistant

```typescript
const researcher = new SmartAi({
  perplexityToken: process.env.PERPLEXITY_KEY
});

async function research(topic: string) {
  // Perplexity excels at web-aware research
  const findings = await researcher.perplexityProvider.chat({
    systemMessage: 'You are a research assistant. Provide factual, cited information.',
    userMessage: `Research the latest developments in ${topic}`,
    messageHistory: []
  });

  return findings.message;
}
```

### Local AI for Sensitive Data

```typescript
const localAI = new SmartAi({
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  }
});

// Process sensitive documents without leaving your infrastructure
async function analyzeSensitiveDoc(pdfBuffer: Buffer) {
  const analysis = await localAI.ollamaProvider.document({
    systemMessage: 'Extract and summarize key information.',
    userMessage: 'Analyze this confidential document',
    messageHistory: [],
    pdfDocuments: [pdfBuffer]
  });

  // Data never leaves your servers
  return analysis.message;
}
```

## ⚡ Performance Tips

### 1. Provider Selection Strategy

```typescript
class SmartAIRouter {
  constructor(private ai: SmartAi) {}

  async query(message: string, requirements: {
    speed?: boolean;
    accuracy?: boolean;
    cost?: boolean;
    privacy?: boolean;
  }) {
    if (requirements.privacy) {
      return this.ai.ollamaProvider.chat({...}); // Local only
    }
    if (requirements.speed) {
      return this.ai.groqProvider.chat({...}); // 10x faster
    }
    if (requirements.accuracy) {
      return this.ai.anthropicProvider.chat({...}); // Best reasoning
    }
    // Default fallback
    return this.ai.openaiProvider.chat({...});
  }
}
```

### 2. Streaming for Large Responses

```typescript
// Don't wait for the entire response
async function streamResponse(userQuery: string) {
  const stream = await ai.openaiProvider.chatStream(createInputStream(userQuery));

  // Process tokens as they arrive
  for await (const chunk of stream) {
    updateUI(chunk);           // Immediate feedback
    await processChunk(chunk); // Parallel processing
  }
}
```

### 3. Parallel Multi-Provider Queries

```typescript
// Get the best answer from multiple AIs
async function consensusQuery(question: string) {
  const providers = [
    ai.openaiProvider.chat({...}),
    ai.anthropicProvider.chat({...}),
    ai.perplexityProvider.chat({...})
  ];

  const responses = await Promise.all(providers);
  return synthesizeResponses(responses);
}
```
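`synthesizeResponses` in the snippet above is left to you. A deliberately simple placeholder could just label each provider's answer so a human - or another model - can compare them; voting, ranking, or a follow-up summarization call fits the same slot:

```typescript
// Hypothetical placeholder for synthesizeResponses: label and join the answers.
// Only the `.message` field used elsewhere in this README is assumed here.
function synthesizeResponses(responses: Array<{ message: string }>): string {
  return responses
    .map((response, index) => `Answer ${index + 1}:\n${response.message}`)
    .join('\n\n');
}
```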
## 🛠️ Advanced Features

### Custom Streaming Transformations

```typescript
// Add real-time translation
const translationStream = new TransformStream({
  async transform(chunk, controller) {
    const translated = await translateChunk(chunk);
    controller.enqueue(translated);
  }
});

const responseStream = await ai.openaiProvider.chatStream(input);
const translatedStream = responseStream.pipeThrough(translationStream);
```

### Error Handling & Fallbacks

```typescript
class ResilientAI {
  private providers = ['openai', 'anthropic', 'groq'];

  constructor(private ai: SmartAi) {}

  async query(opts: ChatOptions): Promise<{ message: string }> {
    for (const provider of this.providers) {
      try {
        return await this.ai[`${provider}Provider`].chat(opts);
      } catch (error) {
        console.warn(`${provider} failed, trying next...`);
        continue;
      }
    }
    throw new Error('All providers failed');
  }
}
```

### Token Counting & Cost Management

```typescript
// Track usage across providers
class UsageTracker {
  async trackedChat(provider: string, options: ChatOptions) {
    const start = Date.now();

    const response = await ai[`${provider}Provider`].chat(options);

    const usage = {
      provider,
      duration: Date.now() - start,
      inputTokens: estimateTokens(options),
      outputTokens: estimateTokens(response.message)
    };

    await this.logUsage(usage);
    return response;
  }

  private async logUsage(usage: Record<string, unknown>) {
    // Persist to your metrics store of choice
    console.log(usage);
  }
}
```
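`estimateTokens` above is not part of SmartAI - you bring your own. A rough character-based heuristic (roughly four characters per token for English-like text) is usually good enough for dashboards; for billing-grade numbers, use the provider's own tokenizer. A sketch that handles both the chat options object and the plain response string used above:

```typescript
// Rough token estimate: ~4 characters per token for English-like text (heuristic).
function estimateTokens(
  input: string | { systemMessage?: string; userMessage?: string }
): number {
  const text =
    typeof input === 'string'
      ? input
      : `${input.systemMessage ?? ''} ${input.userMessage ?? ''}`;
  return Math.ceil(text.length / 4);
}
```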
## 📦 Installation & Setup

### Prerequisites

- Node.js 16+
- TypeScript 4.5+
- API keys for your chosen providers

### Environment Setup

```bash
# Install
npm install @push.rocks/smartai

# Set up environment variables
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export PERPLEXITY_API_KEY=pplx-...
# ... etc
```

### TypeScript Configuration

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "lib": ["ES2022"],
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```

## 🎯 Choosing the Right Provider

| Use Case | Recommended Provider | Why |
|----------|---------------------|-----|
| **General Purpose** | OpenAI | Most features, stable, well-documented |
| **Complex Reasoning** | Anthropic | Superior logical thinking, safer outputs |
| **Research & Facts** | Perplexity | Web-aware, provides citations |
| **Speed Critical** | Groq | 10x faster inference, sub-second responses |
| **Privacy Critical** | Ollama | 100% local, no data leaves your servers |
| **Real-time Data** | XAI | Access to current information |
| **Cost Sensitive** | Ollama/Exo | Free (local) or distributed compute |

## 📈 Roadmap

- [ ] Streaming function calls
- [ ] Image generation support
- [ ] Voice input processing
- [ ] Fine-tuning integration
- [ ] Embedding support
- [ ] Agent framework
- [ ] More providers (Cohere, AI21, etc.)

## License and Legal Information

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.

### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen, HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.