# @push.rocks/smartai
SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
## Table of Contents
- Features
- Installation
- Supported AI Providers
- Quick Start
- Usage Examples
- Error Handling
- Development
- Contributing
- License and Legal Information
## Features
- **Unified API:** Integrate multiple AI providers through one consistent interface (see the sketch after this list).
- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
- **Document Processing:** Analyze PDFs and other documents using vision models.
- **Extensible:** Easily extend the library to support additional AI providers.
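Conceptually, every provider exposes the same call surface. The interface below is a hypothetical illustration distilled from the usage examples further down in this README; the library's actual type definitions may differ:

```typescript
// Illustrative only: not the library's actual type definitions.
interface ChatOptions {
  systemMessage: string;
  userMessage: string;
  messageHistory: Array<{ role: string; content: string }>;
}

interface AiProvider {
  chat(options: ChatOptions): Promise<{ message: string }>;
  chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>>;
  vision(options: { image: Uint8Array; prompt: string }): Promise<string>;
  document(options: ChatOptions & { pdfDocuments: Uint8Array[] }): Promise<unknown>;
}
```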
## Installation
To install SmartAi, run the following command:
```bash
npm install @push.rocks/smartai
```
This will add the package to your project’s dependencies.
## Supported AI Providers
SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:
### OpenAI

- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
- **Configuration Example:**

```typescript
openaiToken: 'your-openai-token'
```
### X.AI

- **Models:** Grok-2-latest
- **Features:** Chat, Streaming, Document Processing
- **Configuration Example:**

```typescript
xaiToken: 'your-xai-token'
```
### Anthropic

- **Models:** Claude-3-opus-20240229
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
anthropicToken: 'your-anthropic-token'
```
### Perplexity

- **Models:** Mixtral-8x7b-instruct
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
perplexityToken: 'your-perplexity-token'
```
### Groq

- **Models:** Llama-3.3-70b-versatile
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
groqToken: 'your-groq-token'
```
### Ollama

- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
ollama: {
  baseUrl: 'http://localhost:11434', // optional
  model: 'llama2',                   // optional
  visionModel: 'llava'               // optional, used for vision and document tasks
}
```
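If you use Ollama, the server must be running locally and the referenced models must already be available, for example via `ollama pull llama2` and `ollama pull llava`.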
## Quick Start
Initialize SmartAi with the provider configurations you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  xaiToken: 'your-xai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2'
  }
});

await smartAi.start();
```
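You only need to configure the providers you actually call; the rest can be omitted. For example, a setup using just OpenAI, reading the token from an environment variable (the variable name here is only an example) rather than hard-coding it:

```typescript
const openaiToken = process.env.OPENAI_API_KEY; // example variable name
if (!openaiToken) {
  throw new Error('OPENAI_API_KEY is not set');
}

const smartAi = new SmartAi({ openaiToken });
await smartAi.start();
```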
## Usage Examples
### Chat Interactions

**Synchronous Chat:**
```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // include previous conversation messages if applicable
});

console.log(response.message);
```
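To carry a conversation across turns, pass the earlier exchange in `messageHistory`. A sketch, assuming history entries use the same `{ role, content }` shape as the streaming example below:

```typescript
const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And what is its population?',
  messageHistory: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: 'The capital of France is Paris.' }
  ]
});

console.log(followUp.message);
```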
### Streaming Chat

**Real-Time Streaming:**
```typescript
const textEncoder = new TextEncoder();

// Create a transform stream for sending and receiving data
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};

// Messages are sent as newline-delimited JSON
await writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Start streaming the response
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value);
}
```
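In practice you may want to hide that plumbing behind a small helper that takes a prompt and reports each chunk through a callback. A sketch built on the same `chatStream()` call, assuming the response stream yields text chunks as the example above implies:

```typescript
async function streamChat(prompt: string, onChunk: (chunk: string) => void) {
  const { writable, readable } = new TransformStream();
  const writer = writable.getWriter();

  // Send the user message as newline-delimited JSON, as above.
  await writer.write(
    new TextEncoder().encode(JSON.stringify({ role: 'user', content: prompt }) + '\n')
  );

  const reader = (await smartAi.openaiProvider.chatStream(readable)).getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(value);
  }
}

await streamChat('Tell me a story about a brave knight', (chunk) => process.stdout.write(chunk));
```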
### Audio Generation
Generate audio (supported by providers like OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Process the audio stream, for example play it back or save it to a file.
```
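To save the result to disk, pipe the stream to a file. The concrete stream type is not specified here, so the sketch below assumes a Node.js `Readable`; if your runtime hands you a web `ReadableStream` instead, convert it first with `Readable.fromWeb`:

```typescript
import { createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';

// Persist the generated speech; 'output.mp3' is a placeholder filename.
await pipeline(audioStream, createWriteStream('output.mp3'));
```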
### Document Processing
Analyze and extract key information from documents:
```typescript
// Example using OpenAI
const documentResult = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
});
```
Other providers (e.g., Ollama and Anthropic) follow a similar pattern:
```typescript
// Using Ollama for document processing
const ollamaResult = await smartAi.ollamaProvider.document({
  systemMessage: 'You are a document analysis assistant',
  userMessage: 'Extract key information from this document',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});

// Using Anthropic for document processing
const anthropicResult = await smartAi.anthropicProvider.document({
  systemMessage: 'Analyze the document',
  userMessage: 'Please extract the main points',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
```
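Each of these calls expects `pdfDocuments` to contain raw PDF bytes. In Node.js you might load a file like this (the path is a placeholder):

```typescript
import { promises as fs } from 'fs';

// fs.readFile returns a Buffer, which is a Uint8Array subclass.
const pdfBuffer = new Uint8Array(await fs.readFile('./document.pdf'));
```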
### Vision Processing
Analyze images with vision capabilities:
```typescript
// Using OpenAI GPT-4 Vision
const imageDescription = await smartAi.openaiProvider.vision({
  image: imageBuffer, // Uint8Array containing image data
  prompt: 'What do you see in this image?'
});

// Using Ollama for vision tasks
const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Analyze this image in detail'
});

// Using Anthropic for vision analysis
const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
  image: imageBuffer,
  prompt: 'Describe the contents of this image'
});
```
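Because every provider exposes the same `vision()` signature, you can also fan a single prompt out to several providers and compare the answers:

```typescript
const visionProviders = [
  smartAi.openaiProvider,
  smartAi.ollamaProvider,
  smartAi.anthropicProvider
];

for (const provider of visionProviders) {
  const result = await provider.vision({
    image: imageBuffer,
    prompt: 'Describe the contents of this image'
  });
  console.log(result);
}
```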
## Error Handling
Always wrap API calls in try-catch blocks to manage errors effectively:
```typescript
try {
  const response = await smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  });
  console.log(response.message);
} catch (error: any) {
  console.error('AI provider error:', error.message);
}
```
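For transient failures such as rate limits or network errors, you can wrap any provider call in a small retry helper. This is a generic sketch, not part of the SmartAi API; the attempt count and backoff delays are arbitrary:

```typescript
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Exponential backoff between attempts: 1s, 2s, 4s, ...
      if (attempt < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

const response = await withRetry(() =>
  smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  })
);
```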
## Development

### Running Tests
To run the test suite, use the following command:
```bash
npm run test
```
Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
### Building the Project
Compile the TypeScript code and build the package using:
```bash
npm run build
```
This command prepares the library for distribution.
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch:

   ```bash
   git checkout -b feature/my-feature
   ```

3. Commit your changes with clear messages:

   ```bash
   git commit -m 'Add new feature'
   ```

4. Push your branch to your fork:

   ```bash
   git push origin feature/my-feature
   ```

5. Open a Pull Request with a detailed description of your changes.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.