# @push.rocks/smartai

Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat, streaming interactions, and audio responses.
## Install

To add @push.rocks/smartai to your project, run the following command in your terminal:

```bash
npm install @push.rocks/smartai
```

This command installs the package and adds it to your project's dependencies.
## Supported AI Providers

@push.rocks/smartai supports multiple AI providers, each with its own unique capabilities:
### OpenAI

- Models: GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- Features: Chat, Streaming, Audio Generation, Vision, Document Processing
- Configuration:
  ```typescript
  openaiToken: 'your-openai-token'
  ```
### Anthropic

- Models: Claude-3-opus-20240229
- Features: Chat, Streaming
- Configuration:
  ```typescript
  anthropicToken: 'your-anthropic-token'
  ```
### Perplexity

- Models: Mixtral-8x7b-instruct
- Features: Chat, Streaming
- Configuration:
  ```typescript
  perplexityToken: 'your-perplexity-token'
  ```
### Groq

- Models: Llama-3.3-70b-versatile
- Features: Chat, Streaming
- Configuration:
  ```typescript
  groqToken: 'your-groq-token'
  ```
### Ollama

- Models: Configurable (default: llama2, llava for vision/documents)
- Features: Chat, Streaming, Vision, Document Processing
- Configuration:
  ```typescript
  baseUrl: 'http://localhost:11434', // Optional
  model: 'llama2',                   // Optional
  visionModel: 'llava'               // Optional, for vision and document tasks
  ```
## Usage

The @push.rocks/smartai package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to audio responses. This documentation will guide you through the process of utilizing @push.rocks/smartai in your applications.
### Getting Started

Before you begin, ensure you have installed the package as described in the Install section above. Once installed, you can start integrating AI functionalities into your application.

### Initializing SmartAi

The first step is to import and initialize the SmartAi class with appropriate options for the AI services you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2'
  }
});

await smartAi.start();
```
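Each configured service is exposed as a provider property on the SmartAi instance (smartAi.openaiProvider and smartAi.ollamaProvider appear throughout this guide). Because the providers share a common interface, you can pick one at runtime — a minimal sketch using the chat() call described in the next section, assuming both providers expose it (both list Chat among their features):

```typescript
// Switch between a hosted and a local provider at runtime.
// The environment-variable switch is purely illustrative.
const useLocalModel = process.env.USE_OLLAMA === 'true';
const provider = useLocalModel ? smartAi.ollamaProvider : smartAi.openaiProvider;

const answer = await provider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Summarize the benefit of a provider-agnostic interface in one sentence.',
  messageHistory: []
});
console.log(answer.message);
```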
### Chat Interactions

#### Synchronous Chat

For simple question-answer interactions:
```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Previous messages in the conversation
});

console.log(response.message);
```
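To keep a conversation going, pass the earlier turns via messageHistory. A minimal sketch, assuming history entries use the same { role, content } shape as the message objects in the streaming example below:

```typescript
// Follow-up question that carries the previous exchange as context.
// The { role, content } shape of the history entries is an assumption here.
const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And what is its population?',
  messageHistory: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: response.message }
  ]
});
console.log(followUp.message);
```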
#### Streaming Chat

For real-time, streaming interactions:
```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Create input and output streams
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

// Send a message
const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};

writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Process the response stream
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value); // Process each chunk of the response
}
```
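If you prefer a single awaited string over manual chunk handling, the plumbing above can be wrapped in a small helper. This is only a sketch that mirrors the example exactly, so the same assumptions apply — one newline-delimited JSON message on the input stream, and chunks that can be concatenated as text:

```typescript
// Send one prompt through the streaming API and collect the full response.
async function streamChat(prompt: string): Promise<string> {
  const { writable, readable } = new TransformStream();
  const writer = writable.getWriter();
  writer.write(new TextEncoder().encode(JSON.stringify({ role: 'user', content: prompt }) + '\n'));

  const stream = await smartAi.openaiProvider.chatStream(readable);
  const reader = stream.getReader();
  let fullText = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    fullText += value; // chunks are treated as text, as in the example above
  }
  return fullText;
}

const story = await streamChat('Tell me a story about a brave knight');
console.log(story);
```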
### Audio Generation

For providers that support audio generation (currently OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Handle the audio stream (e.g., save to file or play)
```
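In a Node.js environment, one way to handle the result is to pipe it straight to a file — a minimal sketch, assuming the returned stream is compatible with Node's pipeline (recent Node versions accept both Node Readable and web ReadableStream sources); the MP3 extension and file name are illustrative:

```typescript
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';

// Persist the spoken audio to disk; './hello.mp3' is an illustrative path.
await pipeline(audioStream, createWriteStream('./hello.mp3'));
```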
### Document Processing

For providers that support document processing (OpenAI and Ollama):
```typescript
// Using OpenAI
const result = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
});

// Using Ollama with llava
const analysis = await smartAi.ollamaProvider.document({
  systemMessage: 'You are a document analysis assistant',
  userMessage: 'Extract the key information from this document',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
});
```
Both providers will:
- Convert PDF documents to images
- Process each page using their vision models
- Return a comprehensive analysis based on the system message and user query
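To obtain a pdfDocuments entry in the expected Uint8Array form, you can simply read the PDF from disk — a minimal sketch with an illustrative file name; fs.readFile returns a Buffer, which is a Uint8Array subclass:

```typescript
import { promises as fs } from 'node:fs';

// Load a PDF from disk (illustrative path); the resulting Buffer can be
// passed directly as an entry of pdfDocuments, e.g. pdfDocuments: [pdfBuffer].
const pdfBuffer = await fs.readFile('./invoice.pdf');
```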
### Vision Processing

For providers that support vision tasks (OpenAI and Ollama):
```typescript
// Using OpenAI's GPT-4 Vision
const description = await smartAi.openaiProvider.vision({
  image: imageBuffer, // Buffer containing the image data
  prompt: 'What do you see in this image?'
});

// Using Ollama's Llava model
const analysis = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Analyze this image in detail'
});
```
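Because image is an ordinary Buffer, any source works. A minimal sketch that fetches an image from an illustrative URL (using the global fetch available in Node 18+) and converts it before calling vision():

```typescript
// Download an image and hand it to the vision API; the URL is illustrative.
const res = await fetch('https://example.com/photo.jpg');
const imageBuffer = Buffer.from(await res.arrayBuffer());

const caption = await smartAi.openaiProvider.vision({
  image: imageBuffer,
  prompt: 'Write a one-sentence caption for this image.'
});
console.log(caption);
```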
### Error Handling

All provider calls are asynchronous and can fail (for example on invalid tokens, network issues, or rate limits), so it's recommended to wrap them in try-catch blocks:
```typescript
try {
  const response = await smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  });
} catch (error) {
  console.error('AI provider error:', error.message);
}
```
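For transient failures such as rate limits or brief network outages, you may also want to retry with backoff. The helper below is not part of @push.rocks/smartai — it is a generic sketch that wraps any provider call:

```typescript
// Retry an async call with exponential backoff (1s, 2s, 4s, ...).
async function withRetries<T>(call: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch (error) {
      lastError = error;
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
    }
  }
  throw lastError;
}

const reply = await withRetries(() =>
  smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  })
);
```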
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.