# @push.rocks/smartai
SmartAi is a TypeScript library providing a unified interface for integrating and interacting with multiple AI models, supporting chat interactions, audio and document processing, and vision tasks.
## Install

To install SmartAi, run the following command in your terminal:

```bash
npm install @push.rocks/smartai
```

This adds the SmartAi library to your project's dependencies, making it available in your TypeScript application.
## Usage

SmartAi provides a unified API for working with multiple AI providers such as OpenAI, Anthropic, Perplexity, and others. The sections below walk through setup and core functionality with examples covering synchronous and streaming chat, audio generation, document handling, and vision tasks.
### Initialization

Initialize SmartAi before using any AI functionality, providing an API token for each provider you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  xaiToken: 'your-xai-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  },
  exo: {
    baseUrl: 'http://localhost:8080/v1',
    apiKey: 'your-api-key'
  }
});

await smartAi.start();
```
### Chat Interactions

Chat is a core feature. SmartAi supports both synchronous (request-response) and streaming chat across several AI models.
#### Regular Synchronous Chat

Connect with AI models via straightforward request-response interactions:
```typescript
const syncResponse = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Could include context or preceding messages
});

console.log(syncResponse.message); // Outputs: "The capital of France is Paris."
```
#### Real-Time Streaming Chat

For continuous interaction and lower latency, engage in streaming chat:
```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Establish a transform stream to feed messages to the provider
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};

// Messages are written as newline-delimited JSON
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Initiate streaming
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value);
}
```
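Note that the example never closes the writer. When no further messages will be sent, closing it (`await writer.close();`) is the standard way to signal end of input on a web stream, though the exact contract should be confirmed in the provider documentation.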
### Audio Generation

Generate speech from text using providers that support it, such as OpenAI:
```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'This is a test message for generating speech.'
});

// Use the audio stream, e.g. play it or save it to a file.
```
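For instance, here is a minimal sketch for saving the audio to disk, assuming a Node.js 18+ runtime and that `audio()` resolves to a web `ReadableStream` (verify the actual return type against the library's typings):

```typescript
import { createWriteStream } from 'fs';
import { Readable } from 'stream';

// Assumption: audioStream is a web ReadableStream of encoded audio (e.g. MP3).
// Readable.fromWeb (Node.js 17+) converts it to a Node.js stream for piping.
Readable.fromWeb(audioStream as any).pipe(createWriteStream('speech.mp3'));
```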
### Document Analysis

SmartAi can ingest and process documents, extracting meaningful information or performing classification:
```typescript
// fetchPdf is a placeholder for your own download logic (see sketch below)
const pdfBuffer = await fetchPdf('https://example.com/document.pdf');

const documentRes = await smartAi.openaiProvider.document({
  systemMessage: 'Determine the nature of the document.',
  userMessage: 'Classify this document.',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});

console.log(documentRes.message); // Outputs: classified document type
```
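The `fetchPdf` helper above is not part of SmartAi. A minimal sketch using the Fetch API, assuming `pdfDocuments` accepts a `Uint8Array` (check the library's typings for the expected buffer type):

```typescript
// Hypothetical helper: download a PDF and return its bytes.
async function fetchPdf(url: string): Promise<Uint8Array> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to fetch PDF: ${response.status} ${response.statusText}`);
  }
  return new Uint8Array(await response.arrayBuffer());
}
```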
SmartAi makes it easy to switch between providers, giving developers flexibility:
```typescript
const anthropicRes = await smartAi.anthropicProvider.document({
  systemMessage: 'Analyze this document.',
  userMessage: 'Extract core points.',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});

console.log(anthropicRes.message); // Outputs: summarized core points
```
### Vision Processing

Engage AI models in analyzing and describing images:
```typescript
// fetchImage is a placeholder for your own image-loading logic (see sketch below)
const imageBuffer = await fetchImage('path/to/image.jpg');

// Using OpenAI's vision capabilities
const visionOutput = await smartAi.openaiProvider.vision({
  image: imageBuffer,
  prompt: 'Describe the image.'
});

console.log(visionOutput); // Outputs: image description
```
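Like `fetchPdf`, the `fetchImage` helper is a placeholder; since the example path is local, a minimal Node.js stand-in can simply read the file from disk:

```typescript
import { promises as fs } from 'fs';

// Hypothetical stand-in for fetchImage: load an image file as a Buffer.
async function fetchImage(path: string): Promise<Buffer> {
  return fs.readFile(path);
}
```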
Use other providers for more varied analysis:
```typescript
const ollamaOutput = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Detailed analysis required.'
});

console.log(ollamaOutput); // Outputs: detailed analysis results
```
### Error Handling

Because these calls depend on external services, wrap them in try/catch blocks:
```typescript
try {
  const response = await smartAi.anthropicProvider.chat({
    systemMessage: 'Hello!',
    userMessage: 'Help me out.',
    messageHistory: []
  });
  console.log(response.message);
} catch (error: any) {
  console.error('Encountered an error:', error.message);
}
```
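For transient failures such as rate limits or network errors, a small retry wrapper can help. The sketch below is illustrative and not part of SmartAi; the backoff timings are arbitrary:

```typescript
// Hypothetical helper: retry an async call with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
    }
  }
  throw lastError;
}

const response = await withRetry(() =>
  smartAi.anthropicProvider.chat({
    systemMessage: 'Hello!',
    userMessage: 'Help me out.',
    messageHistory: []
  })
);
```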
### Providers and Customization

The library supports provider-specific customization, enabling tailored interactions:
```typescript
const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  }
});

await smartAi.start();
```
### Advanced Streaming Customization

Developers can implement real-time processing pipelines with custom transformations:
```typescript
// Builds on the `stream` returned by chatStream() in the earlier example
const customProcessingStream = new TransformStream({
  transform(chunk, controller) {
    const processed = chunk.toUpperCase(); // Example transformation
    controller.enqueue(processed);
  }
});

const processedStream = stream.pipeThrough(customProcessingStream);
const processedReader = processedStream.getReader();

while (true) {
  const { done, value } = await processedReader.read();
  if (done) break;
  console.log('Processed Output:', value);
}
```
This approach can facilitate adaptive content processing workflows.
## Conclusion
SmartAi is a powerful toolkit for multi-faceted AI integration, offering robust solutions for chat, media, and document processing. Developers can enjoy a consistent API experience while leveraging the strengths of each supported AI model.
For further exploration, consult each provider's documentation to understand its specific capabilities and limitations.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.