# @push.rocks/smartai
[![npm version](https://badge.fury.io/js/%40push.rocks%2Fsmartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
[![Build Status](https://github.com/push.rocks/smartai/workflows/CI/badge.svg)](https://github.com/push.rocks/smartai/actions)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Supported AI Providers](#supported-ai-providers)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
  - [Chat Interactions](#chat-interactions)
  - [Streaming Chat](#streaming-chat)
  - [Audio Generation](#audio-generation)
  - [Document Processing](#document-processing)
  - [Vision Processing](#vision-processing)
- [Error Handling](#error-handling)
- [Development](#development)
  - [Running Tests](#running-tests)
  - [Building the Project](#building-the-project)
- [Contributing](#contributing)
- [License](#license)
- [Legal Information](#legal-information)
## Features
- **Unified API:** Seamlessly integrate multiple AI providers with a consistent interface.
- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
- **Document Processing:** Analyze PDFs and other documents using vision models.
- **Extensible:** Easily extend the library to support additional AI providers.
## Installation
To install SmartAi, run the following command:

```bash
npm install @push.rocks/smartai
```

This will add the package to your project's dependencies.

## Supported AI Providers

SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:

### OpenAI

- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
- **Configuration Example:**

```typescript
openaiToken: 'your-openai-token'
```

### X.AI

- **Models:** Grok-2-latest
- **Features:** Chat, Streaming, Document Processing
- **Configuration Example:**

```typescript
xaiToken: 'your-xai-token'
```

### Anthropic

- **Models:** Claude-3-opus-20240229
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
anthropicToken: 'your-anthropic-token'
```

### Perplexity

- **Models:** Mixtral-8x7b-instruct
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
perplexityToken: 'your-perplexity-token'
```

### Groq

- **Models:** Llama-3.3-70b-versatile
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
groqToken: 'your-groq-token'
```

### Ollama

- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
ollama: {
  baseUrl: 'http://localhost:11434', // Optional
  model: 'llama2', // Optional
  visionModel: 'llava' // Optional, for vision and document tasks
}
```

## Quick Start

Initialize SmartAi with the provider configurations you plan to use:

```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  xaiToken: 'your-xai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2'
  }
});

await smartAi.start();
```
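Hardcoding tokens as above is fine for a quick demo, but in practice you will usually read them from the environment. A minimal sketch (the environment variable names are conventions chosen for this example, not mandated by SmartAi):

```typescript
// Build a SmartAi options object from environment variables. The env var
// names below are illustrative; pick whatever naming your deployment uses.
function configFromEnv(env: NodeJS.ProcessEnv) {
  return {
    openaiToken: env.OPENAI_API_KEY,
    anthropicToken: env.ANTHROPIC_API_KEY,
    groqToken: env.GROQ_API_KEY
  };
}
```

You could then instantiate with, for instance, `new SmartAi(configFromEnv(process.env))`, keeping secrets out of source control.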

## Usage Examples

### Chat Interactions

**Synchronous Chat:**

```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Include previous conversation messages if applicable
});
console.log(response.message);
```

### Streaming Chat

**Real-Time Streaming:**

```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Create a transform stream for sending and receiving data
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Start streaming the response
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value);
}
```
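The example above frames each chat message as one JSON object per line before writing it into the stream. That framing can be sketched in isolation like this (the helper names are illustrative, not part of SmartAi's API):

```typescript
// Newline-delimited JSON framing: one serialized message per line.
interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

function encodeMessage(message: ChatMessage): Uint8Array {
  return new TextEncoder().encode(JSON.stringify(message) + '\n');
}

// Split a received chunk on '\n' and parse each non-empty line back
// into a message object.
function decodeMessages(chunk: Uint8Array): ChatMessage[] {
  return new TextDecoder()
    .decode(chunk)
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ChatMessage);
}
```

This is why the writer appends `'\n'` after each `JSON.stringify(message)`: the newline is the message boundary.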

### Audio Generation

Generate audio (supported by providers like OpenAI):

```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Process the audio stream, for example, play it or save it to a file.
```

### Document Processing

Analyze and extract key information from documents:

```typescript
// Example using OpenAI
const documentResult = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
});
```

Other providers (e.g., Ollama and Anthropic) follow a similar pattern:

```typescript
// Using Ollama for document processing
const ollamaResult = await smartAi.ollamaProvider.document({
  systemMessage: 'You are a document analysis assistant',
  userMessage: 'Extract key information from this document',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
```

```typescript
// Using Anthropic for document processing
const anthropicResult = await smartAi.anthropicProvider.document({
  systemMessage: 'Analyze the document',
  userMessage: 'Please extract the main points',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
```
### Vision Processing

Analyze images with vision capabilities:

```typescript
// Using OpenAI GPT-4 Vision
const imageDescription = await smartAi.openaiProvider.vision({
  image: imageBuffer, // Uint8Array containing image data
  prompt: 'What do you see in this image?'
});

// Using Ollama for vision tasks
const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Analyze this image in detail'
});

// Using Anthropic for vision analysis
const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
  image: imageBuffer,
  prompt: 'Describe the contents of this image'
});
```

## Error Handling

Always wrap API calls in try-catch blocks to manage errors effectively:

```typescript
try {
  const response = await smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  });
  console.log(response.message);
} catch (error: any) {
  console.error('AI provider error:', error.message);
}
```
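For transient failures such as rate limits or network hiccups, a small retry wrapper can complement the try-catch above. The helper below is not part of SmartAi; it is a generic sketch with exponential backoff:

```typescript
// Hypothetical retry helper (not part of SmartAi): retries a failing async
// call with exponential backoff before rethrowing the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait baseDelayMs, then 2x, then 4x, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage might look like `const response = await withRetry(() => smartAi.openaiProvider.chat({ ... }));`, leaving the try-catch for errors that persist after all attempts.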
## Development
### Running Tests
To run the test suite, use the following command:
```bash
npm run test
```
Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
### Building the Project
Compile the TypeScript code and build the package using:
```bash
npm run build
```
This command prepares the library for distribution.
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch:
   ```bash
   git checkout -b feature/my-feature
   ```
3. Commit your changes with clear messages:
   ```bash
   git commit -m 'Add new feature'
   ```
4. Push your branch to your fork:
   ```bash
   git push origin feature/my-feature
   ```
5. Open a Pull Request with a detailed description of your changes.
## License
This project is licensed under the [MIT License](LICENSE).
## Legal Information
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and its related products or services are trademarks of Task Venture Capital GmbH and are not covered by the MIT License. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines.
### Company Information
Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
Contact: hello@task.vc

By using this repository, you agree to the terms outlined in this section.
---
Happy coding with SmartAi!