8 Commits

7 changed files with 508 additions and 80 deletions


@@ -1,5 +1,33 @@
# Changelog
## 2025-02-05 - 0.3.2 - fix(documentation)
Remove redundant badges from readme
- Removed Build Status badge from the readme file.
- Removed License badge from the readme file.
## 2025-02-05 - 0.3.1 - fix(documentation)
Updated README structure and added detailed usage examples
- Introduced a Table of Contents
- Included comprehensive sections for chat, streaming chat, audio generation, document processing, and vision processing
- Added example code and detailed configuration steps for supported AI providers
- Clarified the development setup with instructions for running tests and building the project
## 2025-02-05 - 0.3.0 - feat(integration-xai)
Add support for X.AI provider with chat and document processing capabilities.
- Introduced XAIProvider class for integrating X.AI features.
- Implemented chat streaming and synchronous chat for X.AI.
- Enabled document processing capabilities with PDF conversion in X.AI.
## 2025-02-03 - 0.2.0 - feat(provider.anthropic)
Add support for vision and document processing in Anthropic provider
- Implemented vision tasks for the Anthropic provider using the Claude-3-opus-20240229 model.
- Implemented document processing for the Anthropic provider, supporting conversion of PDF documents to images and analysis with the Claude-3-opus-20240229 model.
- Updated documentation to reflect the new capabilities of the Anthropic provider.
## 2025-02-03 - 0.1.0 - feat(providers)
Add vision and document processing capabilities to providers

license Normal file

@@ -0,0 +1,19 @@
Copyright (c) 2024 Task Venture Capital GmbH (hello@task.vc)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,6 +1,6 @@
{
"name": "@push.rocks/smartai",
"version": "0.1.0",
"version": "0.3.2",
"private": false,
"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"main": "dist_ts/index.js",

readme.md

@@ -1,80 +1,125 @@
# @push.rocks/smartai
Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat, streaming interactions, and audio responses.
[![npm version](https://badge.fury.io/js/%40push.rocks%2Fsmartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
## Install
SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
To add @push.rocks/smartai to your project, run the following command in your terminal:
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Supported AI Providers](#supported-ai-providers)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
- [Chat Interactions](#chat-interactions)
- [Streaming Chat](#streaming-chat)
- [Audio Generation](#audio-generation)
- [Document Processing](#document-processing)
- [Vision Processing](#vision-processing)
- [Error Handling](#error-handling)
- [Development](#development)
- [Running Tests](#running-tests)
- [Building the Project](#building-the-project)
- [Contributing](#contributing)
- [License](#license)
- [Legal Information](#legal-information)
## Features
- **Unified API:** Seamlessly integrate multiple AI providers with a consistent interface.
- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
- **Document Processing:** Analyze PDFs and other documents using vision models.
- **Extensible:** Easily extend the library to support additional AI providers.
## Installation
To install SmartAi, run the following command:
```bash
npm install @push.rocks/smartai
```
This command installs the package and adds it to your project's dependencies.
This will add the package to your project's dependencies.
## Supported AI Providers
@push.rocks/smartai supports multiple AI providers, each with its own unique capabilities:
SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:
### OpenAI
- Models: GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- Features: Chat, Streaming, Audio Generation, Vision, Document Processing
- Configuration:
- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
- **Configuration Example:**
```typescript
openaiToken: 'your-openai-token'
```
### X.AI
- **Models:** Grok-2-latest
- **Features:** Chat, Streaming, Document Processing
- **Configuration Example:**
```typescript
xaiToken: 'your-xai-token'
```
### Anthropic
- Models: Claude-3-opus-20240229
- Features: Chat, Streaming
- Configuration:
- **Models:** Claude-3-opus-20240229
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**
```typescript
anthropicToken: 'your-anthropic-token'
```
### Perplexity
- Models: Mixtral-8x7b-instruct
- Features: Chat, Streaming
- Configuration:
- **Models:** Mixtral-8x7b-instruct
- **Features:** Chat, Streaming
- **Configuration Example:**
```typescript
perplexityToken: 'your-perplexity-token'
```
### Groq
- Models: Llama-3.3-70b-versatile
- Features: Chat, Streaming
- Configuration:
- **Models:** Llama-3.3-70b-versatile
- **Features:** Chat, Streaming
- **Configuration Example:**
```typescript
groqToken: 'your-groq-token'
```
### Ollama
- Models: Configurable (default: llama2, llava for vision/documents)
- Features: Chat, Streaming, Vision, Document Processing
- Configuration:
- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**
```typescript
baseUrl: 'http://localhost:11434' // Optional
model: 'llama2' // Optional
visionModel: 'llava' // Optional, for vision and document tasks
ollama: {
baseUrl: 'http://localhost:11434', // Optional
model: 'llama2', // Optional
visionModel: 'llava' // Optional for vision and document tasks
}
```
## Usage
## Quick Start
The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to audio responses. This documentation will guide you through the process of utilizing `@push.rocks/smartai` in your applications.
### Getting Started
Before you begin, ensure you have installed the package as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
### Initializing SmartAi
The first step is to import and initialize the `SmartAi` class with appropriate options for the AI services you plan to use:
Initialize SmartAi with the provider configurations you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';
const smartAi = new SmartAi({
openaiToken: 'your-openai-token',
xaiToken: 'your-xai-token',
anthropicToken: 'your-anthropic-token',
perplexityToken: 'your-perplexity-token',
groqToken: 'your-groq-token',
@@ -87,35 +132,34 @@ const smartAi = new SmartAi({
await smartAi.start();
```
## Usage Examples
### Chat Interactions
#### Synchronous Chat
For simple question-answer interactions:
**Synchronous Chat:**
```typescript
const response = await smartAi.openaiProvider.chat({
systemMessage: 'You are a helpful assistant.',
userMessage: 'What is the capital of France?',
messageHistory: [] // Previous messages in the conversation
messageHistory: [] // Include previous conversation messages if applicable
});
console.log(response.message);
```
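The `messageHistory` array carries earlier turns as `{ role, content }` objects, matching the shape the providers use internally. A minimal sketch of a follow-up turn that reuses the exchange above (the follow-up prompt is just illustrative):

```typescript
// Follow-up question that passes the previous exchange as history
const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And what is its population?',
  messageHistory: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: response.message }
  ]
});

console.log(followUp.message);
```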
#### Streaming Chat
### Streaming Chat
For real-time, streaming interactions:
**Real-Time Streaming:**
```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();
// Create input and output streams
// Create a transform stream for sending and receiving data
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();
// Send a message
const message = {
role: 'user',
content: 'Tell me a story about a brave knight'
@@ -123,77 +167,92 @@ const message = {
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));
// Process the response stream
// Start streaming the response
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
console.log('AI:', value); // Process each chunk of the response
console.log('AI:', value);
}
```
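Note that the read loop above only terminates once the input stream ends. After writing the final message, close the writer so the provider's transform can flush and end the response stream (a minimal sketch using the standard Web Streams API):

```typescript
// Close the writer after the last message so `done` becomes true in the read loop
await writer.close();
```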
### Audio Generation
For providers that support audio generation (currently OpenAI):
Generate audio (supported by providers like OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
message: 'Hello, this is a test of text-to-speech'
});
// Handle the audio stream (e.g., save to file or play)
// Process the audio stream, for example, play it or save to a file.
```
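As a brief sketch, assuming the returned value is a Node.js readable stream (matching the provider type signatures in this repository) and that the provider emits MP3 data, the stream can be piped straight to a file:

```typescript
import * as fs from 'fs';

// Write the audio stream to disk; the actual format depends on the provider
const fileStream = fs.createWriteStream('speech.mp3');
audioStream.pipe(fileStream);

fileStream.on('finish', () => {
  console.log('Audio saved to speech.mp3');
});
```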
### Document Processing
For providers that support document processing (OpenAI and Ollama):
Analyze and extract key information from documents:
```typescript
// Using OpenAI
const result = await smartAi.openaiProvider.document({
// Example using OpenAI
const documentResult = await smartAi.openaiProvider.document({
systemMessage: 'Classify the document type',
userMessage: 'What type of document is this?',
messageHistory: [],
pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
});
// Using Ollama with llava
const analysis = await smartAi.ollamaProvider.document({
systemMessage: 'You are a document analysis assistant',
userMessage: 'Extract the key information from this document',
messageHistory: [],
pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
});
```
Both providers will:
1. Convert PDF documents to images
2. Process each page using their vision models
3. Return a comprehensive analysis based on the system message and user query
Other providers (e.g., Ollama and Anthropic) follow a similar pattern:
```typescript
// Using Ollama for document processing
const ollamaResult = await smartAi.ollamaProvider.document({
systemMessage: 'You are a document analysis assistant',
userMessage: 'Extract key information from this document',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
```
```typescript
// Using Anthropic for document processing
const anthropicResult = await smartAi.anthropicProvider.document({
systemMessage: 'Analyze the document',
userMessage: 'Please extract the main points',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
```
### Vision Processing
For providers that support vision tasks (OpenAI and Ollama):
Analyze images with vision capabilities:
```typescript
// Using OpenAI's GPT-4 Vision
const description = await smartAi.openaiProvider.vision({
image: imageBuffer, // Buffer containing the image data
// Using OpenAI GPT-4 Vision
const imageDescription = await smartAi.openaiProvider.vision({
image: imageBuffer, // Uint8Array containing image data
prompt: 'What do you see in this image?'
});
// Using Ollama's Llava model
const analysis = await smartAi.ollamaProvider.vision({
// Using Ollama for vision tasks
const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
image: imageBuffer,
prompt: 'Analyze this image in detail'
});
// Using Anthropic for vision analysis
const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
image: imageBuffer,
prompt: 'Describe the contents of this image'
});
```
## Error Handling
All providers implement proper error handling. It's recommended to wrap API calls in try-catch blocks:
Always wrap API calls in try-catch blocks to manage errors effectively:
```typescript
try {
@@ -202,26 +261,71 @@ try {
userMessage: 'Hello!',
messageHistory: []
});
} catch (error) {
console.log(response.message);
} catch (error: any) {
console.error('AI provider error:', error.message);
}
```
## License and Legal Information
## Development
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
### Running Tests
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
To run the test suite, use the following command:
```bash
npm run test
```
Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
### Building the Project
Compile the TypeScript code and build the package using:
```bash
npm run build
```
This command prepares the library for distribution.
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch:
```bash
git checkout -b feature/my-feature
```
3. Commit your changes with clear messages:
```bash
git commit -m 'Add new feature'
```
4. Push your branch to your fork:
```bash
git push origin feature/my-feature
```
5. Open a Pull Request with a detailed description of your changes.
## License
This project is licensed under the [MIT License](LICENSE).
## Legal Information
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and its related products or services are trademarks of Task Venture Capital GmbH and are not covered by the MIT License. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
Registered at District Court Bremen HRB 35230 HB, Germany
Contact: hello@task.vc
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you agree to the terms outlined in this section.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
---
Happy coding with SmartAi!


@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smartai',
version: '0.1.0',
version: '0.3.2',
description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}


@@ -2,6 +2,9 @@ import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ImageBlockParam, TextBlockParam } from '@anthropic-ai/sdk/resources/messages';
type ContentBlock = ImageBlockParam | TextBlockParam;
export interface IAnthropicProviderOptions {
anthropicToken: string;
@@ -132,7 +135,40 @@ export class AnthropicProvider extends MultiModalModel {
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not yet supported by Anthropic.');
const base64Image = optionsArg.image.toString('base64');
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.prompt
},
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/jpeg',
data: base64Image
}
}
];
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{
role: 'user',
content
}],
max_tokens: 1024
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return message;
}
public async document(optionsArg: {
@@ -141,6 +177,64 @@ export class AnthropicProvider extends MultiModalModel {
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not yet supported by Anthropic.');
// Convert PDF documents to images using SmartPDF
const smartpdfInstance = new plugins.smartpdf.SmartPdf();
let documentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
documentImageBytesArray = documentImageBytesArray.concat(documentImageArray);
}
// Convert message history to Anthropic format
const messages = optionsArg.messageHistory.map(msg => ({
role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
content: msg.content
}));
// Create content array with text and images
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.userMessage
}
];
// Add each document page as an image
for (const imageBytes of documentImageBytesArray) {
content.push({
type: 'image',
source: {
type: 'base64',
media_type: 'image/png', // pages come from convertPDFToPngBytes, so they are PNG data
data: Buffer.from(imageBytes).toString('base64')
}
});
}
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: optionsArg.systemMessage,
messages: [
...messages,
{ role: 'user', content }
],
max_tokens: 4096
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
message: {
role: 'assistant',
content: message
}
};
}
}

ts/provider.xai.ts Normal file

@@ -0,0 +1,183 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
export interface IXAIProviderOptions {
xaiToken: string;
}
export class XAIProvider extends MultiModalModel {
private options: IXAIProviderOptions;
public openAiApiClient: plugins.openai.default;
public smartpdfInstance: plugins.smartpdf.SmartPdf;
constructor(optionsArg: IXAIProviderOptions) {
super();
this.options = optionsArg;
}
public async start() {
this.openAiApiClient = new plugins.openai.default({
apiKey: this.options.xaiToken,
baseURL: 'https://api.x.ai/v1',
});
this.smartpdfInstance = new plugins.smartpdf.SmartPdf();
}
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
// Arrow function keeps `this` bound to the provider instance inside the transform
transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to X.AI
if (currentMessage) {
const stream = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
});
// Process each chunk from X.AI
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
public async chat(optionsArg: {
systemMessage: string;
userMessage: string;
messageHistory: { role: string; content: string; }[];
}): Promise<{ role: 'assistant'; message: string; }> {
// Prepare messages array with system message, history, and user message
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: optionsArg.userMessage }
];
// Call X.AI's chat completion API
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
// Return the assistant's response
return {
role: 'assistant',
message: completion.choices[0]?.message?.content || ''
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by X.AI');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not supported by X.AI');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: { role: string; content: string; }[];
}): Promise<{ message: any }> {
// First convert PDF documents to images
let pdfDocumentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await this.smartpdfInstance.convertPDFToPngBytes(pdfDocument);
pdfDocumentImageBytesArray = pdfDocumentImageBytesArray.concat(documentImageArray);
}
// Convert images to base64 for inclusion in the message
const imageBase64Array = pdfDocumentImageBytesArray.map(bytes =>
Buffer.from(bytes).toString('base64')
);
// Combine document images into the user message
const enhancedUserMessage = `
${optionsArg.userMessage}
Document contents (as images):
${imageBase64Array.map((img, i) => `Image ${i + 1}: <image data>`).join('\n')}
`;
// Use chat completion to analyze the documents
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: enhancedUserMessage }
];
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
return {
message: completion.choices[0]?.message?.content || ''
};
}
}