Compare commits

master..v0.0.14

No commits in common. "master" and "v0.0.14" have entirely different histories.

18 changed files with 3316 additions and 10148 deletions


@@ -1,155 +0,0 @@
# Changelog
## 2025-04-03 - 0.5.3 - fix(package.json)
Add explicit packageManager field to package.json
- Include the packageManager property to specify the pnpm version and checksum.
- Align package metadata with current standards.
## 2025-04-03 - 0.5.2 - fix(readme)
Remove redundant conclusion section from README to streamline documentation.
- Eliminated the conclusion block describing SmartAi's capabilities and documentation pointers.
## 2025-02-25 - 0.5.1 - fix(OpenAiProvider)
Corrected audio model ID in OpenAiProvider
- Fixed audio model identifier from 'o3-mini' to 'tts-1-hd' in the OpenAiProvider's audio method.
- Addressed minor code formatting issues in test suite for better readability.
- Corrected spelling errors in test documentation and comments.
## 2025-02-25 - 0.5.0 - feat(documentation and configuration)
Enhanced package and README documentation
- Expanded the package description to better reflect the library's capabilities.
- Improved README with detailed usage examples for initialization, chat interactions, streaming chat, audio generation, document analysis, and vision processing.
- Provided error handling strategies and advanced streaming customization examples.
## 2025-02-25 - 0.4.2 - fix(core)
Fix OpenAI chat streaming and PDF document processing logic.
- Updated OpenAI chat streaming to handle new async iterable format.
- Improved PDF document processing by filtering out empty image buffers.
- Removed unsupported temperature options from OpenAI requests.
## 2025-02-25 - 0.4.1 - fix(provider)
Fix provider modules for consistency
- Updated TypeScript interfaces and options in provider modules for better type safety.
- Modified transform stream handlers in Exo, Groq, and Ollama providers for consistency.
- Added optional model options to OpenAI provider for custom model usage.
## 2025-02-08 - 0.4.0 - feat(core)
Added support for Exo AI provider
- Introduced ExoProvider with chat functionalities.
- Updated SmartAi class to initialize ExoProvider.
- Extended Conversation class to support ExoProvider.
## 2025-02-05 - 0.3.3 - fix(documentation)
Update readme with detailed license and legal information.
- Added explicit section on License and Legal Information in the README.
- Clarified the use of trademarks and company information.
## 2025-02-05 - 0.3.2 - fix(documentation)
Remove redundant badges from readme
- Removed Build Status badge from the readme file.
- Removed License badge from the readme file.
## 2025-02-05 - 0.3.1 - fix(documentation)
Updated README structure and added detailed usage examples
- Introduced a Table of Contents
- Included comprehensive sections for chat, streaming chat, audio generation, document processing, and vision processing
- Added example code and detailed configuration steps for supported AI providers
- Clarified the development setup with instructions for running tests and building the project
## 2025-02-05 - 0.3.0 - feat(integration-xai)
Add support for X.AI provider with chat and document processing capabilities.
- Introduced XAIProvider class for integrating X.AI features.
- Implemented chat streaming and synchronous chat for X.AI.
- Enabled document processing capabilities with PDF conversion in X.AI.
## 2025-02-03 - 0.2.0 - feat(provider.anthropic)
Add support for vision and document processing in Anthropic provider
- Implemented vision tasks for Anthropic provider using Claude-3-opus-20240229 model.
- Implemented document processing for Anthropic provider, supporting conversion of PDF documents to images and analysis with Claude-3-opus-20240229 model.
- Updated documentation to reflect the new capabilities of the Anthropic provider.
## 2025-02-03 - 0.1.0 - feat(providers)
Add vision and document processing capabilities to providers
- OpenAI and Ollama providers now support vision tasks using GPT-4 Vision and Llava models respectively.
- Document processing has been implemented for OpenAI and Ollama providers, converting PDFs to images for analysis.
- Introduced abstract methods for vision and document processing in the MultiModalModel class.
- Updated the readme file with examples for vision and document processing.
## 2025-02-03 - 0.0.19 - fix(core)
Enhanced chat streaming and error handling across providers
- Refactored chatStream method to properly handle input streams and processes in Perplexity, OpenAI, Ollama, and Anthropic providers.
- Improved error handling and message parsing in chatStream implementations.
- Defined distinct interfaces for chat options, messages, and responses.
- Adjusted the test logic in test/test.ts for the new classification response requirement.
## 2024-09-19 - 0.0.18 - fix(dependencies)
Update dependencies to the latest versions.
- Updated @git.zone/tsbuild from ^2.1.76 to ^2.1.84
- Updated @git.zone/tsrun from ^1.2.46 to ^1.2.49
- Updated @push.rocks/tapbundle from ^5.0.23 to ^5.3.0
- Updated @types/node from ^20.12.12 to ^22.5.5
- Updated @anthropic-ai/sdk from ^0.21.0 to ^0.27.3
- Updated @push.rocks/smartfile from ^11.0.14 to ^11.0.21
- Updated @push.rocks/smartpromise from ^4.0.3 to ^4.0.4
- Updated @push.rocks/webstream from ^1.0.8 to ^1.0.10
- Updated openai from ^4.47.1 to ^4.62.1
## 2024-05-29 - 0.0.17 - Documentation
Updated project description.
- Improved project description for clarity and details.
## 2024-05-17 - 0.0.16 to 0.0.15 - Core
Fixes and updates.
- Various core updates and fixes for stability improvements.
## 2024-04-29 - 0.0.14 to 0.0.13 - Core
Fixes and updates.
- Multiple core updates and fixes for enhanced functionality.
## 2024-04-29 - 0.0.12 - Core
Fixes and updates.
- Core update and bug fixes.
## 2024-04-29 - 0.0.11 - Provider
Fix integration for anthropic provider.
- Correction in the integration process with anthropic provider for better compatibility.
## 2024-04-27 - 0.0.10 to 0.0.9 - Core
Fixes and updates.
- Updates and fixes to core components.
- Updated tsconfig for improved TypeScript configuration.
## 2024-04-01 - 0.0.8 to 0.0.7 - Core and npmextra
Core updates and npmextra configuration.
- Core fixes and updates.
- Updates to npmextra.json for githost configuration.
## 2024-03-31 - 0.0.6 to 0.0.2 - Core
Initial core updates and fixes.
- Multiple updates and fixes to core following initial versions.
This summarizes the relevant updates and changes based on the provided commit messages. The changelog excludes commits that are version tags without meaningful content or repeated entries.

license (19 lines)

@@ -1,19 +0,0 @@
Copyright (c) 2024 Task Venture Capital GmbH (hello@task.vc)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -5,33 +5,20 @@
"githost": "code.foss.global",
"gitscope": "push.rocks",
"gitrepo": "smartai",
-"description": "SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.",
+"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"npmPackagename": "@push.rocks/smartai",
"license": "MIT",
"projectDomain": "push.rocks",
"keywords": [
"AI integration",
-"TypeScript",
"chatbot",
+"TypeScript",
"OpenAI",
"Anthropic",
-"multi-model",
+"multi-model support",
-"audio generation",
+"audio responses",
"text-to-speech",
-"document processing",
+"streaming chat"
-"vision processing",
-"streaming chat",
-"API",
-"multiple providers",
-"AI models",
-"synchronous chat",
-"asynchronous chat",
-"real-time interaction",
-"content analysis",
-"image description",
-"document classification",
-"AI toolkit",
-"provider switching"
]
}
},


@@ -1,8 +1,8 @@
{
"name": "@push.rocks/smartai",
-"version": "0.5.3",
+"version": "0.0.14",
"private": false,
-"description": "SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.",
+"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"main": "dist_ts/index.js",
"typings": "dist_ts/index.d.ts",
"type": "module",
@@ -14,33 +14,33 @@
"buildDocs": "(tsdoc)"
},
"devDependencies": {
-"@git.zone/tsbuild": "^2.2.1",
+"@git.zone/tsbuild": "^2.1.25",
-"@git.zone/tsbundle": "^2.2.5",
+"@git.zone/tsbundle": "^2.0.5",
-"@git.zone/tsrun": "^1.3.3",
+"@git.zone/tsrun": "^1.2.46",
-"@git.zone/tstest": "^1.0.96",
+"@git.zone/tstest": "^1.0.90",
-"@push.rocks/qenv": "^6.1.0",
+"@push.rocks/qenv": "^6.0.5",
-"@push.rocks/tapbundle": "^5.5.6",
+"@push.rocks/tapbundle": "^5.0.23",
-"@types/node": "^22.13.5"
+"@types/node": "^20.12.7"
},
"dependencies": {
-"@anthropic-ai/sdk": "^0.37.0",
+"@anthropic-ai/sdk": "^0.20.7",
-"@push.rocks/smartarray": "^1.1.0",
+"@push.rocks/smartarray": "^1.0.8",
-"@push.rocks/smartfile": "^11.2.0",
+"@push.rocks/smartfile": "^11.0.14",
"@push.rocks/smartpath": "^5.0.18",
-"@push.rocks/smartpdf": "^3.2.2",
+"@push.rocks/smartpdf": "^3.1.5",
-"@push.rocks/smartpromise": "^4.2.3",
+"@push.rocks/smartpromise": "^4.0.3",
-"@push.rocks/smartrequest": "^2.0.23",
+"@push.rocks/smartrequest": "^2.0.22",
-"@push.rocks/webstream": "^1.0.10",
+"@push.rocks/webstream": "^1.0.8",
-"openai": "^4.85.4"
+"openai": "^4.38.5"
},
"repository": {
"type": "git",
-"url": "https://code.foss.global/push.rocks/smartai.git"
+"url": "git+https://code.foss.global/push.rocks/smartai.git"
},
"bugs": {
"url": "https://code.foss.global/push.rocks/smartai/issues"
},
-"homepage": "https://code.foss.global/push.rocks/smartai",
+"homepage": "https://code.foss.global/push.rocks/smartai#readme",
"browserslist": [
"last 1 chrome versions"
],
@@ -58,32 +58,13 @@
],
"keywords": [
"AI integration",
-"TypeScript",
"chatbot",
+"TypeScript",
"OpenAI",
"Anthropic",
-"multi-model",
+"multi-model support",
-"audio generation",
+"audio responses",
"text-to-speech",
-"document processing",
+"streaming chat"
-"vision processing",
-"streaming chat",
-"API",
-"multiple providers",
-"AI models",
-"synchronous chat",
-"asynchronous chat",
-"real-time interaction",
-"content analysis",
-"image description",
-"document classification",
-"AI toolkit",
-"provider switching"
-],
-"pnpm": {
-"onlyBuiltDependencies": [
-"puppeteer"
]
-},
-"packageManager": "pnpm@10.7.0+sha512.6b865ad4b62a1d9842b61d674a393903b871d9244954f652b8842c2b553c72176b278f64c463e52d40fff8aba385c235c8c9ecf5cc7de4fd78b8bb6d49633ab6"
}

pnpm-lock.yaml (generated, 11412 lines)

File diff suppressed because it is too large.

readme.md (223 lines)

@@ -1,222 +1,95 @@
# @push.rocks/smartai
-SmartAi is a TypeScript library providing a unified interface for integrating and interacting with multiple AI models, supporting chat interactions, audio and document processing, and vision tasks.
+Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.
## Install
-To install SmartAi into your project, you need to run the following command in your terminal:
+To add @push.rocks/smartai to your project, run the following command in your terminal:
```bash
npm install @push.rocks/smartai
```
-This command will add the SmartAi library to your project's dependencies, making it available for use in your TypeScript application.
+This command installs the package and adds it to your project's dependencies.
## Usage
-SmartAi is designed to provide a comprehensive and unified API for working seamlessly with multiple AI providers like OpenAI, Anthropic, Perplexity, and others. Below we will delve into how to make the most out of this library, illustrating the setup and functionality with in-depth examples. Our scenarios will explore synchronous and streaming interactions, audio generation, document handling, and vision tasks with different AI providers.
+The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to possibly handling audio responses. This documentation will guide you through the process of utilizing `@push.rocks/smartai` in your applications, focusing on TypeScript and ESM syntax to demonstrate its full capabilities.
-### Initialization
+### Getting Started
-Initialization is the first step before using any AI functionalities. You should provide API tokens for each provider you plan to utilize.
+Before you begin, ensure you have installed the package in your project as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
+### Initializing SmartAi
+The first step is to import and initialize the `SmartAi` class with appropriate options, including tokens for the AI services you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';
const smartAi = new SmartAi({
-openaiToken: 'your-openai-token',
+openaiToken: 'your-openai-access-token',
-anthropicToken: 'your-anthropic-token',
+anthropicToken: 'your-anthropic-access-token'
-perplexityToken: 'your-perplexity-token',
-xaiToken: 'your-xai-token',
-groqToken: 'your-groq-token',
-ollama: {
-baseUrl: 'http://localhost:11434',
-model: 'llama2',
-visionModel: 'llava'
-},
-exo: {
-baseUrl: 'http://localhost:8080/v1',
-apiKey: 'your-api-key'
-}
});
await smartAi.start();
```
-### Chat Interactions
+### Creating Conversations with AI
-Interaction through chat is a key feature. SmartAi caters to both synchronous and asynchronous (streaming) chats across several AI models.
+`SmartAi` provides a flexible interface to create and manage conversations with different AI providers. You can create a conversation with any supported AI provider like OpenAI or Anthropic by specifying the provider you want to use:
-#### Regular Synchronous Chat
-Connect with AI models via straightforward request-response interactions.
```typescript
-const syncResponse = await smartAi.openaiProvider.chat({
+const openAiConversation = await smartAi.createConversation('openai');
-systemMessage: 'You are a helpful assistant.',
+const anthropicConversation = await smartAi.createConversation('anthropic');
-userMessage: 'What is the capital of France?',
+```
-messageHistory: [] // Could include context or preceding messages
+### Chatting with AI
+Once you have a conversation instance, you can start sending messages to the AI and receive responses. Each conversation object provides methods to interact in a synchronous or asynchronous manner, depending on your use case.
+#### Synchronous Chat Example
+Here's how you can have a synchronous chat with OpenAI:
+```typescript
+const response = await openAiConversation.chat({
+systemMessage: 'This is a greeting from the system.',
+userMessage: 'Hello, AI! How are you today?',
+messageHistory: [] // Previous messages in the conversation
});
-console.log(syncResponse.message); // Outputs: "The capital of France is Paris."
+console.log(response.message); // Log the response from AI
```
-#### Real-Time Streaming Chat
+#### Streaming Chat Example
-For continuous interaction and lower latency, engage in streaming chat.
+For real-time, streaming interactions, you can utilize the streaming capabilities provided by the conversation object. This enables a continuous exchange of messages between your application and the AI:
```typescript
-const textEncoder = new TextEncoder();
+const inputStreamWriter = openAiConversation.getInputStreamWriter();
-const textDecoder = new TextDecoder();
+const outputStream = openAiConversation.getOutputStream();
-// Establish a transform stream
+inputStreamWriter.write('Hello, AI! Can you stream responses?');
-const { writable, readable } = new TransformStream();
-const writer = writable.getWriter();
-const message = {
+const reader = outputStream.getReader();
-role: 'user',
+reader.read().then(function processText({done, value}) {
-content: 'Tell me a story about a brave knight'
+if (done) {
-};
+console.log('Stream finished.');
+return;
-writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));
-// Initiate streaming
-const stream = await smartAi.openaiProvider.chatStream(readable);
-const reader = stream.getReader();
-while (true) {
-const { done, value } = await reader.read();
-if (done) break;
-console.log('AI:', value);
-}
-```
### Audio Generation
Audio generation from textual input is possible using providers like OpenAI.
```typescript
const audioStream = await smartAi.openaiProvider.audio({
message: 'This is a test message for generating speech.'
});
// Use the audioStream e.g., playing or saving it.
```
### Document Analysis
SmartAi can ingest and process documents, extracting meaningful information or performing classifications.
```typescript
const pdfBuffer = await fetchPdf('https://example.com/document.pdf');
const documentRes = await smartAi.openaiProvider.document({
systemMessage: 'Determine the nature of the document.',
userMessage: 'Classify this document.',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
console.log(documentRes.message); // Outputs: classified document type
```
SmartAi allows easy switching between providers, thus giving developers flexibility:
```typescript
const anthopicRes = await smartAi.anthropicProvider.document({
systemMessage: 'Analyze this document.',
userMessage: 'Extract core points.',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
console.log(anthopicRes.message); // Outputs: summarized core points
```
### Vision Processing
Engage AI models in analyzing and describing images:
```typescript
const imageBuffer = await fetchImage('path/to/image.jpg');
// Using OpenAI's vision capabilities
const visionOutput = await smartAi.openaiProvider.vision({
image: imageBuffer,
prompt: 'Describe the image.'
});
console.log(visionOutput); // Outputs: image description
```
Use other providers for more varied analysis:
```typescript
const ollamaOutput = await smartAi.ollamaProvider.vision({
image: imageBuffer,
prompt: 'Detailed analysis required.'
});
console.log(ollamaOutput); // Outputs: detailed analysis results
```
### Error Handling
Due to the nature of external integrations, ensure to wrap AI calls within try-catch blocks.
```typescript
try {
const response = await smartAi.anthropicProvider.chat({
systemMessage: 'Hello!',
userMessage: 'Help me out.',
messageHistory: []
});
console.log(response.message);
} catch (error: any) {
console.error('Encountered an error:', error.message);
}
```
### Providers and Customization
The library supports provider-specific customization, enabling tailored interactions:
```typescript
const smartAi = new SmartAi({
openaiToken: 'your-openai-token',
anthropicToken: 'your-anthropic-token',
ollama: {
baseUrl: 'http://localhost:11434',
model: 'llama2',
visionModel: 'llava'
}
+console.log('AI says:', value);
+reader.read().then(processText); // Continue reading messages
});
-await smartAi.start();
```
-### Advanced Streaming Customization
+### Extending Conversations
-Developers can implement real-time processing pipelines with custom transformations:
+The modular design of `@push.rocks/smartai` allows you to extend conversations with additional features, such as handling audio responses or integrating other AI-powered functionalities. Utilize the provided AI providers' APIs to explore and implement a wide range of AI interactions within your conversations.
-```typescript
+### Conclusion
-const customProcessingStream = new TransformStream({
-transform(chunk, controller) {
-const processed = chunk.toUpperCase(); // Example transformation
-controller.enqueue(processed);
-}
-});
-const processedStream = stream.pipeThrough(customProcessingStream);
+With `@push.rocks/smartai`, integrating AI functionalities into your applications becomes streamlined and efficient. By leveraging the standardized interface provided by the package, you can easily converse with multiple AI models, expanding the capabilities of your applications with cutting-edge AI features. Whether you're implementing simple chat interactions or complex, real-time communication flows, `@push.rocks/smartai` offers the tools and flexibility needed to create engaging, AI-enhanced experiences.
-const processedReader = processedStream.getReader();
-while (true) {
-const { done, value } = await processedReader.read();
-if (done) break;
-console.log('Processed Output:', value);
-}
-```
-This approach can facilitate adaptive content processing workflows.
## License and Legal Information


@@ -21,17 +21,18 @@ tap.test('should create chat response with openai', async () => {
const response = await testSmartai.openaiProvider.chat({
systemMessage: 'Hello',
userMessage: userMessage,
-messageHistory: [],
+messageHistory: [
+],
});
console.log(`userMessage: ${userMessage}`);
-console.log(response.message);
+console.log(response.message.content);
});
tap.test('should document a pdf', async () => {
const pdfUrl = 'https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf';
const pdfResponse = await smartrequest.getBinary(pdfUrl);
const result = await testSmartai.openaiProvider.document({
-systemMessage: 'Classify the document. Only the following answers are allowed: "invoice", "bank account statement", "contract", "other". The answer should only contain the keyword for machine use.',
+systemMessage: 'Classify the document. Only the following answers are allowed: "invoice", "bank account statement", "contract", "other"',
userMessage: "Classify the document.",
messageHistory: [],
pdfDocuments: [pdfResponse.body],
@@ -54,7 +55,7 @@ tap.test('should recognize companies in a pdf', async () => {
address: string;
city: string;
country: string;
-EU: boolean; // whether the entity is within EU
+EU: boolean; // wether the entity is within EU
};
entityReceiver: {
type: 'official state entity' | 'company' | 'person';
@@ -62,7 +63,7 @@ tap.test('should recognize companies in a pdf', async () => {
address: string;
city: string;
country: string;
-EU: boolean; // whether the entity is within EU
+EU: boolean; // wether the entity is within EU
};
date: string; // the date of the document as YYYY-MM-DD
title: string; // a short title, suitable for a filename
@@ -74,24 +75,7 @@ tap.test('should recognize companies in a pdf', async () => {
pdfDocuments: [pdfBuffer],
});
console.log(result);
-});
+})
-tap.test('should create audio response with openai', async () => {
-// Call the audio method with a sample message.
-const audioStream = await testSmartai.openaiProvider.audio({
-message: 'This is a test of audio generation.',
-});
-// Read all chunks from the stream.
-const chunks: Uint8Array[] = [];
-for await (const chunk of audioStream) {
-chunks.push(chunk as Uint8Array);
-}
-const audioBuffer = Buffer.concat(chunks);
-await smartfile.fs.toFs(audioBuffer, './.nogit/testoutput.mp3');
-console.log(`Audio Buffer length: ${audioBuffer.length}`);
-// Assert that the resulting buffer is not empty.
-expect(audioBuffer.length).toBeGreaterThan(0);
-});
tap.test('should stop the smartai instance', async () => {
await testSmartai.stop();


@@ -1,8 +1,8 @@
/**
-* autocreated commitinfo by @push.rocks/commitinfo
+* autocreated commitinfo by @pushrocks/commitinfo
*/
export const commitinfo = {
name: '@push.rocks/smartai',
-version: '0.5.3',
+version: '0.0.14',
-description: 'SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.'
+description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}


@ -1,86 +1,32 @@
/**
* Message format for chat interactions
*/
export interface ChatMessage {
role: 'assistant' | 'user' | 'system';
content: string;
}
/**
* Options for chat interactions
*/
export interface ChatOptions {
systemMessage: string;
userMessage: string;
messageHistory: ChatMessage[];
}
/**
* Response format for chat interactions
*/
export interface ChatResponse {
role: 'assistant';
message: string;
}
/**
* Abstract base class for multi-modal AI models.
* Provides a common interface for different AI providers (OpenAI, Anthropic, Perplexity, Ollama)
*/
export abstract class MultiModalModel {
/**
-* Initializes the model and any necessary resources
-* Should be called before using any other methods
+* starts the model
*/
abstract start(): Promise<void>;
/**
-* Cleans up any resources used by the model
-* Should be called when the model is no longer needed
+* stops the model
*/
abstract stop(): Promise<void>;
-/**
-* Synchronous chat interaction with the model
-* @param optionsArg Options containing system message, user message, and message history
-* @returns Promise resolving to the assistant's response
-*/
-public abstract chat(optionsArg: ChatOptions): Promise<ChatResponse>;
+public abstract chat(optionsArg: {
+systemMessage: string,
+userMessage: string,
+messageHistory: {
+role: 'assistant' | 'user';
+content: string;
+}[]
+}): Promise<{
+role: 'assistant';
+message: string;
+}>
/**
-* Streaming interface for chat interactions
-* Allows for real-time responses from the model
-* @param input Stream of user messages
-* @returns Stream of model responses
+* Defines a streaming interface for chat interactions.
+* The implementation will vary based on the specific AI model.
+* @param input
*/
-public abstract chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>>;
+public abstract chatStream(input: ReadableStream<string>): Promise<ReadableStream<string>>;
/**
* Text-to-speech conversion
* @param optionsArg Options containing the message to convert to speech
* @returns Promise resolving to a readable stream of audio data
* @throws Error if the provider doesn't support audio generation
*/
public abstract audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream>;
/**
* Vision-language processing
* @param optionsArg Options containing the image and prompt for analysis
* @returns Promise resolving to the model's description or analysis of the image
* @throws Error if the provider doesn't support vision tasks
*/
public abstract vision(optionsArg: { image: Buffer; prompt: string }): Promise<string>;
/**
* Document analysis and processing
* @param optionsArg Options containing system message, user message, PDF documents, and message history
* @returns Promise resolving to the model's analysis of the documents
* @throws Error if the provider doesn't support document processing
*/
public abstract document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }>;
}
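
For orientation, the sketch below shows what a minimal custom provider built against the master-side abstract class above could look like. The `EchoProvider` name and its trivial behavior are illustrative assumptions, not part of the package; the imports rely on the master-side exports (`MultiModalModel`, `ChatOptions`, `ChatResponse`, `ChatMessage`) visible in this diff.

```typescript
// Illustrative sketch only: assumes the master-side abstract.classes.multimodal.js exports.
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';

class EchoProvider extends MultiModalModel {
  public async start(): Promise<void> {} // no resources to initialize
  public async stop(): Promise<void> {}  // nothing to clean up

  // Synchronous chat: echo the user message back as the assistant.
  public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
    return { role: 'assistant', message: `echo: ${optionsArg.userMessage}` };
  }

  // Streaming chat: decode incoming bytes and re-emit them as text chunks.
  public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
    const decoder = new TextDecoder();
    return input.pipeThrough(
      new TransformStream<Uint8Array, string>({
        transform(chunk, controller) {
          controller.enqueue(decoder.decode(chunk, { stream: true }));
        },
      })
    );
  }

  // The remaining modalities are optional for this toy provider and simply throw.
  public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
    throw new Error('audio not supported by EchoProvider');
  }
  public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
    throw new Error('vision not supported by EchoProvider');
  }
  public async document(optionsArg: {
    systemMessage: string;
    userMessage: string;
    pdfDocuments: Uint8Array[];
    messageHistory: ChatMessage[];
  }): Promise<{ message: any }> {
    throw new Error('document processing not supported by EchoProvider');
  }
}
```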


@@ -48,18 +48,6 @@ export class Conversation {
return conversation;
}
public static async createWithExo(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.exoProvider) {
throw new Error('Exo provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithOllama(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.ollamaProvider) {
throw new Error('Ollama provider not available');
@@ -72,30 +60,6 @@ export class Conversation {
return conversation;
}
public static async createWithGroq(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.groqProvider) {
throw new Error('Groq provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithXai(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.xaiProvider) {
throw new Error('XAI provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
// INSTANCE
smartaiRef: SmartAi
private systemMessage: string;


@@ -1,32 +1,18 @@
import { Conversation } from './classes.conversation.js';
import * as plugins from './plugins.js';
import { AnthropicProvider } from './provider.anthropic.js';
-import { OllamaProvider } from './provider.ollama.js';
+import type { OllamaProvider } from './provider.ollama.js';
import { OpenAiProvider } from './provider.openai.js';
-import { PerplexityProvider } from './provider.perplexity.js';
+import type { PerplexityProvider } from './provider.perplexity.js';
-import { ExoProvider } from './provider.exo.js';
-import { GroqProvider } from './provider.groq.js';
-import { XAIProvider } from './provider.xai.js';
export interface ISmartAiOptions {
openaiToken?: string;
anthropicToken?: string;
perplexityToken?: string;
-groqToken?: string;
-xaiToken?: string;
-exo?: {
-baseUrl?: string;
-apiKey?: string;
-};
-ollama?: {
-baseUrl?: string;
-model?: string;
-visionModel?: string;
-};
}
-export type TProvider = 'openai' | 'anthropic' | 'perplexity' | 'ollama' | 'exo' | 'groq' | 'xai';
+export type TProvider = 'openai' | 'anthropic' | 'perplexity' | 'ollama';
export class SmartAi {
public options: ISmartAiOptions;
@@ -35,9 +21,6 @@ export class SmartAi {
public anthropicProvider: AnthropicProvider;
public perplexityProvider: PerplexityProvider;
public ollamaProvider: OllamaProvider;
-public exoProvider: ExoProvider;
-public groqProvider: GroqProvider;
-public xaiProvider: XAIProvider;
constructor(optionsArg: ISmartAiOptions) {
this.options = optionsArg;
@@ -54,40 +37,6 @@ export class SmartAi {
this.anthropicProvider = new AnthropicProvider({
anthropicToken: this.options.anthropicToken,
});
await this.anthropicProvider.start();
}
if (this.options.perplexityToken) {
this.perplexityProvider = new PerplexityProvider({
perplexityToken: this.options.perplexityToken,
});
await this.perplexityProvider.start();
}
if (this.options.groqToken) {
this.groqProvider = new GroqProvider({
groqToken: this.options.groqToken,
});
await this.groqProvider.start();
}
if (this.options.xaiToken) {
this.xaiProvider = new XAIProvider({
xaiToken: this.options.xaiToken,
});
await this.xaiProvider.start();
}
if (this.options.ollama) {
this.ollamaProvider = new OllamaProvider({
baseUrl: this.options.ollama.baseUrl,
model: this.options.ollama.model,
visionModel: this.options.ollama.visionModel,
});
await this.ollamaProvider.start();
}
if (this.options.exo) {
this.exoProvider = new ExoProvider({
exoBaseUrl: this.options.exo.baseUrl,
apiKey: this.options.exo.apiKey,
});
await this.exoProvider.start();
}
}
@@ -98,8 +47,6 @@ export class SmartAi {
*/
createConversation(provider: TProvider) {
switch (provider) {
-case 'exo':
-return Conversation.createWithExo(this);
case 'openai':
return Conversation.createWithOpenAi(this);
case 'anthropic':
@@ -108,10 +55,6 @@ export class SmartAi {
return Conversation.createWithPerplexity(this);
case 'ollama':
return Conversation.createWithOllama(this);
-case 'groq':
-return Conversation.createWithGroq(this);
-case 'xai':
-return Conversation.createWithXai(this);
default:
throw new Error('Provider not available');
}


@@ -1,10 +1,6 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
-import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
-import type { ImageBlockParam, TextBlockParam } from '@anthropic-ai/sdk/resources/messages';
-type ContentBlock = ImageBlockParam | TextBlockParam;
export interface IAnthropicProviderOptions {
anthropicToken: string;
@@ -27,214 +23,40 @@ export class AnthropicProvider extends MultiModalModel {
async stop() {}
-public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
+public async chatStream(input: ReadableStream<string>): Promise<ReadableStream<string>> {
+// TODO: implement for OpenAI
+const returnStream = new ReadableStream();
+return returnStream;
-// Create a TextDecoder to handle incoming chunks
-const decoder = new TextDecoder();
-let buffer = '';
-let currentMessage: { role: string; content: string; } | null = null;
-// Create a TransformStream to process the input
-const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Anthropic
if (currentMessage) {
const stream = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
system: '',
stream: true,
max_tokens: 4000,
});
// Process each chunk from Anthropic
for await (const chunk of stream) {
const content = chunk.delta?.text;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
-// Connect the input to our transform stream
-return input.pipeThrough(transform);
-}
+}
// Implementing the synchronous chat interaction
-public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
-// Convert message history to Anthropic format
-const messages = optionsArg.messageHistory.map(msg => ({
-role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
-content: msg.content
-}));
+public async chat(optionsArg: {
+systemMessage: string;
+userMessage: string;
+messageHistory: {
+role: 'assistant' | 'user';
+content: string;
+}[];
+}) {
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: optionsArg.systemMessage,
messages: [
-...messages,
+...optionsArg.messageHistory,
-{ role: 'user' as const, content: optionsArg.userMessage }
+{ role: 'user', content: optionsArg.userMessage },
],
max_tokens: 4000,
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
-role: 'assistant' as const,
+role: result.role as 'assistant',
-message,
+message: result.content.join('\n'),
};
}
-public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
+private async audio(messageArg: string) {
// Anthropic does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not yet supported by Anthropic.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
const base64Image = optionsArg.image.toString('base64');
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.prompt
},
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/jpeg',
data: base64Image
}
}
];
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{
role: 'user',
content
}],
max_tokens: 1024
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return message;
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
// Convert PDF documents to images using SmartPDF
const smartpdfInstance = new plugins.smartpdf.SmartPdf();
let documentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
documentImageBytesArray = documentImageBytesArray.concat(documentImageArray);
}
// Convert message history to Anthropic format
const messages = optionsArg.messageHistory.map(msg => ({
role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
content: msg.content
}));
// Create content array with text and images
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.userMessage
}
];
// Add each document page as an image
for (const imageBytes of documentImageBytesArray) {
content.push({
type: 'image',
source: {
type: 'base64',
media_type: 'image/jpeg',
data: Buffer.from(imageBytes).toString('base64')
}
});
}
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: optionsArg.systemMessage,
messages: [
...messages,
{ role: 'user', content }
],
max_tokens: 4096
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
message: {
role: 'assistant',
content: message
}
};
}
} }
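
To make the removed methods concrete, here is a hedged usage sketch of the master-side `AnthropicProvider` defined above. The token, file names, and prompts are placeholders, not values taken from this repository.

```typescript
// Sketch of how the master-side AnthropicProvider methods above would be called.
import { promises as fs } from 'fs';
import { AnthropicProvider } from './provider.anthropic.js';

async function describeInvoice() {
  const provider = new AnthropicProvider({ anthropicToken: 'your-anthropic-token' });
  await provider.start();

  // Vision: send a JPEG buffer plus a prompt, receive a plain-text description.
  const imageBuffer = await fs.readFile('./invoice-photo.jpg');
  const description = await provider.vision({
    image: imageBuffer,
    prompt: 'Describe what is shown in this image.',
  });
  console.log(description);

  // Document: PDF bytes are rendered to images internally before being sent to Claude.
  const pdfBytes = await fs.readFile('./invoice.pdf');
  const analysis = await provider.document({
    systemMessage: 'Classify the document type.',
    userMessage: 'Is this an invoice, a contract, or something else?',
    messageHistory: [],
    pdfDocuments: [new Uint8Array(pdfBytes)],
  });
  console.log(analysis.message);

  await provider.stop();
}

describeInvoice().catch(console.error);
```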


@@ -1,128 +0,0 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
export interface IExoProviderOptions {
exoBaseUrl?: string;
apiKey?: string;
}
export class ExoProvider extends MultiModalModel {
private options: IExoProviderOptions;
public openAiApiClient: plugins.openai.default;
constructor(optionsArg: IExoProviderOptions = {}) {
super();
this.options = {
exoBaseUrl: 'http://localhost:8080/v1', // Default Exo API endpoint
...optionsArg
};
}
public async start() {
this.openAiApiClient = new plugins.openai.default({
apiKey: this.options.apiKey || 'not-needed', // Exo might not require an API key for local deployment
baseURL: this.options.exoBaseUrl,
});
}
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = message;
// Process the message based on its type
if (message.type === 'message') {
const response = await this.chat({
systemMessage: '',
userMessage: message.content,
messageHistory: [{ role: message.role as 'user' | 'assistant' | 'system', content: message.content }]
});
controller.enqueue(JSON.stringify(response) + '\n');
}
} catch (error) {
console.error('Error processing message:', error);
}
}
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
currentMessage = message;
} catch (error) {
console.error('Error processing remaining buffer:', error);
}
}
}
});
return input.pipeThrough(transform);
}
public async chat(options: ChatOptions): Promise<ChatResponse> {
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: options.systemMessage },
...options.messageHistory,
{ role: 'user', content: options.userMessage }
];
try {
const response = await this.openAiApiClient.chat.completions.create({
model: 'local-model', // Exo uses local models
messages: messages,
stream: false
});
return {
role: 'assistant',
message: response.choices[0]?.message?.content || ''
};
} catch (error) {
console.error('Error in chat completion:', error);
throw error;
}
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Exo provider');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision processing is not supported by Exo provider');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not supported by Exo provider');
}
}


@@ -1,192 +0,0 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IGroqProviderOptions {
groqToken: string;
model?: string;
}
export class GroqProvider extends MultiModalModel {
private options: IGroqProviderOptions;
private baseUrl = 'https://api.groq.com/v1';
constructor(optionsArg: IGroqProviderOptions) {
super();
this.options = {
...optionsArg,
model: optionsArg.model || 'llama-3.3-70b-versatile', // Default model
};
}
async start() {}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Groq
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Groq
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
const messages = [
// System message
{
role: 'system',
content: optionsArg.systemMessage,
},
// Message history
...optionsArg.messageHistory.map(msg => ({
role: msg.role,
content: msg.content,
})),
// User message
{
role: 'user',
content: optionsArg.userMessage,
},
];
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages,
temperature: 0.7,
max_completion_tokens: 1024,
stream: false,
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(`Groq API error: ${error.message || response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant',
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
// Groq does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not yet supported by Groq.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not yet supported by Groq.');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not yet supported by Groq.');
}
}
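
The `chatStream()` implementations shown in this compare (Groq above, and likewise the Anthropic, Ollama, and OpenAI variants on the master side) all parse newline-delimited JSON objects with `role` and `content` fields from the incoming `Uint8Array` chunks. The sketch below drives the Groq provider that way; the token and prompt are placeholders.

```typescript
// Sketch of feeding the master-side chatStream() implementations above.
import { GroqProvider } from './provider.groq.js';

async function streamOneQuestion() {
  const groq = new GroqProvider({ groqToken: 'your-groq-token' });
  await groq.start();

  // chatStream() expects newline-delimited JSON messages, encoded as bytes.
  const encoder = new TextEncoder();
  const { readable, writable } = new TransformStream<Uint8Array, Uint8Array>();
  const writer = writable.getWriter();
  await writer.write(
    encoder.encode(JSON.stringify({ role: 'user', content: 'Summarize RFC 2119 in one sentence.' }) + '\n')
  );
  await writer.close();

  // The returned stream yields plain text tokens as they arrive.
  const output = await groq.chatStream(readable);
  const reader = output.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(value);
  }
}

streamOneQuestion().catch(console.error);
```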


@ -1,252 +1,3 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
+export class OllamaProvider {}
export interface IOllamaProviderOptions {
baseUrl?: string;
model?: string;
visionModel?: string; // Model to use for vision tasks (e.g. 'llava')
}
export class OllamaProvider extends MultiModalModel {
private options: IOllamaProviderOptions;
private baseUrl: string;
private model: string;
private visionModel: string;
constructor(optionsArg: IOllamaProviderOptions = {}) {
super();
this.options = optionsArg;
this.baseUrl = optionsArg.baseUrl || 'http://localhost:11434';
this.model = optionsArg.model || 'llama2';
this.visionModel = optionsArg.visionModel || 'llava';
}
async start() {
// Verify Ollama is running
try {
const response = await fetch(`${this.baseUrl}/api/tags`);
if (!response.ok) {
throw new Error('Failed to connect to Ollama server');
}
} catch (error) {
throw new Error(`Failed to connect to Ollama server at ${this.baseUrl}: ${error.message}`);
}
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Ollama
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Ollama
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.trim()) {
try {
const parsed = JSON.parse(line);
const content = parsed.message?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse Ollama response:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Format messages for Ollama
const messages = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
];
// Make API call to Ollama
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: messages,
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Ollama.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
const base64Image = optionsArg.image.toString('base64');
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.visionModel,
messages: [{
role: 'user',
content: optionsArg.prompt,
images: [base64Image]
}],
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return result.message.content;
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
// Convert PDF documents to images using SmartPDF
const smartpdfInstance = new plugins.smartpdf.SmartPdf();
let documentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
documentImageBytesArray = documentImageBytesArray.concat(documentImageArray);
}
// Convert images to base64
const base64Images = documentImageBytesArray.map(bytes => Buffer.from(bytes).toString('base64'));
// Send request to Ollama with images
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.visionModel,
messages: [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{
role: 'user',
content: optionsArg.userMessage,
images: base64Images
}
],
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return {
message: {
role: 'assistant',
content: result.message.content
}
};
}
}
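
A usage sketch for the master-side `OllamaProvider` above, assuming a locally running Ollama daemon at the default address; the model names are the constructor defaults shown in the deleted code and the image path is a placeholder.

```typescript
// Sketch only: exercises the master-side OllamaProvider against a local Ollama daemon.
import { promises as fs } from 'fs';
import { OllamaProvider } from './provider.ollama.js';

async function askLocalModels() {
  const ollama = new OllamaProvider({
    baseUrl: 'http://localhost:11434',
    model: 'llama2',       // used for chat
    visionModel: 'llava',  // used for vision and document processing
  });
  await ollama.start(); // throws if the daemon is not reachable

  const chatResponse = await ollama.chat({
    systemMessage: 'You are a terse assistant.',
    userMessage: 'Name three uses for a local LLM.',
    messageHistory: [],
  });
  console.log(chatResponse.message);

  const image = await fs.readFile('./diagram.png');
  const visionAnswer = await ollama.vision({
    image,
    prompt: 'What does this diagram show?',
  });
  console.log(visionAnswer);
}

askLocalModels().catch(console.error);
```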


@ -1,20 +1,10 @@
import * as plugins from './plugins.js'; import * as plugins from './plugins.js';
import * as paths from './paths.js'; import * as paths from './paths.js';
// Custom type definition for chat completion messages
export type TChatCompletionRequestMessage = {
role: "system" | "user" | "assistant";
content: string;
};
import { MultiModalModel } from './abstract.classes.multimodal.js'; import { MultiModalModel } from './abstract.classes.multimodal.js';
export interface IOpenaiProviderOptions { export interface IOpenaiProviderOptions {
openaiToken: string; openaiToken: string;
chatModel?: string;
audioModel?: string;
visionModel?: string;
// Optionally add more model options (e.g., documentModel) if needed.
} }
export class OpenAiProvider extends MultiModalModel { export class OpenAiProvider extends MultiModalModel {
@ -37,79 +27,11 @@ export class OpenAiProvider extends MultiModalModel {
  public async stop() {}

  public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
    // Create a TextDecoder to handle incoming chunks
    const decoder = new TextDecoder();
    let buffer = '';
    let currentMessage: {
      role: "function" | "user" | "system" | "assistant" | "tool" | "developer";
      content: string;
    } | null = null;

    // Create a TransformStream to process the input
    const transform = new TransformStream<Uint8Array, string>({
      transform: async (chunk, controller) => {
        buffer += decoder.decode(chunk, { stream: true });

        // Try to parse complete JSON messages from the buffer
        while (true) {
          const newlineIndex = buffer.indexOf('\n');
          if (newlineIndex === -1) break;

          const line = buffer.slice(0, newlineIndex);
          buffer = buffer.slice(newlineIndex + 1);

          if (line.trim()) {
            try {
              const message = JSON.parse(line);
              currentMessage = {
                role: (message.role || 'user') as "function" | "user" | "system" | "assistant" | "tool" | "developer",
                content: message.content || '',
              };
            } catch (e) {
              console.error('Failed to parse message:', e);
            }
          }
        }

        // If we have a complete message, send it to OpenAI
        if (currentMessage) {
          const messageToSend = { role: "user" as const, content: currentMessage.content };
          const chatModel = this.options.chatModel ?? 'o3-mini';
          const requestParams: any = {
            model: chatModel,
            messages: [messageToSend],
            stream: true,
          };
          // Temperature is omitted since the model does not support it.
          const stream = await this.openAiApiClient.chat.completions.create(requestParams);
          // Explicitly cast the stream as an async iterable to satisfy TypeScript.
          const streamAsyncIterable = stream as unknown as AsyncIterableIterator<any>;
          // Process each chunk from OpenAI
          for await (const chunk of streamAsyncIterable) {
            const content = chunk.choices[0]?.delta?.content;
            if (content) {
              controller.enqueue(content);
            }
          }
          currentMessage = null;
        }
      },

      flush(controller) {
        if (buffer) {
          try {
            const message = JSON.parse(buffer);
            controller.enqueue(message.content || '');
          } catch (e) {
            console.error('Failed to parse remaining buffer:', e);
          }
        }
      }
    });

    // Connect the input to our transform stream
    return input.pipeThrough(transform);
  }

  // Implementing the synchronous chat interaction
@ -121,17 +43,15 @@ export class OpenAiProvider extends MultiModalModel {
      content: string;
    }[];
  }) {
    const chatModel = this.options.chatModel ?? 'o3-mini';
    const requestParams: any = {
      model: chatModel,
      messages: [
        { role: 'system', content: optionsArg.systemMessage },
        ...optionsArg.messageHistory,
        { role: 'user', content: optionsArg.userMessage },
      ],
    };
    // Temperature parameter removed to avoid unsupported error.
    const result = await this.openAiApiClient.chat.completions.create(requestParams);
    return {
      role: result.choices[0].message.role as 'assistant',
      message: result.choices[0].message.content,
@ -141,7 +61,7 @@ export class OpenAiProvider extends MultiModalModel {
  public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
    const done = plugins.smartpromise.defer<NodeJS.ReadableStream>();
    const result = await this.openAiApiClient.audio.speech.create({
      model: this.options.audioModel ?? 'tts-1-hd',
      input: optionsArg.message,
      voice: 'nova',
      response_format: 'mp3',
@ -163,30 +83,27 @@ export class OpenAiProvider extends MultiModalModel {
  }) {
    let pdfDocumentImageBytesArray: Uint8Array[] = [];

    // Convert each PDF into one or more image byte arrays.
    const smartpdfInstance = new plugins.smartpdf.SmartPdf();
    await smartpdfInstance.start();
    for (const pdfDocument of optionsArg.pdfDocuments) {
      const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
      pdfDocumentImageBytesArray = pdfDocumentImageBytesArray.concat(documentImageArray);
    }
    await smartpdfInstance.stop();

    console.log(`image smartfile array`);
    console.log(pdfDocumentImageBytesArray.map((smartfile) => smartfile.length));

    // Filter out any empty buffers to avoid sending invalid image URLs.
    const validImageBytesArray = pdfDocumentImageBytesArray.filter(imageBytes => imageBytes && imageBytes.length > 0);
    const imageAttachments = validImageBytesArray.map(imageBytes => ({
      type: 'image_url',
      image_url: {
        url: 'data:image/png;base64,' + Buffer.from(imageBytes).toString('base64'),
      },
    }));

    const chatModel = this.options.chatModel ?? 'gpt-4o';
    const requestParams: any = {
      model: chatModel,
      messages: [
        { role: 'system', content: optionsArg.systemMessage },
        ...optionsArg.messageHistory,
@ -194,39 +111,24 @@ export class OpenAiProvider extends MultiModalModel {
        {
          role: 'user',
          content: [
            { type: 'text', text: optionsArg.userMessage },
            ...imageAttachments,
          ],
        },
      ],
    };
    // Temperature parameter removed.
    const result = await this.openAiApiClient.chat.completions.create(requestParams);
    return {
      message: result.choices[0].message,
    };
  }

  public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
    const visionModel = this.options.visionModel ?? 'gpt-4o';
    const requestParams: any = {
      model: visionModel,
      messages: [
        {
          role: 'user',
          content: [
            { type: 'text', text: optionsArg.prompt },
            {
              type: 'image_url',
              image_url: {
                url: `data:image/jpeg;base64,${optionsArg.image.toString('base64')}`
              }
            }
          ]
        }
      ],
      max_tokens: 300
    };
    const result = await this.openAiApiClient.chat.completions.create(requestParams);
    return result.choices[0].message.content || '';
  }
}
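
A minimal usage sketch for the OpenAI provider above. Option names follow IOpenaiProviderOptions from the first hunk; the token is a placeholder, and the './provider.openai.js' path and the start() call are assumptions modeled on the sibling providers.

// Hypothetical usage sketch; path, token, and lifecycle call are assumed.
import * as fs from 'fs';
import { OpenAiProvider } from './provider.openai.js';

const openai = new OpenAiProvider({
  openaiToken: 'sk-...', // placeholder
  chatModel: 'gpt-4o',   // optional overrides; defaults are shown in the methods above
  audioModel: 'tts-1-hd',
  visionModel: 'gpt-4o',
});
await openai.start();

const description = await openai.vision({
  image: await fs.promises.readFile('photo.jpg'),
  prompt: 'Describe this image in one sentence.',
});
console.log(description);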

View File

@ -1,171 +1,3 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';

export interface IPerplexityProviderOptions {
  perplexityToken: string;
}

export class PerplexityProvider extends MultiModalModel {
private options: IPerplexityProviderOptions;
constructor(optionsArg: IPerplexityProviderOptions) {
super();
this.options = optionsArg;
}
async start() {
// Initialize any necessary clients or resources
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
      // Arrow function so `this` below still refers to the provider instance
      transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Perplexity
if (currentMessage) {
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Perplexity
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Make API call to Perplexity
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct', // Using Mixtral model
messages: [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
],
}),
});
if (!response.ok) {
throw new Error(`Perplexity API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Perplexity.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not supported by Perplexity.');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not supported by Perplexity.');
}
}
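
A short usage sketch for the Perplexity provider above; only chat() is implemented, the other methods throw as shown. The import path and token value are placeholders.

// Hypothetical usage sketch; path and token are placeholders.
import { PerplexityProvider } from './provider.perplexity.js';

const perplexity = new PerplexityProvider({ perplexityToken: 'pplx-...' });
await perplexity.start();

const answer = await perplexity.chat({
  systemMessage: 'Answer briefly.',
  userMessage: 'What is retrieval-augmented generation?',
  messageHistory: [],
});
console.log(answer.message);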

View File

@ -1,183 +0,0 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
export interface IXAIProviderOptions {
xaiToken: string;
}
export class XAIProvider extends MultiModalModel {
private options: IXAIProviderOptions;
public openAiApiClient: plugins.openai.default;
public smartpdfInstance: plugins.smartpdf.SmartPdf;
constructor(optionsArg: IXAIProviderOptions) {
super();
this.options = optionsArg;
}
public async start() {
this.openAiApiClient = new plugins.openai.default({
apiKey: this.options.xaiToken,
baseURL: 'https://api.x.ai/v1',
});
this.smartpdfInstance = new plugins.smartpdf.SmartPdf();
}
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
      // Arrow function so `this.openAiApiClient` below resolves to the provider instance
      transform: async (chunk, controller) => {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to X.AI
if (currentMessage) {
const stream = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
});
// Process each chunk from X.AI
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
public async chat(optionsArg: {
systemMessage: string;
userMessage: string;
messageHistory: { role: string; content: string; }[];
}): Promise<{ role: 'assistant'; message: string; }> {
// Prepare messages array with system message, history, and user message
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: optionsArg.userMessage }
];
// Call X.AI's chat completion API
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
// Return the assistant's response
return {
role: 'assistant',
message: completion.choices[0]?.message?.content || ''
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by X.AI');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not supported by X.AI');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: { role: string; content: string; }[];
}): Promise<{ message: any }> {
// First convert PDF documents to images
let pdfDocumentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await this.smartpdfInstance.convertPDFToPngBytes(pdfDocument);
pdfDocumentImageBytesArray = pdfDocumentImageBytesArray.concat(documentImageArray);
}
// Convert images to base64 for inclusion in the message
const imageBase64Array = pdfDocumentImageBytesArray.map(bytes =>
Buffer.from(bytes).toString('base64')
);
// Combine document images into the user message
const enhancedUserMessage = `
${optionsArg.userMessage}
Document contents (as images):
${imageBase64Array.map((img, i) => `Image ${i + 1}: <image data>`).join('\n')}
`;
// Use chat completion to analyze the documents
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: enhancedUserMessage }
];
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
return {
message: completion.choices[0]?.message?.content || ''
};
}
}
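
Finally, a usage sketch for the X.AI provider above. The import path and token are placeholders; note that document(), as written, sends only textual placeholders for the converted page images rather than the image data itself.

// Hypothetical usage sketch; path and token are placeholders.
import * as fs from 'fs';
import { XAIProvider } from './provider.xai.js';

const xai = new XAIProvider({ xaiToken: 'xai-...' });
await xai.start();

const pdfBytes = new Uint8Array(await fs.promises.readFile('contract.pdf'));
const analysis = await xai.document({
  systemMessage: 'You are a contract analyst.',
  userMessage: 'List the key obligations in this document.',
  pdfDocuments: [pdfBytes],
  messageHistory: [],
});
console.log(analysis.message);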