36 Commits

Author SHA1 Message Date
6916dd9e2a 0.4.0 2025-02-08 12:08:14 +01:00
f89888a542 feat(core): Added support for Exo AI provider 2025-02-08 12:08:14 +01:00
d93b198b09 0.3.3 2025-02-05 14:24:34 +01:00
9e390d0fdb fix(documentation): Update readme with detailed license and legal information. 2025-02-05 14:24:34 +01:00
8329ee861e 0.3.2 2025-02-05 14:22:41 +01:00
b8585a0afb fix(documentation): Remove redundant badges from readme 2025-02-05 14:22:41 +01:00
c96f5118cf 0.3.1 2025-02-05 14:21:27 +01:00
17e1a1f1e1 fix(documentation): Updated README structure and added detailed usage examples 2025-02-05 14:21:26 +01:00
de940dff75 0.3.0 2025-02-05 14:09:07 +01:00
4fc1e029e4 feat(integration-xai): Add support for X.AI provider with chat and document processing capabilities. 2025-02-05 14:09:06 +01:00
d0a4151a2b 0.2.0 2025-02-03 17:48:37 +01:00
ad5dd4799b feat(provider.anthropic): Add support for vision and document processing in Anthropic provider 2025-02-03 17:48:36 +01:00
1c49af74ac 0.1.0 2025-02-03 15:26:00 +01:00
eda8ce36df feat(providers): Add vision and document processing capabilities to providers 2025-02-03 15:26:00 +01:00
e82c510094 0.0.19 2025-02-03 15:16:59 +01:00
0378308721 fix(core): Enhanced chat streaming and error handling across providers 2025-02-03 15:16:58 +01:00
189a32683f 0.0.18 2024-09-19 12:56:35 +02:00
f731b9f78d fix(dependencies): Update dependencies to the latest versions. 2024-09-19 12:56:35 +02:00
3701e21284 update description 2024-05-29 14:11:41 +02:00
490d4996d2 0.0.17 2024-05-17 17:18:26 +02:00
f099a8f1ed fix(core): update 2024-05-17 17:18:26 +02:00
a0228a0abc 0.0.16 2024-05-17 16:25:22 +02:00
a5257b52e7 fix(core): update 2024-05-17 16:25:22 +02:00
a4144fc071 0.0.15 2024-04-29 18:04:14 +02:00
af46b3e81e fix(core): update 2024-04-29 18:04:14 +02:00
d50427937c 0.0.14 2024-04-29 12:38:25 +02:00
ffde2e0bf1 fix(core): update 2024-04-29 12:38:25 +02:00
82abc06da4 0.0.13 2024-04-29 12:37:43 +02:00
3a5f2d52e5 fix(core): update 2024-04-29 12:37:43 +02:00
f628a71184 0.0.12 2024-04-29 11:18:41 +02:00
d1465fc868 fix(provider): fix anthropic integration 2024-04-29 11:18:40 +02:00
9e19d320e1 0.0.11 2024-04-27 12:47:50 +02:00
158d49fa95 fix(core): update 2024-04-27 12:47:49 +02:00
1ce412fd00 0.0.10 2024-04-25 10:49:08 +02:00
92c382c16e fix(core): update 2024-04-25 10:49:07 +02:00
63d3b7c9bb update tsconfig 2024-04-14 17:19:32 +02:00
22 changed files with 6949 additions and 3038 deletions

changelog.md (new file, 116 lines)

@ -0,0 +1,116 @@
# Changelog
## 2025-02-08 - 0.4.0 - feat(core)
Added support for Exo AI provider
- Introduced ExoProvider with chat functionalities.
- Updated SmartAi class to initialize ExoProvider.
- Extended Conversation class to support ExoProvider.
## 2025-02-05 - 0.3.3 - fix(documentation)
Update readme with detailed license and legal information.
- Added explicit section on License and Legal Information in the README.
- Clarified the use of trademarks and company information.
## 2025-02-05 - 0.3.2 - fix(documentation)
Remove redundant badges from readme
- Removed Build Status badge from the readme file.
- Removed License badge from the readme file.
## 2025-02-05 - 0.3.1 - fix(documentation)
Updated README structure and added detailed usage examples
- Introduced a Table of Contents
- Included comprehensive sections for chat, streaming chat, audio generation, document processing, and vision processing
- Added example code and detailed configuration steps for supported AI providers
- Clarified the development setup with instructions for running tests and building the project
## 2025-02-05 - 0.3.0 - feat(integration-xai)
Add support for X.AI provider with chat and document processing capabilities.
- Introduced XAIProvider class for integrating X.AI features.
- Implemented chat streaming and synchronous chat for X.AI.
- Enabled document processing capabilities with PDF conversion in X.AI.
## 2025-02-03 - 0.2.0 - feat(provider.anthropic)
Add support for vision and document processing in Anthropic provider
- Implemented vision tasks for Anthropic provider using Claude-3-opus-20240229 model.
- Implemented document processing for Anthropic provider, supporting conversion of PDF documents to images and analysis with Claude-3-opus-20240229 model.
- Updated documentation to reflect the new capabilities of the Anthropic provider.
## 2025-02-03 - 0.1.0 - feat(providers)
Add vision and document processing capabilities to providers
- OpenAI and Ollama providers now support vision tasks using GPT-4 Vision and Llava models respectively.
- Document processing has been implemented for OpenAI and Ollama providers, converting PDFs to images for analysis.
- Introduced abstract methods for vision and document processing in the MultiModalModel class.
- Updated the readme file with examples for vision and document processing.
## 2025-02-03 - 0.0.19 - fix(core)
Enhanced chat streaming and error handling across providers
- Refactored chatStream method to properly handle input streams and processes in Perplexity, OpenAI, Ollama, and Anthropic providers.
- Improved error handling and message parsing in chatStream implementations.
- Defined distinct interfaces for chat options, messages, and responses.
- Adjusted the test logic in test/test.ts for the new classification response requirement.
## 2024-09-19 - 0.0.18 - fix(dependencies)
Update dependencies to the latest versions.
- Updated @git.zone/tsbuild from ^2.1.76 to ^2.1.84
- Updated @git.zone/tsrun from ^1.2.46 to ^1.2.49
- Updated @push.rocks/tapbundle from ^5.0.23 to ^5.3.0
- Updated @types/node from ^20.12.12 to ^22.5.5
- Updated @anthropic-ai/sdk from ^0.21.0 to ^0.27.3
- Updated @push.rocks/smartfile from ^11.0.14 to ^11.0.21
- Updated @push.rocks/smartpromise from ^4.0.3 to ^4.0.4
- Updated @push.rocks/webstream from ^1.0.8 to ^1.0.10
- Updated openai from ^4.47.1 to ^4.62.1
## 2024-05-29 - 0.0.17 - Documentation
Updated project description.
- Improved project description for clarity and details.
## 2024-05-17 - 0.0.16 to 0.0.15 - Core
Fixes and updates.
- Various core updates and fixes for stability improvements.
## 2024-04-29 - 0.0.14 to 0.0.13 - Core
Fixes and updates.
- Multiple core updates and fixes for enhanced functionality.
## 2024-04-29 - 0.0.12 - Core
Fixes and updates.
- Core update and bug fixes.
## 2024-04-29 - 0.0.11 - Provider
Fix integration for anthropic provider.
- Correction in the integration process with anthropic provider for better compatibility.
## 2024-04-27 - 0.0.10 to 0.0.9 - Core
Fixes and updates.
- Updates and fixes to core components.
- Updated tsconfig for improved TypeScript configuration.
## 2024-04-01 - 0.0.8 to 0.0.7 - Core and npmextra
Core updates and npmextra configuration.
- Core fixes and updates.
- Updates to npmextra.json for githost configuration.
## 2024-03-31 - 0.0.6 to 0.0.2 - Core
Initial core updates and fixes.
- Multiple updates and fixes to core following initial versions.
This summarizes the relevant updates and changes based on the provided commit messages. The changelog excludes commits that are version tags without meaningful content or repeated entries.

license (new file, 19 lines)

@ -0,0 +1,19 @@
Copyright (c) 2024 Task Venture Capital GmbH (hello@task.vc)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

npmextra.json

@ -5,18 +5,20 @@
"githost": "code.foss.global",
"gitscope": "push.rocks",
"gitrepo": "smartai",
"description": "Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.",
"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"npmPackagename": "@push.rocks/smartai",
"license": "MIT",
"projectDomain": "push.rocks",
"keywords": [
"AI models integration",
"OpenAI GPT",
"Anthropic AI",
"text-to-speech",
"conversation stream",
"AI integration",
"chatbot",
"TypeScript",
"ESM"
"OpenAI",
"Anthropic",
"multi-model support",
"audio responses",
"text-to-speech",
"streaming chat"
]
}
},

package.json

@ -1,8 +1,8 @@
{
"name": "@push.rocks/smartai",
"version": "0.0.9",
"version": "0.4.0",
"private": false,
"description": "Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.",
"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"main": "dist_ts/index.js",
"typings": "dist_ts/index.d.ts",
"type": "module",
@ -14,29 +14,33 @@
"buildDocs": "(tsdoc)"
},
"devDependencies": {
"@git.zone/tsbuild": "^2.1.25",
"@git.zone/tsbuild": "^2.1.84",
"@git.zone/tsbundle": "^2.0.5",
"@git.zone/tsrun": "^1.2.46",
"@git.zone/tstest": "^1.0.44",
"@push.rocks/tapbundle": "^5.0.15",
"@types/node": "^20.8.7"
"@git.zone/tsrun": "^1.2.49",
"@git.zone/tstest": "^1.0.90",
"@push.rocks/qenv": "^6.0.5",
"@push.rocks/tapbundle": "^5.3.0",
"@types/node": "^22.5.5"
},
"dependencies": {
"@anthropic-ai/sdk": "^0.19.1",
"@push.rocks/qenv": "^6.0.5",
"@push.rocks/smartfile": "^11.0.4",
"@push.rocks/smartpath": "^5.0.11",
"@push.rocks/smartpromise": "^4.0.3",
"openai": "^4.31.0"
"@anthropic-ai/sdk": "^0.27.3",
"@push.rocks/smartarray": "^1.0.8",
"@push.rocks/smartfile": "^11.0.21",
"@push.rocks/smartpath": "^5.0.18",
"@push.rocks/smartpdf": "^3.1.6",
"@push.rocks/smartpromise": "^4.0.4",
"@push.rocks/smartrequest": "^2.0.22",
"@push.rocks/webstream": "^1.0.10",
"openai": "^4.62.1"
},
"repository": {
"type": "git",
"url": "git+https://code.foss.global/push.rocks/smartai.git"
"url": "https://code.foss.global/push.rocks/smartai.git"
},
"bugs": {
"url": "https://code.foss.global/push.rocks/smartai/issues"
},
"homepage": "https://code.foss.global/push.rocks/smartai#readme",
"homepage": "https://code.foss.global/push.rocks/smartai",
"browserslist": [
"last 1 chrome versions"
],
@ -53,12 +57,14 @@
"readme.md"
],
"keywords": [
"AI models integration",
"OpenAI GPT",
"Anthropic AI",
"text-to-speech",
"conversation stream",
"AI integration",
"chatbot",
"TypeScript",
"ESM"
"OpenAI",
"Anthropic",
"multi-model support",
"audio responses",
"text-to-speech",
"streaming chat"
]
}

pnpm-lock.yaml (generated, 7618 lines; diff suppressed because it is too large)

qenv.yml

@ -1,2 +1,4 @@
required:
- OPENAI_TOKEN
- ANTHROPIC_TOKEN
- PERPLEXITY_TOKEN

readme.hints.md (new file, 1 line)

@ -0,0 +1 @@

readme.md

@ -1,108 +1,329 @@
# @push.rocks/smartai
a standardized interface to talk to AI models
## Install
To install `@push.rocks/smartai`, run the following command in your terminal:
[![npm version](https://badge.fury.io/js/%40push.rocks%2Fsmartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Supported AI Providers](#supported-ai-providers)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
- [Chat Interactions](#chat-interactions)
- [Streaming Chat](#streaming-chat)
- [Audio Generation](#audio-generation)
- [Document Processing](#document-processing)
- [Vision Processing](#vision-processing)
- [Error Handling](#error-handling)
- [Development](#development)
- [Running Tests](#running-tests)
- [Building the Project](#building-the-project)
- [Contributing](#contributing)
- [License](#license)
- [Legal Information](#legal-information)
## Features
- **Unified API:** Seamlessly integrate multiple AI providers with a consistent interface.
- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
- **Document Processing:** Analyze PDFs and other documents using vision models.
- **Extensible:** Easily extend the library to support additional AI providers.
## Installation
To install SmartAi, run the following command:
```bash
npm install @push.rocks/smartai
```
This will add the package to your project's dependencies.
This will add the package to your project's dependencies.
## Usage
## Supported AI Providers
In the following guide, you'll learn how to leverage `@push.rocks/smartai` for integrating AI models into your applications using TypeScript with ESM syntax.
SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:
### Getting Started
### OpenAI
First, you'll need to import the necessary modules from `@push.rocks/smartai`. This typically includes the main `SmartAi` class along with any specific provider classes you intend to use, such as `OpenAiProvider` or `AnthropicProvider`.
- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
- **Configuration Example:**
```typescript
import { SmartAi, OpenAiProvider, AnthropicProvider } from '@push.rocks/smartai';
openaiToken: 'your-openai-token'
```
### Initialization
### X.AI
Create an instance of `SmartAi` by providing the required options, which include authentication tokens for the AI providers you plan to use.
- **Models:** Grok-2-latest
- **Features:** Chat, Streaming, Document Processing
- **Configuration Example:**
```typescript
xaiToken: 'your-xai-token'
```
### Anthropic
- **Models:** Claude-3-opus-20240229
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**
```typescript
anthropicToken: 'your-anthropic-token'
```
### Perplexity
- **Models:** Mixtral-8x7b-instruct
- **Features:** Chat, Streaming
- **Configuration Example:**
```typescript
perplexityToken: 'your-perplexity-token'
```
### Groq
- **Models:** Llama-3.3-70b-versatile
- **Features:** Chat, Streaming
- **Configuration Example:**
```typescript
groqToken: 'your-groq-token'
```
### Ollama
- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**
```typescript
ollama: {
baseUrl: 'http://localhost:11434', // Optional
model: 'llama2', // Optional
visionModel: 'llava' // Optional for vision and document tasks
}
```
### Exo
- **Models:** Configurable (supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek)
- **Features:** Chat, Streaming
- **Configuration Example:**
```typescript
exo: {
baseUrl: 'http://localhost:8080/v1', // Optional
apiKey: 'your-api-key' // Optional for local deployments
}
```
## Quick Start
Initialize SmartAi with the provider configurations you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';
const smartAi = new SmartAi({
openaiToken: 'your-openai-token-here',
anthropicToken: 'your-anthropic-token-here'
openaiToken: 'your-openai-token',
xaiToken: 'your-xai-token',
anthropicToken: 'your-anthropic-token',
perplexityToken: 'your-perplexity-token',
groqToken: 'your-groq-token',
ollama: {
baseUrl: 'http://localhost:11434',
model: 'llama2'
},
exo: {
baseUrl: 'http://localhost:8080/v1',
apiKey: 'your-api-key'
}
});
await smartAi.start();
```
## Usage Examples
### Chat Interactions
**Synchronous Chat:**
```typescript
const response = await smartAi.openaiProvider.chat({
systemMessage: 'You are a helpful assistant.',
userMessage: 'What is the capital of France?',
messageHistory: [] // Include previous conversation messages if applicable
});
console.log(response.message);
```
### Streaming Chat
**Real-Time Streaming:**
```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();
// Create a transform stream for sending and receiving data
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();
const message = {
role: 'user',
content: 'Tell me a story about a brave knight'
};
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));
// Start streaming the response
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
console.log('AI:', value);
}
```
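The read loop above only finishes once the writable side of the stream has been closed. Assuming no further messages will be sent, close the writer so the response stream can complete:

```typescript
// Close the writable side once all messages have been sent,
// allowing the response stream to end and the read loop to exit.
await writer.close();
```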
### Audio Generation
Generate audio (supported by providers like OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
message: 'Hello, this is a test of text-to-speech'
});
// Process the audio stream, for example, play it or save to a file.
```
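The returned value is a standard Node.js readable stream. As a minimal sketch (assuming MP3 output, which is OpenAI's default text-to-speech format), it could be written to a file like this:

```typescript
import { createWriteStream } from 'fs';

const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Persist the audio stream to disk; the .mp3 extension assumes the default output format.
audioStream.pipe(createWriteStream('speech.mp3'));
```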
### Document Processing
Analyze and extract key information from documents:
```typescript
// Example using OpenAI
const documentResult = await smartAi.openaiProvider.document({
systemMessage: 'Classify the document type',
userMessage: 'What type of document is this?',
messageHistory: [],
pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
});
```
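`pdfDocuments` expects raw PDF bytes as `Uint8Array` values. As a sketch, a local file (hypothetical path) could be loaded like this:

```typescript
import { readFile } from 'fs/promises';

// Hypothetical local file; any source of raw PDF bytes works here.
const pdfBuffer = new Uint8Array(await readFile('./document.pdf'));
```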
### Creating a Conversation
`@push.rocks/smartai` offers a versatile way to handle conversations with AI. To create a conversation using OpenAI, for instance:
Other providers (e.g., Ollama and Anthropic) follow a similar pattern:
```typescript
async function createOpenAiConversation() {
const conversation = await smartAi.createOpenApiConversation();
}
```
For Anthropic-based conversations:
```typescript
async function createAnthropicConversation() {
const conversation = await smartAi.createAnthropicConversation();
}
```
### Advanced Usage: Streaming and Chat
Advanced use cases might require direct access to the streaming APIs provided by the AI models. For instance, handling a chat stream with OpenAI can be achieved as follows:
#### Set Up the Conversation Stream
First, create a conversation and obtain the input and output streams.
```typescript
const conversation = await smartAi.createOpenApiConversation();
const inputStreamWriter = conversation.getInputStreamWriter();
const outputStream = conversation.getOutputStream();
```
#### Write to Input Stream
To send messages to the AI model, use the input stream writer.
```typescript
await inputStreamWriter.write('Hello, SmartAI!');
```
#### Processing Output Stream
Output from the AI model can be processed by reading from the output stream.
```typescript
const reader = outputStream.getReader();
reader.read().then(function processText({ done, value }) {
if (done) {
console.log("Stream complete");
return;
}
console.log("Received from AI:", value);
reader.read().then(processText);
// Using Ollama for document processing
const ollamaResult = await smartAi.ollamaProvider.document({
systemMessage: 'You are a document analysis assistant',
userMessage: 'Extract key information from this document',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
```
### Handling Audio
`@push.rocks/smartai` also supports handling audio responses from AI models. To generate and retrieve audio output:
```typescript
const tts = await TTS.createWithOpenAi(smartAi);
// Using Anthropic for document processing
const anthropicResult = await smartAi.anthropicProvider.document({
systemMessage: 'Analyze the document',
userMessage: 'Please extract the main points',
messageHistory: [],
pdfDocuments: [pdfBuffer]
});
```
This code snippet initializes text-to-speech (TTS) capabilities using the OpenAI model. Further customization and usage of audio APIs will depend on the capabilities offered by the specific AI model and provider you are working with.
### Vision Processing
### Conclusion
Analyze images with vision capabilities:
`@push.rocks/smartai` offers a flexible and standardized interface for interacting with AI models, streamlining the development of applications that leverage AI capabilities. Through the outlined examples, you've seen how to initialize the library, create conversations, and handle both text and audio interactions with AI models in a TypeScript environment following ESM syntax.
```typescript
// Using OpenAI GPT-4 Vision
const imageDescription = await smartAi.openaiProvider.vision({
image: imageBuffer, // Uint8Array containing image data
prompt: 'What do you see in this image?'
});
For a comprehensive understanding of all features and to explore more advanced use cases, refer to the official [documentation](https://code.foss.global/push.rocks/smartai#readme) and check the `npmextra.json` file's `tsdocs` section for additional insights on module usage.
// Using Ollama for vision tasks
const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
image: imageBuffer,
prompt: 'Analyze this image in detail'
});
// Using Anthropic for vision analysis
const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
image: imageBuffer,
prompt: 'Describe the contents of this image'
});
```
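The `image` parameter takes a Node.js `Buffer` of encoded image bytes. For example (hypothetical path; the Anthropic provider currently labels images as JPEG, so JPEG input is the safest assumption):

```typescript
import { readFile } from 'fs/promises';

// Hypothetical local image file; readFile returns a Buffer of the encoded bytes.
const imageBuffer = await readFile('./photo.jpg');
```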
## Error Handling
Always wrap API calls in try-catch blocks to manage errors effectively:
```typescript
try {
const response = await smartAi.openaiProvider.chat({
systemMessage: 'You are a helpful assistant.',
userMessage: 'Hello!',
messageHistory: []
});
console.log(response.message);
} catch (error: any) {
console.error('AI provider error:', error.message);
}
```
## Development
### Running Tests
To run the test suite, use the following command:
```bash
npm run test
```
Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
### Building the Project
Compile the TypeScript code and build the package using:
```bash
npm run build
```
This command prepares the library for distribution.
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a feature branch:
```bash
git checkout -b feature/my-feature
```
3. Commit your changes with clear messages:
```bash
git commit -m 'Add new feature'
```
4. Push your branch to your fork:
```bash
git push origin feature/my-feature
```
5. Open a Pull Request with a detailed description of your changes.
## License and Legal Information

test/test.ts

@ -1,8 +1,84 @@
import { expect, expectAsync, tap } from '@push.rocks/tapbundle';
import * as smartai from '../ts/index.js'
import * as qenv from '@push.rocks/qenv';
import * as smartrequest from '@push.rocks/smartrequest';
import * as smartfile from '@push.rocks/smartfile';
tap.test('first test', async () => {
console.log(smartai)
const testQenv = new qenv.Qenv('./', './.nogit/');
import * as smartai from '../ts/index.js';
let testSmartai: smartai.SmartAi;
tap.test('should create a smartai instance', async () => {
testSmartai = new smartai.SmartAi({
openaiToken: await testQenv.getEnvVarOnDemand('OPENAI_TOKEN'),
});
await testSmartai.start();
});
tap.test('should create chat response with openai', async () => {
const userMessage = 'How are you?';
const response = await testSmartai.openaiProvider.chat({
systemMessage: 'Hello',
userMessage: userMessage,
messageHistory: [
],
});
console.log(`userMessage: ${userMessage}`);
console.log(response.message);
});
tap.test('should document a pdf', async () => {
const pdfUrl = 'https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf';
const pdfResponse = await smartrequest.getBinary(pdfUrl);
const result = await testSmartai.openaiProvider.document({
systemMessage: 'Classify the document. Only the following answers are allowed: "invoice", "bank account statement", "contract", "other". The answer should only contain the keyword for machine use.',
userMessage: "Classify the document.",
messageHistory: [],
pdfDocuments: [pdfResponse.body],
});
console.log(result);
});
tap.test('should recognize companies in a pdf', async () => {
const pdfBuffer = await smartfile.fs.toBuffer('./.nogit/demo_without_textlayer.pdf');
const result = await testSmartai.openaiProvider.document({
systemMessage: `
summarize the document.
answer in JSON format, adhering to the following schema:
\`\`\`typescript
type TAnswer = {
entitySender: {
type: 'official state entity' | 'company' | 'person';
name: string;
address: string;
city: string;
country: string;
EU: boolean; // whether the entity is within EU
};
entityReceiver: {
type: 'official state entity' | 'company' | 'person';
name: string;
address: string;
city: string;
country: string;
EU: boolean; // whether the entity is within EU
};
date: string; // the date of the document as YYYY-MM-DD
title: string; // a short title, suitable for a filename
}
\`\`\`
`,
userMessage: "Classify the document.",
messageHistory: [],
pdfDocuments: [pdfBuffer],
});
console.log(result);
})
tap.start()
tap.test('should stop the smartai instance', async () => {
await testSmartai.stop();
});
export default tap.start();

ts/00_commitinfo_data.ts

@ -1,8 +1,8 @@
/**
* autocreated commitinfo by @pushrocks/commitinfo
* autocreated commitinfo by @push.rocks/commitinfo
*/
export const commitinfo = {
name: '@push.rocks/smartai',
version: '0.0.9',
description: 'Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.'
version: '0.4.0',
description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}

ts/abstract.classes.multimodal.ts

@ -1,15 +1,86 @@
/**
* Message format for chat interactions
*/
export interface ChatMessage {
role: 'assistant' | 'user' | 'system';
content: string;
}
/**
* Options for chat interactions
*/
export interface ChatOptions {
systemMessage: string;
userMessage: string;
messageHistory: ChatMessage[];
}
/**
* Response format for chat interactions
*/
export interface ChatResponse {
role: 'assistant';
message: string;
}
/**
* Abstract base class for multi-modal AI models.
* Provides a common interface for different AI providers (OpenAI, Anthropic, Perplexity, Ollama)
*/
export abstract class MultiModalModel {
/**
* starts the model
* Initializes the model and any necessary resources
* Should be called before using any other methods
*/
abstract start(): Promise<void>;
/**
* stops the model
* Cleans up any resources used by the model
* Should be called when the model is no longer needed
*/
abstract stop(): Promise<void>;
// Defines a streaming interface for chat interactions.
// The implementation will vary based on the specific AI model.
abstract chatStream(input: ReadableStream<string>): ReadableStream<string>;
/**
* Synchronous chat interaction with the model
* @param optionsArg Options containing system message, user message, and message history
* @returns Promise resolving to the assistant's response
*/
public abstract chat(optionsArg: ChatOptions): Promise<ChatResponse>;
/**
* Streaming interface for chat interactions
* Allows for real-time responses from the model
* @param input Stream of user messages
* @returns Stream of model responses
*/
public abstract chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>>;
/**
* Text-to-speech conversion
* @param optionsArg Options containing the message to convert to speech
* @returns Promise resolving to a readable stream of audio data
* @throws Error if the provider doesn't support audio generation
*/
public abstract audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream>;
/**
* Vision-language processing
* @param optionsArg Options containing the image and prompt for analysis
* @returns Promise resolving to the model's description or analysis of the image
* @throws Error if the provider doesn't support vision tasks
*/
public abstract vision(optionsArg: { image: Buffer; prompt: string }): Promise<string>;
/**
* Document analysis and processing
* @param optionsArg Options containing system message, user message, PDF documents, and message history
* @returns Promise resolving to the model's analysis of the documents
* @throws Error if the provider doesn't support document processing
*/
public abstract document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }>;
}

ts/classes.conversation.ts

@ -12,9 +12,11 @@ export interface IConversationOptions {
*/
export class Conversation {
// STATIC
public static async createWithOpenAi(smartaiRef: SmartAi) {
const openaiProvider = new OpenAiProvider(smartaiRef.options.openaiToken);
const conversation = new Conversation(smartaiRef, {
public static async createWithOpenAi(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.openaiProvider) {
throw new Error('OpenAI provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
@ -22,9 +24,11 @@ export class Conversation {
return conversation;
}
public static async createWithAnthropic(smartaiRef: SmartAi) {
const anthropicProvider = new OpenAiProvider(smartaiRef.options.anthropicToken);
const conversation = new Conversation(smartaiRef, {
public static async createWithAnthropic(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.anthropicProvider) {
throw new Error('Anthropic provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
@ -32,6 +36,65 @@ export class Conversation {
return conversation;
}
public static async createWithPerplexity(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.perplexityProvider) {
throw new Error('Perplexity provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithExo(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.exoProvider) {
throw new Error('Exo provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithOllama(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.ollamaProvider) {
throw new Error('Ollama provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithGroq(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.groqProvider) {
throw new Error('Groq provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
public static async createWithXai(smartaiRefArg: SmartAi) {
if (!smartaiRefArg.xaiProvider) {
throw new Error('XAI provider not available');
}
const conversation = new Conversation(smartaiRefArg, {
processFunction: async (input) => {
return '' // TODO implement proper streaming
}
});
return conversation;
}
// INSTANCE
smartaiRef: SmartAi
@ -44,8 +107,8 @@ export class Conversation {
this.processFunction = options.processFunction;
}
setSystemMessage(systemMessage: string) {
this.systemMessage = systemMessage;
public async setSystemMessage(systemMessageArg: string) {
this.systemMessage = systemMessageArg;
}
private setupOutputStream(): ReadableStream<string> {
@ -57,7 +120,7 @@ export class Conversation {
}
private setupInputStream(): WritableStream<string> {
return new WritableStream<string>({
const writableStream = new WritableStream<string>({
write: async (chunk) => {
const processedData = await this.processFunction(chunk);
if (this.outputStreamController) {
@ -72,6 +135,7 @@ export class Conversation {
this.outputStreamController?.error(err);
}
});
return writableStream;
}
public getInputStreamWriter(): WritableStreamDefaultWriter<string> {

ts/classes.smartai.ts

@ -1,30 +1,119 @@
import { Conversation } from './classes.conversation.js';
import * as plugins from './plugins.js';
import { AnthropicProvider } from './provider.anthropic.js';
import { OllamaProvider } from './provider.ollama.js';
import { OpenAiProvider } from './provider.openai.js';
import { PerplexityProvider } from './provider.perplexity.js';
import { ExoProvider } from './provider.exo.js';
import { GroqProvider } from './provider.groq.js';
import { XAIProvider } from './provider.xai.js';
export interface ISmartAiOptions {
openaiToken: string;
anthropicToken: string;
openaiToken?: string;
anthropicToken?: string;
perplexityToken?: string;
groqToken?: string;
xaiToken?: string;
exo?: {
baseUrl?: string;
apiKey?: string;
};
ollama?: {
baseUrl?: string;
model?: string;
visionModel?: string;
};
}
export type TProvider = 'openai' | 'anthropic' | 'perplexity' | 'ollama' | 'exo' | 'groq' | 'xai';
export class SmartAi {
public options: ISmartAiOptions;
public openaiProvider: OpenAiProvider;
public anthropicProvider: AnthropicProvider;
public perplexityProvider: PerplexityProvider;
public ollamaProvider: OllamaProvider;
public exoProvider: ExoProvider;
public groqProvider: GroqProvider;
public xaiProvider: XAIProvider;
constructor(optionsArg: ISmartAiOptions) {
this.options = optionsArg;
}
/**
* creates an OpenAI conversation
*/
public async createOpenApiConversation() {
const conversation = await Conversation.createWithOpenAi(this);
public async start() {
if (this.options.openaiToken) {
this.openaiProvider = new OpenAiProvider({
openaiToken: this.options.openaiToken,
});
await this.openaiProvider.start();
}
if (this.options.anthropicToken) {
this.anthropicProvider = new AnthropicProvider({
anthropicToken: this.options.anthropicToken,
});
await this.anthropicProvider.start();
}
if (this.options.perplexityToken) {
this.perplexityProvider = new PerplexityProvider({
perplexityToken: this.options.perplexityToken,
});
await this.perplexityProvider.start();
}
if (this.options.groqToken) {
this.groqProvider = new GroqProvider({
groqToken: this.options.groqToken,
});
await this.groqProvider.start();
}
if (this.options.xaiToken) {
this.xaiProvider = new XAIProvider({
xaiToken: this.options.xaiToken,
});
await this.xaiProvider.start();
}
if (this.options.ollama) {
this.ollamaProvider = new OllamaProvider({
baseUrl: this.options.ollama.baseUrl,
model: this.options.ollama.model,
visionModel: this.options.ollama.visionModel,
});
await this.ollamaProvider.start();
}
if (this.options.exo) {
this.exoProvider = new ExoProvider({
exoBaseUrl: this.options.exo.baseUrl,
apiKey: this.options.exo.apiKey,
});
await this.exoProvider.start();
}
}
public async stop() {}
/**
* creates an OpenAI conversation
* create a new conversation
*/
public async createAnthropicConversation() {
const conversation = await Conversation.createWithAnthropic(this);
createConversation(provider: TProvider) {
switch (provider) {
case 'exo':
return Conversation.createWithExo(this);
case 'openai':
return Conversation.createWithOpenAi(this);
case 'anthropic':
return Conversation.createWithAnthropic(this);
case 'perplexity':
return Conversation.createWithPerplexity(this);
case 'ollama':
return Conversation.createWithOllama(this);
case 'groq':
return Conversation.createWithGroq(this);
case 'xai':
return Conversation.createWithXai(this);
default:
throw new Error('Provider not available');
}
}
}
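For reference, a minimal sketch of how this new `createConversation` entry point is intended to be used (the per-provider `processFunction` is still marked TODO in this diff, so the conversation streams do not yet produce real responses):

```typescript
const smartAi = new SmartAi({ openaiToken: process.env.OPENAI_TOKEN });
await smartAi.start();

// Dispatches on the provider name; throws if that provider was not configured.
const conversation = await smartAi.createConversation('openai');
await conversation.setSystemMessage('You are a helpful assistant.');
```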

ts/interfaces.ts (new, empty file)

ts/plugins.ts

@ -7,15 +7,23 @@ export {
// @push.rocks scope
import * as qenv from '@push.rocks/qenv';
import * as smartpath from '@push.rocks/smartpath';
import * as smartpromise from '@push.rocks/smartpromise';
import * as smartarray from '@push.rocks/smartarray';
import * as smartfile from '@push.rocks/smartfile';
import * as smartpath from '@push.rocks/smartpath';
import * as smartpdf from '@push.rocks/smartpdf';
import * as smartpromise from '@push.rocks/smartpromise';
import * as smartrequest from '@push.rocks/smartrequest';
import * as webstream from '@push.rocks/webstream';
export {
smartarray,
qenv,
smartpath,
smartpromise,
smartfile,
smartpath,
smartpdf,
smartpromise,
smartrequest,
webstream,
}
// third party

ts/provider.anthropic.ts

@ -1,75 +1,240 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ImageBlockParam, TextBlockParam } from '@anthropic-ai/sdk/resources/messages';
type ContentBlock = ImageBlockParam | TextBlockParam;
export interface IAnthropicProviderOptions {
anthropicToken: string;
}
export class AnthropicProvider extends MultiModalModel {
private anthropicToken: string;
private options: IAnthropicProviderOptions;
public anthropicApiClient: plugins.anthropic.default;
constructor(anthropicToken: string) {
constructor(optionsArg: IAnthropicProviderOptions) {
super();
this.anthropicToken = anthropicToken; // Ensure the token is stored
this.options = optionsArg // Ensure the token is stored
}
async start() {
this.anthropicApiClient = new plugins.anthropic.default({
apiKey: this.anthropicToken,
apiKey: this.options.anthropicToken,
});
}
async stop() {}
chatStream(input: ReadableStream<string>): ReadableStream<string> {
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let messageHistory: { role: 'assistant' | 'user'; content: string }[] = [];
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
return new ReadableStream({
async start(controller) {
const reader = input.getReader();
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
let done, value;
while ((({ done, value } = await reader.read()), !done)) {
const userMessage = decoder.decode(value, { stream: true });
messageHistory.push({ role: 'user', content: userMessage });
const aiResponse = await this.chat('', userMessage, messageHistory);
messageHistory.push({ role: 'assistant', content: aiResponse.message });
// Directly enqueue the string response instead of encoding it first
controller.enqueue(aiResponse.message);
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
controller.close();
} catch (err) {
controller.error(err);
}
}
// If we have a complete message, send it to Anthropic
if (currentMessage) {
const stream = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
system: '',
stream: true,
max_tokens: 4000,
});
// Process each chunk from Anthropic
for await (const chunk of stream) {
const content = chunk.delta?.text;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(
systemMessage: string,
userMessage: string,
messageHistory: {
role: 'assistant' | 'user';
content: string;
}[]
) {
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Convert message history to Anthropic format
const messages = optionsArg.messageHistory.map(msg => ({
role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
content: msg.content
}));
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: systemMessage,
system: optionsArg.systemMessage,
messages: [
...messageHistory,
{ role: 'user', content: userMessage },
...messages,
{ role: 'user' as const, content: optionsArg.userMessage }
],
max_tokens: 4000,
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
message: result.content,
role: 'assistant' as const,
message,
};
}
public async audio(messageArg: string) {
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
// Anthropic does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not supported by Anthropic.');
throw new Error('Audio generation is not yet supported by Anthropic.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
const base64Image = optionsArg.image.toString('base64');
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.prompt
},
{
type: 'image',
source: {
type: 'base64',
media_type: 'image/jpeg',
data: base64Image
}
}
];
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{
role: 'user',
content
}],
max_tokens: 1024
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return message;
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
// Convert PDF documents to images using SmartPDF
const smartpdfInstance = new plugins.smartpdf.SmartPdf();
let documentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
documentImageBytesArray = documentImageBytesArray.concat(documentImageArray);
}
// Convert message history to Anthropic format
const messages = optionsArg.messageHistory.map(msg => ({
role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
content: msg.content
}));
// Create content array with text and images
const content: ContentBlock[] = [
{
type: 'text',
text: optionsArg.userMessage
}
];
// Add each document page as an image
for (const imageBytes of documentImageBytesArray) {
content.push({
type: 'image',
source: {
type: 'base64',
media_type: 'image/jpeg',
data: Buffer.from(imageBytes).toString('base64')
}
});
}
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: optionsArg.systemMessage,
messages: [
...messages,
{ role: 'user', content }
],
max_tokens: 4096
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
message: {
role: 'assistant',
content: message
}
};
}
}

ts/provider.exo.ts (new file, 128 lines)

@ -0,0 +1,128 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
export interface IExoProviderOptions {
exoBaseUrl?: string;
apiKey?: string;
}
export class ExoProvider extends MultiModalModel {
private options: IExoProviderOptions;
public openAiApiClient: plugins.openai.default;
constructor(optionsArg: IExoProviderOptions = {}) {
super();
this.options = {
exoBaseUrl: 'http://localhost:8080/v1', // Default Exo API endpoint
...optionsArg
};
}
public async start() {
this.openAiApiClient = new plugins.openai.default({
apiKey: this.options.apiKey || 'not-needed', // Exo might not require an API key for local deployment
baseURL: this.options.exoBaseUrl,
});
}
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = message;
// Process the message based on its type
if (message.type === 'message') {
const response = await this.chat({
systemMessage: '',
userMessage: message.content,
messageHistory: [{ role: message.role as 'user' | 'assistant' | 'system', content: message.content }]
});
controller.enqueue(JSON.stringify(response) + '\n');
}
} catch (error) {
console.error('Error processing message:', error);
}
}
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
currentMessage = message;
} catch (error) {
console.error('Error processing remaining buffer:', error);
}
}
}
});
return input.pipeThrough(transform);
}
public async chat(options: ChatOptions): Promise<ChatResponse> {
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: options.systemMessage },
...options.messageHistory,
{ role: 'user', content: options.userMessage }
];
try {
const response = await this.openAiApiClient.chat.completions.create({
model: 'local-model', // Exo uses local models
messages: messages,
stream: false
});
return {
role: 'assistant',
message: response.choices[0]?.message?.content || ''
};
} catch (error) {
console.error('Error in chat completion:', error);
throw error;
}
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Exo provider');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision processing is not supported by Exo provider');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not supported by Exo provider');
}
}

ts/provider.groq.ts (new file, 192 lines)

@ -0,0 +1,192 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IGroqProviderOptions {
groqToken: string;
model?: string;
}
export class GroqProvider extends MultiModalModel {
private options: IGroqProviderOptions;
private baseUrl = 'https://api.groq.com/v1';
constructor(optionsArg: IGroqProviderOptions) {
super();
this.options = {
...optionsArg,
model: optionsArg.model || 'llama-3.3-70b-versatile', // Default model
};
}
async start() {}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Groq
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Groq
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
const messages = [
// System message
{
role: 'system',
content: optionsArg.systemMessage,
},
// Message history
...optionsArg.messageHistory.map(msg => ({
role: msg.role,
content: msg.content,
})),
// User message
{
role: 'user',
content: optionsArg.userMessage,
},
];
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages,
temperature: 0.7,
max_completion_tokens: 1024,
stream: false,
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(`Groq API error: ${error.message || response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant',
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
// Groq does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not yet supported by Groq.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not yet supported by Groq.');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not yet supported by Groq.');
}
}

ts/provider.ollama.ts (new file, 252 lines)

@ -0,0 +1,252 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IOllamaProviderOptions {
baseUrl?: string;
model?: string;
visionModel?: string; // Model to use for vision tasks (e.g. 'llava')
}
export class OllamaProvider extends MultiModalModel {
private options: IOllamaProviderOptions;
private baseUrl: string;
private model: string;
private visionModel: string;
constructor(optionsArg: IOllamaProviderOptions = {}) {
super();
this.options = optionsArg;
this.baseUrl = optionsArg.baseUrl || 'http://localhost:11434';
this.model = optionsArg.model || 'llama2';
this.visionModel = optionsArg.visionModel || 'llava';
}
async start() {
// Verify Ollama is running
try {
const response = await fetch(`${this.baseUrl}/api/tags`);
if (!response.ok) {
throw new Error('Failed to connect to Ollama server');
}
} catch (error) {
throw new Error(`Failed to connect to Ollama server at ${this.baseUrl}: ${error.message}`);
}
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Ollama
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Ollama
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.trim()) {
try {
const parsed = JSON.parse(line);
const content = parsed.message?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse Ollama response:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Format messages for Ollama
const messages = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
];
// Make API call to Ollama
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: messages,
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Ollama.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
const base64Image = optionsArg.image.toString('base64');
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.visionModel,
messages: [{
role: 'user',
content: optionsArg.prompt,
images: [base64Image]
}],
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return result.message.content;
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
// Convert PDF documents to images using SmartPDF
const smartpdfInstance = new plugins.smartpdf.SmartPdf();
let documentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await smartpdfInstance.convertPDFToPngBytes(pdfDocument);
documentImageBytesArray = documentImageBytesArray.concat(documentImageArray);
}
// Convert images to base64
const base64Images = documentImageBytesArray.map(bytes => Buffer.from(bytes).toString('base64'));
// Send request to Ollama with images
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.visionModel,
messages: [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{
role: 'user',
content: optionsArg.userMessage,
images: base64Images
}
],
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return {
message: {
role: 'assistant',
content: result.message.content
}
};
}
}
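
For orientation, here is a minimal usage sketch for the Ollama provider above. It assumes the class is exported as OllamaProvider and that its constructor takes baseUrl and model options; neither the class declaration nor the constructor is visible in this excerpt, so treat those names as assumptions inferred from the fields the methods use.

// Hypothetical usage sketch: constructor shape is assumed, not shown in this diff.
const ollama = new OllamaProvider({
  baseUrl: 'http://localhost:11434', // Ollama's default local endpoint
  model: 'llama2',
});
await ollama.start?.(); // assuming the same start()/stop() lifecycle as the other providers
const reply = await ollama.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Summarize this changelog in one sentence.',
  messageHistory: [], // prior ChatMessage entries, if any
});
console.log(reply.message);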

ts/provider.openai.ts

@ -3,67 +3,189 @@ import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';

export interface IOpenaiProviderOptions {
  openaiToken: string;
}

export class OpenAiProvider extends MultiModalModel {
  private options: IOpenaiProviderOptions;
  public openAiApiClient: plugins.openai.default;
  public smartpdfInstance: plugins.smartpdf.SmartPdf;

  constructor(optionsArg: IOpenaiProviderOptions) {
    super();
    this.options = optionsArg;
  }

  public async start() {
    this.openAiApiClient = new plugins.openai.default({
      apiKey: this.options.openaiToken,
      dangerouslyAllowBrowser: true,
    });
    this.smartpdfInstance = new plugins.smartpdf.SmartPdf();
  }

  public async stop() {}

  public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
    // Create a TextDecoder to handle incoming chunks
    const decoder = new TextDecoder();
    let buffer = '';
    let currentMessage: { role: string; content: string; } | null = null;

    // Capture the API client for use inside the TransformStream callbacks,
    // where `this` is bound to the transformer object rather than the provider.
    const openAiApiClient = this.openAiApiClient;

    // Create a TransformStream to process the input
    const transform = new TransformStream<Uint8Array, string>({
      async transform(chunk, controller) {
        buffer += decoder.decode(chunk, { stream: true });

        // Try to parse complete JSON messages from the buffer
        while (true) {
          const newlineIndex = buffer.indexOf('\n');
          if (newlineIndex === -1) break;

          const line = buffer.slice(0, newlineIndex);
          buffer = buffer.slice(newlineIndex + 1);

          if (line.trim()) {
            try {
              const message = JSON.parse(line);
              currentMessage = {
                role: message.role || 'user',
                content: message.content || '',
              };
            } catch (e) {
              console.error('Failed to parse message:', e);
            }
          }
        }

        // If we have a complete message, send it to OpenAI
        if (currentMessage) {
          const stream = await openAiApiClient.chat.completions.create({
            model: 'gpt-4',
            messages: [{ role: currentMessage.role as 'user' | 'assistant' | 'system', content: currentMessage.content }],
            stream: true,
          });

          // Process each chunk from OpenAI
          for await (const chunk of stream) {
            const content = chunk.choices[0]?.delta?.content;
            if (content) {
              controller.enqueue(content);
            }
          }

          currentMessage = null;
        }
      },

      flush(controller) {
        if (buffer) {
          try {
            const message = JSON.parse(buffer);
            controller.enqueue(message.content || '');
          } catch (e) {
            console.error('Failed to parse remaining buffer:', e);
          }
        }
      }
    });

    // Connect the input to our transform stream
    return input.pipeThrough(transform);
  }

  // Implementing the synchronous chat interaction
  public async chat(optionsArg: {
    systemMessage: string;
    userMessage: string;
    messageHistory: {
      role: 'assistant' | 'user';
      content: string;
    }[];
  }) {
    const result = await this.openAiApiClient.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: optionsArg.systemMessage },
        ...optionsArg.messageHistory,
        { role: 'user', content: optionsArg.userMessage },
      ],
    });
    return {
      role: result.choices[0].message.role as 'assistant',
      message: result.choices[0].message.content,
    };
  }

  public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
    const done = plugins.smartpromise.defer<NodeJS.ReadableStream>();
    const result = await this.openAiApiClient.audio.speech.create({
      model: 'tts-1-hd',
      input: optionsArg.message,
      voice: 'nova',
      response_format: 'mp3',
      speed: 1,
    });
    const stream = result.body;
    done.resolve(stream);
    return done.promise;
  }

  public async document(optionsArg: {
    systemMessage: string;
    userMessage: string;
    pdfDocuments: Uint8Array[];
    messageHistory: {
      role: 'assistant' | 'user';
      content: any;
    }[];
  }) {
    let pdfDocumentImageBytesArray: Uint8Array[] = [];
    for (const pdfDocument of optionsArg.pdfDocuments) {
      const documentImageArray = await this.smartpdfInstance.convertPDFToPngBytes(pdfDocument);
      pdfDocumentImageBytesArray = pdfDocumentImageBytesArray.concat(documentImageArray);
    }
    console.log(`image smartfile array`);
    console.log(pdfDocumentImageBytesArray.map((imageBytes) => imageBytes.length));
    const smartfileArray = await plugins.smartarray.map(
      pdfDocumentImageBytesArray,
      async (pdfDocumentImageBytes) => {
        return plugins.smartfile.SmartFile.fromBuffer(
          'pdfDocumentImage.jpg',
          Buffer.from(pdfDocumentImageBytes)
        );
      }
    );
    const result = await this.openAiApiClient.chat.completions.create({
      model: 'gpt-4o',
      // response_format: { type: "json_object" }, // not supported for now
      messages: [
        { role: 'system', content: optionsArg.systemMessage },
        ...optionsArg.messageHistory,
        {
          role: 'user',
          content: [
            { type: 'text', text: optionsArg.userMessage },
            ...(() => {
              const returnArray = [];
              for (const imageBytes of pdfDocumentImageBytesArray) {
                returnArray.push({
                  type: 'image_url',
                  image_url: {
                    url: 'data:image/png;base64,' + Buffer.from(imageBytes).toString('base64'),
                  },
                });
              }
              return returnArray;
            })(),
          ],
        },
      ],
    });
    return {

@ -71,19 +193,26 @@ export class OpenAiProvider extends MultiModalModel {

    };
  }

  public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
    const result = await this.openAiApiClient.chat.completions.create({
      model: 'gpt-4-vision-preview',
      messages: [
        {
          role: 'user',
          content: [
            { type: 'text', text: optionsArg.prompt },
            {
              type: 'image_url',
              image_url: {
                url: `data:image/jpeg;base64,${optionsArg.image.toString('base64')}`
              }
            }
          ]
        }
      ],
      max_tokens: 300
    });
    return result.choices[0].message.content || '';
  }
}

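A short usage sketch of the options-object API introduced above. The constructor and chat signature are taken from the diff; reading the token from an environment variable is purely illustrative.

const openaiProvider = new OpenAiProvider({ openaiToken: process.env.OPENAI_TOKEN! });
await openaiProvider.start(); // sets up the OpenAI client and the SmartPdf instance
const answer = await openaiProvider.chat({
  systemMessage: 'You are a concise assistant.',
  userMessage: 'Explain what chatStream expects as input.',
  messageHistory: [],
});
console.log(answer.message);
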
ts/provider.perplexity.ts Normal file (171 lines)

@ -0,0 +1,171 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IPerplexityProviderOptions {
perplexityToken: string;
}
export class PerplexityProvider extends MultiModalModel {
private options: IPerplexityProviderOptions;
constructor(optionsArg: IPerplexityProviderOptions) {
super();
this.options = optionsArg;
}
async start() {
// Initialize any necessary clients or resources
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
    // Capture the token for use inside the TransformStream callbacks,
    // where `this` is bound to the transformer object rather than the provider.
    const perplexityToken = this.options.perplexityToken;
    // Create a TransformStream to process the input
    const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Perplexity
if (currentMessage) {
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
              'Authorization': `Bearer ${perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Perplexity
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Make API call to Perplexity
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct', // Using Mixtral model
messages: [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
],
}),
});
if (!response.ok) {
throw new Error(`Perplexity API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Perplexity.');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not supported by Perplexity.');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: ChatMessage[];
}): Promise<{ message: any }> {
throw new Error('Document processing is not supported by Perplexity.');
}
}
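
All of these chatStream implementations share the same input framing: newline-delimited JSON objects with role and content fields, decoded from Uint8Array chunks. Here is a sketch of driving the Perplexity stream, assuming a Node 18+ runtime where a web ReadableStream is async-iterable.

const perplexity = new PerplexityProvider({ perplexityToken: process.env.PERPLEXITY_TOKEN! });
await perplexity.start();

// Frame one message as newline-delimited JSON, as the transform expects.
const encoder = new TextEncoder();
const input = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(encoder.encode(JSON.stringify({ role: 'user', content: 'Hello!' }) + '\n'));
    controller.close();
  },
});

const output = await perplexity.chatStream(input);
for await (const textChunk of output) {
  process.stdout.write(textChunk); // plain-text deltas parsed from the SSE stream
}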

ts/provider.xai.ts Normal file (183 lines)

@ -0,0 +1,183 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
export interface IXAIProviderOptions {
xaiToken: string;
}
export class XAIProvider extends MultiModalModel {
private options: IXAIProviderOptions;
public openAiApiClient: plugins.openai.default;
public smartpdfInstance: plugins.smartpdf.SmartPdf;
constructor(optionsArg: IXAIProviderOptions) {
super();
this.options = optionsArg;
}
public async start() {
this.openAiApiClient = new plugins.openai.default({
apiKey: this.options.xaiToken,
baseURL: 'https://api.x.ai/v1',
});
this.smartpdfInstance = new plugins.smartpdf.SmartPdf();
}
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
    // Capture the API client for use inside the TransformStream callbacks,
    // where `this` is bound to the transformer object rather than the provider.
    const openAiApiClient = this.openAiApiClient;
    // Create a TransformStream to process the input
    const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to X.AI
if (currentMessage) {
          const stream = await openAiApiClient.chat.completions.create({
            model: 'grok-2-latest',
            messages: [{ role: currentMessage.role as 'user' | 'assistant' | 'system', content: currentMessage.content }],
stream: true,
});
// Process each chunk from X.AI
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
public async chat(optionsArg: {
systemMessage: string;
userMessage: string;
messageHistory: { role: string; content: string; }[];
}): Promise<{ role: 'assistant'; message: string; }> {
// Prepare messages array with system message, history, and user message
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: optionsArg.userMessage }
];
// Call X.AI's chat completion API
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
// Return the assistant's response
return {
role: 'assistant',
message: completion.choices[0]?.message?.content || ''
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by X.AI');
}
public async vision(optionsArg: { image: Buffer; prompt: string }): Promise<string> {
throw new Error('Vision tasks are not supported by X.AI');
}
public async document(optionsArg: {
systemMessage: string;
userMessage: string;
pdfDocuments: Uint8Array[];
messageHistory: { role: string; content: string; }[];
}): Promise<{ message: any }> {
// First convert PDF documents to images
let pdfDocumentImageBytesArray: Uint8Array[] = [];
for (const pdfDocument of optionsArg.pdfDocuments) {
const documentImageArray = await this.smartpdfInstance.convertPDFToPngBytes(pdfDocument);
pdfDocumentImageBytesArray = pdfDocumentImageBytesArray.concat(documentImageArray);
}
    // Convert images to base64; note they only appear as textual placeholders
    // in the prompt below, since this chat endpoint is used text-only.
const imageBase64Array = pdfDocumentImageBytesArray.map(bytes =>
Buffer.from(bytes).toString('base64')
);
// Combine document images into the user message
const enhancedUserMessage = `
${optionsArg.userMessage}
Document contents (as images):
${imageBase64Array.map((img, i) => `Image ${i + 1}: <image data>`).join('\n')}
`;
// Use chat completion to analyze the documents
const messages: ChatCompletionMessageParam[] = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory.map(msg => ({
role: msg.role as 'system' | 'user' | 'assistant',
content: msg.content
})),
{ role: 'user', content: enhancedUserMessage }
];
const completion = await this.openAiApiClient.chat.completions.create({
model: 'grok-2-latest',
messages: messages,
stream: false,
});
return {
message: completion.choices[0]?.message?.content || ''
};
}
}
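
A sketch of the document flow above. Reading the PDF from disk is an assumption about the caller, not part of this diff; as noted in the code, grok-2 receives only textual placeholders for the page images here.

import { promises as fs } from 'fs';

const xai = new XAIProvider({ xaiToken: process.env.XAI_TOKEN! });
await xai.start(); // initializes the OpenAI-compatible client and SmartPdf

const pdfBytes = new Uint8Array(await fs.readFile('./contract.pdf'));
const res = await xai.document({
  systemMessage: 'You answer questions about the attached document.',
  userMessage: 'What is the effective date?',
  pdfDocuments: [pdfBytes],
  messageHistory: [],
});
console.log(res.message);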