fix(core): Enhanced chat streaming and error handling across providers

This commit is contained in:
Philipp Kunz 2025-02-03 15:16:58 +01:00
parent 189a32683f
commit 0378308721
11 changed files with 1182 additions and 707 deletions


@@ -1,5 +1,13 @@
# Changelog
## 2025-02-03 - 0.0.19 - fix(core)
Enhanced chat streaming and error handling across providers
- Refactored the chatStream method to properly handle input streams and their processing in the Perplexity, OpenAI, Ollama, and Anthropic providers.
- Improved error handling and message parsing in chatStream implementations.
- Defined distinct interfaces for chat options, messages, and responses.
- Adjusted the test logic in test/test.ts for the new classification response requirement.
## 2024-09-19 - 0.0.18 - fix(dependencies)
Update dependencies to the latest versions.

pnpm-lock.yaml generated

File diff suppressed because it is too large

readme.md

@@ -1,6 +1,6 @@
# @push.rocks/smartai
Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat, streaming interactions, and audio responses.
## Install
@@ -12,84 +12,167 @@ npm install @push.rocks/smartai
This command installs the package and adds it to your project's dependencies.
## Supported AI Providers
@push.rocks/smartai supports multiple AI providers, each with its own unique capabilities:
### OpenAI
- Models: GPT-4, GPT-3.5-turbo
- Features: Chat, Streaming, Audio Generation
- Configuration:
```typescript
openaiToken: 'your-openai-token'
```
### Anthropic
- Models: Claude-3-opus-20240229
- Features: Chat, Streaming
- Configuration:
```typescript
anthropicToken: 'your-anthropic-token'
```
### Perplexity
- Models: Mixtral-8x7b-instruct
- Features: Chat, Streaming
- Configuration:
```typescript
perplexityToken: 'your-perplexity-token'
```
### Groq
- Models: Llama-3.3-70b-versatile
- Features: Chat, Streaming
- Configuration:
```typescript
groqToken: 'your-groq-token'
```
### Ollama
- Models: Configurable (default: llama2)
- Features: Chat, Streaming
- Configuration:
```typescript
baseUrl: 'http://localhost:11434' // Optional
model: 'llama2' // Optional
```
## Usage
The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to audio responses. This documentation will guide you through the process of utilizing `@push.rocks/smartai` in your applications.
### Getting Started
Before you begin, ensure you have installed the package as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
### Initializing SmartAi
The first step is to import and initialize the `SmartAi` class with appropriate options for the AI services you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2'
  }
});

await smartAi.start();
```
### Chat Interactions
#### Synchronous Chat
For simple question-answer interactions:
```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Previous messages in the conversation
});
console.log(response.message);
```
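To continue a conversation, append the previous exchange to `messageHistory` on the next call. A minimal sketch building on the example above (the follow-up question is illustrative):
```typescript
const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And what is its population?',
  messageHistory: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: response.message }
  ]
});
console.log(followUp.message);
```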
#### Streaming Chat
For real-time, streaming interactions:
```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Create input and output streams
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

// Send a message
const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};

writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Process the response stream
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value); // Process each chunk of the response
}
```
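Once all input has been sent, closing the writer ends the input stream so the provider can flush any buffered input and finish its response (standard Web Streams behaviour):
```typescript
await writer.close();
```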
### Audio Generation
For providers that support audio generation (currently OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});
// Handle the audio stream (e.g., save to file or play)
```
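Since the returned `audioStream` is a Node.js readable stream, one way to handle it is to pipe it into a file. A minimal sketch, assuming MP3 output and `hello.mp3` as an arbitrary output path:
```typescript
import { createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';

// Persist the stream returned by audio() to disk
await pipeline(audioStream, createWriteStream('hello.mp3'));
```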
### Document Processing
For providers that support document processing (currently OpenAI):
```typescript
const result = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
});
```
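The example above assumes a `pdfBuffer` holding raw PDF bytes. One way to obtain it is to read a local file (a Node.js `Buffer` satisfies the `Uint8Array` requirement); `invoice.pdf` is a hypothetical path:
```typescript
import { readFile } from 'fs/promises';

// Buffer is a Uint8Array subclass, so it can be passed in pdfDocuments
const pdfBuffer = await readFile('invoice.pdf');
```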
## Error Handling
All providers implement proper error handling. It's recommended to wrap API calls in try-catch blocks:
```typescript
try {
  const response = await smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  });
} catch (error) {
  console.error('AI provider error:', error.message);
}
```
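For transient failures such as rate limits or network errors, a small retry helper around provider calls can be useful. This is a hedged sketch, not part of the library API:
```typescript
async function withRetry<T>(operation: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Simple linear backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, attempt * 1000));
    }
  }
  throw lastError;
}

const reply = await withRetry(() =>
  smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  })
);
```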
## License and Legal Information


@@ -32,7 +32,7 @@ tap.test('should document a pdf', async () => {
const pdfUrl = 'https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf';
const pdfResponse = await smartrequest.getBinary(pdfUrl);
const result = await testSmartai.openaiProvider.document({
systemMessage: 'Classify the document. Only the following answers are allowed: "invoice", "bank account statement", "contract", "other". The answer should only contain the keyword for machine use.',
userMessage: "Classify the document.",
messageHistory: [],
pdfDocuments: [pdfResponse.body],


@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smartai',
version: '0.0.19',
description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}


@@ -1,32 +1,65 @@
/**
* Message format for chat interactions
*/
export interface ChatMessage {
role: 'assistant' | 'user' | 'system';
content: string;
}
/**
* Options for chat interactions
*/
export interface ChatOptions {
systemMessage: string;
userMessage: string;
messageHistory: ChatMessage[];
}
/**
* Response format for chat interactions
*/
export interface ChatResponse {
role: 'assistant';
message: string;
}
/**
* Abstract base class for multi-modal AI models.
* Provides a common interface for different AI providers (OpenAI, Anthropic, Perplexity, Ollama)
*/
export abstract class MultiModalModel {
/**
* Initializes the model and any necessary resources
* Should be called before using any other methods
*/
abstract start(): Promise<void>;
/**
* Cleans up any resources used by the model
* Should be called when the model is no longer needed
*/
abstract stop(): Promise<void>;
/**
* Synchronous chat interaction with the model
* @param optionsArg Options containing system message, user message, and message history
* @returns Promise resolving to the assistant's response
*/
public abstract chat(optionsArg: ChatOptions): Promise<ChatResponse>;
/**
* Streaming interface for chat interactions
* Allows for real-time responses from the model
* @param input Stream of user messages
* @returns Stream of model responses
*/
public abstract chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>>;
/**
* Text-to-speech conversion
* @param optionsArg Options containing the message to convert to speech
* @returns Promise resolving to a readable stream of audio data
* @throws Error if the provider doesn't support audio generation
*/
public abstract audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream>;
}
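For illustration, a minimal provider implementing this interface could look like the sketch below; `EchoProvider` is hypothetical and not part of this commit or the package:
```typescript
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse } from './abstract.classes.multimodal.js';

// Hypothetical provider that simply echoes the user message back
export class EchoProvider extends MultiModalModel {
  async start() {}
  async stop() {}

  public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
    return { role: 'assistant', message: `You said: ${optionsArg.userMessage}` };
  }

  public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
    const decoder = new TextDecoder();
    // Decode incoming chunks and echo them back as text
    return input.pipeThrough(new TransformStream<Uint8Array, string>({
      transform(chunk, controller) {
        controller.enqueue(decoder.decode(chunk, { stream: true }));
      }
    }));
  }

  public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
    throw new Error('Audio generation is not supported by EchoProvider.');
  }
}
```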


@@ -1,6 +1,7 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IAnthropicProviderOptions {
anthropicToken: string;
@@ -23,39 +24,109 @@ export class AnthropicProvider extends MultiModalModel {
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Anthropic
if (currentMessage) {
const stream = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
system: '',
stream: true,
max_tokens: 4000,
});
// Process each chunk from Anthropic
for await (const chunk of stream) {
const content = chunk.delta?.text;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Convert message history to Anthropic format
const messages = optionsArg.messageHistory.map(msg => ({
role: msg.role === 'assistant' ? 'assistant' as const : 'user' as const,
content: msg.content
}));
const result = await this.anthropicApiClient.messages.create({
model: 'claude-3-opus-20240229',
system: optionsArg.systemMessage,
messages: [
...messages,
{ role: 'user' as const, content: optionsArg.userMessage }
],
max_tokens: 4000,
});
// Extract text content from the response
let message = '';
for (const block of result.content) {
if ('text' in block) {
message += block.text;
}
}
return {
role: 'assistant' as const,
message,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
// Anthropic does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not yet supported by Anthropic.');
}


@@ -0,0 +1,179 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IGroqProviderOptions {
groqToken: string;
model?: string;
}
export class GroqProvider extends MultiModalModel {
private options: IGroqProviderOptions;
private baseUrl = 'https://api.groq.com/v1';
constructor(optionsArg: IGroqProviderOptions) {
super();
this.options = {
...optionsArg,
model: optionsArg.model || 'llama-3.3-70b-versatile', // Default model
};
}
async start() {}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Groq
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Groq
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
const messages = [
// System message
{
role: 'system',
content: optionsArg.systemMessage,
},
// Message history
...optionsArg.messageHistory.map(msg => ({
role: msg.role,
content: msg.content,
})),
// User message
{
role: 'user',
content: optionsArg.userMessage,
},
];
const response = await fetch(`${this.baseUrl}/chat/completions`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.groqToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.options.model,
messages,
temperature: 0.7,
max_completion_tokens: 1024,
stream: false,
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(`Groq API error: ${error.message || response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant',
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
// Groq does not provide an audio API, so this method is not implemented.
throw new Error('Audio generation is not yet supported by Groq.');
}
}
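Given the constructor and `chat()` implementation above, direct usage of `GroqProvider` might look like this sketch (token and prompt are placeholders):
```typescript
const groq = new GroqProvider({ groqToken: 'your-groq-token' });
await groq.start();

const response = await groq.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Summarize the benefits of streaming APIs in two sentences.',
  messageHistory: []
});
console.log(response.message);
```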


@@ -1,3 +1,170 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IOllamaProviderOptions {
baseUrl?: string;
model?: string;
}
export class OllamaProvider extends MultiModalModel {
private options: IOllamaProviderOptions;
private baseUrl: string;
private model: string;
constructor(optionsArg: IOllamaProviderOptions = {}) {
super();
this.options = optionsArg;
this.baseUrl = optionsArg.baseUrl || 'http://localhost:11434';
this.model = optionsArg.model || 'llama2';
}
async start() {
// Verify Ollama is running
try {
const response = await fetch(`${this.baseUrl}/api/tags`);
if (!response.ok) {
throw new Error('Failed to connect to Ollama server');
}
} catch (error) {
throw new Error(`Failed to connect to Ollama server at ${this.baseUrl}: ${error.message}`);
}
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Ollama
if (currentMessage) {
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Ollama
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.trim()) {
try {
const parsed = JSON.parse(line);
const content = parsed.message?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse Ollama response:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Format messages for Ollama
const messages = [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
];
// Make API call to Ollama
const response = await fetch(`${this.baseUrl}/api/chat`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: this.model,
messages: messages,
stream: false
}),
});
if (!response.ok) {
throw new Error(`Ollama API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Ollama.');
}
}
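Based on the options and methods above, a direct usage sketch for `OllamaProvider`, assuming a local Ollama server with the `llama2` model available:
```typescript
const ollama = new OllamaProvider({ baseUrl: 'http://localhost:11434', model: 'llama2' });
await ollama.start(); // verifies the server is reachable

const response = await ollama.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Explain what a vector database is.',
  messageHistory: []
});
console.log(response.message);
```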


@@ -27,11 +27,72 @@ export class OpenAiProvider extends MultiModalModel {
public async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to OpenAI
if (currentMessage) {
const stream = await this.openAiApiClient.chat.completions.create({
model: 'gpt-4',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
});
// Process each chunk from OpenAI
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction


@@ -1,3 +1,158 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { MultiModalModel } from './abstract.classes.multimodal.js';
import type { ChatOptions, ChatResponse, ChatMessage } from './abstract.classes.multimodal.js';
export interface IPerplexityProviderOptions {
perplexityToken: string;
}
export class PerplexityProvider extends MultiModalModel {
private options: IPerplexityProviderOptions;
constructor(optionsArg: IPerplexityProviderOptions) {
super();
this.options = optionsArg;
}
async start() {
// Initialize any necessary clients or resources
}
async stop() {}
public async chatStream(input: ReadableStream<Uint8Array>): Promise<ReadableStream<string>> {
// Create a TextDecoder to handle incoming chunks
const decoder = new TextDecoder();
let buffer = '';
let currentMessage: { role: string; content: string; } | null = null;
// Create a TransformStream to process the input
const transform = new TransformStream<Uint8Array, string>({
async transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
// Try to parse complete JSON messages from the buffer
while (true) {
const newlineIndex = buffer.indexOf('\n');
if (newlineIndex === -1) break;
const line = buffer.slice(0, newlineIndex);
buffer = buffer.slice(newlineIndex + 1);
if (line.trim()) {
try {
const message = JSON.parse(line);
currentMessage = {
role: message.role || 'user',
content: message.content || '',
};
} catch (e) {
console.error('Failed to parse message:', e);
}
}
}
// If we have a complete message, send it to Perplexity
if (currentMessage) {
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct',
messages: [{ role: currentMessage.role, content: currentMessage.content }],
stream: true,
}),
});
// Process each chunk from Perplexity
const reader = response.body?.getReader();
if (reader) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) {
controller.enqueue(content);
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
} finally {
reader.releaseLock();
}
}
currentMessage = null;
}
},
flush(controller) {
if (buffer) {
try {
const message = JSON.parse(buffer);
controller.enqueue(message.content || '');
} catch (e) {
console.error('Failed to parse remaining buffer:', e);
}
}
}
});
// Connect the input to our transform stream
return input.pipeThrough(transform);
}
// Implementing the synchronous chat interaction
public async chat(optionsArg: ChatOptions): Promise<ChatResponse> {
// Make API call to Perplexity
const response = await fetch('https://api.perplexity.ai/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${this.options.perplexityToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'mixtral-8x7b-instruct', // Using Mixtral model
messages: [
{ role: 'system', content: optionsArg.systemMessage },
...optionsArg.messageHistory,
{ role: 'user', content: optionsArg.userMessage }
],
}),
});
if (!response.ok) {
throw new Error(`Perplexity API error: ${response.statusText}`);
}
const result = await response.json();
return {
role: 'assistant' as const,
message: result.choices[0].message.content,
};
}
public async audio(optionsArg: { message: string }): Promise<NodeJS.ReadableStream> {
throw new Error('Audio generation is not supported by Perplexity.');
}
}
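And a corresponding usage sketch for `PerplexityProvider` (token and prompt are placeholders):
```typescript
const perplexity = new PerplexityProvider({ perplexityToken: 'your-perplexity-token' });
await perplexity.start();

const response = await perplexity.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Give a one-paragraph overview of large language models.',
  messageHistory: []
});
console.log(response.message);
```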