fix(core): Enhanced chat streaming and error handling across providers

2025-02-03 15:16:58 +01:00
parent 189a32683f
commit 0378308721
11 changed files with 1182 additions and 707 deletions

readme.md

# @push.rocks/smartai
Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat, streaming interactions, and audio responses.
## Install
```bash
npm install @push.rocks/smartai
```
This command installs the package and adds it to your project's dependencies.
## Supported AI Providers
@push.rocks/smartai supports multiple AI providers, each with its own unique capabilities:
### OpenAI
- Models: GPT-4, GPT-3.5-turbo
- Features: Chat, Streaming, Audio Generation
- Configuration:
```typescript
openaiToken: 'your-openai-token'
```
### Anthropic
- Models: Claude-3-opus-20240229
- Features: Chat, Streaming
- Configuration:
```typescript
anthropicToken: 'your-anthropic-token'
```
### Perplexity
- Models: Mixtral-8x7b-instruct
- Features: Chat, Streaming
- Configuration:
```typescript
perplexityToken: 'your-perplexity-token'
```
### Groq
- Models: Llama-3.3-70b-versatile
- Features: Chat, Streaming
- Configuration:
```typescript
groqToken: 'your-groq-token'
```
### Ollama
- Models: Configurable (default: llama2)
- Features: Chat, Streaming
- Configuration:
```typescript
baseUrl: 'http://localhost:11434' // Optional
model: 'llama2' // Optional
```
## Usage
The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to audio responses. This documentation will guide you through the process of utilizing `@push.rocks/smartai` in your applications.
### Getting Started
Before you begin, ensure you have installed the package as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
### Initializing SmartAi
The first step is to import and initialize the `SmartAi` class with appropriate options for the AI services you plan to use:
```typescript
import { SmartAi } from '@push.rocks/smartai';
const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  perplexityToken: 'your-perplexity-token',
  groqToken: 'your-groq-token',
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2'
  }
});
await smartAi.start();
```
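In practice you'll usually want to read tokens from the environment instead of hard-coding them. A minimal sketch, assuming conventional environment variable names (they are illustrative, not required by the package):

```typescript
import { SmartAi } from '@push.rocks/smartai';

// OPENAI_API_KEY and ANTHROPIC_API_KEY are assumed names for this
// example; @push.rocks/smartai does not mandate any particular variables.
const smartAi = new SmartAi({
  openaiToken: process.env.OPENAI_API_KEY ?? '',
  anthropicToken: process.env.ANTHROPIC_API_KEY ?? ''
});

await smartAi.start();
```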
### Chat Interactions

#### Synchronous Chat

For simple question-answer interactions:
```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Previous messages in the conversation
});
console.log(response.message);
```
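Because `messageHistory` carries the previous turns, multi-turn conversations are a matter of appending each exchange before the next call. A sketch, assuming history entries use the same `{ role, content }` shape as the streaming message below (check the package typings for the exact type):

```typescript
// Assumed history entry shape; verify against the package's exported types.
const messageHistory = [
  { role: 'user', content: 'What is the capital of France?' },
  { role: 'assistant', content: 'The capital of France is Paris.' }
];

const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And how many people live there?',
  messageHistory // earlier turns let the model resolve "there"
});

console.log(followUp.message);
```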
#### Streaming Chat

For real-time, streaming interactions:
```typescript
// Create input and output streams
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();
const textEncoder = new TextEncoder();

// Send a message
const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
};
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Process the response stream
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value); // Process each chunk of the response
}
```
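If you also want the complete answer once streaming ends (for logging or caching), the read loop can accumulate chunks as it goes. A variant of the loop above, assuming each chunk is a string; note that a stream can only be consumed once, so this replaces the loop rather than following it:

```typescript
// Accumulate streamed chunks into the full response while printing them.
let fullResponse = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  fullResponse += value;
  process.stdout.write(value); // incremental output without extra newlines
}
console.log('\nFull response length:', fullResponse.length);
```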
### Audio Generation
For providers that support audio generation (currently OpenAI):
```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});
// Handle the audio stream (e.g., save to file or play)
```
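In Node.js, one way to handle it is to write the stream straight to disk. A sketch assuming `audio()` resolves to a Node `Readable` of encoded audio bytes (if it returns a web `ReadableStream` instead, convert it with `Readable.fromWeb` first):

```typescript
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';

// Assumes audioStream is a Node Readable; the container format
// (e.g. MP3) depends on the provider's text-to-speech output.
await pipeline(audioStream, createWriteStream('speech.mp3'));
console.log('Audio written to speech.mp3');
```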
### Document Processing
For providers that support document processing (currently OpenAI):
```typescript
const result = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array of PDF content
});
```
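A PDF on disk can be loaded into the expected `Uint8Array` with Node's `fs` module, since a `Buffer` is a `Uint8Array` subclass. The file name here is illustrative:

```typescript
import { readFile } from 'node:fs/promises';

// readFile returns a Buffer, which is a Uint8Array, so it can be
// passed to pdfDocuments directly.
const pdfBuffer = await readFile('./invoice.pdf');

const result = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
console.log(result);
```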
## Error Handling

All providers implement proper error handling. It's recommended to wrap API calls in try-catch blocks:
```typescript
try {
  const response = await smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  });
} catch (error) {
  console.error('AI provider error:', error.message);
}
```
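For transient failures such as rate limits or network hiccups, a retry wrapper can sit on top of any provider call. A minimal sketch with exponential backoff; the retry policy is this example's own, not something the package provides:

```typescript
// Generic retry helper with exponential backoff -- an illustrative
// pattern, not part of @push.rocks/smartai.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      const delayMs = 500 * 2 ** (attempt - 1); // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

const response = await withRetry(() =>
  smartAi.openaiProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Hello!',
    messageHistory: []
  })
);
```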
## License and Legal Information