fix(documentation): Updated README structure and added detailed usage examples

parent de940dff75
commit 17e1a1f1e1
@@ -1,5 +1,13 @@

# Changelog

## 2025-02-05 - 0.3.1 - fix(documentation)
Updated README structure and added detailed usage examples

- Introduced a Table of Contents
- Included comprehensive sections for chat, streaming chat, audio generation, document processing, and vision processing
- Added example code and detailed configuration steps for supported AI providers
- Clarified the development setup with instructions for running tests and building the project

## 2025-02-05 - 0.3.0 - feat(integration-xai)
Add support for X.AI provider with chat and document processing capabilities.
263 readme.md
@@ -1,82 +1,120 @@
# @push.rocks/smartai

SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations, from synchronous and streaming chat to audio generation, document processing, and vision tasks.

[![npm version](https://badge.fury.io/js/%40push.rocks%2Fsmartai.svg)](https://www.npmjs.com/package/@push.rocks/smartai)
[![Build Status](https://github.com/push.rocks/smartai/workflows/CI/badge.svg)](https://github.com/push.rocks/smartai/actions)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Supported AI Providers](#supported-ai-providers)
- [Quick Start](#quick-start)
- [Usage Examples](#usage-examples)
  - [Chat Interactions](#chat-interactions)
  - [Streaming Chat](#streaming-chat)
  - [Audio Generation](#audio-generation)
  - [Document Processing](#document-processing)
  - [Vision Processing](#vision-processing)
- [Error Handling](#error-handling)
- [Development](#development)
  - [Running Tests](#running-tests)
  - [Building the Project](#building-the-project)
- [Contributing](#contributing)
- [License](#license)
- [Legal Information](#legal-information)
## Features

- **Unified API:** Seamlessly integrate multiple AI providers through a consistent interface.
- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
- **Document Processing:** Analyze PDFs and other documents using vision models.
- **Extensible:** Easily extend the library to support additional AI providers.
## Installation

To install SmartAi, run the following command:

```bash
npm install @push.rocks/smartai
```

This will add the package to your project's dependencies.
## Supported AI Providers

SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:

### OpenAI

- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
- **Configuration Example:**

```typescript
openaiToken: 'your-openai-token'
```

### X.AI

- **Models:** Grok-2-latest
- **Features:** Chat, Streaming, Document Processing
- **Configuration Example:**

```typescript
xaiToken: 'your-xai-token'
```

### Anthropic

- **Models:** Claude-3-opus-20240229
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
anthropicToken: 'your-anthropic-token'
```

### Perplexity

- **Models:** Mixtral-8x7b-instruct
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
perplexityToken: 'your-perplexity-token'
```

### Groq

- **Models:** Llama-3.3-70b-versatile
- **Features:** Chat, Streaming
- **Configuration Example:**

```typescript
groqToken: 'your-groq-token'
```

### Ollama

- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
- **Features:** Chat, Streaming, Vision, Document Processing
- **Configuration Example:**

```typescript
ollama: {
  baseUrl: 'http://localhost:11434', // Optional
  model: 'llama2', // Optional
  visionModel: 'llava' // Optional, for vision and document tasks
}
```
## Quick Start

Initialize SmartAi with the provider configurations you plan to use:

```typescript
import { SmartAi } from '@push.rocks/smartai';
@@ -96,35 +134,34 @@ const smartAi = new SmartAi({
await smartAi.start();
```
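Drawing on the provider configuration keys listed earlier, a fuller initialization might look like the following. This is a sketch only: pass tokens solely for the providers you actually use, and verify the exact option names against the package's typings.

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Sketch: combine the per-provider keys shown in "Supported AI Providers".
const smartAi = new SmartAi({
  openaiToken: 'your-openai-token',
  anthropicToken: 'your-anthropic-token',
  xaiToken: 'your-xai-token',
  ollama: {
    baseUrl: 'http://localhost:11434', // Optional
    model: 'llama2', // Optional
    visionModel: 'llava' // Optional, for vision and document tasks
  }
});

await smartAi.start();
```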
## Usage Examples

### Chat Interactions

**Synchronous Chat:**

```typescript
const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: [] // Include previous conversation messages if applicable
});

console.log(response.message);
```
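The `messageHistory` parameter carries context across turns. A minimal sketch of a history manager follows; the `ChatMessage` shape here is an assumption for illustration, not taken from the library's typings:

```typescript
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Keeps a rolling window of prior turns to pass as messageHistory.
class ConversationHistory {
  private messages: ChatMessage[] = [];

  constructor(private maxTurns: number = 20) {}

  addTurn(userMessage: string, assistantMessage: string): void {
    this.messages.push({ role: 'user', content: userMessage });
    this.messages.push({ role: 'assistant', content: assistantMessage });
    // Trim the oldest turn (2 messages) once the window is exceeded.
    while (this.messages.length > this.maxTurns * 2) {
      this.messages.splice(0, 2);
    }
  }

  get messageHistory(): ChatMessage[] {
    return [...this.messages];
  }
}
```

Each call would then pass `history.messageHistory` and record the reply with `addTurn` afterwards.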
### Streaming Chat

**Real-Time Streaming:**

```typescript
const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Create a transform stream for sending and receiving data
const { writable, readable } = new TransformStream();
const writer = writable.getWriter();

// Send a message
const message = {
  role: 'user',
  content: 'Tell me a story about a brave knight'
@@ -132,91 +169,92 @@ const message = {
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

// Start streaming the response
const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log('AI:', value);
}
```
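The streaming example frames each message as one JSON object per line before writing it to the stream. That framing can be exercised on its own with standard `TextEncoder`/`TextDecoder` APIs, independent of any provider:

```typescript
const encoder = new TextEncoder();
const decoder = new TextDecoder();

// Frame one message as a single JSON line, as in the chatStream example.
function encodeNdjsonLine(message: object): Uint8Array {
  return encoder.encode(JSON.stringify(message) + '\n');
}

// Split a received chunk back into parsed JSON messages.
function decodeNdjson(chunk: Uint8Array): object[] {
  return decoder
    .decode(chunk)
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}
```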
### Audio Generation

Generate audio (supported by providers like OpenAI):

```typescript
const audioStream = await smartAi.openaiProvider.audio({
  message: 'Hello, this is a test of text-to-speech'
});

// Process the audio stream, for example, play it or save to a file.
```
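The returned stream can be persisted with Node's standard stream utilities. A sketch, assuming `audioStream` is a web `ReadableStream` of bytes; check the provider's actual return type before relying on this:

```typescript
import { Readable } from 'node:stream';
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';

// Persist a web ReadableStream of audio bytes to disk.
async function saveStreamToFile(
  stream: ReadableStream<Uint8Array>,
  path: string
): Promise<void> {
  // Bridge the web stream into a Node stream and pipe it to the file.
  await pipeline(Readable.fromWeb(stream as any), createWriteStream(path));
}
```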
### Document Processing

Analyze and extract key information from documents:

```typescript
// Example using OpenAI
const documentResult = await smartAi.openaiProvider.document({
  systemMessage: 'Classify the document type',
  userMessage: 'What type of document is this?',
  messageHistory: [],
  pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
});
```

Other providers (e.g., Ollama and Anthropic) follow a similar pattern:

```typescript
// Using Ollama for document processing
const ollamaResult = await smartAi.ollamaProvider.document({
  systemMessage: 'You are a document analysis assistant',
  userMessage: 'Extract key information from this document',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
```

```typescript
// Using Anthropic for document processing
const anthropicResult = await smartAi.anthropicProvider.document({
  systemMessage: 'Analyze the document',
  userMessage: 'Please extract the main points',
  messageHistory: [],
  pdfDocuments: [pdfBuffer]
});
```
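The `pdfBuffer` in these examples is a `Uint8Array` of raw PDF bytes. In Node it can be loaded with the standard filesystem API; the file path is illustrative:

```typescript
import { readFile } from 'node:fs/promises';

// Load a PDF from disk as the Uint8Array expected by pdfDocuments.
async function loadPdf(path: string): Promise<Uint8Array> {
  const buffer = await readFile(path);
  return new Uint8Array(buffer.buffer, buffer.byteOffset, buffer.byteLength);
}
```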
### Vision Processing

Analyze images with vision capabilities:

```typescript
// Using OpenAI GPT-4 Vision
const imageDescription = await smartAi.openaiProvider.vision({
  image: imageBuffer, // Uint8Array containing image data
  prompt: 'What do you see in this image?'
});

// Using Ollama for vision tasks
const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
  image: imageBuffer,
  prompt: 'Analyze this image in detail'
});

// Using Anthropic for vision analysis
const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
  image: imageBuffer,
  prompt: 'Describe the contents of this image'
});
```
## Error Handling

Always wrap API calls in try-catch blocks to manage errors effectively:

```typescript
try {
@@ -225,26 +263,71 @@ try {
    userMessage: 'Hello!',
    messageHistory: []
  });
  console.log(response.message);
} catch (error: any) {
  console.error('AI provider error:', error.message);
}
```
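Transient failures such as rate limits and network errors are common with hosted AI APIs, so a retry wrapper around provider calls can help. The helper below is a generic sketch, not part of SmartAi's API:

```typescript
// Retry an async operation with exponential backoff.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts: number = 3,
  baseDelayMs: number = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Back off: baseDelayMs, 2x, 4x, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1))
        );
      }
    }
  }
  throw lastError;
}
```

A call would then be wrapped as `await withRetry(() => smartAi.openaiProvider.chat({ ... }))`.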
## Development

### Running Tests

To run the test suite, use the following command:

```bash
npm run test
```

Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.

### Building the Project

Compile the TypeScript code and build the package using:

```bash
npm run build
```

This command prepares the library for distribution.

## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository.
2. Create a feature branch:
   ```bash
   git checkout -b feature/my-feature
   ```
3. Commit your changes with clear messages:
   ```bash
   git commit -m 'Add new feature'
   ```
4. Push your branch to your fork:
   ```bash
   git push origin feature/my-feature
   ```
5. Open a Pull Request with a detailed description of your changes.

## License

This project is licensed under the [MIT License](LICENSE).

## Legal Information

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and its related products or services are trademarks of Task Venture Capital GmbH and are not covered by the MIT License. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines.

### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
Contact: hello@task.vc

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.

---

Happy coding with SmartAi!
@@ -3,6 +3,6 @@

 */
export const commitinfo = {
  name: '@push.rocks/smartai',
  version: '0.3.1',
  description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}