feat(documentation and configuration): Enhanced package and README documentation

parent 0a80ac0a8a
commit 6b241f8889
changelog.md
@@ -1,5 +1,12 @@
 # Changelog
 
+## 2025-02-25 - 0.5.0 - feat(documentation and configuration)
+Enhanced package and README documentation
+
+- Expanded the package description to better reflect the library's capabilities.
+- Improved README with detailed usage examples for initialization, chat interactions, streaming chat, audio generation, document analysis, and vision processing.
+- Provided error handling strategies and advanced streaming customization examples.
+
 ## 2025-02-25 - 0.4.2 - fix(core)
 Fix OpenAI chat streaming and PDF document processing logic.
 
npmextra.json
@@ -5,20 +5,33 @@
       "githost": "code.foss.global",
       "gitscope": "push.rocks",
       "gitrepo": "smartai",
-      "description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
+      "description": "SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.",
       "npmPackagename": "@push.rocks/smartai",
       "license": "MIT",
       "projectDomain": "push.rocks",
       "keywords": [
         "AI integration",
-        "chatbot",
         "TypeScript",
+        "chatbot",
         "OpenAI",
         "Anthropic",
-        "multi-model support",
-        "audio responses",
+        "multi-model",
+        "audio generation",
         "text-to-speech",
-        "streaming chat"
+        "document processing",
+        "vision processing",
+        "streaming chat",
+        "API",
+        "multiple providers",
+        "AI models",
+        "synchronous chat",
+        "asynchronous chat",
+        "real-time interaction",
+        "content analysis",
+        "image description",
+        "document classification",
+        "AI toolkit",
+        "provider switching"
       ]
     }
   },
package.json (25 lines changed)
@@ -2,7 +2,7 @@
   "name": "@push.rocks/smartai",
   "version": "0.4.2",
   "private": false,
-  "description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
+  "description": "SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.",
   "main": "dist_ts/index.js",
   "typings": "dist_ts/index.d.ts",
   "type": "module",
@@ -58,18 +58,31 @@
   ],
   "keywords": [
     "AI integration",
-    "chatbot",
     "TypeScript",
+    "chatbot",
     "OpenAI",
     "Anthropic",
-    "multi-model support",
-    "audio responses",
+    "multi-model",
+    "audio generation",
     "text-to-speech",
-    "streaming chat"
+    "document processing",
+    "vision processing",
+    "streaming chat",
+    "API",
+    "multiple providers",
+    "AI models",
+    "synchronous chat",
+    "asynchronous chat",
+    "real-time interaction",
+    "content analysis",
+    "image description",
+    "document classification",
+    "AI toolkit",
+    "provider switching"
   ],
   "pnpm": {
     "onlyBuiltDependencies": [
       "puppeteer"
     ]
   }
 }
readme.md (304 lines changed)
@@ -1,144 +1,38 @@
 # @push.rocks/smartai
 
-[](https://www.npmjs.com/package/@push.rocks/smartai)
-
-SmartAi is a comprehensive TypeScript library that provides a standardized interface for integrating and interacting with multiple AI models. It supports a range of operations from synchronous and streaming chat to audio generation, document processing, and vision tasks.
-
-## Table of Contents
-
-- [Features](#features)
-- [Installation](#installation)
-- [Supported AI Providers](#supported-ai-providers)
-- [Quick Start](#quick-start)
-- [Usage Examples](#usage-examples)
-  - [Chat Interactions](#chat-interactions)
-  - [Streaming Chat](#streaming-chat)
-  - [Audio Generation](#audio-generation)
-  - [Document Processing](#document-processing)
-  - [Vision Processing](#vision-processing)
-- [Error Handling](#error-handling)
-- [Development](#development)
-  - [Running Tests](#running-tests)
-  - [Building the Project](#building-the-project)
-- [Contributing](#contributing)
-- [License](#license)
-- [Legal Information](#legal-information)
-
-## Features
-
-- **Unified API:** Seamlessly integrate multiple AI providers with a consistent interface.
-- **Chat & Streaming:** Support for both synchronous and real-time streaming chat interactions.
-- **Audio & Vision:** Generate audio responses and perform detailed image analysis.
-- **Document Processing:** Analyze PDFs and other documents using vision models.
-- **Extensible:** Easily extend the library to support additional AI providers.
-
-## Installation
-
-To install SmartAi, run the following command:
+SmartAi is a TypeScript library providing a unified interface for integrating and interacting with multiple AI models, supporting chat interactions, audio and document processing, and vision tasks.
+
+## Install
+
+To install SmartAi into your project, you need to run the following command in your terminal:
 
 ```bash
 npm install @push.rocks/smartai
 ```
 
-This will add the package to your project's dependencies.
+This command will add the SmartAi library to your project's dependencies, making it available for use in your TypeScript application.
 
-## Supported AI Providers
+## Usage
 
-SmartAi supports multiple AI providers. Configure each provider with its corresponding token or settings:
+SmartAi is designed to provide a comprehensive and unified API for working seamlessly with multiple AI providers like OpenAI, Anthropic, Perplexity, and others. Below we will delve into how to make the most out of this library, illustrating the setup and functionality with in-depth examples. Our scenarios will explore synchronous and streaming interactions, audio generation, document handling, and vision tasks with different AI providers.
 
-### OpenAI
+### Initialization
 
-- **Models:** GPT-4, GPT-3.5-turbo, GPT-4-vision-preview
-- **Features:** Chat, Streaming, Audio Generation, Vision, Document Processing
-- **Configuration Example:**
-
-  ```typescript
-  openaiToken: 'your-openai-token'
-  ```
-
-### X.AI
-
-- **Models:** Grok-2-latest
-- **Features:** Chat, Streaming, Document Processing
-- **Configuration Example:**
-
-  ```typescript
-  xaiToken: 'your-xai-token'
-  ```
-
-### Anthropic
-
-- **Models:** Claude-3-opus-20240229
-- **Features:** Chat, Streaming, Vision, Document Processing
-- **Configuration Example:**
-
-  ```typescript
-  anthropicToken: 'your-anthropic-token'
-  ```
-
-### Perplexity
-
-- **Models:** Mixtral-8x7b-instruct
-- **Features:** Chat, Streaming
-- **Configuration Example:**
-
-  ```typescript
-  perplexityToken: 'your-perplexity-token'
-  ```
-
-### Groq
-
-- **Models:** Llama-3.3-70b-versatile
-- **Features:** Chat, Streaming
-- **Configuration Example:**
-
-  ```typescript
-  groqToken: 'your-groq-token'
-  ```
-
-### Ollama
-
-- **Models:** Configurable (default: llama2; use llava for vision/document tasks)
-- **Features:** Chat, Streaming, Vision, Document Processing
-- **Configuration Example:**
-
-  ```typescript
-  ollama: {
-    baseUrl: 'http://localhost:11434', // Optional
-    model: 'llama2', // Optional
-    visionModel: 'llava' // Optional for vision and document tasks
-  }
-  ```
-
-### Exo
-
-- **Models:** Configurable (supports LLaMA, Mistral, LlaVA, Qwen, and Deepseek)
-- **Features:** Chat, Streaming
-- **Configuration Example:**
-
-  ```typescript
-  exo: {
-    baseUrl: 'http://localhost:8080/v1', // Optional
-    apiKey: 'your-api-key' // Optional for local deployments
-  }
-  ```
-
-## Quick Start
-
-Initialize SmartAi with the provider configurations you plan to use:
+Initialization is the first step before using any AI functionalities. You should provide API tokens for each provider you plan to utilize.
 
 ```typescript
 import { SmartAi } from '@push.rocks/smartai';
 
 const smartAi = new SmartAi({
   openaiToken: 'your-openai-token',
-  xaiToken: 'your-xai-token',
   anthropicToken: 'your-anthropic-token',
   perplexityToken: 'your-perplexity-token',
+  xaiToken: 'your-xai-token',
   groqToken: 'your-groq-token',
   ollama: {
     baseUrl: 'http://localhost:11434',
-    model: 'llama2'
+    model: 'llama2',
+    visionModel: 'llava'
   },
   exo: {
     baseUrl: 'http://localhost:8080/v1',
@@ -149,31 +43,33 @@ const smartAi = new SmartAi({
 await smartAi.start();
 ```
 
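
Note: the example above is split across two hunks, so the tail of the Exo configuration is elided. A minimal, self-contained initialization that sticks to a single provider — assuming, as the examples suggest, that the constructor accepts any subset of the provider options — might look like:

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Minimal setup: configure only the provider you actually use.
// Assumption: the constructor accepts a subset of the provider options shown above.
const smartAi = new SmartAi({
  openaiToken: 'your-openai-token'
});

await smartAi.start();

const response = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Say hello.',
  messageHistory: []
});
console.log(response.message);
```
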
-## Usage Examples
-
 ### Chat Interactions
 
-**Synchronous Chat:**
+Interaction through chat is a key feature. SmartAi caters to both synchronous and asynchronous (streaming) chats across several AI models.
+
+#### Regular Synchronous Chat
+
+Connect with AI models via straightforward request-response interactions.
 
 ```typescript
-const response = await smartAi.openaiProvider.chat({
+const syncResponse = await smartAi.openaiProvider.chat({
   systemMessage: 'You are a helpful assistant.',
   userMessage: 'What is the capital of France?',
-  messageHistory: [] // Include previous conversation messages if applicable
+  messageHistory: [] // Could include context or preceding messages
 });
 
-console.log(response.message);
+console.log(syncResponse.message); // Outputs: "The capital of France is Paris."
 ```
 
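
Note: `messageHistory` stays empty in the example above. A sketch of a multi-turn exchange, assuming history entries of the shape `{ role, content }` — an assumption to verify against the library's typings, which this diff does not show:

```typescript
// First turn
const first = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'What is the capital of France?',
  messageHistory: []
});

// Follow-up turn: carry the previous exchange as history.
// Assumption: entries use { role, content }; check the library's exported types.
const followUp = await smartAi.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'And what is its population?',
  messageHistory: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: first.message }
  ]
});

console.log(followUp.message);
```
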
-### Streaming Chat
+#### Real-Time Streaming Chat
 
-**Real-Time Streaming:**
+For continuous interaction and lower latency, engage in streaming chat.
 
 ```typescript
 const textEncoder = new TextEncoder();
 const textDecoder = new TextDecoder();
 
-// Create a transform stream for sending and receiving data
+// Establish a transform stream
 const { writable, readable } = new TransformStream();
 const writer = writable.getWriter();
 
@@ -184,7 +80,7 @@ const message = {
 
 writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));
 
-// Start streaming the response
+// Initiate streaming
 const stream = await smartAi.openaiProvider.chatStream(readable);
 const reader = stream.getReader();
 
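
Note: the hunks above elide both the construction of `message` and the read loop. A complete round-trip under the same pattern — the message shape and the chunk types are assumptions, not confirmed by this diff — might look like:

```typescript
import { SmartAi } from '@push.rocks/smartai';

const smartAi = new SmartAi({ openaiToken: 'your-openai-token' });
await smartAi.start();

const textEncoder = new TextEncoder();
const textDecoder = new TextDecoder();

// Input side: newline-delimited JSON messages, as in the README example.
const { writable, readable } = new TransformStream<Uint8Array, Uint8Array>();
const writer = writable.getWriter();

// Assumption: the message mirrors the chat() options; the diff elides the real definition.
const message = {
  role: 'user',
  content: 'Tell me a short story.'
};
writer.write(textEncoder.encode(JSON.stringify(message) + '\n'));

const stream = await smartAi.openaiProvider.chatStream(readable);
const reader = stream.getReader();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Decode defensively: byte chunks go through the decoder, strings pass straight through.
  const text = typeof value === 'string' ? value : textDecoder.decode(value, { stream: true });
  process.stdout.write(text);
}
```
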
@@ -197,133 +93,137 @@ while (true) {
 
 ### Audio Generation
 
-Generate audio (supported by providers like OpenAI):
+Audio generation from textual input is possible using providers like OpenAI.
 
 ```typescript
 const audioStream = await smartAi.openaiProvider.audio({
-  message: 'Hello, this is a test of text-to-speech'
+  message: 'This is a test message for generating speech.'
 });
 
-// Process the audio stream, for example, play it or save to a file.
+// Use the audioStream e.g., playing or saving it.
 ```
 
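
Note: the comment leaves open what to do with `audioStream`. One way to save it, assuming `audio()` resolves to a Node.js `Readable` or a web `ReadableStream` of audio bytes (the concrete type is not shown in this diff):

```typescript
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Assumes an initialized `smartAi` as in the examples above.
const audioStream = await smartAi.openaiProvider.audio({
  message: 'This is a test message for generating speech.'
});

// Assumption: audioStream is either a Node.js Readable or a web ReadableStream.
const nodeStream =
  audioStream instanceof Readable ? audioStream : Readable.fromWeb(audioStream as any);

// Stream the audio bytes to disk without buffering the whole file in memory.
await pipeline(nodeStream, createWriteStream('speech.mp3'));
```
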
-### Document Processing
+### Document Analysis
 
-Analyze and extract key information from documents:
+SmartAi can ingest and process documents, extracting meaningful information or performing classifications.
 
 ```typescript
-// Example using OpenAI
-const documentResult = await smartAi.openaiProvider.document({
-  systemMessage: 'Classify the document type',
-  userMessage: 'What type of document is this?',
-  messageHistory: [],
-  pdfDocuments: [pdfBuffer] // Uint8Array containing the PDF content
-});
-```
-
-Other providers (e.g., Ollama and Anthropic) follow a similar pattern:
-
-```typescript
-// Using Ollama for document processing
-const ollamaResult = await smartAi.ollamaProvider.document({
-  systemMessage: 'You are a document analysis assistant',
-  userMessage: 'Extract key information from this document',
+const pdfBuffer = await fetchPdf('https://example.com/document.pdf');
+const documentRes = await smartAi.openaiProvider.document({
+  systemMessage: 'Determine the nature of the document.',
+  userMessage: 'Classify this document.',
   messageHistory: [],
   pdfDocuments: [pdfBuffer]
 });
 
+console.log(documentRes.message); // Outputs: classified document type
 ```
 
+SmartAi allows easy switching between providers, thus giving developers flexibility:
+
 ```typescript
-// Using Anthropic for document processing
-const anthropicResult = await smartAi.anthropicProvider.document({
-  systemMessage: 'Analyze the document',
-  userMessage: 'Please extract the main points',
+const anthropicRes = await smartAi.anthropicProvider.document({
+  systemMessage: 'Analyze this document.',
+  userMessage: 'Extract core points.',
   messageHistory: [],
   pdfDocuments: [pdfBuffer]
 });
 
+console.log(anthropicRes.message); // Outputs: summarized core points
 ```
 
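
Note: `fetchPdf` is not part of the library. A minimal stand-in that downloads a PDF into the `Uint8Array` that `document()` expects — a hypothetical helper, named only to match the example:

```typescript
// Hypothetical helper: fetch a PDF over HTTP and return its bytes as a Uint8Array.
async function fetchPdf(url: string): Promise<Uint8Array> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to fetch PDF: ${response.status} ${response.statusText}`);
  }
  return new Uint8Array(await response.arrayBuffer());
}
```
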
 ### Vision Processing
 
-Analyze images with vision capabilities:
+Engage AI models in analyzing and describing images:
 
 ```typescript
-// Using OpenAI GPT-4 Vision
-const imageDescription = await smartAi.openaiProvider.vision({
-  image: imageBuffer, // Uint8Array containing image data
-  prompt: 'What do you see in this image?'
-});
-
-// Using Ollama for vision tasks
-const ollamaImageAnalysis = await smartAi.ollamaProvider.vision({
-  image: imageBuffer,
-  prompt: 'Analyze this image in detail'
-});
-
-// Using Anthropic for vision analysis
-const anthropicImageAnalysis = await smartAi.anthropicProvider.vision({
-  image: imageBuffer,
-  prompt: 'Describe the contents of this image'
-});
+const imageBuffer = await fetchImage('path/to/image.jpg');
+
+// Using OpenAI's vision capabilities
+const visionOutput = await smartAi.openaiProvider.vision({
+  image: imageBuffer,
+  prompt: 'Describe the image.'
+});
+
+console.log(visionOutput); // Outputs: image description
 ```
 
-## Error Handling
-
-Always wrap API calls in try-catch blocks to manage errors effectively:
+Use other providers for more varied analysis:
+
+```typescript
+const ollamaOutput = await smartAi.ollamaProvider.vision({
+  image: imageBuffer,
+  prompt: 'Detailed analysis required.'
+});
+
+console.log(ollamaOutput); // Outputs: detailed analysis results
+```
 
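
Note: likewise, `fetchImage` is not provided by the library. A minimal local-file version (hypothetical helper) could be:

```typescript
import { readFile } from 'node:fs/promises';

// Hypothetical helper: load an image from disk as the Uint8Array expected by vision().
async function fetchImage(path: string): Promise<Uint8Array> {
  return new Uint8Array(await readFile(path));
}
```
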
+### Error Handling
+
+Due to the nature of external integrations, be sure to wrap AI calls in try-catch blocks.
+
 ```typescript
 try {
-  const response = await smartAi.openaiProvider.chat({
-    systemMessage: 'You are a helpful assistant.',
-    userMessage: 'Hello!',
+  const response = await smartAi.anthropicProvider.chat({
+    systemMessage: 'Hello!',
+    userMessage: 'Help me out.',
     messageHistory: []
   });
   console.log(response.message);
 } catch (error: any) {
-  console.error('AI provider error:', error.message);
+  console.error('Encountered an error:', error.message);
 }
 ```
 
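
Note: beyond try/catch, transient provider failures such as rate limits or network hiccups are often worth retrying. One possible pattern — a generic retry helper with exponential backoff, not part of the library:

```typescript
// Generic retry helper with exponential backoff; the delays are illustrative.
async function withRetry<T>(operation: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}

// Usage with any provider call (assumes an initialized `smartAi`):
const response = await withRetry(() =>
  smartAi.anthropicProvider.chat({
    systemMessage: 'You are a helpful assistant.',
    userMessage: 'Help me out.',
    messageHistory: []
  })
);
```
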
-## Development
+### Providers and Customization
 
-### Running Tests
+The library supports provider-specific customization, enabling tailored interactions:
 
-To run the test suite, use the following command:
-
-```bash
-npm run test
+```typescript
+const smartAi = new SmartAi({
+  openaiToken: 'your-openai-token',
+  anthropicToken: 'your-anthropic-token',
+  ollama: {
+    baseUrl: 'http://localhost:11434',
+    model: 'llama2',
+    visionModel: 'llava'
+  }
+});
+
+await smartAi.start();
 ```
 
-Ensure your environment is configured with the appropriate tokens and settings for the providers you are testing.
+### Advanced Streaming Customization
 
-### Building the Project
+Developers can implement real-time processing pipelines with custom transformations:
 
-Compile the TypeScript code and build the package using:
-
-```bash
-npm run build
+```typescript
+const customProcessingStream = new TransformStream({
+  transform(chunk, controller) {
+    const processed = chunk.toUpperCase(); // Example transformation
+    controller.enqueue(processed);
+  }
+});
+
+const processedStream = stream.pipeThrough(customProcessingStream);
+const processedReader = processedStream.getReader();
+
+while (true) {
+  const { done, value } = await processedReader.read();
+  if (done) break;
+  console.log('Processed Output:', value);
+}
 ```
 
-This command prepares the library for distribution.
+This approach can facilitate adaptive content processing workflows.
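
Note: `chunk.toUpperCase()` in the example above presumes string chunks; if `chatStream` yields bytes, decode first. A variant that is explicit about types, assuming `Uint8Array` chunks and a `stream` obtained as in the streaming example:

```typescript
const decoder = new TextDecoder();
const encoder = new TextEncoder();

// Decode bytes to text, transform, then re-encode for downstream consumers.
const upperCaseStream = new TransformStream<Uint8Array, Uint8Array>({
  transform(chunk, controller) {
    const text = decoder.decode(chunk, { stream: true });
    controller.enqueue(encoder.encode(text.toUpperCase()));
  }
});

const processedStream = stream.pipeThrough(upperCaseStream);
```
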
 
-## Contributing
+### Conclusion
 
-Contributions are welcome! Please follow these steps:
+SmartAi is a powerful toolkit for multi-faceted AI integration, offering robust solutions for chat, media, and document processing. Developers can enjoy a consistent API experience while leveraging the strengths of each supported AI model.
+
+For further exploration, developers might consider perusing individual providers' documentation to understand specific capabilities and limitations.
 
-1. Fork the repository.
-2. Create a feature branch:
-   ```bash
-   git checkout -b feature/my-feature
-   ```
-3. Commit your changes with clear messages:
-   ```bash
-   git commit -m 'Add new feature'
-   ```
-4. Push your branch to your fork:
-   ```bash
-   git push origin feature/my-feature
-   ```
-5. Open a Pull Request with a detailed description of your changes.
-
 ## License and Legal Information
 
@@ -342,4 +242,4 @@ Registered at District court Bremen HRB 35230 HB, Germany
 
 For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
 
 By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
ts/00_commitinfo_data.ts
@@ -3,6 +3,6 @@
 */
 export const commitinfo = {
   name: '@push.rocks/smartai',
-  version: '0.4.2',
+  version: '0.5.0',
-  description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
+  description: 'SmartAi is a versatile TypeScript library designed to facilitate integration and interaction with various AI models, offering functionalities for chat, audio generation, document processing, and vision tasks.'
 }