fix(core): update

This commit is contained in:
Philipp Kunz 2024-04-29 12:37:43 +02:00
parent f628a71184
commit 3a5f2d52e5
4 changed files with 60 additions and 79 deletions

View File

@@ -5,21 +5,20 @@
"githost": "code.foss.global",
"gitscope": "push.rocks",
"gitrepo": "smartai",
"description": "Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.",
"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"npmPackagename": "@push.rocks/smartai",
"license": "MIT",
"projectDomain": "push.rocks",
"keywords": [
"AI models integration",
"OpenAI GPT",
"Anthropic AI",
"text-to-speech",
"conversation stream",
"AI integration",
"chatbot",
"TypeScript",
"ESM",
"streaming API",
"modular design",
"development tool"
"OpenAI",
"Anthropic",
"multi-model support",
"audio responses",
"text-to-speech",
"streaming chat"
]
}
},

View File

@@ -2,7 +2,7 @@
"name": "@push.rocks/smartai",
"version": "0.0.12",
"private": false,
"description": "Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.",
"description": "A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.",
"main": "dist_ts/index.js",
"typings": "dist_ts/index.d.ts",
"type": "module",
@@ -57,15 +57,14 @@
"readme.md"
],
"keywords": [
"AI models integration",
"OpenAI GPT",
"Anthropic AI",
"text-to-speech",
"conversation stream",
"AI integration",
"chatbot",
"TypeScript",
"ESM",
"streaming API",
"modular design",
"development tool"
"OpenAI",
"Anthropic",
"multi-model support",
"audio responses",
"text-to-speech",
"streaming chat"
]
}

View File

@@ -14,99 +14,82 @@ This command installs the package and adds it to your project's dependencies.
## Usage
The usage section delves into how to leverage the `@push.rocks/smartai` package to interact with AI models in an application. This package simplifies the integration and conversation with AI models by providing a standardized interface. The examples below demonstrate the package's capabilities in engaging with AI models for chat operations and potentially handling audio responses using TypeScript and ESM syntax.
The `@push.rocks/smartai` package is a comprehensive solution for integrating and interacting with various AI models, designed to support operations ranging from chat interactions to, potentially, audio responses. This documentation guides you through using `@push.rocks/smartai` in your applications, focusing on TypeScript and ESM syntax to demonstrate its full capabilities.
### Integrating AI Models
### Getting Started
#### Importing the Module
Before you begin, ensure you have installed the package in your project as described in the **Install** section above. Once installed, you can start integrating AI functionalities into your application.
Start by importing `SmartAi` and the AI providers you wish to use from `@push.rocks/smartai`.
### Initializing SmartAi
The first step is to import and initialize the `SmartAi` class with appropriate options, including tokens for the AI services you plan to use:
```typescript
import { SmartAi, OpenAiProvider, AnthropicProvider } from '@push.rocks/smartai';
```
import { SmartAi } from '@push.rocks/smartai';
#### Initializing `SmartAi`
Create an instance of `SmartAi` with the necessary credentials for accessing the AI services.
```typescript
const smartAi = new SmartAi({
  openaiToken: 'your-openai-access-token',
  anthropicToken: 'your-anthropic-access-token'
});
await smartAi.start();
```
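In practice you typically would not hard-code tokens. Below is a minimal sketch of the same initialization that reads the credentials from environment variables; the variable names `OPENAI_TOKEN` and `ANTHROPIC_TOKEN` are an assumption, not part of the package.
```typescript
// Sketch only: read credentials from the environment instead of hard-coding them.
// OPENAI_TOKEN and ANTHROPIC_TOKEN are assumed variable names; adjust to your setup.
const smartAi = new SmartAi({
  openaiToken: process.env.OPENAI_TOKEN ?? '',
  anthropicToken: process.env.ANTHROPIC_TOKEN ?? ''
});
await smartAi.start();
```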
### Chatting with the AI
### Creating Conversations with AI
#### Creating a Conversation
To begin a conversation, choose the AI provider you'd like to use. For instance, to use OpenAI:
`SmartAi` provides a flexible interface for creating and managing conversations with different AI providers. To start a conversation, specify the provider you want to use, such as OpenAI or Anthropic:
```typescript
async function createOpenAiConversation() {
  const conversation = await smartAi.createOpenApiConversation();
  // Use the conversation for chatting
}
const openAiConversation = await smartAi.createConversation('openai');
const anthropicConversation = await smartAi.createConversation('anthropic');
```
Similarly, for an Anthropic AI conversation:
### Chatting with AI
Once you have a conversation instance, you can start sending messages to the AI and receiving responses. Each conversation object supports both synchronous (single-response) and streaming interactions, depending on your use case.
#### Synchronous Chat Example
Here's how you can have a synchronous chat with OpenAI:
```typescript
async function createAnthropicConversation() {
  const conversation = await smartAi.createAnthropicConversation();
  // Use the conversation for chatting
}
const response = await openAiConversation.chat({
  systemMessage: 'This is a greeting from the system.',
  userMessage: 'Hello, AI! How are you today?',
  messageHistory: [] // Previous messages in the conversation
});
console.log(response.message); // Log the response from AI
```
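To carry context across turns, pass the previous exchange back via `messageHistory`. The exact shape of a history entry is not documented here, so the sketch below assumes simple role/content objects; adapt it to the structure your provider integration expects.
```typescript
// Assumed history-entry shape ({ role, content }); adjust if the provider expects a different structure.
const messageHistory = [
  { role: 'user', content: 'Hello, AI! How are you today?' },
  { role: 'assistant', content: response.message }
];

const followUp = await openAiConversation.chat({
  systemMessage: 'This is a greeting from the system.',
  userMessage: 'Can you summarize what we have discussed so far?',
  messageHistory
});
console.log(followUp.message);
```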
### Streaming Chat with OpenAI
#### Streaming Chat Example
For more advanced scenarios, like a streaming chat with OpenAI, you would interact with the chat stream directly:
For real-time interactions, you can use the streaming capabilities provided by the conversation object. This enables a continuous exchange of messages between your application and the AI:
```typescript
// Assuming a conversation has been created and initialized...
const inputStreamWriter = conversation.getInputStreamWriter();
const outputStream = conversation.getOutputStream();
const inputStreamWriter = openAiConversation.getInputStreamWriter();
const outputStream = openAiConversation.getOutputStream();
// Write a message to the input stream for the AI to process
await inputStreamWriter.write('Hello, how can I help you today?');
inputStreamWriter.write('Hello, AI! Can you stream responses?');
// Listen to the output stream for responses from AI
const reader = outputStream.getReader();
reader.read().then(function processText({done, value}) {
  if (done) {
    console.log("No more messages from AI");
    console.log('Stream finished.');
    return;
  }
  console.log("AI says:", value);
  // Continue reading messages
  reader.read().then(processText);
  console.log('AI says:', value);
  reader.read().then(processText); // Continue reading messages
});
```
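If the output stream is a standard web `ReadableStream` (as the `getReader()` usage above suggests, though that is an assumption), the same logic can be written with `async`/`await` instead of the recursive `then` callback. A stream allows only one active reader at a time, so this sketch replaces rather than accompanies the reader above; it also assumes the stream yields string chunks.
```typescript
// Drain the output stream with an async loop; replaces the recursive reader.read().then(...) pattern.
async function printResponses(stream: ReadableStream<string>): Promise<void> {
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      console.log('Stream finished.');
      break;
    }
    console.log('AI says:', value);
  }
}

await printResponses(openAiConversation.getOutputStream());
```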
### Handling Audio Responses
### Extending Conversations
The package may also support converting text responses from the AI into audio. While specific implementation details depend on the AI provider's capabilities, a generic approach would involve creating a text-to-speech instance and utilizing it:
```typescript
// This is a hypothetical function call as the implementation might vary
const tts = await TTS.createWithOpenAi(smartAi);
// The TTS instance would then be used to convert text to speech
```
### Extensive Feature Set
`@push.rocks/smartai` provides comprehensive support for interacting with various AI models, not limited to text chat. It encompasses audio responses, potentially incorporating AI-powered analyses, and other multi-modal interactions.
Refer to the documentation of the individual AI providers available through `@push.rocks/smartai`, such as OpenAI and Anthropic, for detailed guidance on utilizing the full spectrum of capabilities, including custom conversation flows, efficient handling of streaming data, and generating audio responses from AI conversations.
The modular design of `@push.rocks/smartai` allows you to extend conversations with additional features, such as handling audio responses or integrating other AI-powered functionalities. Utilize the provided AI providers' APIs to explore and implement a wide range of AI interactions within your conversations.
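As a small illustration of that modularity, the conversation API shown above can be reused unchanged across providers. The following sketch sends the same prompt to both OpenAI and Anthropic for a side-by-side comparison.
```typescript
// Reuse the same chat call across providers; only the provider name changes.
const prompt = 'Summarize the benefits of streaming APIs in two sentences.';

for (const providerName of ['openai', 'anthropic'] as const) {
  const conversation = await smartAi.createConversation(providerName);
  const response = await conversation.chat({
    systemMessage: 'You are a concise assistant.',
    userMessage: prompt,
    messageHistory: []
  });
  console.log(`${providerName}:`, response.message);
}
```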
### Conclusion
Equipped with `@push.rocks/smartai`, developers can streamline the integration of sophisticated AI interactions into their applications. The package facilitates robust communication with AI models, supporting diverse operations from simple chats to complex audio feedback mechanisms, all within a unified, easy-to-use interface.
Explore the package further to uncover its full potential for creating engaging, AI-enhanced interactions in your applications.
With `@push.rocks/smartai`, integrating AI functionalities into your applications becomes streamlined and efficient. By leveraging the standardized interface provided by the package, you can easily converse with multiple AI models, expanding the capabilities of your applications with cutting-edge AI features. Whether you're implementing simple chat interactions or complex, real-time communication flows, `@push.rocks/smartai` offers the tools and flexibility needed to create engaging, AI-enhanced experiences.
## License and Legal Information

View File

@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smartai',
version: '0.0.12',
description: 'Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.'
version: '0.0.13',
description: 'A TypeScript library for integrating and interacting with multiple AI models, offering capabilities for chat and potentially audio responses.'
}