# @push.rocks/smartai

Provides a standardized interface for integrating and conversing with multiple AI models, supporting operations like chat and potentially audio responses.
## Install

To add @push.rocks/smartai to your project, run the following command in your terminal:

```bash
npm install @push.rocks/smartai
```

This command installs the package and adds it to your project's dependencies.
## Usage

This section shows how to leverage the `@push.rocks/smartai` package to interact with AI models in an application. The package simplifies integrating and conversing with AI models by providing a standardized interface. The examples below demonstrate the package's capabilities for chat operations and audio responses, using TypeScript and ESM syntax.
### Integrating AI Models

#### Importing the Module

Start by importing `SmartAi` and the AI providers you wish to use from `@push.rocks/smartai`:

```typescript
import { SmartAi, OpenAiProvider, AnthropicProvider } from '@push.rocks/smartai';
```
#### Initializing SmartAi

Create an instance of `SmartAi` with the necessary credentials for accessing the AI services:

```typescript
const smartAi = new SmartAi({
  openaiToken: 'your-openai-access-token',
  anthropicToken: 'your-anthropic-access-token'
});
```
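In practice you will typically load these tokens from the environment rather than hard-coding them. Below is a minimal sketch of that pattern; the environment variable names `OPENAI_TOKEN` and `ANTHROPIC_TOKEN` are illustrative assumptions, not names required by the package:

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Read provider tokens from environment variables instead of hard-coding them.
// OPENAI_TOKEN and ANTHROPIC_TOKEN are illustrative names chosen for this sketch.
const smartAi = new SmartAi({
  openaiToken: process.env.OPENAI_TOKEN ?? '',
  anthropicToken: process.env.ANTHROPIC_TOKEN ?? '',
});
```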
### Chatting with the AI

#### Creating a Conversation

To begin a conversation, choose the AI provider you'd like to use. For instance, to use OpenAI:

```typescript
async function createOpenAiConversation() {
  const conversation = await smartAi.createOpenApiConversation();
  // Use the conversation for chatting
}
```

Similarly, for an Anthropic AI conversation:

```typescript
async function createAnthropicConversation() {
  const conversation = await smartAi.createAnthropicConversation();
  // Use the conversation for chatting
}
```
#### Streaming Chat with OpenAI

For more advanced scenarios, such as a streaming chat with OpenAI, you interact with the conversation's input and output streams directly:

```typescript
// Assuming a conversation has been created and initialized...
const inputStreamWriter = conversation.getInputStreamWriter();
const outputStream = conversation.getOutputStream();

// Write a message to the input stream for the AI to process
await inputStreamWriter.write('Hello, how can I help you today?');

// Listen to the output stream for responses from the AI
const reader = outputStream.getReader();
reader.read().then(function processText({ done, value }) {
  if (done) {
    console.log('No more messages from AI');
    return;
  }
  console.log('AI says:', value);
  // Continue reading messages
  return reader.read().then(processText);
});
```
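If you would rather collect the complete reply instead of processing chunks as they arrive, the same stream accessors can be wrapped in a small helper. This is only a sketch built from the calls shown above; it assumes the output stream yields string chunks, which may differ per provider:

```typescript
// Sketch: send one message and concatenate every chunk of the reply.
// Assumes the output stream yields string chunks.
async function askAndCollect(conversation: any, message: string): Promise<string> {
  const writer = conversation.getInputStreamWriter();
  await writer.write(message);

  const reader = conversation.getOutputStream().getReader();
  let reply = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    reply += value;
  }
  return reply;
}
```

A call such as `askAndCollect(conversation, 'Explain streams in one sentence.')` would then resolve with the full answer once the AI finishes responding.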
### Handling Audio Responses

The package may also support converting text responses from the AI into audio. While the specific implementation depends on the AI provider's capabilities, a generic approach would involve creating a text-to-speech instance and utilizing it:

```typescript
// This is a hypothetical function call as the implementation might vary
const tts = await TTS.createWithOpenAi(smartAi);
// The TTS instance would then be used to convert text to speech
```
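To make the idea concrete, a continuation of the hypothetical snippet above might look like the following. The `toAudioBuffer` method name is purely an assumption for illustration and is not a confirmed part of the package's API:

```typescript
// Hypothetical sketch: toAudioBuffer is an assumed method name, not a confirmed API.
const audioBuffer = await tts.toAudioBuffer('Hello! Here is your spoken answer.');
// The resulting audio data could then be written to a file or streamed to a client.
```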
### Extensive Feature Set

`@push.rocks/smartai` provides comprehensive support for interacting with various AI models, not limited to text chat. It encompasses audio responses, potentially AI-powered analyses, and other multi-modal interactions.

Refer to the documentation of the individual AI providers exposed through `@push.rocks/smartai`, such as OpenAI and Anthropic, for detailed guidance on utilizing the full spectrum of capabilities, including custom conversation flows, efficient handling of streaming data, and generating audio responses from AI conversations.
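As a sketch of such a custom conversation flow, the pieces documented above can be combined into a single function. It reuses only the calls shown earlier in this readme; the prompt handling and console output are arbitrary choices for illustration:

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Sketch of an end-to-end flow using only the calls shown earlier in this readme.
async function chatOnce(prompt: string): Promise<void> {
  const smartAi = new SmartAi({
    openaiToken: 'your-openai-access-token',
    anthropicToken: 'your-anthropic-access-token',
  });

  const conversation = await smartAi.createOpenApiConversation();

  // Send the prompt and stream the answer to the console.
  const writer = conversation.getInputStreamWriter();
  await writer.write(prompt);

  const reader = conversation.getOutputStream().getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log('AI says:', value);
  }
}

await chatOnce('Give me a one-sentence summary of what streams are.');
```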
### Conclusion

Equipped with `@push.rocks/smartai`, developers can streamline the integration of sophisticated AI interactions into their applications. The package facilitates robust communication with AI models, supporting diverse operations from simple chats to complex audio feedback mechanisms, all within a unified, easy-to-use interface.

Explore the package further to uncover its full potential for creating engaging, AI-enhanced interactions in your applications.
## License and Legal Information

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.

### Company Information

Task Venture Capital GmbH  
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.