A TypeScript library for integrating and interacting with multiple AI models, with support for chat and audio responses.

@push.rocks/smartai

A standardized interface for talking to AI models.

Install

To install @push.rocks/smartai, run the following command in your terminal:

npm install @push.rocks/smartai

This will add the package to your project's dependencies.

Usage

This guide shows how to use @push.rocks/smartai to integrate AI models into your TypeScript applications, using ESM syntax.

Getting Started

First, you'll need to import the necessary modules from @push.rocks/smartai. This typically includes the main SmartAi class along with any specific provider classes you intend to use, such as OpenAiProvider or AnthropicProvider.

import { SmartAi, OpenAiProvider, AnthropicProvider } from '@push.rocks/smartai';

Initialization

Create an instance of SmartAi by providing the required options, which include authentication tokens for the AI providers you plan to use.

const smartAi = new SmartAi({
  openaiToken: 'your-openai-token-here',
  anthropicToken: 'your-anthropic-token-here'
});
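Hard-coding tokens is fine for a quick example, but in practice they are usually read from the environment. Below is a minimal sketch; the environment variable names and the requireEnv helper are illustrative, not part of the @push.rocks/smartai API.

```typescript
// Illustrative helper: fail fast when a required token is not set.
// The variable names OPENAI_TOKEN and ANTHROPIC_TOKEN are assumptions,
// not part of the @push.rocks/smartai API.
const requireEnv = (name: string): string => {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Usage (assuming the tokens are exported in your shell):
// const smartAi = new SmartAi({
//   openaiToken: requireEnv('OPENAI_TOKEN'),
//   anthropicToken: requireEnv('ANTHROPIC_TOKEN'),
// });
```

This keeps secrets out of source control and surfaces configuration mistakes at startup rather than on the first API call.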

Creating a Conversation

@push.rocks/smartai offers a versatile way to handle conversations with AI. To create a conversation using OpenAI, for instance:

async function createOpenAiConversation() {
  const conversation = await smartAi.createOpenAiConversation();
}

For Anthropic-based conversations:

async function createAnthropicConversation() {
  const conversation = await smartAi.createAnthropicConversation();
}

Advanced Usage: Streaming and Chat

Advanced use cases might require direct access to the streaming APIs provided by the AI models. For instance, handling a chat stream with OpenAI can be achieved as follows:

Set Up the Conversation Stream

First, create a conversation and obtain the input and output streams.

const conversation = await smartAi.createOpenAiConversation();
const inputStreamWriter = conversation.getInputStreamWriter();
const outputStream = conversation.getOutputStream();

Write to the Input Stream

To send messages to the AI model, use the input stream writer.

await inputStreamWriter.write('Hello, SmartAI!');
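Multiple messages can be written in sequence. The sketch below assumes the writer returned by getInputStreamWriter() follows the standard web-streams WritableStreamDefaultWriter interface, which the write() call above suggests; the sendMessages helper is illustrative.

```typescript
// Write a sequence of messages through a web-streams writer.
// Works with any WritableStreamDefaultWriter<string>, such as the one
// assumed to be returned by conversation.getInputStreamWriter().
async function sendMessages(
  writer: WritableStreamDefaultWriter<string>,
  messages: string[],
): Promise<void> {
  for (const message of messages) {
    // write() resolves once the stream has accepted the chunk
    await writer.write(message);
  }
}
```

For example: await sendMessages(inputStreamWriter, ['Hello, SmartAI!', 'What can you do?']);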

Process the Output Stream

Output from the AI model can be processed by reading from the output stream.

const reader = outputStream.getReader();
reader.read().then(function processText({ done, value }) {
  if (done) {
    console.log("Stream complete");
    return;
  }
  console.log("Received from AI:", value);
  reader.read().then(processText);
});
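The recursive read() pattern above can also be written as a simple loop. The helper below works with any standard web ReadableStream; it assumes getOutputStream() returns one, as the reader-based example above suggests.

```typescript
// Drain a web ReadableStream, invoking a callback for each chunk.
async function drainStream<T>(
  stream: ReadableStream<T>,
  onChunk: (chunk: T) => void,
): Promise<void> {
  const reader = stream.getReader();
  try {
    while (true) {
      const result = await reader.read();
      if (result.done) break; // stream complete
      onChunk(result.value);
    }
  } finally {
    reader.releaseLock();
  }
}
```

For example: await drainStream(outputStream, (chunk) => console.log('Received from AI:', chunk));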

Handling Audio

@push.rocks/smartai also supports handling audio responses from AI models. To generate and retrieve audio output:

const tts = await TTS.createWithOpenAi(smartAi);

This snippet initializes text-to-speech (TTS) capabilities using the OpenAI provider; note that the TTS class must be imported before use. Further customization and usage of the audio APIs will depend on the capabilities offered by the specific AI model and provider you are working with.

Conclusion

@push.rocks/smartai offers a flexible and standardized interface for interacting with AI models, streamlining the development of applications that leverage AI capabilities. Through the outlined examples, you've seen how to initialize the library, create conversations, and handle both text and audio interactions with AI models in a TypeScript environment following ESM syntax.

For a comprehensive understanding of all features and to explore more advanced use cases, refer to the official documentation and check the npmextra.json file's tsdocs section for additional insights on module usage.

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.

Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.

Company Information

Task Venture Capital GmbH
Registered at the District Court of Bremen, HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.