# @push.rocks/smartai

**One API to rule them all** 🚀

[npm](https://www.npmjs.com/package/@push.rocks/smartai)
[TypeScript](https://www.typescriptlang.org/)
[MIT License](https://opensource.org/licenses/MIT)

SmartAI unifies the world's leading AI providers - OpenAI, Anthropic, Perplexity, Ollama, Groq, XAI, Exo, and ElevenLabs - under a single, elegant TypeScript interface. Build AI applications at lightning speed without vendor lock-in.

## 🎯 Why SmartAI?

- **🔌 Universal Interface**: Write once, run with any AI provider. Switch between GPT-4, Claude, Llama, or Grok with a single line change.
- **🛡️ Type-Safe**: Full TypeScript support with comprehensive type definitions for all operations
- **🌊 Streaming First**: Built for real-time applications with native streaming support
- **🎨 Multi-Modal**: Seamlessly work with text, images, audio, and documents
- **🏠 Local & Cloud**: Support for both cloud providers and local models via Ollama
- **⚡ Zero Lock-In**: Your code remains portable across all AI providers

## 🚀 Quick Start

```bash
npm install @push.rocks/smartai
```

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Initialize with your favorite providers
const ai = new SmartAi({
  openaiToken: 'sk-...',
  anthropicToken: 'sk-ant-...',
  elevenlabsToken: 'sk-...',
  elevenlabs: {
    defaultVoiceId: '19STyYD15bswVz51nqLf' // Optional: Samara voice
  }
});

await ai.start();

// Same API, multiple providers
const response = await ai.openaiProvider.chat({
  systemMessage: 'You are a helpful assistant.',
  userMessage: 'Explain quantum computing in simple terms',
  messageHistory: []
});
```

## 📊 Provider Capabilities Matrix

Choose the right provider for your use case:

| Provider | Chat | Streaming | TTS | Vision | Documents | Research | Images | Highlights |
|----------|:----:|:---------:|:---:|:------:|:---------:|:--------:|:------:|------------|
| **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | • gpt-image-1<br>• DALL-E 3<br>• Deep research API |
| **Anthropic** | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | • Claude Sonnet 4.5<br>• Superior reasoning<br>• Web search API |
| **ElevenLabs** | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | • Premium TTS<br>• 70+ languages<br>• Natural voices |
| **Ollama** | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | • 100% local<br>• Privacy-first<br>• No API costs |
| **XAI** | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | • Grok models<br>• Real-time data<br>• Uncensored |
| **Perplexity** | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | • Web-aware<br>• Research-focused<br>• Sonar Pro models |
| **Groq** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | • 10x faster<br>• LPU inference<br>• Low latency |
| **Exo** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | • Distributed<br>• P2P compute<br>• Decentralized |

## 🎮 Core Features

### 💬 Universal Chat Interface

Works identically across all providers:

```typescript
// Use GPT-4 for complex reasoning
const gptResponse = await ai.openaiProvider.chat({
  systemMessage: 'You are an expert physicist.',
  userMessage: 'Explain the implications of quantum entanglement',
  messageHistory: []
});

// Use Claude for safety-critical applications
const claudeResponse = await ai.anthropicProvider.chat({
  systemMessage: 'You are a medical advisor.',
  userMessage: 'Review this patient data for concerns',
  messageHistory: []
});

// Use Groq for lightning-fast responses
const groqResponse = await ai.groqProvider.chat({
  systemMessage: 'You are a code reviewer.',
  userMessage: 'Quick! Find the bug in this code: ...',
  messageHistory: []
});
```

### 🌊 Real-Time Streaming

Build responsive chat interfaces with token-by-token streaming:

```typescript
// Create a chat stream
const stream = await ai.openaiProvider.chatStream(inputStream);
const reader = stream.getReader();

// Display responses as they arrive
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Update UI in real-time
  process.stdout.write(value);
}
```
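
The `inputStream` argument is supplied by your application. As a minimal sketch (not part of the SmartAI API, and assuming `chatStream` accepts a web `ReadableStream` of UTF-8 encoded user text), a helper that wraps a single message might look like this:

```typescript
// Hypothetical helper: wraps one user message in a ReadableStream<Uint8Array>.
// Adjust the chunk format if your chatStream call expects something else (e.g. JSON).
function createInputStream(userMessage: string): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(encoder.encode(userMessage));
      controller.close();
    }
  });
}
```

The `createInputStream(userQuery)` call in the Performance Tips section below assumes a helper along these lines.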

### 🎙️ Text-to-Speech

Generate natural voices with OpenAI or ElevenLabs:

```typescript
// OpenAI TTS
const audioStream = await ai.openaiProvider.audio({
  message: 'Welcome to the future of AI development!'
});

// ElevenLabs TTS - Premium quality, natural voices (uses v3 by default)
const elevenLabsAudio = await ai.elevenlabsProvider.audio({
  message: 'Experience the most lifelike text to speech technology.',
  voiceId: '19STyYD15bswVz51nqLf', // Optional: Samara voice
  modelId: 'eleven_v3',            // Optional: defaults to eleven_v3 (70+ languages, most expressive)
  voiceSettings: {                 // Optional: fine-tune voice characteristics
    stability: 0.5,                // 0-1: Speech consistency
    similarity_boost: 0.8,         // 0-1: Voice similarity to original
    style: 0.0,                    // 0-1: Expressiveness (higher = more expressive)
    use_speaker_boost: true        // Enhanced clarity
  }
});

// Stream directly to speakers
audioStream.pipe(speakerOutput);

// Or save to file
audioStream.pipe(fs.createWriteStream('welcome.mp3'));
```

### 👁️ Vision Analysis

Understand images with multiple providers:

```typescript
const image = fs.readFileSync('product-photo.jpg');

// OpenAI: General purpose vision
const gptVision = await ai.openaiProvider.vision({
  image,
  prompt: 'Describe this product and suggest marketing angles'
});

// Anthropic: Detailed analysis
const claudeVision = await ai.anthropicProvider.vision({
  image,
  prompt: 'Identify any safety concerns or defects'
});

// Ollama: Private, local analysis
const ollamaVision = await ai.ollamaProvider.vision({
  image,
  prompt: 'Extract all text and categorize the content'
});
```

### 📄 Document Intelligence

Extract insights from PDFs with AI:

```typescript
const contract = fs.readFileSync('contract.pdf');
const invoice = fs.readFileSync('invoice.pdf');

// Analyze documents
const analysis = await ai.openaiProvider.document({
  systemMessage: 'You are a legal expert.',
  userMessage: 'Compare these documents and highlight key differences',
  messageHistory: [],
  pdfDocuments: [contract, invoice]
});

// Multi-document analysis
const taxDocs = [form1099, w2, receipts];
const taxAnalysis = await ai.anthropicProvider.document({
  systemMessage: 'You are a tax advisor.',
  userMessage: 'Prepare a tax summary from these documents',
  messageHistory: [],
  pdfDocuments: taxDocs
});
```
### 🔬 Research & Web Search
|
||
|
||
Perform deep research with web search capabilities across multiple providers:
|
||
|
||
```typescript
|
||
// OpenAI Deep Research - Comprehensive analysis
|
||
const deepResearch = await ai.openaiProvider.research({
|
||
query: 'What are the latest developments in quantum computing?',
|
||
searchDepth: 'deep',
|
||
includeWebSearch: true
|
||
});
|
||
|
||
console.log(deepResearch.answer);
|
||
console.log('Sources:', deepResearch.sources);
|
||
|
||
// Anthropic Web Search - Domain-filtered research
|
||
const anthropic = new AnthropicProvider({
|
||
anthropicToken: 'sk-ant-...',
|
||
enableWebSearch: true,
|
||
searchDomainAllowList: ['nature.com', 'science.org']
|
||
});
|
||
|
||
const scientificResearch = await anthropic.research({
|
||
query: 'Latest breakthroughs in CRISPR gene editing',
|
||
searchDepth: 'advanced'
|
||
});
|
||
|
||
// Perplexity - Research-focused with citations
|
||
const perplexityResearch = await ai.perplexityProvider.research({
|
||
query: 'Current state of autonomous vehicle technology',
|
||
searchDepth: 'deep' // Uses Sonar Pro model
|
||
});
|
||
```
|
||
|
||

**Research Options:**

- `searchDepth`: 'basic' | 'advanced' | 'deep'
- `maxSources`: Number of sources to include
- `includeWebSearch`: Enable web search (OpenAI)
- `background`: Run as background task (OpenAI)

**Supported Providers:**

- **OpenAI**: Deep Research API with specialized models (`o3-deep-research-2025-06-26`, `o4-mini-deep-research-2025-06-26`)
- **Anthropic**: Web Search API with domain filtering
- **Perplexity**: Sonar and Sonar Pro models with built-in citations
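
As a sketch combining the options listed above (the `answer` and `sources` response fields follow the earlier OpenAI example; exact behavior may vary by provider):

```typescript
// Run a broader OpenAI research task in the background,
// capping the number of sources pulled into the answer.
const marketScan = await ai.openaiProvider.research({
  query: 'How are enterprises adopting retrieval-augmented generation?',
  searchDepth: 'advanced',
  maxSources: 5,
  includeWebSearch: true,
  background: true // Run as a background task (OpenAI)
});

console.log(marketScan.answer);
console.log('Sources:', marketScan.sources);
```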

### 🎨 Image Generation & Editing

Generate and edit images with OpenAI's cutting-edge models:

```typescript
// Basic image generation with gpt-image-1
const image = await ai.openaiProvider.imageGenerate({
  prompt: 'A futuristic robot assistant in a modern office, digital art',
  model: 'gpt-image-1',
  quality: 'high',
  size: '1024x1024'
});

// Save the generated image
const imageBuffer = Buffer.from(image.images[0].b64_json!, 'base64');
fs.writeFileSync('robot.png', imageBuffer);

// Advanced: Transparent background with custom format
const logo = await ai.openaiProvider.imageGenerate({
  prompt: 'Minimalist mountain peak logo, geometric design',
  model: 'gpt-image-1',
  quality: 'high',
  size: '1024x1024',
  background: 'transparent',
  outputFormat: 'png'
});

// WebP with compression for web use
const webImage = await ai.openaiProvider.imageGenerate({
  prompt: 'Product showcase: sleek smartphone on marble surface',
  model: 'gpt-image-1',
  quality: 'high',
  size: '1536x1024',
  outputFormat: 'webp',
  outputCompression: 85
});

// Superior text rendering (gpt-image-1's strength)
const signage = await ai.openaiProvider.imageGenerate({
  prompt: 'Vintage cafe sign saying "COFFEE & CODE" in hand-lettered typography',
  model: 'gpt-image-1',
  quality: 'high',
  size: '1024x1024'
});

// Generate multiple variations at once
const variations = await ai.openaiProvider.imageGenerate({
  prompt: 'Abstract geometric pattern, colorful minimalist art',
  model: 'gpt-image-1',
  n: 3,
  quality: 'medium',
  size: '1024x1024'
});

// Edit an existing image
const editedImage = await ai.openaiProvider.imageEdit({
  image: originalImageBuffer,
  prompt: 'Add sunglasses and change the background to a beach sunset',
  model: 'gpt-image-1',
  quality: 'high'
});
```

**Image Generation Options:**

- `model`: 'gpt-image-1' | 'dall-e-3' | 'dall-e-2'
- `quality`: 'low' | 'medium' | 'high' | 'auto'
- `size`: Multiple aspect ratios up to 4096×4096
- `background`: 'transparent' | 'opaque' | 'auto'
- `outputFormat`: 'png' | 'jpeg' | 'webp'
- `outputCompression`: 0-100 for webp/jpeg
- `moderation`: 'low' | 'auto'
- `n`: Number of images (1-10)

**gpt-image-1 Advantages:**

- Superior text rendering in images
- Up to 4096×4096 resolution
- Transparent background support
- Advanced output formats (WebP with compression)
- Better prompt understanding
- Streaming support for progressive rendering

### 🔄 Persistent Conversations

Maintain context across interactions:

```typescript
// Create a coding assistant conversation
const assistant = ai.createConversation('openai');
await assistant.setSystemMessage('You are an expert TypeScript developer.');

// First question
const inputWriter = assistant.getInputStreamWriter();
await inputWriter.write('How do I implement a singleton pattern?');

// Continue the conversation
await inputWriter.write('Now show me how to make it thread-safe');

// The assistant remembers the entire context
```

## 🚀 Real-World Examples

### Build a Customer Support Bot

```typescript
const supportBot = new SmartAi({
  anthropicToken: process.env.ANTHROPIC_KEY, // Claude for empathetic responses
  openaiToken: process.env.OPENAI_KEY        // GPT as a fallback
});

async function handleCustomerQuery(query: string, history: ChatMessage[]) {
  try {
    const response = await supportBot.anthropicProvider.chat({
      systemMessage: `You are a helpful customer support agent.
                      Be empathetic, professional, and solution-oriented.`,
      userMessage: query,
      messageHistory: history
    });

    return response.message;
  } catch (error) {
    // Fall back to the OpenAI provider configured above
    const fallback = await supportBot.openaiProvider.chat({
      systemMessage: 'You are a helpful customer support agent.',
      userMessage: query,
      messageHistory: history
    });
    return fallback.message;
  }
}
```

### Create a Code Review Assistant

```typescript
const codeReviewer = new SmartAi({
  groqToken: process.env.GROQ_KEY // Groq for speed
});

async function reviewCode(code: string, language: string) {
  const startTime = Date.now();

  const review = await codeReviewer.groqProvider.chat({
    systemMessage: `You are a ${language} expert. Review code for:
      - Security vulnerabilities
      - Performance issues
      - Best practices
      - Potential bugs`,
    userMessage: `Review this code:\n\n${code}`,
    messageHistory: []
  });

  console.log(`Review completed in ${Date.now() - startTime}ms`);
  return review.message;
}
```

### Build a Research Assistant

```typescript
const researcher = new SmartAi({
  perplexityToken: process.env.PERPLEXITY_KEY
});

async function research(topic: string) {
  // Perplexity excels at web-aware research
  const findings = await researcher.perplexityProvider.chat({
    systemMessage: 'You are a research assistant. Provide factual, cited information.',
    userMessage: `Research the latest developments in ${topic}`,
    messageHistory: []
  });

  return findings.message;
}
```

### Local AI for Sensitive Data

```typescript
const localAI = new SmartAi({
  ollama: {
    baseUrl: 'http://localhost:11434',
    model: 'llama2',
    visionModel: 'llava'
  }
});

// Process sensitive documents without leaving your infrastructure
async function analyzeSensitiveDoc(pdfBuffer: Buffer) {
  const analysis = await localAI.ollamaProvider.document({
    systemMessage: 'Extract and summarize key information.',
    userMessage: 'Analyze this confidential document',
    messageHistory: [],
    pdfDocuments: [pdfBuffer]
  });

  // Data never leaves your servers
  return analysis.message;
}
```

## ⚡ Performance Tips

### 1. Provider Selection Strategy

```typescript
class SmartAIRouter {
  constructor(private ai: SmartAi) {}

  async query(message: string, requirements: {
    speed?: boolean;
    accuracy?: boolean;
    cost?: boolean;
    privacy?: boolean;
  }) {
    if (requirements.privacy) {
      return this.ai.ollamaProvider.chat({...}); // Local only
    }
    if (requirements.speed) {
      return this.ai.groqProvider.chat({...}); // 10x faster
    }
    if (requirements.accuracy) {
      return this.ai.anthropicProvider.chat({...}); // Best reasoning
    }
    // Default fallback
    return this.ai.openaiProvider.chat({...});
  }
}
```

### 2. Streaming for Large Responses

```typescript
// Don't wait for the entire response
async function streamResponse(userQuery: string) {
  // createInputStream is sketched in the Real-Time Streaming section above
  const stream = await ai.openaiProvider.chatStream(createInputStream(userQuery));

  // Process tokens as they arrive
  for await (const chunk of stream) {
    updateUI(chunk);           // Immediate feedback
    await processChunk(chunk); // Additional per-chunk processing
  }
}
```

### 3. Parallel Multi-Provider Queries

```typescript
// Get the best answer from multiple AIs
async function consensusQuery(question: string) {
  const providers = [
    ai.openaiProvider.chat({...}),
    ai.anthropicProvider.chat({...}),
    ai.perplexityProvider.chat({...})
  ];

  const responses = await Promise.all(providers);
  return synthesizeResponses(responses);
}
```

## 🛠️ Advanced Features

### Custom Streaming Transformations

```typescript
// Add real-time translation
const translationStream = new TransformStream({
  async transform(chunk, controller) {
    const translated = await translateChunk(chunk);
    controller.enqueue(translated);
  }
});

const responseStream = await ai.openaiProvider.chatStream(input);
const translatedStream = responseStream.pipeThrough(translationStream);
```
### Error Handling & Fallbacks
|
||
|
||
```typescript
|
||
class ResilientAI {
|
||
private providers = ['openai', 'anthropic', 'groq'];
|
||
|
||
async query(opts: ChatOptions): Promise<ChatResponse> {
|
||
for (const provider of this.providers) {
|
||
try {
|
||
return await this.ai[`${provider}Provider`].chat(opts);
|
||
} catch (error) {
|
||
console.warn(`${provider} failed, trying next...`);
|
||
continue;
|
||
}
|
||
}
|
||
throw new Error('All providers failed');
|
||
}
|
||
}
|
||
```
|
||
|
||
### Token Counting & Cost Management
|
||
|
||
```typescript
|
||
// Track usage across providers
|
||
class UsageTracker {
|
||
async trackedChat(provider: string, options: ChatOptions) {
|
||
const start = Date.now();
|
||
const response = await ai[`${provider}Provider`].chat(options);
|
||
|
||
const usage = {
|
||
provider,
|
||
duration: Date.now() - start,
|
||
inputTokens: estimateTokens(options),
|
||
outputTokens: estimateTokens(response.message)
|
||
};
|
||
|
||
await this.logUsage(usage);
|
||
return response;
|
||
}
|
||
}
|
||
```
|
||
|
||
## 📦 Installation & Setup
|
||
|
||
### Prerequisites
|
||
|
||
- Node.js 16+
|
||
- TypeScript 4.5+
|
||
- API keys for your chosen providers
|
||
|
||
### Environment Setup
|
||
|
||

```bash
# Install
npm install @push.rocks/smartai

# Set up environment variables
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export PERPLEXITY_API_KEY=pplx-...
export ELEVENLABS_API_KEY=sk-...
# ... etc
```
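
To use these variables with the constructor options shown earlier, wire them up explicitly; a minimal sketch using the token option names from the examples above:

```typescript
import { SmartAi } from '@push.rocks/smartai';

// Pass the exported environment variables to the providers you plan to use
const ai = new SmartAi({
  openaiToken: process.env.OPENAI_API_KEY,
  anthropicToken: process.env.ANTHROPIC_API_KEY,
  perplexityToken: process.env.PERPLEXITY_API_KEY,
  elevenlabsToken: process.env.ELEVENLABS_API_KEY
});

await ai.start();
```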

### TypeScript Configuration

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "lib": ["ES2022"],
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```

## 🎯 Choosing the Right Provider

| Use Case | Recommended Provider | Why |
|----------|---------------------|-----|
| **General Purpose** | OpenAI | Most features, stable, well-documented |
| **Complex Reasoning** | Anthropic | Superior logical thinking, safer outputs |
| **Research & Facts** | Perplexity | Web-aware, provides citations |
| **Deep Research** | OpenAI | Deep Research API with comprehensive analysis |
| **Premium TTS** | ElevenLabs | Most natural voices, 70+ languages, superior quality (v3) |
| **Speed Critical** | Groq | 10x faster inference, sub-second responses |
| **Privacy Critical** | Ollama | 100% local, no data leaves your servers |
| **Real-time Data** | XAI | Access to current information |
| **Cost Sensitive** | Ollama/Exo | Free (local) or distributed compute |

## 📈 Roadmap

- [x] Research & Web Search API
- [x] Image generation support (gpt-image-1, DALL-E 3, DALL-E 2)
- [ ] Streaming function calls
- [ ] Voice input processing
- [ ] Fine-tuning integration
- [ ] Embedding support
- [ ] Agent framework
- [ ] More providers (Cohere, AI21, etc.)

## License and Legal Information

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.

### Company Information

Task Venture Capital GmbH  
Registered at the District Court of Bremen, HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.