feat(docs): Add integration for downloading and incorporating readme documents from external domains

This commit is contained in:
Philipp Kunz 2025-01-25 01:17:17 +01:00
parent 49573c41c9
commit 7fb30a6209
22 changed files with 3719 additions and 1 deletion


@ -1,5 +1,12 @@
# Changelog
## 2025-01-25 - 1.1.0 - feat(docs)
Add integration for downloading and incorporating readme documents from external domains
- Extended functionality in docs/.vitepress/config.ts to download and integrate README documents into documentation.
- Introduced new README files for several projects under the docs/push.rocks/ directory.
- Expanded project capabilities by integrating content from 'serve.zone' with update in README structures.
## 2025-01-25 - 1.0.10 - fix(docs)
Updated handlebars template documentation and fixed repository process filtering.


@ -29,6 +29,12 @@ export default async () => {
plugins.path.join(paths.docsDir, 'push.rocks'),
);
await helpers.downloadReadmes(
'https://code.foss.global',
'serve.zone',
plugins.path.join(paths.docsDir, 'serve.zone'),
);
return plugins.vitepress.defineConfig({
lang: 'en-US',


@ -139,6 +139,7 @@ export async function downloadReadmes(
let readmeContent = atob(readmeContentResponseObject.content);
readmeContent = `---
title: "@${org}/${repo.name}"
source: "gitea"
---
${readmeContent}`;
const sanitizedRepoName = repoName.replace(/[^a-z0-9_\-]/gi, '_'); // Sanitize filename


@ -0,0 +1,354 @@
---
title: "@serve.zone/cloudly"
---
# @serve.zone/cloudly
A multi-cloud management tool utilizing Docker Swarmkit for orchestrating containerized apps across various cloud providers, with web, CLI, and API interfaces for configuration and integration management.
## Install
To install `@serve.zone/cloudly`, run the following command in your terminal:
```bash
npm install @serve.zone/cloudly --save
```
This will install the package and add it to your project's `package.json` dependencies.
## Usage
`@serve.zone/cloudly` is designed to provide a unified interface for managing multi-cloud environments, encapsulating complex cloud interactions with Docker Swarmkit into simpler, programmable entities. This document will guide you through various use-cases and implementation examples to give you a comprehensive understanding of the module's capabilities.
### Prerequisites
Before you begin, ensure your environment is set up correctly:
- You have Node.js installed (preferably the latest LTS version).
- Your environment is configured to use TypeScript if you're working in a TypeScript project (a quick sanity check follows below).
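As a quick sanity check, you can verify both from the command line (the version numbers you see are only illustrative):
```bash
# Verify Node.js is installed (any recent LTS release works)
node --version

# Verify TypeScript is available to the project
npx tsc --version
```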
### Basic Setup
#### Creating a Cloudly Instance
The foundation of working with `@serve.zone/cloudly` involves creating an instance of the `Cloudly` class. This instance serves as the gateway to managing cloud resources and orchestrates interactions within the platform. Here’s how to get started:
```typescript
import { Cloudly, ICloudlyConfig } from '@serve.zone/cloudly';
const myCloudlyConfig: ICloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
hetznerToken: 'your_hetzner_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: '8443',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
```
The configuration object `ICloudlyConfig` provides essential information needed for initializing external services, such as Cloudflare, Hetzner, and a MongoDB server. Adjust the parameters to match your actual service credentials and specifications.
### Core Features and Use Cases
#### Orchestrating Docker Swarmkit Clusters
Docker Swarmkit cluster management is a primary feature of `@serve.zone/cloudly`. Through its abstracted, programmable interface, you can operate clusters effortlessly. Here’s an example of how to create a cluster using `Cloudly`:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
interface ICluster {
name: string;
id: string;
cloudlyUrl: string;
servers: string[];
sshKeys: string[];
}
async function manageClusters() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const newCluster: ICluster = {
name: 'example_cluster',
id: 'example_cluster_id',
cloudlyUrl: 'https://example.com:8443',
servers: [],
sshKeys: [],
};
// Store the newly created cluster with Cloudly
const storedCluster = await myCloudlyInstance.clusterManager.storeCluster(newCluster);
console.log('Cluster stored:', storedCluster);
}
manageClusters();
```
In this scenario, a cluster called `example_cluster` is initialized using the `Cloudly` instance. This method represents a central mechanism to efficiently handle cluster entities and associated metadata.
#### Integrating With Cloudflare for DNS Management
`@serve.zone/cloudly` provides built-in capabilities for managing DNS records through integration with Cloudflare. Using the `CloudflareConnector`, you can programmatically create, manage, and delete DNS entries:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function configureCloudflareDNS() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const cfConnector = myCloudlyInstance.cloudflareConnector.cloudflare;
const dnsRecord = await cfConnector.createDNSRecord('example.com', 'sub.example.com', 'A', '127.0.0.1');
console.log('DNS Record:', dnsRecord);
}
configureCloudflareDNS();
```
Here, you create an A record for the subdomain `sub.example.com` pointing to `127.0.0.1`. All communication with Cloudflare is handled directly through the interface without manual intervention.
#### Dynamic Interaction with DigitalOcean
DigitalOcean resource management, including droplet creation, is simplified in Cloudly. By extending the API to encapsulate calls to external providers, Cloudly provides a seamless experience:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function createDigitalOceanDroplets() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const doConnector = myCloudlyInstance.digitaloceanConnector;
const droplet = await doConnector.createDroplet('example-droplet', 'nyc3', 's-1vcpu-1gb', 'ubuntu-20-04-x64');
console.log('Droplet created:', droplet);
}
createDigitalOceanDroplets();
```
In this script, a droplet named `example-droplet` is created within the `nyc3` region using the `ubuntu-20-04-x64` image. The module abstracts complexities by directly interfacing with DigitalOcean.
### Advanced Use Cases
#### Implementing Web Management Interface
`@serve.zone/cloudly` facilitates dashboard management with advanced Web Components built with `@design.estate`. This section of the library allows the creation of dynamic, interactive panels for real-time resource management in a modern browser interface.
```typescript
import { html, render } from '@design.estate/dees-element'; // assumes `render` is re-exported from the underlying lit tooling
const renderDashboard = () => {
return html`
<cloudly-dashboard>
<dees-simple-appdash>
<!-- Define sections and elements -->
<cloudly-view-clusters></cloudly-view-clusters>
<cloudly-view-dns></cloudly-view-dns>
<cloudly-view-images></cloudly-view-images>
<!-- Other custom views -->
</dees-simple-appdash>
</cloudly-dashboard>
`;
};
// Render the dashboard template into the document body (appendChild would not accept a lit template result).
render(renderDashboard(), document.body);
```
Utilizing the custom web components designed specifically for Cloudly, dashboards are adaptable, interactive, and maintainable. These elements allow you to structure a complete cloud management center without needing to delve into detailed UI engineering.
#### Comprehensive Log Management
With Cloudly’s Log Management capabilities, you can track and analyze system logs for better insights into your cloud ecosystem’s behavior:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function initiateLogManagement() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const logs = await myCloudlyInstance.logManager.fetchLogs();
console.log('Logs:', logs);
}
initiateLogManagement();
```
Cloudly provides the tools needed to collect and process logs within your cloud infrastructure. Logs are an essential part of system validation, troubleshooting, monitoring, and auditing.
#### Secret Management and Bundles
Managing secrets securely and efficiently is critical for cloud operations. Cloudly allows you to create and manage secret groups and bundles that can be used across multiple applications and environments:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function createSecrets() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const newSecretGroup = await myCloudlyInstance.secretManager.createSecretGroup({
name: 'example_secret_group',
secrets: [
{ key: 'SECRET_KEY', value: 's3cr3t' },
],
});
const newSecretBundle = await myCloudlyInstance.secretManager.createSecretBundle({
name: 'example_bundle',
secretGroups: [newSecretGroup],
});
console.log('Created Secret Group and Bundle:', newSecretGroup, newSecretBundle);
}
createSecrets();
```
Secrets, such as API keys and sensitive configuration data, are managed efficiently using secret groups and bundles. This structured approach to secret management enhances both security and accessibility.
### Task Scheduling and Management
With task buffers, you can schedule and manage background tasks integral to cloud operations:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
import { TaskBuffer } from '@push.rocks/taskbuffer';
async function scheduleTasks() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const taskManager = new TaskBuffer();
taskManager.scheduleEvery('minute', async () => {
console.log('Running scheduled task...');
// Task logic
});
console.log('Tasks scheduled.');
}
scheduleTasks();
```
The example demonstrates setting up periodic task execution using task buffers as part of Cloudly's task management. Whether it's maintenance routines, data updates, or resource checks, tasks can be managed effectively.
This comprehensive overview of `@serve.zone/cloudly` is designed to help you leverage its full capabilities in managing multi-cloud environments. Each example is meant to serve as a starting point, and you are encouraged to explore further by consulting the relevant sections in the documentation, engaging with community discussions, or experimenting in your own environment.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.


@ -0,0 +1,347 @@
---
title: "@serve.zone/coreflow"
---
# @serve.zone/coreflow
A comprehensive solution for managing Docker and scaling applications across servers, handling tasks from service provisioning to network traffic management.
## Install
To install @serve.zone/coreflow, you can use npm with the following command:
```sh
npm install @serve.zone/coreflow --save
```
Given that this is a private package, make sure you have access to the required npm registry and that you are authenticated properly.
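If you need to point npm at a private registry first, a scoped registry configuration is one common approach. The registry URL below is a placeholder; substitute the registry that actually hosts the package:
```sh
# Placeholder registry URL -- replace it with your organization's npm registry
npm config set @serve.zone:registry https://npm.registry.example.com/
npm login --registry=https://npm.registry.example.com/ --scope=@serve.zone
```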
## Usage
Coreflow is designed as an advanced tool for managing Docker-based applications and services, enabling efficient scaling across servers, and handling multiple aspects of service provisioning and network traffic management. Below are examples and explanations to illustrate its capabilities and how you can leverage Coreflow in your infrastructure. Note that these examples are based on TypeScript and use ESM syntax.
### Prerequisites
Before you start, ensure you have Docker and Docker Swarm configured in your environment as Coreflow operates on top of these technologies. Additionally, verify that your environment variables are properly set up for accessing Coreflow's functionalities.
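A quick way to confirm the Docker Swarm side of this prerequisite is to check the local swarm state and, if needed, initialize a single-node swarm (assuming the current host should act as a manager):
```sh
# Prints "active" on a node that is already part of a swarm
docker info --format '{{.Swarm.LocalNodeState}}'

# If it prints "inactive", initialize a single-node swarm on this host
docker swarm init
```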
### Setting Up Coreflow
To get started, you need to import and initialize Coreflow within your application. Here's an example of how to do this in a TypeScript module:
```typescript
import { Coreflow } from '@serve.zone/coreflow';
// Initialize Coreflow
const coreflowInstance = new Coreflow();
// Start Coreflow
await coreflowInstance.start();
// Example: Add your logic here for handling Docker events
coreflowInstance.handleDockerEvents().then(() => {
console.log('Docker events are being handled.');
});
// Stop Coreflow when done
await coreflowInstance.stop();
```
In the above example:
- The Coreflow instance is initialized.
- Coreflow is started, which internally initializes various managers and connectors.
- The method `handleDockerEvents` is used to handle Docker events.
- Finally, Coreflow is stopped gracefully.
### Configuring Service Connections
Coreflow manages applications and services, often requiring direct interactions with other services like a database, message broker, or external API. Coreflow simplifies these connections through its configuration and service discovery layers.
```typescript
// Assuming coreflowInstance is already started as per previous examples
const serviceConnection = coreflowInstance.createServiceConnection({
serviceName: 'myDatabaseService',
servicePort: 3306,
});
serviceConnection.connect().then(() => {
console.log('Successfully connected to the service');
});
```
### Scaling Your Application
Coreflow excels in scaling applications across multiple servers. This involves not just replicating services, but also ensuring they are properly networked, balanced, and monitored.
```typescript
const scalingPolicy = {
serviceName: 'apiService',
replicaCount: 5, // Target number of replicas
maxReplicaCount: 10, // Maximum number of replicas
minReplicaCount: 2, // Minimum number of replicas
};
coreflowInstance.applyScalingPolicy(scalingPolicy).then(() => {
console.log('Scaling policy applied successfully.');
});
```
In the above example:
- A scaling policy is defined with target, maximum, and minimum replica counts for the `apiService`.
- The `applyScalingPolicy` method of the Coreflow instance is used to apply this scaling policy.
### Managing Network Traffic
One of Coreflow's key features is its ability to manage network traffic, ensuring that it is efficiently distributed among various services based on load, priority, and other custom rules.
```typescript
import { TrafficRule } from '@serve.zone/coreflow';
const rule: TrafficRule = {
serviceName: 'webService',
externalPort: 80,
internalPort: 3000,
protocol: 'http',
};
coreflowInstance.applyTrafficRule(rule).then(() => {
console.log('Traffic rule applied successfully.');
});
```
In the above example:
- A traffic rule is defined for the `webService`, redirecting external traffic from port 80 to the service's internal port 3000.
- The `applyTrafficRule` method is used to enforce this rule.
### Continuous Deployment
Coreflow integrates continuous integration and deployment processes, allowing seamless updates and rollbacks for your services:
```typescript
const deploymentConfig = {
serviceName: 'userAuthService',
image: 'myregistry.com/userauthservice:latest',
updatePolicy: 'rolling', // or "recreate"
};
coreflowInstance.deployService(deploymentConfig).then(() => {
console.log('Service deployed successfully.');
});
```
In the above example:
- A deployment configuration is created for the `userAuthService` using the latest image from the specified registry.
- The `deployService` method is then used to deploy the service using the specified update policy (e.g., rolling updates or recreating the service).
### Observability and Monitoring
To keep track of your applications' health and performance, Coreflow provides tools for logging, monitoring, and alerting.
```typescript
coreflowInstance.monitorService('webService').on('serviceHealthUpdate', (healthStatus) => {
console.log(`Received health update for webService: ${healthStatus}`);
});
```
In the above example:
- The `monitorService` method is used to monitor the health status of the `webService`.
- When a health update event is received, it is logged to the console.
### Detailed Example: Setting Up and Managing Coreflow
Here is a detailed example that covers various features, from setup to scaling and traffic management.
#### Step 1: Initialize Coreflow
```typescript
import { Coreflow } from '@serve.zone/coreflow';
const coreflowInstance = new Coreflow();
async function initializeCoreflow() {
await coreflowInstance.start();
console.log('Coreflow initialized.');
await manageServices();
}
initializeCoreflow().catch((error) => {
console.error('Error initializing Coreflow:', error);
});
```
#### Step 2: Handling Docker Events
```typescript
coreflowInstance.handleDockerEvents().then(() => {
console.log('Docker events are being handled.');
});
```
#### Step 3: Configuring and Connecting to a Service
```typescript
const serviceConnection = coreflowInstance.createServiceConnection({
serviceName: 'databaseService',
servicePort: 5432,
});
serviceConnection.connect().then(() => {
console.log('Successfully connected to the database service.');
});
```
#### Step 4: Applying a Scaling Policy
```typescript
const scalingPolicy = {
serviceName: 'microserviceA',
replicaCount: 3, // Starting with 3 replicas
maxReplicaCount: 10, // Allowing up to 10 replicas
minReplicaCount: 2, // Ensuring at least 2 replicas
};
coreflowInstance.applyScalingPolicy(scalingPolicy).then(() => {
console.log('Scaling policy applied for microserviceA');
});
```
#### Step 5: Managing Network Traffic
```typescript
import { TrafficRule } from '@serve.zone/coreflow';
const trafficRules: TrafficRule[] = [
{
serviceName: 'frontendService',
externalPort: 80,
internalPort: 3000,
protocol: 'http',
},
{
serviceName: 'apiService',
externalPort: 443,
internalPort: 4000,
protocol: 'https',
},
];
Promise.all(trafficRules.map((rule) => coreflowInstance.applyTrafficRule(rule))).then(() => {
console.log('Traffic rules applied.');
});
```
#### Step 6: Deploying a Service
```typescript
const deploymentConfig = {
serviceName: 'authService',
image: 'myregistry.com/authservice:latest',
updatePolicy: 'rolling', // Performing rolling updates
};
coreflowInstance.deployService(deploymentConfig).then(() => {
console.log('AuthService deployed successfully.');
});
```
#### Step 7: Monitoring a Service
```typescript
coreflowInstance.monitorService('frontendService').on('serviceHealthUpdate', (healthStatus) => {
console.log(`Health update for frontendService: ${healthStatus}`);
});
```
### Advanced Usage: Task Scheduling and Traffic Configuration
In more complex scenarios, you might want to leverage Coreflow's ability to schedule tasks and manage traffic configurations.
#### Scheduling Tasks
Coreflow supports scheduling updates and other tasks using the `taskBuffer` API.
```typescript
import { Task } from '@push.rocks/taskbuffer';
const checkinTask = new Task({
name: 'checkin',
buffered: true,
taskFunction: async () => {
console.log('Running checkin task...');
},
});
const taskManager = coreflowInstance.taskManager;
taskManager.addAndScheduleTask(checkinTask, '0 * * * * *'); // Scheduling task to run every minute
taskManager.start().then(() => {
console.log('Task manager started.');
});
```
#### Managing Traffic Routing
Coreflow can manage complex traffic routing scenarios, such as configuring reverse proxies for different services.
```typescript
import { CoretrafficConnector } from '@serve.zone/coreflow';
// Assume coreflowInstance is already started
const coretrafficConnector = new CoretrafficConnector(coreflowInstance);
const reverseProxyConfigs = [
{
hostName: 'example.com',
destinationIp: '192.168.1.100',
destinationPort: '3000',
privateKey: '<your-private-key>',
publicKey: '<your-public-key>',
},
{
hostName: 'api.example.com',
destinationIp: '192.168.1.101',
destinationPort: '4000',
privateKey: '<your-private-key>',
publicKey: '<your-public-key>',
},
];
coretrafficConnector.setReverseConfigs(reverseProxyConfigs).then(() => {
console.log('Reverse proxy configurations applied.');
});
```
### Integrating with Cloudly
Coreflow is designed to integrate seamlessly with Cloudly, a configuration management and orchestration tool.
#### Starting the Cloudly Connector
```typescript
const cloudlyConnector = coreflowInstance.cloudlyConnector;
cloudlyConnector.start().then(() => {
console.log('Cloudly connector started.');
});
```
#### Retrieving and Applying Configurations from Cloudly
```typescript
cloudlyConnector.getConfigFromCloudly().then((config) => {
console.log('Received configuration from Cloudly:', config);
coreflowInstance.clusterManager.provisionWorkloadServices(config).then(() => {
console.log('Workload services provisioned based on Cloudly config.');
});
});
```
### Conclusion
Coreflow is a powerful and flexible tool for managing Docker-based applications, scaling services, configuring network traffic, handling continuous deployments, and ensuring observability of your infrastructure. The examples provided aim to give a comprehensive understanding of how to use Coreflow in various scenarios, ensuring it meets your DevOps and CI/CD needs.
By leveraging Coreflow's rich feature set, you can optimize your infrastructure for high availability, scalability, and efficient operation across multiple servers and environments.


@ -0,0 +1,179 @@
---
title: "@serve.zone/corerender"
---
# Corerender
A rendering service for serve.zone that preserves styles for web components.
## Install
To install Corerender in your project, you can use npm. Make sure you have Node.js installed and then run the following command in your terminal:
```shell
npm install corerender
```
This will add `corerender` as a dependency to your project, allowing you to use its rendering services to preserve styles for web components efficiently.
## Usage
Welcome to the comprehensive usage guide for `corerender`, a powerful rendering service designed to integrate seamlessly within your web applications, ensuring that styles for web components are preserved properly. The guide is structured to provide a thorough understanding of `corerender`'s capabilities, demonstrating its flexibility and efficiency through realistic scenarios.
### Setting Up Your Environment
First things first, let’s get `corerender` up and running in your project. Ensure you've installed the package as detailed in the [Install](#install) section. Since `corerender` is a TypeScript-friendly library, it is recommended to use TypeScript for development to leverage the full power of type safety and IntelliSense.
### Basic Render Service Setup
```typescript
import { Rendertron } from 'corerender';
const rendertronInstance = new Rendertron();
(async () => {
console.log('Starting rendertron...');
await rendertronInstance.start();
console.log('Rendertron started successfully!');
})();
```
The code initializes an instance of `Rendertron` and starts the service asynchronously. `Rendertron` is the core class responsible for managing the rendering processes, including task scheduling and storing rendering results persistently in a database.
### Understanding the Rendertron Architecture
The architecture of `Rendertron` is designed to support web component rendering through several integral components:
1. **Prerender Manager**: Manages the creation and retrieval of prerender results.
2. **Task Manager**: Handles scheduling tasks for prerendering operations and cleanup routines.
3. **Utility Service Server**: Provides the server interface that accepts `/render` requests and serves prerendered content efficiently.
### Using the Prerender Manager
The `PrerenderManager` is responsible for generating and caching the rendering results of webpages. Here’s how you can use the `PrerenderManager` to prerender a webpage:
```typescript
import { PrerenderManager } from 'corerender/dist_ts/rendertron.classes.prerendermanager';
(async () => {
const prerenderManager = new PrerenderManager();
await prerenderManager.start();
const urlToPrerender = 'https://example.com';
const prerenderResult = await prerenderManager.getPrerenderResultForUrl(urlToPrerender);
console.log(`Prerendered content for ${urlToPrerender}:`);
console.log(prerenderResult);
await prerenderManager.stop();
})();
```
The above script demonstrates accessing a webpage's prerendered content. It initializes the `PrerenderManager`, specifies a URL, and requests the rendering result, which is stored or retrieved from the database.
### Scheduling Prerendering Tasks
The `TaskManager` class allows for efficiently scheduling tasks, such as regular prerendering of local domains and cleanup of outdated render results:
```typescript
import { TaskManager } from 'corerender/dist_ts/rendertron.taskmanager';
const taskManager = new TaskManager(rendertronInstance);
taskManager.start();
// Example: Manual trigger of a specific task
taskManager.triggerTaskByName('prerenderLocalDomains');
taskManager.stop();
```
`TaskManager` works closely with the `Rendertron` service to ensure tasks are executed as per defined schedules (e.g., every 30 minutes or daily). It allows manual triggering for immediate execution outside the schedule.
### Managing Render Results
The pre-rendered results are stored using `smartdata`’s `SmartDataDbDoc`. You may need advanced control over whether these are retrieved, created anew, or updated:
```typescript
import { PrerenderManager } from 'corerender/dist_ts/rendertron.classes.prerendermanager';
import { PrerenderResult } from 'corerender/dist_ts/rendertron.classes.prerenderresult';
(async () => {
const prerenderManager = new PrerenderManager();
await prerenderManager.start();
const url = 'https://example.com';
let prerenderResult = await PrerenderResult.getPrerenderResultForUrl(prerenderManager, url);
// Recreate the result if none exists yet; swap this check for your own freshness logic.
if (!prerenderResult) {
prerenderResult = await PrerenderResult.createPrerenderResultForUrl(prerenderManager, url);
}
console.log(`Final prerendered content for ${url}:`, prerenderResult.renderResultString);
await prerenderManager.stop();
})();
```
### Integrating with External Systems
`Corerender` can be integrated into broader systems that programmatically manage URLs and rendering frequencies. For instance, parsing and prerendering sitemaps:
```typescript
class IntegrationExample {
private prerenderManager: PrerenderManager;
constructor() {
this.prerenderManager = new PrerenderManager();
}
async prerenderFromSitemap(sitemapUrl: string) {
await this.prerenderManager.prerenderSitemap(sitemapUrl);
console.log('Finished prerendering sitemap:', sitemapUrl);
}
}
(async () => {
const integrationExample = new IntegrationExample();
await integrationExample.prerenderFromSitemap('https://example.com/sitemap.xml');
})();
```
### Server-Side Rendering Directly with SmartSSR
`Rendertron` uses the highly efficient `smartssr` for SSR requests. You can easily direct incoming server requests to utilize this rendering pipeline:
```typescript
import { typedserver } from 'corerender/dist_ts/rendertron.plugins';
const serviceServerInstance = new typedserver.utilityservers.UtilityServiceServer({
serviceDomain: 'rendertron.example.com',
serviceName: 'RendertronService',
serviceVersion: '2.0.61', // Replace with dynamic version retrieval if needed
addCustomRoutes: async (serverArg) => {
serverArg.addRoute(
'/render/*',
new typedserver.servertools.Handler('GET', async (req, res) => {
const requestedUrl = req.url.replace('/render/', '');
const prerenderedContent = await prerenderManager.getPrerenderResultForUrl(requestedUrl);
res.write(prerenderedContent);
res.end();
})
);
},
});
(async () => {
await serviceServerInstance.start();
console.log('SSR Server Started');
})();
```
### Customizing the Logger
`Rendertron` employs the `smartlog` package for logging activities across the service. To customize logging, instantiate a logger with custom configurations:
```typescript
import { smartlog } from 'corerender/dist_ts/rendertron.plugins';
const customLogger = smartlog.Smartlog.create({ /* custom options */ });
customLogger.log('info', 'Custom logger integrated successfully.');
```
### Closing Remarks
With these examples, you should have a robust understanding of how to implement `corerender` in your web application. It’s a powerful service that takes care of rendering optimizations, allowing developers to focus on building components and architecture, with clear workflows to handle tasks and results efficiently.


@ -0,0 +1,227 @@
---
title: "@serve.zone/coretraffic"
---
# CoreTraffic
Route traffic within your Docker setup. TypeScript ready.
## Install
To install `coretraffic`, you should have Node.js already set up on your system. Assuming Node.js and npm are ready, install the package via the npm registry with the following command:
```bash
npm install coretraffic
```
To make the most out of `coretraffic`, ensure TypeScript is set up in your development environment, as this module is TypeScript ready and provides enhanced IntelliSense.
## Usage
`coretraffic` is designed to manage and route traffic within a Docker setup, offering robust solutions to your traffic management needs with an emphasis on efficiency and reliability. Utilizing TypeScript for static typing, you get enhanced code completion and fewer runtime errors. The module is set up to handle the intricacies of proxy configuration and routing, providing a powerful foundation for any Docker-based traffic management application.
Below, we'll delve into the capabilities and features of `coretraffic`, complete with comprehensive examples to get you started.
### Initializing CoreTraffic
First, you'll want to create an instance of the `CoreTraffic` class. This serves as the entry point to accessing the module's capabilities.
```typescript
import { CoreTraffic } from 'coretraffic';
// Create an instance of CoreTraffic
const coreTrafficInstance = new CoreTraffic();
```
This initializes the `coreTrafficInstance` with default properties, ready to configure routing settings and handle proxy configurations.
### Starting and Stopping CoreTraffic
Controlling the lifecycle of your `CoreTraffic` instance is key to effective traffic management. You can start and stop the instance with straightforward method calls.
#### Starting CoreTraffic
To begin managing traffic, start the instance. This sets up the internal network proxy and SSL redirection services, making them ready to handle incoming requests.
```typescript
async function startCoreTraffic() {
await coreTrafficInstance.start();
console.log('CoreTraffic is now running.');
}
startCoreTraffic();
```
This code initializes all internal components, including `NetworkProxy` and `SslRedirect`, beginning to route traffic as configured.
#### Stopping CoreTraffic
When you no longer need to route traffic, shutting down the instance cleanly is important. `CoreTraffic` provides a `stop` method for this purpose.
```typescript
async function stopCoreTraffic() {
await coreTrafficInstance.stop();
console.log('CoreTraffic has stopped.');
}
stopCoreTraffic();
```
This ensures all background tasks are halted, and network configurations are cleaned up.
### Configuring Network Proxy
A core feature of `CoreTraffic` is its ability to configure network proxies dynamically. At its heart is the `NetworkProxy` class, a powerful tool for managing routing configurations.
#### Adding Default Headers
You may wish to inject unique headers across all routed requests, which is possible with the `addDefaultHeaders` method. This is useful for tagging requests or managing CORS.
```typescript
coreTrafficInstance.networkProxy.addDefaultHeaders({
'x-powered-by': 'coretraffic',
'custom-header': 'custom-value'
});
```
This injects custom headers into all outgoing responses managed by `coretraffic`, thereby allowing customized interaction with requests as needed.
#### Proxy Configuration Updates
Dynamic updates to proxy configurations can be facilitated via tasks managed by `CoretrafficTaskManager`. This feature allows adjustment of routing rules without interrupting service.
```typescript
import { IReverseProxyConfig } from '@tsclass/network';
const configureRouting = async () => {
const reverseProxyConfig: IReverseProxyConfig[] = [{
// Example configuration, adjust as needed
host: 'example.com',
target: 'http://internal-service:3000',
}];
await coreTrafficInstance.taskmanager.setupRoutingTask.trigger(reverseProxyConfig);
console.log('Updated routing configurations');
};
configureRouting();
```
In this example, a reverse proxy configuration is defined, specifying that requests to `example.com` should be directed to an internal service.
### SSL Redirection
`CoreTraffic` supports SSL redirection, an essential feature for secure communications. The `SslRedirect` component listens on one port to redirect traffic to the secure version on another port.
```typescript
// SslRedirect is initialized on port 7999 by default
console.log('SSL Redirection is active!');
```
Out-of-the-box, this listens on the configurable port and safely forwards insecure HTTP traffic to its HTTPS counterpart.
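For illustration only, the sketch below shows what such a redirect amounts to in plain Node.js; it is not CoreTraffic's internal implementation, which `SslRedirect` already provides for you:
```typescript
import * as http from 'http';

// Conceptual stand-in for SslRedirect: answer every plain-HTTP request with a 301 to HTTPS.
const redirectServer = http.createServer((req, res) => {
  const host = req.headers.host ?? 'localhost';
  res.writeHead(301, { Location: `https://${host}${req.url ?? '/'}` });
  res.end();
});

redirectServer.listen(7999, () => {
  console.log('Redirecting HTTP traffic on port 7999 to HTTPS.');
});
```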
### Coreflow Connector
A unique aspect of `coretraffic` is its integration capability with `Coreflow`, allowing communication between different network nodes. The `CoreflowConnector` facilitates receiving configuration updates via a socket connection.
#### Setting up the CoreflowConnector
```typescript
const coreflowConnector = coreTrafficInstance.coreflowConnector;
async function connectCoreflow() {
await coreflowConnector.start();
console.log('Coreflow connector activated.');
}
connectCoreflow();
```
This method enables a persistent connection to a Coreflow server, allowing real-time configuration updates and management of routing policies.
#### Stopping the CoreflowConnector
To disconnect cleanly:
```typescript
async function disconnectCoreflow() {
await coreflowConnector.stop();
console.log('Coreflow connector terminated.');
}
disconnectCoreflow();
```
This halts the connection, ensuring no dangling resources remain when shutting down your application.
### Task Management
The `CoretrafficTaskManager` handles complex, buffered tasks. Flexibility and power are at your fingertips with this system, ideal for timed or queued execution needs.
#### Managing Tasks
Here is how you would initiate the task manager:
```typescript
const taskManager = coreTrafficInstance.taskmanager;
// Start tasks
taskManager.start()
.then(() => console.log('Task manager is running'))
.catch(err => console.error('Failed to start task manager', err));
```
Stop tasks once processing is no longer required:
```typescript
taskManager.stop()
.then(() => console.log('Task manager stopped'))
.catch(err => console.error('Failed to stop task manager', err));
```
### Logging and Debugging
Effective logging is provided using `Smartlog`, designed to track detailed application insights and report on activity and actions within `coretraffic`.
#### Configuring Log Levels
`coretraffic` supports log levels which can be adjusted as per your requirements:
```typescript
import { logger } from './coretraffic.logging.js';
logger.log('info', 'System initialized');
logger.log('debug', 'Detailed debugging process');
logger.log('warn', 'Potential issue detected');
logger.log('error', 'An error has occurred');
```
These log entries help monitor logic flow and catch issues during development or deployment in production environments.
### Test Setup
For those interested in testing, `coretraffic` uses `tapbundle` and `tstest` to ensure reliability and correctness. A sample test module is provided to demonstrate initialization and lifecycle actions.
Here’s an example of a non-CI test scenario:
```typescript
import * as coretraffic from '../ts/index.js';
import { tap, expect } from '@push.rocks/tapbundle';
let testCoreTraffic;
tap.test('should create and handle coretraffic instances', async () => {
testCoreTraffic = new coretraffic.CoreTraffic();
expect(testCoreTraffic).toBeInstanceOf(coretraffic.CoreTraffic);
await testCoreTraffic.start();
await new Promise(resolve => setTimeout(resolve, 10000)); // Keep alive for demonstration
await testCoreTraffic.stop();
});
tap.start();
```
This test suite validates essential functionality within development iterations, ensuring `coretraffic` performs as expected.
`coretraffic` offers a vast landscape of operations within Docker environments, handling traffic with modularity and efficiency. Whether starting simple routing tasks or integrating with complex systems like Coreflow, this module provides robust support where needed most. Embrace your traffic management challenges with the dedicated features of `coretraffic`.


@ -0,0 +1,169 @@
---
title: "@serve.zone/nullresolve"
---
# @losslessone_private/nullresolve
nullresolve is a robust service designed to manage and handle requests effectively within the serve.zone architecture. It ensures that requests which would otherwise remain unserved receive appropriate handling and feedback.
## Install
To install the `@losslessone_private/nullresolve` package, it is essential to first set up a proper environment for handling private npm packages due to its private nature. This can be achieved through npm or yarn, which are both suitable JavaScript package managers.
### Step-by-Step Installation:
1. **Ensure you are logged into npm** with sufficient permissions to access private packages:
```bash
npm login
```
Authentication is necessary for accessing private modules like `@losslessone_private/nullresolve`.
2. **Install Using npm:**
```bash
npm install @losslessone_private/nullresolve
```
If you are using a specific registry for your company or project, make sure to specify it in your npm configuration.
3. **Install Using Yarn:**
```bash
yarn add @losslessone_private/nullresolve
```
After these steps, the module should be ready for use in your JavaScript or TypeScript project.
## Usage
`nullresolve` plays a pivotal role within a network ecosystem, particularly one that interfaces directly with user requests and external resources. Below is a comprehensive guide demonstrating effective usage of this module within applications.
### Quick Start Example
Initialization and launching of a nullresolve service can be done succinctly:
```typescript
// Import the NullResolve class from the package
import { NullResolve } from '@losslessone_private/nullresolve';
// Create an instance of NullResolve
const myNullResolveService = new NullResolve();
// Start the service
myNullResolveService.start().then(() => {
console.log('NullResolve service is running!');
}).catch((error) => {
console.error('Error starting NullResolve service:', error);
});
// Stop the service gracefully
process.on('SIGINT', async () => {
await myNullResolveService.stop();
console.log('NullResolve service stopped.');
process.exit(0);
});
```
### Detailed Guide: Handling Requests and Custom Routes
`nullresolve` can swiftly handle complex request scenarios utilizing its robust framework. Here's a detailed example of setting up custom handler routes that can respond with various HTTP statuses or custom messages based on the request:
```typescript
import { NullResolve } from '@losslessone_private/nullresolve';
// Initialize the service
const myService = new NullResolve();
// Start the service with custom routes
myService.serviceServer.addCustomRoutes(async (server) => {
server.addRoute(
'/error/:code',
// `plugins` here refers to the module's internal plugins namespace (its import is not shown in this excerpt).
new plugins.typedserver.servertools.Handler('GET', async (req, res) => {
let message;
switch (req.params.code) {
case '404':
message = 'This resource was not found.';
break;
case '500':
message = 'Internal Server Error. Please try later.';
break;
default:
message = 'An unexpected error occurred.';
}
res.status(200).send(`<html><body><h1>${message}</h1></body></html>`);
})
);
});
// Activating the service
myService.start().then(() => {
console.log('Custom route service started.');
}).catch((err) => {
console.error('Error while starting the service:', err);
});
```
### Integrating Logging and Monitoring
Given the mission-critical nature of services like `nullresolve`, reliable logging is indispensable to monitor activities and diagnose issues swiftly. This is integrated by default using the `smartlog` module for robust logging capabilities:
```typescript
import { logger } from './nullresolve.logging.js';
// Utilize the logger for tracking and problem-solving
logger.info('Service Log: nullresolve service initiated');
logger.warn('Warning Log: Potential issue detected');
logger.error('Error Log: An error occurred in service operation');
```
### Advanced Configuration
For systems requiring specialized setups, nullresolve offers configurability through both code and external configuration objects:
```typescript
// Customize through code
const config = {
domain: 'customdomain.com',
port: 8080,
routes: [
{
method: 'GET',
path: '/status/check',
handler: async (req, res) => {
res.status(200).send('Service is operational.');
}
}
]
};
myService.configure(config);
// Running the service with a new configuration
myService.start();
```
### Graceful Shutdown and Resource Management
Services such as the one provided by `nullresolve` must incorporate mechanisms to stop gracefully, allowing them to release resources and finish current tasks before complete termination:
```typescript
process.on('SIGTERM', async () => {
logger.info('Service is stopping gracefully.');
await myService.stop();
logger.info('Service has been successfully stopped.');
process.exit(0);
});
```
### Custom Error Handling Strategies
It is often beneficial to ensure that the service reacts gracefully during unexpected shutdowns or errors. Here's an example of implementing a strategy for error handling:
```typescript
const handleCriticalError = (err: Error) => {
logger.error(`Critical Error: ${err.message}`);
process.exit(1);
};
process.on('unhandledRejection', handleCriticalError);
process.on('uncaughtException', handleCriticalError);
```
By deploying `nullresolve` strategically within your infrastructure, it can transform how unhandled requests and errors are addressed, providing comprehensive protection and valuable insights into system status and health. This guide should serve to ensure effective deployment, utilization, and management of this sophisticated null service.


@ -0,0 +1,34 @@
---
title: "@serve.zone/platformclient"
---
# @serve.zone/platformclient
A module that makes it easy to use the serve.zone platform inside your app.
## Availability and Links
* [npmjs.org (npm package)](https://www.npmjs.com/package/@serve.zone/platformclient)
* [gitlab.com (source)](https://gitlab.com/serve.zone/platformclient)
* [github.com (source mirror)](https://github.com/serve.zone/platformclient)
* [docs (typedoc)](https://serve.zone.gitlab.io/platformclient/)
## Status for master
Status Category | Status Badge
-- | --
GitLab Pipelines | [![pipeline status](https://gitlab.com/serve.zone/platformclient/badges/master/pipeline.svg)](https://lossless.cloud)
GitLab Pipeline Test Coverage | [![coverage report](https://gitlab.com/serve.zone/platformclient/badges/master/coverage.svg)](https://lossless.cloud)
npm | [![npm downloads per month](https://badgen.net/npm/dy/@serve.zone/platformclient)](https://lossless.cloud)
Snyk | [![Known Vulnerabilities](https://badgen.net/snyk/serve.zone/platformclient)](https://lossless.cloud)
TypeScript Support | [![TypeScript](https://badgen.net/badge/TypeScript/>=%203.x/blue?icon=typescript)](https://lossless.cloud)
node Support | [![node](https://img.shields.io/badge/node->=%2010.x.x-blue.svg)](https://nodejs.org/dist/latest-v10.x/docs/api/)
Code Style | [![Code Style](https://badgen.net/badge/style/prettier/purple)](https://lossless.cloud)
PackagePhobia (total standalone install weight) | [![PackagePhobia](https://badgen.net/packagephobia/install/@serve.zone/platformclient)](https://lossless.cloud)
PackagePhobia (package size on registry) | [![PackagePhobia](https://badgen.net/packagephobia/publish/@serve.zone/platformclient)](https://lossless.cloud)
BundlePhobia (total size when bundled) | [![BundlePhobia](https://badgen.net/bundlephobia/minzip/@serve.zone/platformclient)](https://lossless.cloud)
## Usage
Use TypeScript for best-in-class IntelliSense.
For further information read the linked docs at the top of this readme.
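Because this README does not document concrete exports, the minimal sketch below only assumes the package exposes an ESM entry point; consult the typedoc linked above for the actual API surface:
```typescript
// Inspect what the package exports before wiring it into your app.
import * as platformclient from '@serve.zone/platformclient';

console.log(Object.keys(platformclient));
```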
## Legal
> MIT licensed | **&copy;** [Task Venture Capital GmbH](https://task.vc)
| By using this npm module you agree to our [privacy policy](https://lossless.gmbH/privacy)


@ -0,0 +1,129 @@
---
title: "@serve.zone/platformservice"
---
# @serve.zone/platformservice
Contains the platformservice container with mail, SMS, letter, and AI services.
## Install
To install `@serve.zone/platformservice`, run the following command:
```sh
npm install @serve.zone/platformservice --save
```
Make sure you have Node.js and npm installed on your system to use this package.
## Usage
This document provides extensive usage scenarios for the `@serve.zone/platformservice`, a comprehensive ESM module written in TypeScript offering a wide range of services such as mail, SMS, letter, and artificial intelligence (AI) functionalities. This service is an exemplar of a modular design, allowing users to leverage various communication methods and AI services efficiently. Key features provided by this platform include sending and receiving emails, managing SMS services, letter dispatching, and utilizing AI for diverse purposes.
### Prerequisites
Before diving into the examples, ensure you have the platform service installed and configured correctly. The package leverages environment variables for configuration, so you must set up the necessary variables, including service endpoints, authentication tokens, and database connections.
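The exact variable names depend on your deployment; the sketch below uses hypothetical names purely to illustrate failing fast when configuration is missing:
```ts
// Hypothetical variable names -- replace them with the ones your deployment actually uses.
const requiredEnvVars = ['MONGODB_URL', 'MAILGUN_API_KEY', 'SMS_API_TOKEN', 'OPENAI_API_KEY'];

const missing = requiredEnvVars.filter((name) => !process.env[name]);
if (missing.length > 0) {
  throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
}
```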
### Initialization
First, initialize the platform service, ensuring all dependencies are correctly loaded and configured:
```ts
import { SzPlatformService } from '@serve.zone/platformservice';
async function initService() {
const platformService = new SzPlatformService();
await platformService.start();
console.log('Platform service initialized successfully.');
}
initService();
```
### Sending Emails
One of the primary services offered is email management. Here's how to send an email using the platform service:
```ts
import { EmailService, IEmailOptions } from '@serve.zone/platformservice';
async function sendEmail() {
const emailOptions: IEmailOptions = {
from: 'no-reply@example.com',
to: 'recipient@example.com',
subject: 'Test Email',
body: '<h1>This is a test email</h1>',
};
const emailService = new EmailService('MAILGUN_API_KEY'); // Replace with your real API key
await emailService.sendEmail(emailOptions);
console.log('Email sent successfully.');
}
sendEmail();
```
### Managing SMS
Similar to email, the platform also facilitates SMS sending:
```ts
import { SmsService, ISmsConstructorOptions } from '@serve.zone/platformservice';
async function sendSms() {
const smsOptions: ISmsConstructorOptions = {
apiGatewayApiToken: 'SMS_API_TOKEN', // Replace with your real token
};
const smsService = new SmsService(smsOptions);
await smsService.sendSms(1234567890, 'SENDER_NAME', 'This is a test SMS.');
console.log('SMS sent successfully.');
}
sendSms();
```
### Dispatching Letters
For physical mail correspondence, the platform provides a letter service:
```ts
import { LetterService, ILetterConstructorOptions } from '@serve.zone/platformservice';
async function sendLetter() {
const letterOptions: ILetterConstructorOptions = {
letterxpressUser: 'USER',
letterxpressToken: 'TOKEN',
};
const letterService = new LetterService(letterOptions);
await letterService.sendLetter('This is a test letter body.', {address: 'Recipient Address', name: 'Recipient Name'});
console.log('Letter dispatched successfully.');
}
sendLetter();
```
### Leveraging AI Services
The platform also integrates AI functionalities, allowing for innovative use cases like generating content, analyzing text, or automating responses:
```ts
import { AiService } from '@serve.zone/platformservice';
async function useAiService() {
const aiService = new AiService('OPENAI_API_KEY'); // Replace with your real API key
const response = await aiService.generateText('Prompt for the AI service.');
console.log(`AI response: ${response}`);
}
useAiService();
```
### Conclusion
The `@serve.zone/platformservice` offers a robust set of features for modern application requirements, including but not limited to communication and AI services. By following the examples above, developers can integrate these services into their applications, harnessing the power of email, SMS, letters, and artificial intelligence seamlessly.


@ -0,0 +1,110 @@
---
title: "@serve.zone/remoteingress"
---
# @serve.zone/remoteingress
Provides a service for creating private tunnels and reaching private clusters from the outside as part of the @serve.zone stack.
## Install
To install `@serve.zone/remoteingress`, run the following command in your terminal:
```sh
npm install @serve.zone/remoteingress
```
This command will download and install the remoteingress package and its dependencies into your project.
## Usage
`@serve.zone/remoteingress` is designed to facilitate the creation of secure private tunnels and enable access to private clusters from external sources, offering an integral part of the @serve.zone stack infrastructure. Below, we illustrate how to employ this package within your project, leveraging TypeScript and ESM syntax for modern, type-safe, and modular code.
### Prerequisites
Ensure that you have Node.js and TypeScript installed in your environment. Your project should be set up with TypeScript support, and you might want to familiarize yourself with basic networking concepts and TLS/SSL for secure communication.
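For local experiments you will also need a TLS key and certificate; a self-signed pair is sufficient for testing (the subject name below is a placeholder):
```sh
# Generate a self-signed key/certificate pair for testing purposes only
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=your.public.domain.tld"
```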
### Importing and Initializing Connectors
`@serve.zone/remoteingress` offers two primary components: `ConnectorPublic` and `ConnectorPrivate`. Here's how to use them:
#### Setup ConnectorPublic
`ConnectorPublic` acts as a gateway, accepting incoming tunnel connections from `ConnectorPrivate` instances and facilitating secure communication between the internet and your private network.
```typescript
import * as fs from 'fs';
import { ConnectorPublic } from '@serve.zone/remoteingress';
// Initialize ConnectorPublic
const publicConnector = new ConnectorPublic({
tlsOptions: {
key: fs.readFileSync("<path-to-your-tls/key.pem>"),
cert: fs.readFileSync("<path-to-your-cert/cert.pem>"),
// Consider including 'ca' and 'passphrase' if required for your setup
},
listenPort: 443 // Example listen port; adjust based on your needs
});
```
#### Setup ConnectorPrivate
`ConnectorPrivate` establishes a secure tunnel to `ConnectorPublic`, effectively bridging your internal services with the external point of access.
```typescript
import { ConnectorPrivate } from '@serve.zone/remoteingress';
// Initialize ConnectorPrivate pointing to your ConnectorPublic instance
const privateConnector = new ConnectorPrivate({
publicHost: 'your.public.domain.tld',
publicPort: 443, // Ensure this matches the listening port of ConnectorPublic
tlsOptions: {
// You might want to specify TLS options here, similar to ConnectorPublic
}
});
```
### Secure Communication
It's imperative to ensure that the communication between `ConnectorPublic` and `ConnectorPrivate` is secure:
- Always use valid TLS certificates.
- Prefer using certificates issued by recognized Certificate Authorities (CA).
- Optionally, configure mutual TLS (mTLS) by requiring client certificates for an added layer of security (see the sketch after this list).
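As a minimal sketch of the mTLS option, the `tlsOptions` below follow the standard Node.js TLS server options; whether `ConnectorPublic` forwards every one of them is an assumption to verify against the package documentation:
```typescript
import * as fs from 'fs';
import { ConnectorPublic } from '@serve.zone/remoteingress';

const mtlsConnector = new ConnectorPublic({
  tlsOptions: {
    key: fs.readFileSync('<path-to-server-key.pem>'),
    cert: fs.readFileSync('<path-to-server-cert.pem>'),
    ca: fs.readFileSync('<path-to-client-ca.pem>'), // CA that issued the allowed client certificates
    requestCert: true,        // ask each connecting client for a certificate
    rejectUnauthorized: true, // reject clients whose certificate is not signed by the CA above
  },
  listenPort: 443,
});
```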
### Advanced Usage
Both connectors can be finely tuned:
- **Logging and Monitoring:** Integrate with your existing logging and monitoring systems to keep tabs on tunnel activity, performance metrics, and potential security anomalies.
- **Custom Handlers:** Implement custom traffic handling logic for specialized routing, filtering, or protocol-specific processing.
- **Automation:** Automate the deployment and scaling of both `ConnectorPublic` and `ConnectorPrivate` instances using infrastructure-as-code (IAC) tools and practices, ensuring that your tunneling infrastructure can dynamically adapt to the ever-changing needs of your services.
### Example Scenarios
1. **Securing Application APIs:** Use `@serve.zone/remoteingress` to expose private APIs to your frontend deployed on a public cloud, ensuring that only your infrastructure can access these endpoints.
2. **Remote Database Access:** Securely access databases within a private VPC from your local development machine without opening direct access to the internet.
3. **Service Mesh Integration:** Integrate `@serve.zone/remoteingress` as part of a service mesh setup to securely connect services across multiple clusters with robust identity and encryption at the tunnel level.
For detailed documentation, API references, and additional use cases, please refer to the inline documentation and source code within the package. Always prioritize security and robustness when dealing with network ingress to protect your infrastructure and data from unauthorized access and threats.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
@ -0,0 +1,303 @@
---
title: "@serve.zone/spark"
---
# @serve.zone/spark
A comprehensive tool for maintaining and configuring servers, integrating with Docker and supporting advanced task scheduling, targeted at the serve.zone infrastructure. It's mainly designed to be utilized by @serve.zone/cloudly as a cluster node server system manager, maintaining and configuring servers on the base OS level.
## Install
To install `@serve.zone/spark`, run the following command in your terminal:
```sh
npm install @serve.zone/spark --save
```
Ensure you have both Node.js and npm installed on your machine.
## Usage
### Getting Started
To use `@serve.zone/spark` in your project, you need to include and initiate it in your TypeScript project. Ensure you have TypeScript and the necessary build tools set up in your project.
First, import `@serve.zone/spark`:
```typescript
import { Spark } from '@serve.zone/spark';
```
### Initializing Spark
Create an instance of the `Spark` class to start using Spark. This instance will serve as the main entry point for interacting with Spark functionalities.
```typescript
const sparkInstance = new Spark();
```
### Running Spark as a Daemon
To run Spark as a daemon, which is useful for maintaining and configuring servers at the OS level, you can use the CLI feature bundled with Spark. This should ideally be handled outside of your code through a command-line terminal but can also be automated within your Node.js scripts if required.
```shell
spark installdaemon
```
The command above sets up Spark as a system service, enabling it to run and maintain server configurations automatically.
### Updating Spark or Maintained Services
Spark can self-update and manage updates for its maintained services. Trigger an update check and process by calling the `updateServices` method on the Spark instance.
```typescript
await sparkInstance.sparkUpdateManager.updateServices();
```
### Managing Configuration and Logging
Spark allows extensive configuration and logging customization. Use the `SparkLocalConfig` and logging features to tailor Spark's operation to your needs.
```typescript
// Accessing the local configuration
const localConfig = sparkInstance.sparkLocalConfig;
// Utilizing the logger for custom log messages
import { logger } from '@serve.zone/spark';
logger.log('info', 'Custom log message');
```
### Advanced Usage
`@serve.zone/spark` offers tools for detailed server and service management, including but not limited to task scheduling, daemon management, and service updates. Explore the `SparkTaskManager` for scheduling specific tasks, `SparkUpdateManager` for handling service updates, and `SparkLocalConfig` for configuration.
### Example: Scheduling Custom Tasks
```typescript
import { Spark } from '@serve.zone/spark';

const sparkInstance = new Spark();

// Define a task object and schedule it via the instance's task manager
const myTask = {
  name: 'customTask',
  taskFunction: async () => {
    console.log('Running custom task');
  },
};

sparkInstance.sparkTaskManager.taskmanager.addAndScheduleTask(myTask, '* * * * * *');
```
The example above creates a simple task that logs a message every second, demonstrating how to use Spark's task manager for custom scheduled tasks.
### Detailed Service Management
For advanced configurations, including Docker and service management, you can utilize the following patterns:
- Use `SparkUpdateManager` to handle Docker image updates, service creation, and management.
- Access and modify Docker and service configurations through Spark's integration with configuration files and environment variables.
```typescript
// Managing Docker services with Spark
await sparkInstance.sparkUpdateManager.dockerHost.someDockerMethod();
// Example: Creating a Docker service
const newServiceDefinition = {...};
await sparkInstance.sparkUpdateManager.createService(newServiceDefinition);
```
### CLI Commands
Spark provides several CLI commands to interact with and manage the system services:
#### Installing Spark as a Daemon
```shell
spark installdaemon
```
Sets up Spark as a system service to maintain server configurations automatically.
#### Updating the Daemon
```shell
spark updatedaemon
```
Updates the daemon service if a new version is available.
#### Running Spark as Daemon
```shell
spark asdaemon
```
Runs Spark in daemon mode, which is suitable for executing automated tasks.
#### Viewing Logs
```shell
spark logs
```
Views the logs of the Spark daemon service.
#### Cleaning Up Services
```shell
spark prune
```
Stops and cleans up all Docker services (stacks, networks, secrets, etc.) and prunes the Docker system.
### Programmatic Daemon Management
You can also manage the daemon programmatically:
```typescript
import { SmartDaemon } from '@push.rocks/smartdaemon';
import { Spark } from '@serve.zone/spark';
const sparkInstance = new Spark();
const smartDaemon = new SmartDaemon();
const startDaemon = async () => {
const sparkService = await smartDaemon.addService({
name: 'spark',
version: sparkInstance.sparkInfo.projectInfo.version,
command: 'spark asdaemon',
description: 'Spark daemon service',
workingDir: '/path/to/project',
});
await sparkService.save();
await sparkService.enable();
await sparkService.start();
};
const updateDaemon = async () => {
const sparkService = await smartDaemon.addService({
name: 'spark',
version: sparkInstance.sparkInfo.projectInfo.version,
command: 'spark asdaemon',
description: 'Spark daemon service',
workingDir: '/path/to/project',
});
await sparkService.reload();
};
startDaemon();
updateDaemon();
```
This illustrates how to initiate and update the Spark daemon using the `SmartDaemon` class from `@push.rocks/smartdaemon`.
### Configuration Management
Extensive configuration management is possible through the `SparkLocalConfig` and other configuration classes. This feature allows you to make your application's behavior adaptable based on different environments and requirements.
```typescript
// Example on setting local config
import { SparkLocalConfig } from '@serve.zone/spark';
const localConfig = new SparkLocalConfig(sparkInstance);
await localConfig.kvStore.set('someKey', 'someValue');
// Retrieving a value from local config
const someConfigValue = await localConfig.kvStore.get('someKey');
console.log(someConfigValue); // Outputs: someValue
```
### Detailed Log Management
Logging is a crucial aspect of any automation tool, and `@serve.zone/spark` offers rich logging functionality through its built-in logging library.
```typescript
import { logger, Spark } from '@serve.zone/spark';
const sparkInstance = new Spark();
logger.log('info', 'Spark instance created.');
// Using logger in various levels of severity
logger.log('debug', 'This is a debug message');
logger.log('warn', 'This is a warning message');
logger.log('error', 'This is an error message');
logger.log('ok', 'This is a success message');
```
### Real-World Scenarios
#### Automated System Update and Restart
In real-world scenarios, you might want to automate system updates and reboots to ensure your services are running the latest security patches and features.
```typescript
import { Spark } from '@serve.zone/spark';
import { SmartShell } from '@push.rocks/smartshell';
const sparkInstance = new Spark();
const shell = new SmartShell({ executor: 'bash' });
const updateAndRestart = async () => {
await shell.exec('apt-get update && apt-get upgrade -y');
console.log('System updated.');
await shell.exec('reboot');
};
sparkInstance.sparkTaskManager.taskmanager.addAndScheduleTask(
{ name: 'updateAndRestart', taskFunction: updateAndRestart },
'0 3 * * 7' // Every Sunday at 3 AM
);
```
This example demonstrates creating and scheduling a task to update and restart the server every Sunday at 3 AM using Spark's task management capabilities.
#### Integrating with Docker for Service Deployment
Spark's tight integration with Docker makes it an excellent tool for deploying containerized applications across your infrastructure.
```typescript
import { Spark } from '@serve.zone/spark';
import { DockerHost } from '@apiclient.xyz/docker';
const sparkInstance = new Spark();
const dockerHost = new DockerHost({});
const deployService = async () => {
const image = await dockerHost.pullImage('my-docker-repo/my-service:latest');
const newService = await dockerHost.createService({
name: 'my-service',
image,
ports: ['80:8080'],
environmentVariables: {
NODE_ENV: 'production',
},
});
console.log(`Service ${newService.name} deployed.`);
};
deployService();
```
This example demonstrates how to pull a Docker image and deploy it as a new service in your infrastructure using Spark's Docker integration.
### Managing Secrets
Managing secrets and sensitive data is crucial in any configuration and automation tool. Spark's integration with Docker allows you to handle secrets securely.
```typescript
import { Spark, SparkUpdateManager } from '@serve.zone/spark';
import { DockerSecret } from '@apiclient.xyz/docker';
const sparkInstance = new Spark();
const updateManager = new SparkUpdateManager(sparkInstance);
const createDockerSecret = async () => {
const secret = await DockerSecret.createSecret(updateManager.dockerHost, {
name: 'dbPassword',
contentArg: 'superSecretPassword',
});
console.log(`Secret ${secret.Spec.Name} created.`);
};
createDockerSecret();
```
This example shows how to create a Docker secret using Spark's `SparkUpdateManager` class, ensuring that sensitive information is securely stored and managed.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
@ -0,0 +1,354 @@
---
title: "@serve.zone/cloudly"
---
# @serve.zone/cloudly
A multi-cloud management tool utilizing Docker Swarmkit for orchestrating containerized apps across various cloud providers, with web, CLI, and API interfaces for configuration and integration management.
## Install
To install `@serve.zone/cloudly`, run the following command in your terminal:
```bash
npm install @serve.zone/cloudly --save
```
This will install the package and add it to your project's `package.json` dependencies.
## Usage
`@serve.zone/cloudly` is designed to provide a unified interface for managing multi-cloud environments, encapsulating complex cloud interactions with Docker Swarmkit into simpler, programmable entities. This document will guide you through various use-cases and implementation examples to give you a comprehensive understanding of the module's capabilities.
### Prerequisites
Before you begin, ensure your environment is set up correctly:
- You have Node.js installed (preferably the latest LTS version).
- Your environment is configured to use TypeScript if you're working in a TypeScript project.
### Basic Setup
#### Creating a Cloudly Instance
The foundation of working with `@serve.zone/cloudly` involves creating an instance of the `Cloudly` class. This instance serves as the gateway to managing cloud resources and orchestrates interactions within the platform. Here’s how to get started:
```typescript
import { Cloudly, ICloudlyConfig } from '@serve.zone/cloudly';
const myCloudlyConfig: ICloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
hetznerToken: 'your_hetzner_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: '8443',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
```
The configuration object `ICloudlyConfig` provides essential information needed for initializing external services, such as Cloudflare, Hetzner, and a MongoDB server. Adjust the parameters to match your actual service credentials and specifications.
### Core Features and Use Cases
#### Orchestrating Docker Swarmkit Clusters
Docker Swarmkit cluster management is a primary feature of `@serve.zone/cloudly`. Through its abstracted, programmable interface, you can operate clusters effortlessly. Here’s an example of how to create a cluster using `Cloudly`:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
interface ICluster {
name: string;
id: string;
cloudlyUrl: string;
servers: string[];
sshKeys: string[];
}
async function manageClusters() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const newCluster: ICluster = {
name: 'example_cluster',
id: 'example_cluster_id',
cloudlyUrl: 'https://example.com:8443',
servers: [],
sshKeys: [],
};
// Store the newly created cluster with Cloudly
const storedCluster = await myCloudlyInstance.clusterManager.storeCluster(newCluster);
console.log('Cluster stored:', storedCluster);
}
manageClusters();
```
In this scenario, a cluster called `example_cluster` is initialized using the `Cloudly` instance. This method represents a central mechanism to efficiently handle cluster entities and associated metadata.
#### Integrating With Cloudflare for DNS Management
`@serve.zone/cloudly` provides built-in capabilities for managing DNS records through integration with Cloudflare. Using the `CloudflareConnector`, you can programmatically create, manage, and delete DNS entries:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function configureCloudflareDNS() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const cfConnector = myCloudlyInstance.cloudflareConnector.cloudflare;
const dnsRecord = await cfConnector.createDNSRecord('example.com', 'sub.example.com', 'A', '127.0.0.1');
console.log('DNS Record:', dnsRecord);
}
configureCloudflareDNS();
```
Here, you create an A record for the subdomain `sub.example.com` pointing to `127.0.0.1`. All communication with Cloudflare is handled directly through the interface without manual intervention.
#### Dynamic Interaction with DigitalOcean
DigitalOcean resource management, including droplet creation, is simplified in Cloudly. By extending the API to encapsulate calls to external providers, Cloudly provides a seamless experience:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function createDigitalOceanDroplets() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const doConnector = myCloudlyInstance.digitaloceanConnector;
const droplet = await doConnector.createDroplet('example-droplet', 'nyc3', 's-1vcpu-1gb', 'ubuntu-20-04-x64');
console.log('Droplet created:', droplet);
}
createDigitalOceanDroplets();
```
In this script, a droplet named `example-droplet` is created within the `nyc3` region using the `ubuntu-20-04-x64` image. The module abstracts complexities by directly interfacing with DigitalOcean.
### Advanced Use Cases
#### Implementing Web Management Interface
`@serve.zone/cloudly` facilitates dashboard management with advanced Web Components built with `@design.estate`. This section of the library allows the creation of dynamic, interactive panels for real-time resource management in a modern browser interface.
```typescript
import { html, render } from '@design.estate/dees-element';

const renderDashboard = () => {
  return html`
    <cloudly-dashboard>
      <dees-simple-appdash>
        <!-- Define sections and elements -->
        <cloudly-view-clusters></cloudly-view-clusters>
        <cloudly-view-dns></cloudly-view-dns>
        <cloudly-view-images></cloudly-view-images>
        <!-- Other custom views -->
      </dees-simple-appdash>
    </cloudly-dashboard>
  `;
};

// html`...` returns a template result rather than a DOM node, so render it into the body
// (render is assumed here to be re-exported from lit by @design.estate/dees-element).
render(renderDashboard(), document.body);
```
Utilizing the custom web components designed specifically for Cloudly, dashboards are adaptable, interactive, and maintainable. These elements allow you to structure a complete cloud management center without needing to delve into detailed UI engineering.
#### Comprehensive Log Management
With Cloudly’s Log Management capabilities, you can track and analyze system logs for better insights into your cloud ecosystem’s behavior:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function initiateLogManagement() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const logs = await myCloudlyInstance.logManager.fetchLogs();
console.log('Logs:', logs);
}
initiateLogManagement();
```
Cloudly provides the tools needed to collect and process logs within your cloud infrastructure. Logs are an essential part of system validation, troubleshooting, monitoring, and auditing.
#### Secret Management and Bundles
Managing secrets securely and efficiently is critical for cloud operations. Cloudly allows you to create and manage secret groups and bundles that can be used across multiple applications and environments:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
async function createSecrets() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const newSecretGroup = await myCloudlyInstance.secretManager.createSecretGroup({
name: 'example_secret_group',
secrets: [
{ key: 'SECRET_KEY', value: 's3cr3t' },
],
});
const newSecretBundle = await myCloudlyInstance.secretManager.createSecretBundle({
name: 'example_bundle',
secretGroups: [newSecretGroup],
});
console.log('Created Secret Group and Bundle:', newSecretGroup, newSecretBundle);
}
createSecrets();
```
Secrets, such as API keys and sensitive configuration data, are managed efficiently using secret groups and bundles. This structured approach to secret management enhances both security and accessibility.
### Task Scheduling and Management
With task buffers, you can schedule and manage background tasks integral to cloud operations:
```typescript
import { Cloudly } from '@serve.zone/cloudly';
import { TaskBuffer } from '@push.rocks/taskbuffer';
async function scheduleTasks() {
const myCloudlyConfig = {
cfToken: 'your_cloudflare_api_token',
environment: 'development',
letsEncryptEmail: 'lets_encrypt_email@example.com',
publicUrl: 'example.com',
publicPort: 8443,
hetznerToken: 'your_hetzner_api_token',
mongoDescriptor: {
mongoDbUrl: 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/myFirstDatabase',
mongoDbName: 'myDatabase',
mongoDbUser: 'myUser',
mongoDbPass: 'myPassword',
},
};
const myCloudlyInstance = new Cloudly(myCloudlyConfig);
await myCloudlyInstance.start();
const taskManager = new TaskBuffer();
taskManager.scheduleEvery('minute', async () => {
console.log('Running scheduled task...');
// Task logic
});
console.log('Tasks scheduled.');
}
scheduleTasks();
```
The example demonstrates setting up periodic task execution using task buffers as part of Cloudly's task management. Whether it's maintenance routines, data updates, or resource checks, tasks can be managed effectively.
This comprehensive overview of `@serve.zone/cloudly` is designed to help you leverage its full capabilities in managing multi-cloud environments. Each example is meant to serve as a starting point, and you are encouraged to explore further by consulting the relevant sections in the documentation, engaging with community discussions, or experimenting in your own environment.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.
@ -0,0 +1,347 @@
---
title: "@serve.zone/coreflow"
---
# @serve.zone/coreflow
A comprehensive solution for managing Docker and scaling applications across servers, handling tasks from service provisioning to network traffic management.
## Install
To install @serve.zone/coreflow, you can use npm with the following command:
```sh
npm install @serve.zone/coreflow --save
```
Given that this is a private package, make sure you have access to the required npm registry and that you are authenticated properly.
## Usage
Coreflow is designed as an advanced tool for managing Docker-based applications and services, enabling efficient scaling across servers, and handling multiple aspects of service provisioning and network traffic management. Below are examples and explanations to illustrate its capabilities and how you can leverage Coreflow in your infrastructure. Note that these examples are based on TypeScript and use ESM syntax.
### Prerequisites
Before you start, ensure you have Docker and Docker Swarm configured in your environment as Coreflow operates on top of these technologies. Additionally, verify that your environment variables are properly set up for accessing Coreflow's functionalities.
### Setting Up Coreflow
To get started, you need to import and initialize Coreflow within your application. Here's an example of how to do this in a TypeScript module:
```typescript
import { Coreflow } from '@serve.zone/coreflow';
// Initialize Coreflow
const coreflowInstance = new Coreflow();
// Start Coreflow
await coreflowInstance.start();
// Example: Add your logic here for handling Docker events
coreflowInstance.handleDockerEvents().then(() => {
console.log('Docker events are being handled.');
});
// Stop Coreflow when done
await coreflowInstance.stop();
```
In the above example:
- The Coreflow instance is initialized.
- Coreflow is started, which internally initializes various managers and connectors.
- The method `handleDockerEvents` is used to handle Docker events.
- Finally, Coreflow is stopped gracefully.
### Configuring Service Connections
Coreflow manages applications and services, often requiring direct interactions with other services like a database, message broker, or external API. Coreflow simplifies these connections through its configuration and service discovery layers.
```typescript
// Assuming coreflowInstance is already started as per previous examples
const serviceConnection = coreflowInstance.createServiceConnection({
serviceName: 'myDatabaseService',
servicePort: 3306,
});
serviceConnection.connect().then(() => {
console.log('Successfully connected to the service');
});
```
### Scaling Your Application
Coreflow excels in scaling applications across multiple servers. This involves not just replicating services, but also ensuring they are properly networked, balanced, and monitored.
```typescript
const scalingPolicy = {
serviceName: 'apiService',
replicaCount: 5, // Target number of replicas
maxReplicaCount: 10, // Maximum number of replicas
minReplicaCount: 2, // Minimum number of replicas
};
coreflowInstance.applyScalingPolicy(scalingPolicy).then(() => {
console.log('Scaling policy applied successfully.');
});
```
In the above example:
- A scaling policy is defined with target, maximum, and minimum replica counts for the `apiService`.
- The `applyScalingPolicy` method of the Coreflow instance is used to apply this scaling policy.
### Managing Network Traffic
One of Coreflow's key features is its ability to manage network traffic, ensuring that it is efficiently distributed among various services based on load, priority, and other custom rules.
```typescript
import { TrafficRule } from '@serve.zone/coreflow';
const rule: TrafficRule = {
serviceName: 'webService',
externalPort: 80,
internalPort: 3000,
protocol: 'http',
};
coreflowInstance.applyTrafficRule(rule).then(() => {
console.log('Traffic rule applied successfully.');
});
```
In the above example:
- A traffic rule is defined for the `webService`, redirecting external traffic from port 80 to the service's internal port 3000.
- The `applyTrafficRule` method is used to enforce this rule.
### Continuous Deployment
Coreflow integrates continuous integration and deployment processes, allowing seamless updates and rollbacks for your services:
```typescript
const deploymentConfig = {
serviceName: 'userAuthService',
image: 'myregistry.com/userauthservice:latest',
updatePolicy: 'rolling', // or "recreate"
};
coreflowInstance.deployService(deploymentConfig).then(() => {
console.log('Service deployed successfully.');
});
```
In the above example:
- A deployment configuration is created for the `userAuthService` using the latest image from the specified registry.
- The `deployService` method is then used to deploy the service using the specified update policy (e.g., rolling updates or recreating the service).
### Observability and Monitoring
To keep track of your applications' health and performance, Coreflow provides tools for logging, monitoring, and alerting.
```typescript
coreflowInstance.monitorService('webService').on('serviceHealthUpdate', (healthStatus) => {
console.log(`Received health update for webService: ${healthStatus}`);
});
```
In the above example:
- The `monitorService` method is used to monitor the health status of the `webService`.
- When a health update event is received, it is logged to the console.
### Detailed Example: Setting Up and Managing Coreflow
Here is a detailed example that covers various features, from setup to scaling and traffic management.
#### Step 1: Initialize Coreflow
```typescript
import { Coreflow } from '@serve.zone/coreflow';
const coreflowInstance = new Coreflow();
async function initializeCoreflow() {
  await coreflowInstance.start();
  console.log('Coreflow initialized.');
  // Continue with the management steps shown in the following sections
}
initializeCoreflow().catch((error) => {
console.error('Error initializing Coreflow:', error);
});
```
#### Step 2: Handling Docker Events
```typescript
coreflowInstance.handleDockerEvents().then(() => {
console.log('Docker events are being handled.');
});
```
#### Step 3: Configuring and Connecting to a Service
```typescript
const serviceConnection = coreflowInstance.createServiceConnection({
serviceName: 'databaseService',
servicePort: 5432,
});
serviceConnection.connect().then(() => {
console.log('Successfully connected to the database service.');
});
```
#### Step 4: Applying a Scaling Policy
```typescript
const scalingPolicy = {
serviceName: 'microserviceA',
replicaCount: 3, // Starting with 3 replicas
maxReplicaCount: 10, // Allowing up to 10 replicas
minReplicaCount: 2, // Ensuring at least 2 replicas
};
coreflowInstance.applyScalingPolicy(scalingPolicy).then(() => {
console.log('Scaling policy applied for microserviceA');
});
```
#### Step 5: Managing Network Traffic
```typescript
import { TrafficRule } from '@serve.zone/coreflow';
const trafficRules: TrafficRule[] = [
{
serviceName: 'frontendService',
externalPort: 80,
internalPort: 3000,
protocol: 'http',
},
{
serviceName: 'apiService',
externalPort: 443,
internalPort: 4000,
protocol: 'https',
},
];
Promise.all(trafficRules.map((rule) => coreflowInstance.applyTrafficRule(rule))).then(() => {
console.log('Traffic rules applied.');
});
```
#### Step 6: Deploying a Service
```typescript
const deploymentConfig = {
serviceName: 'authService',
image: 'myregistry.com/authservice:latest',
updatePolicy: 'rolling', // Performing rolling updates
};
coreflowInstance.deployService(deploymentConfig).then(() => {
console.log('AuthService deployed successfully.');
});
```
#### Step 7: Monitoring a Service
```typescript
coreflowInstance.monitorService('frontendService').on('serviceHealthUpdate', (healthStatus) => {
console.log(`Health update for frontendService: ${healthStatus}`);
});
```
### Advanced Usage: Task Scheduling and Traffic Configuration
In more complex scenarios, you might want to leverage Coreflow's ability to schedule tasks and manage traffic configurations.
#### Scheduling Tasks
Coreflow supports scheduling updates and other tasks using the `taskBuffer` API.
```typescript
import { Task } from '@push.rocks/taskbuffer';
const checkinTask = new Task({
name: 'checkin',
buffered: true,
taskFunction: async () => {
console.log('Running checkin task...');
},
});
const taskManager = coreflowInstance.taskManager;
taskManager.addAndScheduleTask(checkinTask, '0 * * * * *'); // Scheduling task to run every minute
taskManager.start().then(() => {
console.log('Task manager started.');
});
```
#### Managing Traffic Routing
Coreflow can manage complex traffic routing scenarios, such as configuring reverse proxies for different services.
```typescript
import { CoretrafficConnector } from '@serve.zone/coreflow';
// Assume coreflowInstance is already started
const coretrafficConnector = new CoretrafficConnector(coreflowInstance);
const reverseProxyConfigs = [
{
hostName: 'example.com',
destinationIp: '192.168.1.100',
destinationPort: '3000',
privateKey: '<your-private-key>',
publicKey: '<your-public-key>',
},
{
hostName: 'api.example.com',
destinationIp: '192.168.1.101',
destinationPort: '4000',
privateKey: '<your-private-key>',
publicKey: '<your-public-key>',
},
];
coretrafficConnector.setReverseConfigs(reverseProxyConfigs).then(() => {
console.log('Reverse proxy configurations applied.');
});
```
### Integrating with Cloudly
Coreflow is designed to integrate seamlessly with Cloudly, a configuration management and orchestration tool.
#### Starting the Cloudly Connector
```typescript
const cloudlyConnector = coreflowInstance.cloudlyConnector;
cloudlyConnector.start().then(() => {
console.log('Cloudly connector started.');
});
```
#### Retrieving and Applying Configurations from Cloudly
```typescript
cloudlyConnector.getConfigFromCloudly().then((config) => {
console.log('Received configuration from Cloudly:', config);
coreflowInstance.clusterManager.provisionWorkloadServices(config).then(() => {
console.log('Workload services provisioned based on Cloudly config.');
});
});
```
### Conclusion
Coreflow is a powerful and flexible tool for managing Docker-based applications, scaling services, configuring network traffic, handling continuous deployments, and ensuring observability of your infrastructure. The examples provided aim to give a comprehensive understanding of how to use Coreflow in various scenarios, ensuring it meets your DevOps and CI/CD needs.
By leveraging Coreflow's rich feature set, you can optimize your infrastructure for high availability, scalability, and efficient operation across multiple servers and environments.
@ -0,0 +1,179 @@
---
title: "@serve.zone/corerender"
---
# Corerender
A rendering service for serve.zone that preserves styles for web components.
## Install
To install Corerender in your project, you can use npm. Make sure you have Node.js installed and then run the following command in your terminal:
```shell
npm install corerender
```
This will add `corerender` as a dependency to your project, allowing you to use its rendering services to preserve styles for web components efficiently.
## Usage
Welcome to the comprehensive usage guide for `corerender`, a powerful rendering service designed to integrate seamlessly within your web applications, ensuring that styles for web components are preserved properly. The guide is structured to provide a thorough understanding of `corerender`'s capabilities, demonstrating its flexibility and efficiency through realistic scenarios.
### Setting Up Your Environment
First things first, let’s get `corerender` up and running in your project. Ensure you've installed the package as detailed in the [Install](#install) section. Since `corerender` is a TypeScript-friendly library, it is recommended to use TypeScript for development to leverage the full power of type safety and IntelliSense.
### Basic Render Service Setup
```typescript
import { Rendertron } from 'corerender';
const rendertronInstance = new Rendertron();
(async () => {
console.log('Starting rendertron...');
await rendertronInstance.start();
console.log('Rendertron started successfully!');
})();
```
The code initializes an instance of `Rendertron` and starts the service asynchronously. `Rendertron` is the core class responsible for managing the rendering processes, including task scheduling and storing rendering results persistently in a database.
### Understanding the Rendertron Architecture
The architecture of `Rendertron` is designed to support web component rendering through several integral components:
1. **Prerender Manager**: Manages the creation and retrieval of prerender results.
2. **Task Manager**: Handles scheduling tasks for prerendering operations and cleanup routines.
3. **Utility Service Server**: Provides the server interface that accepts `/render` requests and serves prerendered content efficiently.
### Using the Prerender Manager
The `PrerenderManager` is responsible for generating and caching the rendering results of webpages. Here’s how you can use the `PrerenderManager` to prerender a webpage:
```typescript
import { PrerenderManager } from 'corerender/dist_ts/rendertron.classes.prerendermanager';
(async () => {
const prerenderManager = new PrerenderManager();
await prerenderManager.start();
const urlToPrerender = 'https://example.com';
const prerenderResult = await prerenderManager.getPrerenderResultForUrl(urlToPrerender);
console.log(`Prerendered content for ${urlToPrerender}:`);
console.log(prerenderResult);
await prerenderManager.stop();
})();
```
The above script demonstrates accessing a webpage's prerendered content. It initializes the `PrerenderManager`, specifies a URL, and requests the rendering result, which is stored or retrieved from the database.
### Scheduling Prerendering Tasks
The `TaskManager` class allows for efficiently scheduling tasks, such as regular prerendering of local domains and cleanup of outdated render results:
```typescript
import { TaskManager } from 'corerender/dist_ts/rendertron.taskmanager';
const taskManager = new TaskManager(rendertronInstance);
taskManager.start();
// Example: Manual trigger of a specific task
taskManager.triggerTaskByName('prerenderLocalDomains');
taskManager.stop();
```
`TaskManager` works closely with the `Rendertron` service to ensure tasks are executed as per defined schedules (e.g., every 30 minutes or daily). It allows manual triggering for immediate execution outside the schedule.
### Managing Render Results
The pre-rendered results are stored using `smartdata`’s `SmartDataDbDoc`. You may need advanced control over whether these are retrieved, created anew, or updated:
```typescript
import { PrerenderResult } from 'corerender/dist_ts/rendertron.classes.prerenderresult';
(async () => {
  const url = 'https://example.com';
  // prerenderManager is the PrerenderManager instance created in the previous example
  let prerenderResult = await PrerenderResult.getPrerenderResultForUrl(prerenderManager, url);
  // prerenderResultNeedsUpdate is a placeholder for your own staleness check
  if (prerenderResultNeedsUpdate(prerenderResult)) {
    prerenderResult = await PrerenderResult.createPrerenderResultForUrl(prerenderManager, url);
  }
  console.log(`Final prerendered content for ${url}:`, prerenderResult.renderResultString);
})();
```
### Integrating with External Systems
`Corerender` can be integrated into broader systems that programmatically manage URLs and rendering frequencies. For instance, parsing and prerendering sitemaps:
```typescript
class IntegrationExample {
private prerenderManager: PrerenderManager;
constructor() {
this.prerenderManager = new PrerenderManager();
}
async prerenderFromSitemap(sitemapUrl: string) {
await this.prerenderManager.prerenderSitemap(sitemapUrl);
console.log('Finished prerendering sitemap:', sitemapUrl);
}
}
(async () => {
const integrationExample = new IntegrationExample();
await integrationExample.prerenderFromSitemap('https://example.com/sitemap.xml');
})();
```
### Server-Side Rendering Directly with SmartSSR
`Rendertron` uses the highly efficient `smartssr` for SSR requests. You can easily direct incoming server requests to utilize this rendering pipeline:
```typescript
import { typedserver } from 'corerender/dist_ts/rendertron.plugins';
const serviceServerInstance = new typedserver.utilityservers.UtilityServiceServer({
serviceDomain: 'rendertron.example.com',
serviceName: 'RendertronService',
serviceVersion: '2.0.61', // Replace with dynamic version retrieval if needed
addCustomRoutes: async (serverArg) => {
serverArg.addRoute(
'/render/*',
new typedserver.servertools.Handler('GET', async (req, res) => {
const requestedUrl = req.url.replace('/render/', '');
const prerenderedContent = await prerenderManager.getPrerenderResultForUrl(requestedUrl);
res.write(prerenderedContent);
res.end();
})
);
},
});
(async () => {
await serviceServerInstance.start();
console.log('SSR Server Started');
})();
```
### Customizing the Logger
`Rendertron` employs the `smartlog` package for logging activities across the service. To customize logging, instantiate a logger with custom configurations:
```typescript
import { smartlog } from 'corerender/dist_ts/rendertron.plugins';
const customLogger = smartlog.Smartlog.create({ /* custom options */ });
customLogger.log('info', 'Custom logger integrated successfully.');
```
### Closing Remarks
With these examples, you should have a robust understanding of how to implement `corerender` in your web application. It’s a powerful service that takes care of rendering optimizations, allowing developers to focus on building components and architecture, with clear workflows to handle tasks and results efficiently.
@ -0,0 +1,227 @@
---
title: "@serve.zone/coretraffic"
---
# CoreTraffic
Route traffic within your Docker setup. TypeScript ready.
## Install
To install `coretraffic`, you should have Node.js already set up on your system. Assuming Node.js and npm are ready, install the package via the npm registry with the following command:
```bash
npm install coretraffic
```
To make the most out of `coretraffic`, ensure TypeScript is set up in your development environment, as this module is TypeScript ready and provides enhanced IntelliSense.
## Usage
`coretraffic` is designed to manage and route traffic within a Docker setup, offering robust solutions to your traffic management needs with an emphasis on efficiency and reliability. Utilizing TypeScript for static typing, you get enhanced code completion and fewer runtime errors. The module is set up to handle the intricacies of proxy configuration and routing, providing a powerful foundation for any Docker-based traffic management application.
Below, we'll delve into the capabilities and features of `coretraffic`, complete with comprehensive examples to get you started.
### Initializing CoreTraffic
First, you'll want to create an instance of the `CoreTraffic` class. This serves as the entry point to accessing the module's capabilities.
```typescript
import { CoreTraffic } from 'coretraffic';
// Create an instance of CoreTraffic
const coreTrafficInstance = new CoreTraffic();
```
This initializes the `coreTrafficInstance` with default properties, ready to configure routing settings and handle proxy configurations.
### Starting and Stopping CoreTraffic
Controlling the lifecycle of your `CoreTraffic` instance is key to effective traffic management. You can start and stop the instance with straightforward method calls.
#### Starting CoreTraffic
To begin managing traffic, start the instance. This sets up the internal network proxy and SSL redirection services, making them ready to handle incoming requests.
```typescript
async function startCoreTraffic() {
await coreTrafficInstance.start();
console.log('CoreTraffic is now running.');
}
startCoreTraffic();
```
This code initializes all internal components, including `NetworkProxy` and `SslRedirect`, beginning to route traffic as configured.
#### Stopping CoreTraffic
When you no longer need to route traffic, shutting down the instance cleanly is important. `CoreTraffic` provides a `stop` method for this purpose.
```typescript
async function stopCoreTraffic() {
await coreTrafficInstance.stop();
console.log('CoreTraffic has stopped.');
}
stopCoreTraffic();
```
This ensures all background tasks are halted, and network configurations are cleaned up.
### Configuring Network Proxy
A core feature of `CoreTraffic` is its ability to configure network proxies dynamically. At its heart is the `NetworkProxy` class, a powerful tool for managing routing configurations.
#### Adding Default Headers
You may wish to inject unique headers across all routed requests, which is possible with the `addDefaultHeaders` method. This is useful for tagging requests or managing CORS.
```typescript
coreTrafficInstance.networkProxy.addDefaultHeaders({
'x-powered-by': 'coretraffic',
'custom-header': 'custom-value'
});
```
This injects custom headers into all outgoing responses managed by `coretraffic`, thereby allowing customized interaction with requests as needed.
#### Proxy Configuration Updates
Dynamic updates to proxy configurations can be facilitated via tasks managed by `CoretrafficTaskManager`. This feature allows adjustment of routing rules without interrupting service.
```typescript
import { IReverseProxyConfig } from '@tsclass/network';
const configureRouting = async () => {
const reverseProxyConfig: IReverseProxyConfig[] = [{
// Example configuration, adjust as needed
host: 'example.com',
target: 'http://internal-service:3000',
}];
await coreTrafficInstance.taskmanager.setupRoutingTask.trigger(reverseProxyConfig);
console.log('Updated routing configurations');
};
configureRouting();
```
In this example, a reverse proxy configuration is defined, specifying that requests to `example.com` should be directed to an internal service.
### SSL Redirection
`CoreTraffic` supports SSL redirection, an essential feature for secure communications. The `SslRedirect` component listens on one port to redirect traffic to the secure version on another port.
```typescript
// SslRedirect is initialized on port 7999 by default
console.log('SSL Redirection is active!');
```
Out-of-the-box, this listens on the configurable port and safely forwards insecure HTTP traffic to its HTTPS counterpart.
### Coreflow Connector
A unique aspect of `coretraffic` is its integration capability with `Coreflow`, allowing communication between different network nodes. The `CoreflowConnector` facilitates receiving configuration updates via a socket connection.
#### Setting up the CoreflowConnector
```typescript
const coreflowConnector = coreTrafficInstance.coreflowConnector;
async function connectCoreflow() {
await coreflowConnector.start();
console.log('Coreflow connector activated.');
}
connectCoreflow();
```
This method enables a persistent connection to a Coreflow server, allowing real-time configuration updates and management of routing policies.
#### Stopping the CoreflowConnector
To disconnect cleanly:
```typescript
async function disconnectCoreflow() {
await coreflowConnector.stop();
console.log('Coreflow connector terminated.');
}
disconnectCoreflow();
```
This halts the connection, ensuring no dangling resources remain when shutting down your application.
### Task Management
The `CoretrafficTaskManager` handles complex, buffered tasks. Flexibility and power are at your fingertips with this system, ideal for timed or queued execution needs.
#### Managing Tasks
Here is how you would initiate the task manager:
```typescript
const taskManager = coreTrafficInstance.taskmanager;
// Start tasks
taskManager.start()
.then(() => console.log('Task manager is running'))
.catch(err => console.error('Failed to start task manager', err));
```
Stop tasks once processing is no longer required:
```typescript
taskManager.stop()
.then(() => console.log('Task manager stopped'))
.catch(err => console.error('Failed to stop task manager', err));
```
### Logging and Debugging
Effective logging is provided using `Smartlog`, designed to track detailed application insights and report on activity and actions within `coretraffic`.
#### Configuring Log Levels
`coretraffic` supports log levels which can be adjusted as per your requirements:
```typescript
import { logger } from './coretraffic.logging.ts';
logger.log('info', 'System initialized');
logger.log('debug', 'Detailed debugging process');
logger.log('warn', 'Potential issue detected');
logger.log('error', 'An error has occurred');
```
These log entries help monitor logic flow and catch issues during development or deployment in production environments.
### Test Setup
For those interested in testing, `coretraffic` uses `tapbundle` and `tstest` to ensure reliability and correctness. A sample test module is provided to demonstrate initialization and lifecycle actions.
Here’s an example of a non-CI test scenario:
```typescript
import * as coretraffic from '../ts/index.js';
import { tap, expect } from '@push.rocks/tapbundle';
let testCoreTraffic;
tap.test('should create and handle coretraffic instances', async () => {
testCoreTraffic = new coretraffic.CoreTraffic();
expect(testCoreTraffic).toBeInstanceOf(coretraffic.CoreTraffic);
await testCoreTraffic.start();
await new Promise(resolve => setTimeout(resolve, 10000)); // Keep alive for demonstration
await testCoreTraffic.stop();
});
tap.start();
```
This test suite validates essential functionality within development iterations, ensuring `coretraffic` performs as expected.
`coretraffic` offers a vast landscape of operations within Docker environments, handling traffic with modularity and efficiency. Whether starting simple routing tasks or integrating with complex systems like Coreflow, this module provides robust support where needed most. Embrace your traffic management challenges with the dedicated features of `coretraffic`.
@ -0,0 +1,169 @@
---
title: "@serve.zone/nullresolve"
---
# @losslessone_private/nullresolve
nullresolve is a robust service designed to manage and handle requests effectively within the serve.zone architecture. It ensures that requests which would otherwise remain unserved receive appropriate handling and feedback.
## Install
To install the `@losslessone_private/nullresolve` package, it is essential to first set up a proper environment for handling private npm packages due to its private nature. This can be achieved through npm or yarn, which are both suitable JavaScript package managers.
### Step-by-Step Installation:
1. **Ensure you are logged into npm** with sufficient permissions to access private packages:
```bash
npm login
```
Authentication is necessary for accessing private modules like `@losslessone_private/nullresolve`.
2. **Install Using npm:**
```bash
npm install @losslessone_private/nullresolve
```
If you are using a specific registry for your company or project, make sure to specify it in your npm configuration.
3. **Install Using Yarn:**
```bash
yarn add @losslessone_private/nullresolve
```
After these steps, the module should be ready for use in your JavaScript or TypeScript project.
## Usage
The purpose of `nullresolve` is pivotal within a network ecosystem, particularly one that interfaces directly with user requests and external resources. Below, a comprehensive guide exists to demonstrate effective usage of this module within applications.
### Quick Start Example
Initialization and launching of a nullresolve service can be done succinctly:
```typescript
// Import the NullResolve class from the package
import { NullResolve } from '@losslessone_private/nullresolve';
// Create an instance of NullResolve
const myNullResolveService = new NullResolve();
// Start the service
myNullResolveService.start().then(() => {
console.log('NullResolve service is running!');
}).catch((error) => {
console.error('Error starting NullResolve service:', error);
});
// Stop the service gracefully
process.on('SIGINT', async () => {
await myNullResolveService.stop();
console.log('NullResolve service stopped.');
process.exit(0);
});
```
### Detailed Guide: Handling Requests and Custom Routes
`nullresolve` can swiftly handle complex request scenarios utilizing its robust framework. Here's a detailed example of setting up custom handler routes that can respond with various HTTP statuses or custom messages based on the request:
```typescript
import { NullResolve } from '@losslessone_private/nullresolve';
// Initialize the service
const myService = new NullResolve();
// Start the service with custom routes
myService.serviceServer.addCustomRoutes(async (server) => {
server.addRoute(
'/error/:code',
      // 'plugins' is assumed here to be the typedserver plugin namespace re-exported by the package
      new plugins.typedserver.servertools.Handler('GET', async (req, res) => {
let message;
switch (req.params.code) {
case '404':
message = 'This resource was not found.';
break;
case '500':
message = 'Internal Server Error. Please try later.';
break;
default:
message = 'An unexpected error occurred.';
}
res.status(200).send(`<html><body><h1>${message}</h1></body></html>`);
})
);
});
// Activating the service
myService.start().then(() => {
console.log('Custom route service started.');
}).catch((err) => {
console.error('Error while starting the service:', err);
});
```
### Integrating Logging and Monitoring
Given the mission-critical nature of services like `nullresolve`, reliable logging is indispensable to monitor activities and diagnose issues swiftly. This is integrated by default using the `smartlog` module for robust logging capabilities:
```typescript
import { logger } from './nullresolve.logging.js';
// Utilize the logger for tracking and problem-solving
logger.info('Service Log: nullresolve service initiated');
logger.warn('Warning Log: Potential issue detected');
logger.error('Error Log: An error occurred in service operation');
```
### Advanced Configuration
For systems requiring specialized setups, nullresolve offers configurability through both code and external configuration objects:
```typescript
// Customize through code
const config = {
domain: 'customdomain.com',
port: 8080,
routes: [
{
method: 'GET',
path: '/status/check',
handler: async (req, res) => {
res.status(200).send('Service is operational.');
}
}
]
};
myService.configure(config);
// Running the service with a new configuration
myService.start();
```
### Graceful Shutdown and Resource Management
Services such as the one provided by `nullresolve` must incorporate mechanisms to stop gracefully, allowing them to release resources and finish current tasks before complete termination:
```typescript
process.on('SIGTERM', async () => {
logger.info('Service is stopping gracefully.');
await myService.stop();
logger.info('Service has been successfully stopped.');
process.exit(0);
});
```
### Custom Error Handling Strategies
It is often beneficial to ensure that the service reacts gracefully during unexpected shutdowns or errors. Here's an example of implementing a strategy for error handling:
```typescript
const handleCriticalError = (err: Error) => {
logger.error(`Critical Error: ${err.message}`);
process.exit(1);
};
process.on('unhandledRejection', handleCriticalError);
process.on('uncaughtException', handleCriticalError);
```
By deploying `nullresolve` strategically within your infrastructure, it can transform how unhandled requests and errors are addressed, providing comprehensive protection and valuable insights into system status and health. This guide should serve to ensure effective deployment, utilization, and management of this sophisticated null service.
@ -0,0 +1,34 @@
---
title: "@serve.zone/platformclient"
---
# @serve.zone/platformclient
a module that makes it really easy to use the serve.zone platform inside your app
## Availability and Links
* [npmjs.org (npm package)](https://www.npmjs.com/package/@serve.zone/platformclient)
* [gitlab.com (source)](https://gitlab.com/serve.zone/platformclient)
* [github.com (source mirror)](https://github.com/serve.zone/platformclient)
* [docs (typedoc)](https://serve.zone.gitlab.io/platformclient/)
## Status for master
Status Category | Status Badge
-- | --
GitLab Pipelines | [![pipeline status](https://gitlab.com/serve.zone/platformclient/badges/master/pipeline.svg)](https://lossless.cloud)
GitLab Pipeline Test Coverage | [![coverage report](https://gitlab.com/serve.zone/platformclient/badges/master/coverage.svg)](https://lossless.cloud)
npm | [![npm downloads per month](https://badgen.net/npm/dy/@serve.zone/platformclient)](https://lossless.cloud)
Snyk | [![Known Vulnerabilities](https://badgen.net/snyk/serve.zone/platformclient)](https://lossless.cloud)
TypeScript Support | [![TypeScript](https://badgen.net/badge/TypeScript/>=%203.x/blue?icon=typescript)](https://lossless.cloud)
node Support | [![node](https://img.shields.io/badge/node->=%2010.x.x-blue.svg)](https://nodejs.org/dist/latest-v10.x/docs/api/)
Code Style | [![Code Style](https://badgen.net/badge/style/prettier/purple)](https://lossless.cloud)
PackagePhobia (total standalone install weight) | [![PackagePhobia](https://badgen.net/packagephobia/install/@serve.zone/platformclient)](https://lossless.cloud)
PackagePhobia (package size on registry) | [![PackagePhobia](https://badgen.net/packagephobia/publish/@serve.zone/platformclient)](https://lossless.cloud)
BundlePhobia (total size when bundled) | [![BundlePhobia](https://badgen.net/bundlephobia/minzip/@serve.zone/platformclient)](https://lossless.cloud)
## Usage
Use TypeScript for best-in-class IntelliSense.
For further information read the linked docs at the top of this readme.
## Legal
> MIT licensed | **&copy;** [Task Venture Capital GmbH](https://task.vc) | By using this npm module you agree to our [privacy policy](https://lossless.gmbH/privacy)

View File

@ -0,0 +1,129 @@
---
title: "@serve.zone/platformservice"
---
# @serve.zone/platformservice
contains the platformservice container with mail, sms, letter, ai services.
## Install
To install `@serve.zone/platformservice`, run the following command:
```sh
npm install @serve.zone/platformservice --save
```
Make sure you have Node.js and npm installed on your system to use this package.
## Usage
This document provides extensive usage scenarios for the `@serve.zone/platformservice`, a comprehensive ESM module written in TypeScript offering a wide range of services such as mail, SMS, letter, and artificial intelligence (AI) functionalities. This service is an exemplar of a modular design, allowing users to leverage various communication methods and AI services efficiently. Key features provided by this platform include sending and receiving emails, managing SMS services, letter dispatching, and utilizing AI for diverse purposes.
### Prerequisites
Before diving into the examples, ensure you have the platform service installed and configured correctly. The package leverages environment variables for configuration, so you must set up the necessary variables, including service endpoints, authentication tokens, and database connections.
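As a minimal sketch, the required configuration could be supplied through environment variables before starting the service. The variable names below are purely illustrative (mirroring the placeholder values used in the examples further down) and are not necessarily the keys the module actually reads, so check your deployment's configuration source for the real ones:
```sh
# Illustrative only – substitute the variable names your deployment actually reads
export MONGODB_URL="mongodb://user:password@localhost:27017/platformservice"
export MAILGUN_API_KEY="key-xxxxxxxxxxxxxxxxxxxxxxxx"
export SMS_API_TOKEN="token-xxxxxxxxxxxxxxxxxxxxxxxx"
export LETTERXPRESS_USER="your-letterxpress-user"
export LETTERXPRESS_TOKEN="token-xxxxxxxxxxxxxxxxxxxxxxxx"
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxx"
```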
### Initialization
First, initialize the platform service, ensuring all dependencies are correctly loaded and configured:
```ts
import { SzPlatformService } from '@serve.zone/platformservice';
async function initService() {
const platformService = new SzPlatformService();
await platformService.start();
console.log('Platform service initialized successfully.');
}
initService();
```
### Sending Emails
One of the primary services offered is email management. Here's how to send an email using the platform service:
```ts
import { EmailService, IEmailOptions } from '@serve.zone/platformservice';
async function sendEmail() {
const emailOptions: IEmailOptions = {
from: 'no-reply@example.com',
to: 'recipient@example.com',
subject: 'Test Email',
body: '<h1>This is a test email</h1>',
};
const emailService = new EmailService('MAILGUN_API_KEY'); // Replace with your real API key
await emailService.sendEmail(emailOptions);
console.log('Email sent successfully.');
}
sendEmail();
```
### Managing SMS
Similar to email, the platform also facilitates SMS sending:
```ts
import { SmsService, ISmsConstructorOptions } from '@serve.zone/platformservice';
async function sendSms() {
const smsOptions: ISmsConstructorOptions = {
apiGatewayApiToken: 'SMS_API_TOKEN', // Replace with your real token
};
const smsService = new SmsService(smsOptions);
await smsService.sendSms(1234567890, 'SENDER_NAME', 'This is a test SMS.');
console.log('SMS sent successfully.');
}
sendSms();
```
### Dispatching Letters
For physical mail correspondence, the platform provides a letter service:
```ts
import { LetterService, ILetterConstructorOptions } from '@serve.zone/platformservice';
async function sendLetter() {
const letterOptions: ILetterConstructorOptions = {
letterxpressUser: 'USER',
letterxpressToken: 'TOKEN',
};
const letterService = new LetterService(letterOptions);
await letterService.sendLetter('This is a test letter body.', {address: 'Recipient Address', name: 'Recipient Name'});
console.log('Letter dispatched successfully.');
}
sendLetter();
```
### Leveraging AI Services
The platform also integrates AI functionalities, allowing for innovative use cases like generating content, analyzing text, or automating responses:
```ts
import { AiService } from '@serve.zone/platformservice';
async function useAiService() {
const aiService = new AiService('OPENAI_API_KEY'); // Replace with your real API key
const response = await aiService.generateText('Prompt for the AI service.');
console.log(`AI response: ${response}`);
}
useAiService();
```
### Conclusion
The `@serve.zone/platformservice` offers a robust set of features for modern application requirements, including but not limited to communication and AI services. By following the examples above, developers can integrate these services into their applications, harnessing the power of email, SMS, letters, and artificial intelligence seamlessly.

View File

@ -0,0 +1,110 @@
---
title: "@serve.zone/remoteingress"
---
# @serve.zone/remoteingress
Provides a service for creating private tunnels and reaching private clusters from the outside as part of the @serve.zone stack.
## Install
To install `@serve.zone/remoteingress`, run the following command in your terminal:
```sh
npm install @serve.zone/remoteingress
```
This command will download and install the remoteingress package and its dependencies into your project.
## Usage
`@serve.zone/remoteingress` is designed to facilitate the creation of secure private tunnels and enable access to private clusters from external sources, forming an integral part of the @serve.zone stack infrastructure. Below, we illustrate how to employ this package within your project, leveraging TypeScript and ESM syntax for modern, type-safe, and modular code.
### Prerequisites
Ensure that you have Node.js and TypeScript installed in your environment. Your project should be set up with TypeScript support, and you might want to familiarize yourself with basic networking concepts and TLS/SSL for secure communication.
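For local experimentation, a self-signed certificate is enough to get the examples below running. The following openssl invocation is a generic sketch (not specific to remoteingress) and should be replaced with CA-issued certificates in any production setup:
```sh
# Generate a throwaway self-signed certificate and key for local testing only
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=your.public.domain.tld"
```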
### Importing and Initializing Connectors
`@serve.zone/remoteingress` offers two primary components: `ConnectorPublic` and `ConnectorPrivate`. Here's how to use them:
#### Setup ConnectorPublic
`ConnectorPublic` acts as a gateway, accepting incoming tunnel connections from `ConnectorPrivate` instances and facilitating secure communication between the internet and your private network.
```typescript
import * as fs from 'fs';
import { ConnectorPublic } from '@serve.zone/remoteingress';
// Initialize ConnectorPublic
const publicConnector = new ConnectorPublic({
tlsOptions: {
key: fs.readFileSync("<path-to-your-tls/key.pem>"),
cert: fs.readFileSync("<path-to-your-cert/cert.pem>"),
// Consider including 'ca' and 'passphrase' if required for your setup
},
listenPort: 443 // Example listen port; adjust based on your needs
});
```
#### Setup ConnectorPrivate
`ConnectorPrivate` establishes a secure tunnel to `ConnectorPublic`, effectively bridging your internal services with the external point of access.
```typescript
import { ConnectorPrivate } from '@serve.zone/remoteingress';
// Initialize ConnectorPrivate pointing to your ConnectorPublic instance
const privateConnector = new ConnectorPrivate({
publicHost: 'your.public.domain.tld',
publicPort: 443, // Ensure this matches the listening port of ConnectorPublic
tlsOptions: {
// You might want to specify TLS options here, similar to ConnectorPublic
}
});
```
### Secure Communication
It's imperative to ensure that the communication between `ConnectorPublic` and `ConnectorPrivate` is secure:
- Always use valid TLS certificates.
- Prefer using certificates issued by recognized Certificate Authorities (CA).
- Optionally, configure mutual TLS (mTLS) by requiring client certificates for an added layer of security; a sketch of such a setup follows below.
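Here is a minimal sketch of what an mTLS-enabled gateway could look like, assuming `ConnectorPublic` forwards its `tlsOptions` to Node's TLS server. The `ca`, `requestCert`, and `rejectUnauthorized` fields are standard Node.js TLS options, not confirmed remoteingress-specific settings:
```typescript
import * as fs from 'fs';
import { ConnectorPublic } from '@serve.zone/remoteingress';

// Assumption: tlsOptions are handed through to the underlying TLS server
const mutualTlsConnector = new ConnectorPublic({
  tlsOptions: {
    key: fs.readFileSync('key.pem'),
    cert: fs.readFileSync('cert.pem'),
    ca: fs.readFileSync('client-ca.pem'), // CA that signed the client certificates
    requestCert: true,                    // ask connecting clients for a certificate
    rejectUnauthorized: true,             // refuse connections without a valid client cert
  },
  listenPort: 443,
});
```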
### Advanced Usage
Both connectors can be finely tuned:
- **Logging and Monitoring:** Integrate with your existing logging and monitoring systems to keep tabs on tunnel activity, performance metrics, and potential security anomalies.
- **Custom Handlers:** Implement custom traffic handling logic for specialized routing, filtering, or protocol-specific processing.
- **Automation:** Automate the deployment and scaling of both `ConnectorPublic` and `ConnectorPrivate` instances using infrastructure-as-code (IAC) tools and practices, ensuring that your tunneling infrastructure can dynamically adapt to the ever-changing needs of your services.
### Example Scenarios
1. **Securing Application APIs:** Use `@serve.zone/remoteingress` to expose private APIs to your frontend deployed on a public cloud, ensuring that only your infrastructure can access these endpoints.
2. **Remote Database Access:** Securely access databases within a private VPC from your local development machine without opening direct access to the internet.
3. **Service Mesh Integration:** Integrate `@serve.zone/remoteingress` as part of a service mesh setup to securely connect services across multiple clusters with robust identity and encryption at the tunnel level.
For detailed documentation, API references, and additional use cases, please refer to the inline documentation and source code within the package. Always prioritize security and robustness when dealing with network ingress to protect your infrastructure and data from unauthorized access and threats.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.

View File

@ -0,0 +1,303 @@
---
title: "@serve.zone/spark"
---
# @serve.zone/spark
A comprehensive tool for maintaining and configuring servers, integrating with Docker and supporting advanced task scheduling, targeted at the serve.zone infrastructure. It's mainly designed to be utilized by @serve.zone/cloudly as a cluster node server system manager, maintaining and configuring servers on the base OS level.
## Install
To install `@serve.zone/spark`, run the following command in your terminal:
```sh
npm install @serve.zone/spark --save
```
Ensure you have both Node.js and npm installed on your machine.
## Usage
### Getting Started
To use `@serve.zone/spark` in your project, you need to include and initiate it in your TypeScript project. Ensure you have TypeScript and the necessary build tools set up in your project.
First, import `@serve.zone/spark`:
```typescript
import { Spark } from '@serve.zone/spark';
```
### Initializing Spark
Create an instance of the `Spark` class to start using Spark. This instance will serve as the main entry point for interacting with Spark functionalities.
```typescript
const sparkInstance = new Spark();
```
### Running Spark as a Daemon
To run Spark as a daemon, which is useful for maintaining and configuring servers at the OS level, you can use the CLI feature bundled with Spark. This should ideally be handled outside of your code through a command-line terminal but can also be automated within your Node.js scripts if required.
```shell
spark installdaemon
```
The command above sets up Spark as a system service, enabling it to run and maintain server configurations automatically.
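Assuming the daemon was installed as a systemd service (an assumption about the host's init system; the actual unit name depends on how the daemon was registered), you can verify it is running with standard systemd tooling:
```sh
# Locate the service unit first – the exact name is environment-specific
systemctl list-units --type=service | grep -i spark
# Then inspect its status and follow its logs (replace <unit> with the name found above)
systemctl status <unit>
journalctl -u <unit> -f
```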
### Updating Spark or Maintained Services
Spark can self-update and manage updates for its maintained services. Trigger an update check and process by calling the `updateServices` method on the Spark instance.
```typescript
await sparkInstance.sparkUpdateManager.updateServices();
```
### Managing Configuration and Logging
Spark allows extensive configuration and logging customization. Use the `SparkLocalConfig` and logging features to tailor Spark's operation to your needs.
```typescript
// Accessing the local configuration
const localConfig = sparkInstance.sparkLocalConfig;
// Utilizing the logger for custom log messages
import { logger } from '@serve.zone/spark';
logger.log('info', 'Custom log message');
```
### Advanced Usage
`@serve.zone/spark` offers tools for detailed server and service management, including but not limited to task scheduling, daemon management, and service updates. Explore the `SparkTaskManager` for scheduling specific tasks, `SparkUpdateManager` for handling service updates, and `SparkLocalConfig` for configuration.
### Example: Scheduling Custom Tasks
```typescript
import { SparkTaskManager } from '@serve.zone/spark';
const sparkInstance = new Spark();
const myTask = {
name: 'customTask',
taskFunction: async () => {
console.log('Running custom task');
},
};
sparkInstance.sparkTaskManager.taskmanager.addAndScheduleTask(myTask, '* * * * * *');
```
The example above creates a simple task that logs a message every second, demonstrating how to use Spark's task manager for custom scheduled tasks.
### Detailed Service Management
For advanced configurations, including Docker and service management, you can utilize the following patterns:
- Use `SparkUpdateManager` to handle Docker image updates, service creation, and management.
- Access and modify Docker and service configurations through Spark's integration with configuration files and environment variables.
```typescript
// Managing Docker services with Spark
await sparkInstance.sparkUpdateManager.dockerHost.someDockerMethod();
// Example: Creating a Docker service
const newServiceDefinition = { /* ... your service definition (name, image, ports, etc.) ... */ };
await sparkInstance.sparkUpdateManager.createService(newServiceDefinition);
```
### CLI Commands
Spark provides several CLI commands to interact with and manage the system services:
#### Installing Spark as a Daemon
```shell
spark installdaemon
```
Sets up Spark as a system service to maintain server configurations automatically.
#### Updating the Daemon
```shell
spark updatedaemon
```
Updates the daemon service if a new version is available.
#### Running Spark as Daemon
```shell
spark asdaemon
```
Runs Spark in daemon mode, which is suitable for executing automated tasks.
#### Viewing Logs
```shell
spark logs
```
Views the logs of the Spark daemon service.
#### Cleaning Up Services
```shell
spark prune
```
Stops and cleans up all Docker services (stacks, networks, secrets, etc.) and prunes the Docker system.
### Programmatic Daemon Management
You can also manage the daemon programmatically:
```typescript
import { SmartDaemon } from '@push.rocks/smartdaemon';
import { Spark } from '@serve.zone/spark';
const sparkInstance = new Spark();
const smartDaemon = new SmartDaemon();
const startDaemon = async () => {
const sparkService = await smartDaemon.addService({
name: 'spark',
version: sparkInstance.sparkInfo.projectInfo.version,
command: 'spark asdaemon',
description: 'Spark daemon service',
workingDir: '/path/to/project',
});
await sparkService.save();
await sparkService.enable();
await sparkService.start();
};
const updateDaemon = async () => {
const sparkService = await smartDaemon.addService({
name: 'spark',
version: sparkInstance.sparkInfo.projectInfo.version,
command: 'spark asdaemon',
description: 'Spark daemon service',
workingDir: '/path/to/project',
});
await sparkService.reload();
};
startDaemon();
updateDaemon();
```
This illustrates how to initiate and update the Spark daemon using the `SmartDaemon` class from `@push.rocks/smartdaemon`.
### Configuration Management
Extensive configuration management is possible through the `SparkLocalConfig` and other configuration classes. This feature allows you to make your application's behavior adaptable based on different environments and requirements.
```typescript
// Example on setting local config
import { SparkLocalConfig } from '@serve.zone/spark';
const localConfig = new SparkLocalConfig(sparkInstance);
await localConfig.kvStore.set('someKey', 'someValue');
// Retrieving a value from local config
const someConfigValue = await localConfig.kvStore.get('someKey');
console.log(someConfigValue); // Outputs: someValue
```
### Detailed Log Management
Logging is a crucial aspect of any automation tool, and `@serve.zone/spark` offers rich logging functionality through its built-in logging library.
```typescript
import { logger, Spark } from '@serve.zone/spark';
const sparkInstance = new Spark();
logger.log('info', 'Spark instance created.');
// Using logger in various levels of severity
logger.log('debug', 'This is a debug message');
logger.log('warn', 'This is a warning message');
logger.log('error', 'This is an error message');
logger.log('ok', 'This is a success message');
```
### Real-World Scenarios
#### Automated System Update and Restart
In real-world scenarios, you might want to automate system updates and reboots to ensure your services are running the latest security patches and features.
```typescript
import { Spark } from '@serve.zone/spark';
import { SmartShell } from '@push.rocks/smartshell';
const sparkInstance = new Spark();
const shell = new SmartShell({ executor: 'bash' });
const updateAndRestart = async () => {
await shell.exec('apt-get update && apt-get upgrade -y');
console.log('System updated.');
await shell.exec('reboot');
};
sparkInstance.sparkTaskManager.taskmanager.addAndScheduleTask(
{ name: 'updateAndRestart', taskFunction: updateAndRestart },
'0 3 * * 7' // Every Sunday at 3 AM
);
```
This example demonstrates creating and scheduling a task to update and restart the server every Sunday at 3 AM using Spark's task management capabilities.
#### Integrating with Docker for Service Deployment
Spark's tight integration with Docker makes it an excellent tool for deploying containerized applications across your infrastructure.
```typescript
import { Spark } from '@serve.zone/spark';
import { DockerHost } from '@apiclient.xyz/docker';
const sparkInstance = new Spark();
const dockerHost = new DockerHost({});
const deployService = async () => {
const image = await dockerHost.pullImage('my-docker-repo/my-service:latest');
const newService = await dockerHost.createService({
name: 'my-service',
image,
ports: ['80:8080'],
environmentVariables: {
NODE_ENV: 'production',
},
});
console.log(`Service ${newService.name} deployed.`);
};
deployService();
```
This example demonstrates how to pull a Docker image and deploy it as a new service in your infrastructure using Spark's Docker integration.
### Managing Secrets
Managing secrets and sensitive data is crucial in any configuration and automation tool. Spark's integration with Docker allows you to handle secrets securely.
```typescript
import { Spark, SparkUpdateManager } from '@serve.zone/spark';
import { DockerSecret } from '@apiclient.xyz/docker';
const sparkInstance = new Spark();
const updateManager = new SparkUpdateManager(sparkInstance);
const createDockerSecret = async () => {
const secret = await DockerSecret.createSecret(updateManager.dockerHost, {
name: 'dbPassword',
contentArg: 'superSecretPassword',
});
console.log(`Secret ${secret.Spec.Name} created.`);
};
createDockerSecret();
```
This example shows how to create a Docker secret using Spark's `SparkUpdateManager` class, ensuring that sensitive information is securely stored and managed.
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information
Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.

View File

@ -11,7 +11,7 @@
"scripts": { "scripts": {
"watch": "vitepress dev docs", "watch": "vitepress dev docs",
"serve": "vitepress serve docs", "serve": "vitepress serve docs",
"docs": "vitepress build docs" "build": "vitepress build docs"
}, },
"devDependencies": { "devDependencies": {
"@git.zone/tsbuild": "^2.2.0", "@git.zone/tsbuild": "^2.2.0",