@serve.zone/coreflow

Coreflow is the Docker Swarm reconciliation engine for the serve.zone platform. It runs inside a cluster, connects back to Cloudly, reads the desired cluster state, provisions the base runtime services, deploys workload services, and pushes reverse-proxy routing updates to Coretraffic.

Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit community.foss.global/. This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a code.foss.global/ account to submit Pull Requests directly.

What Coreflow Does

Coreflow sits between Cloudly and the Docker Swarm runtime:

  • Connects to Cloudly over the @serve.zone/api WebSocket client and registers as coreflow.
  • Authenticates with Cloudly using the cluster jump code token.
  • Reads the cluster configuration and service definitions managed by Cloudly.
  • Ensures base Docker networks exist for traffic and platform communication.
  • Deploys and updates base services such as coretraffic and corelog.
  • Deploys workload services from Cloudly image definitions.
  • Creates Docker secrets from Cloudly secret bundles and attaches them to services.
  • Builds reverse proxy configs from service domains, Docker task IPs, and Cloudly certificates.
  • Sends routing updates to Coretraffic through the internal TypedSocket server.
  • Reconciles state initially, on Cloudly config updates, and on a scheduled hourly task.

Runtime Model

Coreflow is not a general-purpose application framework. It is a long-running cluster component designed to be started as a service or Docker container on a Docker Swarm manager node.

Cloudly
  -> Coreflow
      -> local Docker Engine / Swarm
      -> Coretraffic via internal TypedSocket

Coreflow never waits for Cloudly to call it. It connects outward to Cloudly, keeps the connection tagged as a coreflow client, and reacts to config update events from that connection.

Requirements

  • Node.js runtime compatible with the project toolchain.
  • pnpm for dependency management.
  • A Docker Swarm manager with access to the local Docker socket.
  • A reachable Cloudly instance.
  • A valid Cloudly jump code for the target cluster.

Configuration

Coreflow reads runtime configuration through @push.rocks/qenv from the project environment and .nogit overlays.

Required environment variables:

Variable     Purpose
CLOUDLY_URL  WebSocket/HTTP endpoint of the Cloudly control plane.
JUMPCODE     Cloudly token used to authenticate this Coreflow instance and tag the connection.

Example .nogit/.env:

CLOUDLY_URL=https://cloudly.example.com
JUMPCODE=cluster-machine-token
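
The startup requirement boils down to both variables being present and non-empty. As a minimal illustration (the real project reads these through @push.rocks/qenv; the helper and type names below are assumptions for this sketch):

```typescript
// Sketch of the environment validation Coreflow-style startup needs.
// readCoreflowEnv and CoreflowEnv are illustrative names, not project API.
interface CoreflowEnv {
  cloudlyUrl: string;
  jumpCode: string;
}

function readCoreflowEnv(env: Record<string, string | undefined>): CoreflowEnv {
  const cloudlyUrl = env['CLOUDLY_URL'];
  const jumpCode = env['JUMPCODE'];
  if (!cloudlyUrl || !jumpCode) {
    // Without both values Coreflow cannot connect or authenticate.
    throw new Error('CLOUDLY_URL and JUMPCODE must both be set');
  }
  return { cloudlyUrl, jumpCode };
}

const cfg = readCoreflowEnv({
  CLOUDLY_URL: 'https://cloudly.example.com',
  JUMPCODE: 'cluster-machine-token',
});
```

Failing fast here is preferable to letting the WebSocket connection fail later with a less obvious error.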

Installation

This package is private and normally deployed as part of the serve.zone platform image pipeline.

pnpm install
pnpm run build
pnpm start

After a build, the package exposes the coreflow binary through dist/cli.js; the repository entrypoint is cli.js.

Programmatic Startup

The CLI path imports runCli() from dist_ts/index.js. For direct TypeScript usage inside this repository, instantiate the main class and call start():

import { Coreflow } from './ts/coreflow.classes.coreflow.js';

const coreflow = new Coreflow();
await coreflow.start();

process.on('SIGTERM', async () => {
  await coreflow.stop();
});

start() initializes components in this order:

  1. InternalServer starts a SmartServe server on port 3000 with TypedSocket support.
  2. CloudlyConnector connects to Cloudly and resolves the cluster identity.
  3. ClusterManager reads initial Cloudly config and subscribes to config updates.
  4. PlatformManager starts its placeholder lifecycle hook.
  5. CoreflowTaskmanager schedules the initial and recurring reconciliation tasks.

Reconciliation Flow

The task manager coordinates the runtime work as a task chain:

updateBaseServices
  -> updateWorkloadServices
      -> updateTrafficRouting
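
The chain above can be sketched as three async stages that run strictly in sequence (placeholder bodies; the real tasks live in the task manager):

```typescript
// Illustrative sketch of the reconciliation chain. The function names
// mirror the chain above; the bodies just record their execution order.
type ReconcileTask = () => Promise<void>;

async function runChain(tasks: ReconcileTask[]): Promise<void> {
  for (const task of tasks) {
    // Each stage starts only after the previous one has fully completed,
    // so routing updates always see the freshly reconciled services.
    await task();
  }
}

async function demoChain(): Promise<string[]> {
  const order: string[] = [];
  const updateBaseServices: ReconcileTask = async () => { order.push('base'); };
  const updateWorkloadServices: ReconcileTask = async () => { order.push('workload'); };
  const updateTrafficRouting: ReconcileTask = async () => { order.push('routing'); };
  await runChain([updateBaseServices, updateWorkloadServices, updateTrafficRouting]);
  return order;
}

demoChain(); // resolves to ['base', 'workload', 'routing']
```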

updateBaseServices ensures the base Docker networks and platform services exist:

  • sznwebgateway for public web routing.
  • szncorechat for internal base-service communication.
  • coretraffic attached to both networks with host ports 80 and 443 mapped to its service ports.
  • corelog attached to the internal network.

updateWorkloadServices fetches the service list from Cloudly, skips non-workload service categories, and for each remaining service:

  • pulls or imports the configured Docker image,
  • creates a Docker secret from the assigned secret bundle, and
  • creates or replaces the Docker service when an update is required.

updateTrafficRouting inspects Docker services on the web gateway network, resolves container IPs, fetches certificates for configured domains, and sends IReverseProxyConfig[] updates to Coretraffic with the updateRouting typed request.
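
The shape of that mapping can be sketched as a pure function. The real IReverseProxyConfig interface lives in the serve.zone interface packages; the field names below (hostName, destinationIps, destinationPort, key material) are assumptions for illustration:

```typescript
// Hypothetical sketch of assembling routing updates from inspected services.
interface ReverseProxyConfigSketch {
  hostName: string;
  destinationIps: string[];
  destinationPort: number;
  publicKey: string;
  privateKey: string;
}

interface RoutedServiceSketch {
  domain: string;
  taskIps: string[];            // resolved Docker task IPs on the gateway network
  port: number;
  cert: { publicKey: string; privateKey: string };
}

function buildReverseConfigs(services: RoutedServiceSketch[]): ReverseProxyConfigSketch[] {
  return services
    .filter((svc) => svc.taskIps.length > 0) // skip services with no running tasks
    .map((svc) => ({
      hostName: svc.domain,
      destinationIps: svc.taskIps,
      destinationPort: svc.port,
      publicKey: svc.cert.publicKey,
      privateKey: svc.cert.privateKey,
    }));
}
```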

The same base-service task is triggered when Cloudly emits a config update. After the initial delayed run, it is also scheduled hourly.

Cloudly Integration

CloudlyConnector wraps CloudlyApiClient from @serve.zone/api:

this.cloudlyApiClient = new CloudlyApiClient({
  registerAs: 'coreflow',
  cloudlyUrl,
});

After connection, Coreflow authenticates with JUMPCODE and requests a stateful, tagged identity. That identity is then used to fetch cluster configuration and certificates.

Coreflow depends on these Cloudly-side resources being present and valid:

  • Cluster configuration for the authenticated identity.
  • Service records with image, volume, resource, domain, port, and secret bundle references.
  • Image records pointing either to internal Cloudly image storage or an external registry.
  • Secret bundles that can be flattened into environment key/value data.
  • SSL certificates for all routed domains.

Corestore Volumes

Coreflow deploys corestore as a global base service and bind mounts /run/docker/plugins so Docker can discover the corestore VolumeDriver socket on each node.

Workload services can declare first-class volumes:

volumes: [
  {
    mountPath: '/data',
    driver: 'corestore',
    backup: true,
  },
]

If the optional name field is omitted from a volume declaration, Coreflow derives a stable Docker volume name from the service id and mount path. During service creation it sends a Docker volume mount with DriverConfig.Name = 'corestore', plus service metadata as driver options and volume labels.
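
The point of the derived name is determinism: the same service id and mount path must always yield the same volume name, so a recreated service reattaches its existing corestore volume. The exact naming scheme is not documented here; the sanitization below is an assumption:

```typescript
// Hypothetical sketch of a stable volume-name derivation, as the text
// describes. Real Coreflow may use a different scheme.
function deriveVolumeName(serviceId: string, mountPath: string): string {
  // Strip the leading slash and collapse non-alphanumeric runs so the
  // result is a valid Docker volume name.
  const pathPart = mountPath.replace(/^\//, '').replace(/[^a-zA-Z0-9]+/g, '-');
  return `${serviceId}-${pathPart}`;
}

deriveVolumeName('svc123', '/data'); // 'svc123-data'
```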

Coreflow also exposes Cloudly-triggered backup handlers over its TypedSocket connection. executeServiceBackup snapshots corestore volumes plus provisioned smartdb/smartstorage resources, and executeServiceRestore restores those snapshots back into the service's corestore resources.

Coretraffic Integration

Coreflow starts an internal SmartServe/TypedSocket server on port 3000. Coretraffic is expected to connect to that server and tag its connection as coretraffic.

When routing changes are computed, Coreflow sends:

const request = typedsocketServer.createTypedRequest('updateRouting', coretrafficConnection);
await request.fire({ reverseConfigs });

Each reverse config contains destination container IPs, destination ports, hostname, and certificate material.

Docker Image Handling

Coreflow supports two image sources from Cloudly image metadata:

  • Internal Cloudly images: Coreflow pulls the requested image version stream from Cloudly and imports it into Docker from a tar stream.
  • External registry images: Coreflow authenticates against the configured registry, pulls the image, and updates the local Docker image.

Invalid or incomplete image location data causes reconciliation to fail for that service, which is intentional: Coreflow only deploys services with complete desired-state data.
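
The branching described above can be sketched as a discriminated union. The field names (internalReference, externalRegistry) are illustrative assumptions, not the actual Cloudly image metadata schema:

```typescript
// Sketch of the two image-source branches, with the fail-fast behavior the
// text describes for incomplete location data.
interface ImageLocationSketch {
  internalReference?: string; // set for internal Cloudly image storage
  externalRegistry?: { url: string; imageName: string; tag: string };
}

type ImageSource =
  | { kind: 'internal'; reference: string }
  | { kind: 'external'; pullSpec: string };

function resolveImageSource(loc: ImageLocationSketch): ImageSource {
  if (loc.internalReference) {
    return { kind: 'internal', reference: loc.internalReference };
  }
  if (loc.externalRegistry) {
    const r = loc.externalRegistry;
    return { kind: 'external', pullSpec: `${r.url}/${r.imageName}:${r.tag}` };
  }
  // Incomplete desired-state data: fail reconciliation for this service.
  throw new Error('invalid image location: neither internal nor external source set');
}
```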

Development

Common commands:

pnpm install
pnpm run build
pnpm test
pnpm run watch

Project layout:

Path                                           Purpose
ts/index.ts                                    CLI startup wrapper and lifecycle entrypoints.
ts/coreflow.classes.coreflow.ts                Main coordinator class.
ts/coreflow.connector.cloudlyconnector.ts      Cloudly API connection and identity handling.
ts/coreflow.classes.clustermanager.ts          Docker network, service, secret, image, and routing reconciliation.
ts/coreflow.classes.taskmanager.ts             Buffered and scheduled reconciliation task chain.
ts/coreflow.connector.coretrafficconnector.ts  TypedSocket routing updates to Coretraffic.
ts/coreflow.classes.internalserver.ts          Internal SmartServe and TypedSocket server.

Operational Notes

  • Coreflow expects Docker access through the local Docker socket by default.
  • Reconciliation removes and recreates services when the Docker service reports that it needs an update.
  • Workload services must be attached to sznwebgateway for routing to be generated.
  • The current routing logic uses the first available container IP for a service.
  • PlatformManager provisions database and objectstorage bindings through corestore.

This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the license file.

Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.

Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.

Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.