Cloudly is the serve.zone control plane: a TypeScript service and browser dashboard that stores desired infrastructure state, authenticates humans and machines, coordinates clusters, serves an OCI registry, manages workload metadata, and pushes runtime configuration to connected node components.
For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
Cloudly is the place where serve.zone operators describe what should run. It does not run every workload itself. Instead, it keeps the authoritative desired state in MongoDB and exposes TypedRequest/TypedSocket APIs so runtime components can reconcile that state where the containers actually live.
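To make the shape of these APIs concrete, here is a minimal sketch of a typed request contract; the interface and field names are illustrative only, and the actual request catalogue lives in the shared interfaces package rather than in this example.

```typescript
// Illustrative sketch of the TypedRequest pattern: each API call is described
// by a method name plus typed request/response payloads. The names and fields
// below are hypothetical, not Cloudly's real API surface.
interface IRequest_GetDesiredClusterState {
  method: 'getDesiredClusterState';
  request: {
    identity: { jwt: string }; // machine identity obtained during authentication
    clusterId: string;         // which cluster's desired state to fetch
  };
  response: {
    services: Array<{ name: string; image: string; version: string }>;
  };
}
```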
- **Platform bindings**: capabilities such as `database`, `objectstorage`, `logging`, `backup`, and RPC-style platform services that Coreflow/Corestore can reconcile.
- **Dashboard**: a web component UI rendered from `ts_web` with views for overview, settings, secrets, clusters, external registries, images, services, deployments, tasks, domains, DNS, mail/log/storage/database shells, backups, and BaseOS.
| `CloudlyTaskManager` | Registers predefined and runtime tasks, tracks task executions, schedules cron jobs, and exposes task APIs. |
| `CloudlySettingsManager` | Stores runtime settings in MongoDB, masks sensitive values for API responses, and refreshes gateway/Coreflow state after relevant changes. |
Cloudly uses `@push.rocks/smartconfig`'s `AppData` with environment mappings. The runtime entry point loads `.nogit`/environment values through `@push.rocks/qenv`, and embedded callers can override values by constructing `new Cloudly(config)` programmatically.
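A minimal embedding sketch, assuming the package exports the `Cloudly` class and a `start()`-style lifecycle method; only the `new Cloudly(config)` constructor is documented above, so treat the package name, config keys, and `start()` as placeholders:

```typescript
import { Cloudly } from '@serve.zone/cloudly'; // package name assumed

// The config keys below are placeholders; the authoritative shape is whatever
// the AppData/qenv environment mapping resolves at startup.
const cloudly = new Cloudly({
  environment: 'integration',
  publicUrl: 'cloudly.example.com',
} as any);

await cloudly.start(); // assumed lifecycle method
```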
For backup replication with `CLOUDLY_BACKUP_TARGET_TYPE=s3`, set `CLOUDLY_BACKUP_S3_ENDPOINT`, `CLOUDLY_BACKUP_S3_ACCESS_KEY`, `CLOUDLY_BACKUP_S3_SECRET_KEY`, and `CLOUDLY_BACKUP_S3_BUCKET`. Optional S3 variables are `CLOUDLY_BACKUP_S3_REGION`, `CLOUDLY_BACKUP_S3_PORT`, and `CLOUDLY_BACKUP_S3_USE_SSL`.
For backup replication with `CLOUDLY_BACKUP_TARGET_TYPE=smb`, set `CLOUDLY_BACKUP_SMB_HOST` and `CLOUDLY_BACKUP_SMB_SHARE`. Optional SMB variables are `CLOUDLY_BACKUP_SMB_PORT`, `CLOUDLY_BACKUP_SMB_USERNAME`, `CLOUDLY_BACKUP_SMB_PASSWORD`, and `CLOUDLY_BACKUP_SMB_DOMAIN`.
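As an example, an S3 replication target could be configured like this (the variable names are the ones listed above; every value, including the boolean format for `CLOUDLY_BACKUP_S3_USE_SSL`, is a placeholder):

```
CLOUDLY_BACKUP_TARGET_TYPE=s3
CLOUDLY_BACKUP_S3_ENDPOINT=s3.example.com
CLOUDLY_BACKUP_S3_ACCESS_KEY=<access-key>
CLOUDLY_BACKUP_S3_SECRET_KEY=<secret-key>
CLOUDLY_BACKUP_S3_BUCKET=cloudly-backups
# optional
CLOUDLY_BACKUP_S3_REGION=eu-central-1
CLOUDLY_BACKUP_S3_USE_SSL=true
```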
Cloudly exposes a single composed TypedRouter. Managers add their own typed handlers to the main router, and `CloudlyServer` exposes that router through the HTTP/WebSocket server.
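As a sketch of how a manager might contribute a handler, assuming the `TypedRouter`/`TypedHandler` classes from `@api.global/typedrequest` (the method name `ping` and the handler body are made up, and the exact registration call may differ from what Cloudly's managers use):

```typescript
import * as typedrequest from '@api.global/typedrequest';

// Hypothetical handler: a manager registers one typed handler per API method
// on the shared router; CloudlyServer then serves that router over HTTP/WebSocket.
const typedrouter = new typedrequest.TypedRouter();

typedrouter.addTypedHandler(
  new typedrequest.TypedHandler('ping', async (requestData: { from: string }) => {
    return { pong: true, echo: requestData.from };
  }),
);
```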
On first startup, Cloudly bootstraps the first human admin from `SERVEZONE_ADMINACCOUNT`. Human clients authenticate through `adminLoginWithUsernameAndPassword`; machine clients authenticate through `getIdentityByToken`. Creating a cluster provisions a machine user and token for Coreflow.
Machine clients such as Coreflow authenticate with `getIdentityByToken`, request a stateful identity, and tag their WebSocket connection. That lets Cloudly push configuration to already-connected Coreflow instances instead of opening inbound connections to cluster nodes.
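The identity handshake can be pictured with an illustrative contract like the one below; only the method name `getIdentityByToken` comes from the text above, while the payload fields are assumptions.

```typescript
// Illustrative only: a machine client presents its cluster token and receives
// an identity that it keeps attached to its WebSocket connection. Field names
// are hypothetical; only the method name is documented above.
interface IRequest_GetIdentityByToken {
  method: 'getIdentityByToken';
  request: {
    token: string; // machine token issued when the cluster was created
  };
  response: {
    identity: {
      name: string;
      jwt: string; // reused on subsequent TypedSocket traffic
    };
  };
}
```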
Cloudly serves an OCI registry under `/v2` through `CloudlyRegistryManager`. The registry uses configured S3 storage and issues OCI tokens from Cloudly authentication state.
Registry push hooks record tag/digest metadata on the linked image and service. Unless `deployOnPush` is explicitly `false`, a successful push updates the service image version and asks connected Coreflow clients to reconcile.
Registry token requests use HTTP Basic credentials against Cloudly users. User passwords and unexpired user tokens are accepted; push/delete scopes require an admin user or a token with the `admin` role assigned.
CoreBuild worker configuration can use `corebuildWorkersJson` for multiple workers or the legacy `corebuildWorkerUrl` and `corebuildWorkerToken` settings for one worker.
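A hypothetical shape for `corebuildWorkersJson`, mirroring the legacy single-worker URL/token pair (the real schema is defined by the settings manager and may differ):

```typescript
// Hypothetical example: several CoreBuild workers described as a JSON array,
// each carrying a URL and token like the legacy single-worker settings.
const corebuildWorkersJson = JSON.stringify([
  { url: 'https://corebuild-1.example.com', token: '<worker-token-1>' },
  { url: 'https://corebuild-2.example.com', token: '<worker-token-2>' },
]);
```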
Cloudly owns backup records and user-facing backup/restore requests. Coreflow executes the cluster-local work, and Corestore snapshots volumes, database resources, object storage resources, and archive objects.
Manual `createServiceBackup` requests expect Coreflow to complete remote archive replication. Cloudly validates archive object size and SHA-256 checksums, writes a manifest, records target metadata, and marks completed backups as `replicated`. Restores read the manifest and objects back through the configured target writer.
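To illustrate the validation step, a generic size-and-checksum comparison along these lines is all that is conceptually involved (this is not Cloudly's actual implementation):

```typescript
import { createHash } from 'node:crypto';

// Illustrative only: verify an archive object's size and SHA-256 digest against
// the values recorded in the backup manifest before treating it as replicated.
const sha256Hex = (data: Buffer): string =>
  createHash('sha256').update(data).digest('hex');

const matchesManifest = (
  data: Buffer,
  expectedSize: number,
  expectedSha256: string,
): boolean => data.byteLength === expectedSize && sha256Hex(data) === expectedSha256;
```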
## Task Automation
Cloudly registers a TaskBuffer-backed task manager. The API and dashboard can list tasks, trigger tasks manually, inspect execution logs/metrics, and request cancellation for running tasks.
Predefined tasks currently include:
| Task | Purpose |
| --- | --- |
| `cloudflare-domain-sync` | Imports and updates domains from configured Cloudflare zones. |
| `dns-sync` | Iterates DNS entries marked as external; provider sync is currently a placeholder. |
| `cert-renewal` | Checks activated domains for certificate renewal; renewal logic is currently a placeholder. |
| `cleanup` | Removes old task executions and contains placeholders for log/image cleanup. |
| `health-check` | Iterates deployments and records health metrics; runtime health checks are currently placeholders. |
| `resource-report` | Generates node resource metrics; values are currently placeholders until runtime metrics are wired in. |
| `db-maintenance` | Maintenance shell for database optimization tasks. |
| `security-scan` | Security scan shell for exposed ports, image freshness, and weak configuration checks. |
| `docker-cleanup` | Docker cleanup shell for containers, images, volumes, and networks. |
| `backup-all-services` | Registered by the backup manager and enabled only when `CLOUDLY_BACKUP_CRON` is set. |
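For example, scheduled backups could be switched on with a cron expression such as the one below; this assumes a standard five-field cron format, so check the backup manager for the exact syntax it accepts.

```
CLOUDLY_BACKUP_CRON=0 3 * * *
```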
Cloudly can integrate with a dcrouter gateway when the gateway URL and API token are present in settings. The current integration syncs externally available domains into Cloudly and passes an external gateway route configuration to Coreflow. Coreflow can then ask dcrouter for certificates and synchronize public routes while still routing to cluster-local Coretraffic.
The package metadata and settings schema include fields for several cloud providers. The code paths currently exercised in this repository are Cloudflare for ACME DNS-01 and domain sync, Hetzner for selected node/bare-metal provisioning paths, S3-compatible storage, SMB/S3 backup archive targets, MongoDB/SmartData, CoreBuild, Coreflow, Corestore, and optional dcrouter integration. Several provider connection tests and predefined tasks are configuration checks or implementation shells; verify provider-specific behavior in the relevant manager before relying on it operationally.
**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.
Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.