# @serve.zone/corestore
corestore is the node-local serve.zone storage provider. It runs one container that starts:
- `@push.rocks/smartdb` as a MongoDB-compatible database endpoint on port `27017`.
- `@push.rocks/smartstorage` as an S3-compatible object-storage endpoint on port `9000`.
- A small control API on port `3000` for Coreflow provisioning.
- A Docker VolumeDriver plugin on `/run/docker/plugins/corestore.sock`.
## Purpose
Coreflow can run corestore on every node and provision per-service resources on the node that hosts a workload requiring a database, object storage, or persistent volumes.
## Runtime

```shell
pnpm install
pnpm build
node cli.js
```
Default ports:
| Service | Port | Purpose |
|---|---|---|
| Control API | 3000 | Provisioning, deprovisioning, health, metrics |
| S3 | 9000 | S3-compatible API from smartstorage |
| DB | 27017 | MongoDB wire protocol from smartdb |
Default data directory: `/data/corestore`.
## Configuration

| Env var | Default | Purpose |
|---|---|---|
| `CORESTORE_DATA_DIR` | `/data/corestore` | Persistent data root |
| `CORESTORE_BIND_ADDRESS` | `0.0.0.0` | Bind address for all endpoints |
| `CORESTORE_PUBLIC_HOST` | `corestore` | Hostname injected into service credentials |
| `CORESTORE_CONTROL_PORT` | `3000` | Control API port |
| `CORESTORE_S3_PORT` | `9000` | S3 endpoint port |
| `CORESTORE_DB_PORT` | `27017` | Mongo-compatible DB endpoint port |
| `CORESTORE_REGION` | `us-east-1` | S3 region |
| `CORESTORE_API_TOKEN` | unset | Optional bearer token for mutating/read-sensitive control APIs |
| `CORESTORE_MASTER_SECRET` | generated and persisted | Seed for deterministic tenant credentials |
| `CORESTORE_VOLUME_PLUGIN_SOCKET` | `/run/docker/plugins/corestore.sock` | Docker VolumeDriver socket path |
| `CORESTORE_ARCHIVE_PASSPHRASE` | unset | Optional encryption passphrase for volume snapshots |
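The table notes that `CORESTORE_MASTER_SECRET` seeds deterministic tenant credentials. As a minimal sketch of how such a derivation could work, assuming an HMAC-SHA256 construction (the function name `deriveCredential` and the label scheme are hypothetical, not corestore's actual algorithm):

```typescript
import { createHmac } from "node:crypto";

// Hypothetical sketch: derive a stable credential from the master secret,
// so reprovisioning the same service yields the same value. HMAC-SHA256
// keyed with the master secret over "<serviceId>:<label>" is one
// illustrative construction, not necessarily what corestore uses.
function deriveCredential(masterSecret: string, serviceId: string, label: string): string {
  return createHmac("sha256", masterSecret)
    .update(`${serviceId}:${label}`)
    .digest("hex")
    .slice(0, 32);
}

// Same inputs always produce the same credential; different labels differ.
const accessKey = deriveCredential("master-secret", "svc-123", "s3-access-key");
const secretKey = deriveCredential("master-secret", "svc-123", "s3-secret-key");
```

Deterministic derivation means credentials never need to be stored separately: losing the provisioning record is recoverable as long as the master secret persists.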
When Coreflow creates the global corestore service, it forwards its own `CORESTORE_API_TOKEN` environment variable into the service. Set the same value on Coreflow to protect provisioning APIs from workload containers on the same overlay network.
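A guard for the token-protected routes can be as small as a header comparison. The following is a sketch under that assumption (`isAuthorized` is illustrative, not corestore's actual middleware, and a production check would use a timing-safe comparison):

```typescript
// Sketch of a bearer-token check for mutating control-API routes.
// Returns true when no token is configured (auth disabled, matching the
// "unset" default of CORESTORE_API_TOKEN) or when the Authorization
// header carries exactly the configured token.
function isAuthorized(
  authorizationHeader: string | undefined,
  configuredToken: string | undefined,
): boolean {
  if (!configuredToken) return true; // CORESTORE_API_TOKEN unset: API is open
  return authorizationHeader === `Bearer ${configuredToken}`;
}
```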
## Control API
Health is unauthenticated:
```shell
curl http://corestore:3000/health
```
Provision per-service DB and S3 resources:
```shell
curl -X POST http://corestore:3000/resources/provision \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"serviceId":"svc-123","serviceName":"api","capabilities":["database","objectstorage"]}'
```
The response contains service-specific env vars such as `MONGODB_URI`, `S3_BUCKET`, `AWS_ACCESS_KEY_ID`, and `AWS_ENDPOINT_URL`.
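An illustrative response might look like the following. Only the four variable names quoted above come from the text; the top-level `env` key, the remaining variable names, and all values are placeholders, not corestore's documented schema:

```json
{
  "env": {
    "MONGODB_URI": "mongodb://svc-123:<password>@corestore:27017/svc-123",
    "S3_BUCKET": "sz-api-svc-123",
    "AWS_ACCESS_KEY_ID": "<derived-access-key>",
    "AWS_SECRET_ACCESS_KEY": "<derived-secret-key>",
    "AWS_ENDPOINT_URL": "http://corestore:9000",
    "AWS_REGION": "us-east-1"
  }
}
```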
Deprovision a service:
```shell
curl -X POST http://corestore:3000/resources/deprovision \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"serviceId":"svc-123"}'
```
List managed volumes:
```shell
curl http://corestore:3000/volumes \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>'
```
Snapshot a volume into the local containerarchive repository:
```shell
curl -X POST http://corestore:3000/volumes/snapshot \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"name":"sz-api-data-abc123","snapshotName":"before-deploy"}'
```
Restore a snapshot into an existing volume:
```shell
curl -X POST http://corestore:3000/volumes/restore \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"name":"sz-api-data-abc123","snapshotId":"<snapshot-id>"}'
```
## Docker Volume Driver
Corestore implements Docker's legacy VolumeDriver API over a Unix socket. The corestore service must bind-mount `/run/docker/plugins` from the host so Docker can discover `/run/docker/plugins/corestore.sock`.
Docker calls corestore for `Create`, `Mount`, `Unmount`, `Remove`, `Path`, `Get`, `List`, and `Capabilities`. Mountpoints are real host paths under `/data/corestore/volumes/<volume>/data`; Docker bind-mounts those paths into workload containers.
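The legacy plugin protocol is JSON-over-HTTP on the Unix socket; per Docker's VolumeDriver specification, a `Create` followed by a `Mount` exchanges roughly these payloads (the mountpoint value here assumes the layout described above; the request/response shapes are from Docker's spec, not corestore-specific):

```
POST /VolumeDriver.Create   {"Name": "sz-api-data-abc123", "Opts": {}}
                         -> {"Err": ""}
POST /VolumeDriver.Mount    {"Name": "sz-api-data-abc123", "ID": "<caller-id>"}
                         -> {"Mountpoint": "/data/corestore/volumes/sz-api-data-abc123/data", "Err": ""}
```

From the CLI, such a volume can be created with `docker volume create --driver corestore sz-api-data-abc123`.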
The driver reports `Scope: local` because volume data is node-local. Backup orchestration should snapshot volumes through the control API before destructive changes or restores.
## Docker

```shell
pnpm run build:docker
```

The image exposes ports `3000`, `9000`, and `27017` and stores all runtime data under `/data/corestore`.
## Coreflow Integration Notes
The intended cluster behavior is:
- deploy `corestore` as a node-local/global service so every workload node has a local storage provider;
- provision `database` and `objectstorage` bindings through `/resources/provision`;
- mount service volumes through Docker `DriverConfig.Name = corestore`;
- snapshot and restore service volumes through `/volumes/snapshot` and `/volumes/restore`;
- merge the returned env vars into the workload Docker secret before service creation;
- mark Cloudly platform bindings `ready` with endpoint metadata and credential env refs;
- deprovision resources when the service binding or workload is deleted.
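The "merge the returned env vars" step above amounts to a record merge in which freshly provisioned values win over existing ones. A sketch under that assumption (`mergeProvisionedEnv` and the env shape are illustrative, not corestore's or Coreflow's API):

```typescript
type EnvMap = Record<string, string>;

// Merge env vars returned by /resources/provision into the env that will
// be written to the workload's Docker secret. Provisioned values win on
// key conflicts, so re-provisioning refreshes stale credentials.
function mergeProvisionedEnv(existing: EnvMap, provisioned: EnvMap): EnvMap {
  return { ...existing, ...provisioned };
}

const merged = mergeProvisionedEnv(
  { LOG_LEVEL: "info", MONGODB_URI: "mongodb://stale" },
  { MONGODB_URI: "mongodb://svc@corestore:27017/svc", S3_BUCKET: "sz-api-svc" },
);
```

Letting provisioned values take precedence keeps the secret authoritative for storage credentials while preserving unrelated workload configuration.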