# @serve.zone/corestore

corestore is the node-local serve.zone storage provider. It runs one container that starts:

- `@push.rocks/smartdb` as a MongoDB-compatible database endpoint on port 27017.
- `@push.rocks/smartstorage` as an S3-compatible object-storage endpoint on port 9000.
- A small control API on port 3000 for Coreflow provisioning.

## Purpose

Coreflow can run corestore on every node and provision per-service resources on whichever node hosts a workload that needs a database or object storage.

The first implementation exposes the provider container and provisioning API. Coreflow should call the control API when reconciling platform bindings, then inject the returned environment variables into the workload secret.
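
As a sketch, the Coreflow side of this reconciliation might look as follows. The endpoint, auth header, and request payload shape match the control API documented in this README; the helper names, the assumption that the response body is a flat env-var map, and the merge policy are illustrative, not Coreflow's actual implementation.

```typescript
// Hypothetical Coreflow-side helpers. Only the endpoint, headers, and payload
// shape come from the corestore docs; everything else is illustrative.
interface ProvisionRequest {
  serviceId: string;
  serviceName: string;
  capabilities: ("database" | "objectstorage")[];
}

// Call /resources/provision and return the env vars corestore hands back.
// (Assumes the response body is a flat map of env vars; the real shape
// may wrap them differently.)
async function provisionResources(
  host: string,
  token: string,
  req: ProvisionRequest,
): Promise<Record<string, string>> {
  const res = await fetch(`http://${host}:3000/resources/provision`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`provision failed: ${res.status}`);
  return (await res.json()) as Record<string, string>;
}

// Merge provisioned env vars into the workload secret payload. Values
// already set on the workload win over provisioned defaults.
function mergeIntoSecret(
  workloadEnv: Record<string, string>,
  provisioned: Record<string, string>,
): Record<string, string> {
  return { ...provisioned, ...workloadEnv };
}
```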

## Runtime

```shell
pnpm install
pnpm build
node cli.js
```

Default ports:

| Service | Port | Purpose |
| --- | --- | --- |
| Control API | 3000 | Provisioning, deprovisioning, health, metrics |
| S3 | 9000 | S3-compatible API from smartstorage |
| DB | 27017 | MongoDB wire protocol from smartdb |

Default data directory: `/data/corestore`.

## Configuration

| Env var | Default | Purpose |
| --- | --- | --- |
| `CORESTORE_DATA_DIR` | `/data/corestore` | Persistent data root |
| `CORESTORE_BIND_ADDRESS` | `0.0.0.0` | Bind address for all endpoints |
| `CORESTORE_PUBLIC_HOST` | `corestore` | Hostname injected into service credentials |
| `CORESTORE_CONTROL_PORT` | `3000` | Control API port |
| `CORESTORE_S3_PORT` | `9000` | S3 endpoint port |
| `CORESTORE_DB_PORT` | `27017` | Mongo-compatible DB endpoint port |
| `CORESTORE_REGION` | `us-east-1` | S3 region |
| `CORESTORE_API_TOKEN` | unset | Optional bearer token for mutating/read-sensitive control APIs |
| `CORESTORE_MASTER_SECRET` | generated and persisted | Seed for deterministic tenant credentials |
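
The deterministic-credential idea can be illustrated with an HMAC derivation: the same master secret and service ID always yield the same credential, so nothing needs to be stored per tenant. The scheme below is an assumption for illustration, not corestore's actual derivation.

```typescript
import { createHmac } from "node:crypto";

// Illustrative only: derive a stable per-service credential from the master
// secret. Deterministic derivation means credentials survive restarts without
// being persisted. corestore's real scheme may differ.
function deriveCredential(
  masterSecret: string,
  serviceId: string,
  purpose: "db-password" | "s3-secret-key",
): string {
  return createHmac("sha256", masterSecret)
    .update(`${serviceId}:${purpose}`)
    .digest("hex")
    .slice(0, 32);
}
```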

When Coreflow creates the global corestore service, it forwards its own CORESTORE_API_TOKEN environment variable into the service. Set the same value on Coreflow to protect provisioning APIs from workload containers on the same overlay network.
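
For illustration, the bearer check on the control side could look like this; it is a sketch of the described behavior, not corestore's actual middleware:

```typescript
// Sketch of a bearer-token gate for mutating control endpoints. When
// CORESTORE_API_TOKEN is unset the API stays open, matching the default
// documented above.
function isAuthorized(
  authHeader: string | undefined,
  apiToken: string | undefined,
): boolean {
  if (!apiToken) return true; // token not configured: no auth required
  if (!authHeader) return false;
  const [scheme, value] = authHeader.split(" ");
  return scheme === "Bearer" && value === apiToken;
}
```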

## Control API

Health is unauthenticated:

```shell
curl http://corestore:3000/health
```

Provision per-service DB and S3 resources:

```shell
curl -X POST http://corestore:3000/resources/provision \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"serviceId":"svc-123","serviceName":"api","capabilities":["database","objectstorage"]}'
```

The response contains service-specific env vars such as `MONGODB_URI`, `S3_BUCKET`, `AWS_ACCESS_KEY_ID`, and `AWS_ENDPOINT_URL`.
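
The exact response shape is not pinned down here; an illustrative payload (the `env` wrapper and all values are placeholders, only the variable names come from this README) might look like:

```json
{
  "serviceId": "svc-123",
  "env": {
    "MONGODB_URI": "<mongodb connection string>",
    "S3_BUCKET": "<bucket name>",
    "AWS_ACCESS_KEY_ID": "<access key>",
    "AWS_ENDPOINT_URL": "<s3 endpoint url>"
  }
}
```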

Deprovision a service:

```shell
curl -X POST http://corestore:3000/resources/deprovision \
  -H 'content-type: application/json' \
  -H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
  -d '{"serviceId":"svc-123"}'
```

## Docker

```shell
pnpm run build:docker
```

The image exposes 3000, 9000, and 27017 and stores all runtime data under `/data/corestore`.
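
A minimal run invocation might look like the following; the image tag and volume name are placeholders, while the ports and data path follow the defaults documented above:

```shell
# Illustrative only: <image-tag> is a placeholder for the locally built image.
docker run -d --name corestore \
  -p 3000:3000 -p 9000:9000 -p 27017:27017 \
  -v corestore-data:/data/corestore \
  -e CORESTORE_API_TOKEN=change-me \
  <image-tag>
```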

## Coreflow Integration Notes

The intended cluster behavior is:

- Deploy corestore as a node-local/global service so every workload node has a local storage provider.
- Provision database and objectstorage bindings through `/resources/provision`.
- Merge the returned env vars into the workload Docker secret before service creation.
- Mark Cloudly platform bindings ready with endpoint metadata and credential env refs.
- Deprovision resources when the service binding or workload is deleted.
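
The delete path in the notes above can be sketched as a request builder; the helper name is hypothetical, while the endpoint and payload come from the Control API section:

```typescript
// Build the /resources/deprovision request Coreflow would send when a
// binding or workload is deleted. Helper name and return shape are
// illustrative, not Coreflow's actual code.
function buildDeprovisionRequest(
  host: string,
  serviceId: string,
  token: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `http://${host}:3000/resources/deprovision`,
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ serviceId }),
    },
  };
}
```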