feat: add corestore volume driver

2026-05-02 18:58:21 +00:00
parent 29f0d94e86
commit 02d1b77ae8
6 changed files with 644 additions and 7 deletions
@@ -5,12 +5,11 @@
- `@push.rocks/smartdb` as a MongoDB-compatible database endpoint on port `27017`.
- `@push.rocks/smartstorage` as an S3-compatible object-storage endpoint on port `9000`.
- A small control API on port `3000` for Coreflow provisioning.
- A Docker VolumeDriver plugin on `/run/docker/plugins/corestore.sock`.
## Purpose
Coreflow can run `corestore` on every node and provision per-service resources on the node that hosts a workload requiring `database`, `objectstorage`, or persistent volumes.
The first implementation exposes the provider container and provisioning API. Coreflow should call the control API when reconciling platform bindings, then inject the returned environment variables into the workload secret.
## Runtime
@@ -43,6 +42,8 @@ Default data directory: `/data/corestore`.
| `CORESTORE_REGION` | `us-east-1` | S3 region |
| `CORESTORE_API_TOKEN` | unset | Optional bearer token for mutating/read-sensitive control APIs |
| `CORESTORE_MASTER_SECRET` | generated and persisted | Seed for deterministic tenant credentials |
| `CORESTORE_VOLUME_PLUGIN_SOCKET` | `/run/docker/plugins/corestore.sock` | Docker VolumeDriver socket path |
| `CORESTORE_ARCHIVE_PASSPHRASE` | unset | Optional encryption passphrase for volume snapshots |
When Coreflow creates the global `corestore` service, it forwards its own `CORESTORE_API_TOKEN` environment variable into the service. Set the same value on Coreflow to protect provisioning APIs from workload containers on the same overlay network.
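A deployment along these lines could be sketched as follows. This is a hedged example, not the actual Coreflow provisioning code: the image reference, overlay network name, and exact mount list are assumptions; the token forwarding, plugin socket directory, and data directory follow the documentation above.

```shell
# Sketch: run corestore as a node-local/global service.
# Image name and network are placeholders; adjust to the real deployment.
docker service create \
  --name corestore \
  --mode global \
  --network coreflow-overlay \
  --env CORESTORE_API_TOKEN="$CORESTORE_API_TOKEN" \
  --mount type=bind,source=/run/docker/plugins,target=/run/docker/plugins \
  --mount type=bind,source=/data/corestore,target=/data/corestore \
  example.registry/corestore:latest
```

The `/run/docker/plugins` bind mount is what lets the host's Docker daemon discover `corestore.sock`; `/data/corestore` keeps volume and credential state on the node across container restarts.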
@@ -74,6 +75,39 @@ curl -X POST http://corestore:3000/resources/deprovision \
-d '{"serviceId":"svc-123"}'
```
List managed volumes:
```bash
curl http://corestore:3000/volumes \
-H 'authorization: Bearer <CORESTORE_API_TOKEN>'
```
Snapshot a volume into the local `containerarchive` repository:
```bash
curl -X POST http://corestore:3000/volumes/snapshot \
-H 'content-type: application/json' \
-H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
-d '{"name":"sz-api-data-abc123","snapshotName":"before-deploy"}'
```
Restore a snapshot into an existing volume:
```bash
curl -X POST http://corestore:3000/volumes/restore \
-H 'content-type: application/json' \
-H 'authorization: Bearer <CORESTORE_API_TOKEN>' \
-d '{"name":"sz-api-data-abc123","snapshotId":"<snapshot-id>"}'
```
## Docker Volume Driver
Corestore implements Docker's legacy VolumeDriver API over a Unix socket. The `corestore` service must bind-mount `/run/docker/plugins` from the host so Docker can discover `/run/docker/plugins/corestore.sock`.
Docker calls `corestore` for `Create`, `Mount`, `Unmount`, `Remove`, `Path`, `Get`, `List`, and `Capabilities`. Mountpoints are real host paths under `/data/corestore/volumes/<volume>/data`; Docker bind-mounts those paths into workload containers.
The driver reports `Scope: local`, because volume data is node-local. Backup orchestration should snapshot volumes through the control API before destructive changes or restores.
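Once the socket is discoverable, consuming the driver from the Docker CLI might look like this sketch (the volume and container names are illustrative, taken from the snapshot example above):

```shell
# Create a volume backed by the corestore driver.
docker volume create --driver corestore sz-api-data-abc123

# Docker bind-mounts the node-local path
# /data/corestore/volumes/sz-api-data-abc123/data into the container.
docker run --rm -v sz-api-data-abc123:/var/lib/app alpine ls /var/lib/app

# Inspect what the driver reports for Get/Path.
docker volume inspect sz-api-data-abc123
```

Because the driver is `Scope: local`, the same volume name on another node refers to different data; placement must keep a workload on the node that holds its volume.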
## Docker
@@ -88,6 +122,8 @@ The intended cluster behavior is:
- deploy `corestore` as a node-local/global service so every workload node has a local storage provider;
- provision `database` and `objectstorage` bindings through `/resources/provision`;
- mount service volumes through Docker `DriverConfig.Name = corestore`;
- snapshot and restore service volumes through `/volumes/snapshot` and `/volumes/restore`;
- merge the returned env vars into the workload Docker secret before service creation;
- mark Cloudly platform bindings `ready` with endpoint metadata and credential env refs;
- deprovision resources when the service binding or workload is deleted.
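The reconcile steps above can be sketched end to end. This is an assumption-laden outline, not Coreflow's implementation: the provision request body and the shape of the returned env vars are guesses beyond the endpoints documented in this README.

```shell
# 1. Provision bindings for a service (request body shape is an assumption).
envs=$(curl -fsS -X POST http://corestore:3000/resources/provision \
  -H 'content-type: application/json' \
  -H "authorization: Bearer $CORESTORE_API_TOKEN" \
  -d '{"serviceId":"svc-123"}')

# 2. Merge the returned env vars into a secret for the workload
#    (assumes the response is usable as an env-file payload).
printf '%s' "$envs" > /tmp/svc-123.env
docker secret create svc-123-env /tmp/svc-123.env

# 3. Create the workload with a corestore-backed volume and the secret.
docker service create --name svc-123 \
  --secret svc-123-env \
  --mount type=volume,source=sz-api-data-abc123,target=/var/lib/app,volume-driver=corestore \
  example.registry/workload:latest

# 4. On deletion, tear the resources back down.
curl -fsS -X POST http://corestore:3000/resources/deprovision \
  -H 'content-type: application/json' \
  -H "authorization: Bearer $CORESTORE_API_TOKEN" \
  -d '{"serviceId":"svc-123"}'
```

Snapshots via `/volumes/snapshot` would slot in before step 4 or before any destructive redeploy, as noted above.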