Compare commits
89 Commits
| SHA1 |
|---|
| c420a30341 |
| fe109f0953 |
| 012dce63b1 |
| 54780482c7 |
| 7ab0fb3c1f |
| 713fda2a86 |
| ec32c19300 |
| 7d1d91157c |
| b69c96c240 |
| 9ee8851d03 |
| 7f6031f31a |
| 6f1b8469e0 |
| cd06c74cc3 |
| d3acc720ca |
| 1b6de75097 |
| 497f8f59a7 |
| 0c7d65e4ad |
| 3f2cd074ce |
| 59ed7233bd |
| 01e3ba16c4 |
| f5c1d5fcda |
| 45b0971f2f |
| 178f440d7e |
| 7fff15a90c |
| 69e23f667e |
| a2bf4df7c2 |
| 9e0a0b5a89 |
| 3a227bd838 |
| f5a7fccfc2 |
| a30d2029a5 |
| 88727dd47d |
| 9a5ed2220e |
| decd39e7c4 |
| ad2e228208 |
| cf06019d79 |
| cf44b0047d |
| 260b5364e6 |
| 51c1962042 |
| d3b78054ad |
| d2ae35f0ce |
| a605477663 |
| ba98086548 |
| 0b3c22556b |
| 069e6e6c8f |
| 10598520d8 |
| 075b7946b1 |
| f47fca3304 |
| 575e010a6b |
| 60a5dc4663 |
| 36d80b1e27 |
| 465cf0ee72 |
| bd5cd5c0cb |
| b622565e34 |
| 56376121ab |
| e3359d1235 |
| f1eeec6922 |
| 69362bb529 |
| 857fcc50ba |
| 5d0df006eb |
| e6256502ce |
| d5dc141171 |
| 2538f5ae2c |
| 4613193dcc |
| 848b3afe54 |
| dd86bae942 |
| 4691c61544 |
| dfb2d3b340 |
| 6a19ab05e3 |
| 7b718da7a2 |
| ebaf545418 |
| 2cdfdaed55 |
| 2216804652 |
| 1b177037f5 |
| 9d6590927c |
| eaf401200c |
| e97a4d53ae |
| ca2b3b25a5 |
| 19703de50d |
| bcab4f274e |
| 64e947735f |
| 1e05c08002 |
| 167df321f9 |
| 49998c4c32 |
| 8045ec38df |
| 793fb18b43 |
| 09534fd899 |
| 5f3783a5e9 |
| 92555c5a5e |
| ddc7fa4bee |
@@ -1,140 +0,0 @@
# Onebox Development Notes

## ⚠️ CRITICAL DEVELOPMENT RULES ⚠️

### NEVER GUESS - ALWAYS READ THE ACTUAL CODE

**ALWAYS look at a dependency's actual code. Never start guessing.**

Run `pnpm run watch` when starting work, so the UI gets recompiled and the server restarts automatically on file changes.

When working with any dependency:

1. **READ the actual source code** in `node_modules/` or check the package documentation
2. **CHECK the exact API** - don't assume based on similar libraries
3. **VERIFY method names, return types, and property structures** before using them
4. **TEST with the actual implementation** - APIs change between versions

Common mistakes to avoid:

- ❌ Assuming API structure based on similar libraries
- ❌ Guessing method names or property paths
- ❌ Using outdated documentation without checking the current version
- ✅ Read the actual TypeScript definitions in node_modules
- ✅ Check the package's README and changelog
- ✅ Test the actual behavior before implementing

## Architecture Changes

### Reverse Proxy Implementation
- **Replaced Nginx** with a native Deno reverse proxy (`ts/classes/reverseproxy.ts`)
- Features:
  - HTTP/HTTPS dual servers (ports 80/443)
  - TLS/SSL certificate management with hot-reload
  - WebSocket bidirectional proxying
  - Dynamic routing from database
  - SNI (Server Name Indication) support

### Code Organization
- Removed the "onebox." prefix from all TypeScript files
- Organized into subfolders:
  - `ts/classes/` - All class implementations
  - `ts/` - Root-level utilities (logging, types, plugins, cli, info)

### WebSocket Real-time Communication
- **Backend**: WebSocket endpoint at `/api/ws` (`ts/classes/httpserver.ts:96-174`)
  - Connection management with client Set tracking
  - Broadcast methods: `broadcast()`, `broadcastServiceUpdate()`, `broadcastServiceStatus()`
  - Integrated with the service lifecycle (start/stop/restart actions)
  - Status monitoring loop broadcasts changes automatically
- **Frontend**: Angular WebSocket service (`ui/src/app/core/services/websocket.service.ts`)
  - Auto-connects on app initialization
  - Exponential backoff reconnection (max 5 attempts)
  - RxJS Observable-based message streaming
  - Components subscribe to real-time updates
- **Message Types**:
  - `connected` - Initial connection confirmation
  - `service_update` - Service lifecycle changes (action: created/updated/deleted/started/stopped)
  - `service_status` - Real-time status changes from the monitoring loop
  - `system_status` - System-wide updates
- **Testing**: Use `.nogit/test-ws-updates.ts` to monitor WebSocket messages

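The reconnection schedule described above can be sketched numerically; the 1000 ms base delay and 30000 ms cap are assumptions for illustration, not values read from `websocket.service.ts`:

```bash
#!/usr/bin/env bash
# Sketch of an exponential-backoff reconnect schedule: the delay doubles on
# each failed attempt, capped at CAP_MS, giving up after MAX_ATTEMPTS.
# BASE_MS and CAP_MS are assumed values, not taken from the actual service.
BASE_MS=1000
CAP_MS=30000
MAX_ATTEMPTS=5

delay=$BASE_MS
for attempt in $(seq 1 $MAX_ATTEMPTS); do
  echo "attempt $attempt: wait ${delay}ms before reconnecting"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt "$CAP_MS" ]; then delay=$CAP_MS; fi
done
```

With five attempts this yields waits of 1000, 2000, 4000, 8000, and 16000 ms before the client gives up.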
### Docker Configuration
- **System Docker**: Uses root Docker at `/var/run/docker.sock` (NOT rootless)
- **Swarm Mode**: Enabled for service orchestration
- **API Access**: Interact with Docker via direct API calls to the socket
  - ❌ DO NOT switch Docker CLI contexts
  - ✅ Use curl/HTTP requests to `/var/run/docker.sock`
- **Network**: Overlay network `onebox-network` with `Attachable: true`
- **Services vs Containers**: All workloads run as Swarm services (not standalone containers)

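Following the socket-only rule above, a minimal query against the Docker Engine API looks like this; the `v1.43` API version prefix is an assumption (match it to your daemon), and the guard keeps the snippet harmless on machines without Docker:

```bash
#!/usr/bin/env bash
# Query the Docker Engine REST API over the unix socket directly instead of
# switching docker CLI contexts. v1.43 is an assumed API version.
SOCK=/var/run/docker.sock

if [ -S "$SOCK" ]; then
  # List Swarm services - the API equivalent of `docker service ls`
  curl -s --unix-socket "$SOCK" http://localhost/v1.43/services
else
  echo "docker socket not available at $SOCK"
fi
```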
## Debugging Tips

### Backend Logs
Use the background bash task to check server logs:

```bash
# Check for specific patterns (e.g., login attempts)
BashOutput tool with filter: "Login|error|Error"

# Check all recent output
BashOutput tool without filter
```

The dev server runs with `--watch`, so it auto-restarts on file changes.

### Frontend Testing
Use Playwright for UI testing:

```typescript
// Navigate to the app
mcp__playwright__browser_navigate({ url: "http://localhost:3000" })

// Fill the login form
mcp__playwright__browser_fill_form({
  fields: [
    { name: "Username", type: "textbox", ref: "...", value: "admin" },
    { name: "Password", type: "textbox", ref: "...", value: "admin" }
  ]
})

// Click the sign-in button
mcp__playwright__browser_click({ element: "Sign in button", ref: "..." })

// Check console errors
// Playwright automatically shows console messages in results
```

### Common Issues

#### Login Issue (Fixed)
**Problem**: `admin/admin` credentials returned "Invalid credentials"

**Root Cause**: The `rowToUser()` function in database.ts accessed rows as arrays (`row[2]`) instead of objects (`row.password_hash`). The @db/sqlite library returns rows as objects with snake_case column names.

**Fix**: Updated `rowToUser()` to support both access patterns:

```typescript
private rowToUser(row: any): IUser {
  return {
    passwordHash: String(row.password_hash || row[2]),
    // ... other fields
  };
}
```

**Location**: `ts/classes/database.ts:506-515`

## Default Credentials
- Username: `admin`
- Password: `admin`
- ⚠️ Change immediately after first login!

## Development Server
```bash
# Main server (port 3000)
deno task dev

# Check server status
curl http://localhost:3000/api/status
```

## API Endpoints
- `POST /api/auth/login` - Login (returns a JWT-like token)
- `GET /api/status` - System status (requires auth)
- `GET /api/services` - List services (requires auth)
- See `ts/classes/httpserver.ts` for the full API

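A hypothetical end-to-end walk through the endpoints above, assuming the dev server from `deno task dev` is running; the `token` field name in the login response is an assumption, so check the actual payload shape:

```bash
#!/usr/bin/env bash
# Log in, capture the token, then call an authenticated endpoint.
# Guarded so the snippet degrades gracefully when the server is down.
BASE=http://localhost:3000

if curl -s --max-time 2 "$BASE/api/status" >/dev/null 2>&1; then
  TOKEN=$(curl -s -X POST "$BASE/api/auth/login" \
    -H 'Content-Type: application/json' \
    -d '{"username":"admin","password":"admin"}' | jq -r '.token')
  curl -s "$BASE/api/services" -H "Authorization: Bearer $TOKEN"
else
  echo "onebox dev server not reachable at $BASE"
fi
```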
37
.gitea/release-template.md
Normal file
@@ -0,0 +1,37 @@
## Onebox {{VERSION}}

Pre-compiled binaries for multiple platforms.

### Installation

#### Option 1: Via npm (recommended)

```bash
npm install -g @serve.zone/onebox
```

#### Option 2: Via installer script

```bash
curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash
```

#### Option 3: Direct binary download

Download the appropriate binary for your platform from the assets below and make it executable.

### Supported Platforms

- Linux x86_64 (x64)
- Linux ARM64 (aarch64)
- macOS x86_64 (Intel)
- macOS ARM64 (Apple Silicon)
- Windows x86_64

### Checksums

SHA256 checksums are provided in `SHA256SUMS.txt` for binary verification.

### npm Package

The npm package includes automatic binary detection and installation for your platform.

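The `SHA256SUMS.txt` verification mentioned above is a plain `sha256sum -c` run in the download directory; this self-contained sketch fabricates a stand-in binary so the flow can be seen end to end:

```bash
#!/usr/bin/env bash
# Demonstrate checksum verification: record a digest, then verify it.
# The file here is a stand-in for a downloaded release binary.
set -e
cd "$(mktemp -d)"

echo "stand-in for the onebox binary" > onebox-linux-x64
sha256sum onebox-linux-x64 > SHA256SUMS.txt

# Prints "onebox-linux-x64: OK" when the recorded and computed digests match
sha256sum -c SHA256SUMS.txt
```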
114
.gitea/workflows/ci.yml
Normal file
@@ -0,0 +1,114 @@
name: CI

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  check:
    name: Type Check & Lint
    runs-on: ubuntu-latest
    container:
      image: code.foss.global/host.today/ht-docker-node:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x

      - name: Install dependencies
        run: deno install --entrypoint mod.ts

      - name: Check TypeScript types
        run: deno check mod.ts

      - name: Lint code
        run: deno lint
        continue-on-error: true

      - name: Format check
        run: deno fmt --check
        continue-on-error: true

  build:
    name: Build Test (Current Platform)
    runs-on: ubuntu-latest
    container:
      image: code.foss.global/host.today/ht-docker-node:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'

      - name: Enable corepack
        run: corepack enable

      - name: Install dependencies
        run: pnpm install --ignore-scripts

      - name: Compile for current platform
        run: |
          echo "Testing compilation for Linux x86_64..."
          npx tsdeno compile --allow-all --no-check \
            --output onebox-test \
            --target x86_64-unknown-linux-gnu mod.ts

      - name: Test binary execution
        run: |
          chmod +x onebox-test
          ./onebox-test --version
          ./onebox-test --help

  build-all:
    name: Build All Platforms
    runs-on: ubuntu-latest
    container:
      image: code.foss.global/host.today/ht-docker-node:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'

      - name: Enable corepack
        run: corepack enable

      - name: Install dependencies
        run: pnpm install --ignore-scripts

      - name: Compile all platform binaries
        run: mkdir -p dist/binaries && npx tsdeno compile

      - name: Upload all binaries as artifact
        uses: actions/upload-artifact@v3
        with:
          name: onebox-binaries.zip
          path: dist/binaries/*
          retention-days: 30

131
.gitea/workflows/npm-publish.yml
Normal file
@@ -0,0 +1,131 @@
name: Publish to npm

on:
  push:
    tags:
      - 'v*'

jobs:
  npm-publish:
    runs-on: ubuntu-latest
    container:
      image: code.foss.global/host.today/ht-docker-node:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x

      - name: Setup Node.js for npm publishing
        uses: actions/setup-node@v4
        with:
          node-version: '18.x'
          registry-url: 'https://registry.npmjs.org/'

      - name: Get version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "version_number=${VERSION#v}" >> $GITHUB_OUTPUT
          echo "Publishing version: $VERSION"

      - name: Verify deno.json version matches tag
        run: |
          DENO_VERSION=$(grep -o '"version": "[^"]*"' deno.json | cut -d'"' -f4)
          TAG_VERSION="${{ steps.version.outputs.version_number }}"
          echo "deno.json version: $DENO_VERSION"
          echo "Tag version: $TAG_VERSION"
          if [ "$DENO_VERSION" != "$TAG_VERSION" ]; then
            echo "ERROR: Version mismatch!"
            echo "deno.json has version $DENO_VERSION but tag is $TAG_VERSION"
            exit 1
          fi

      - name: Compile binaries for npm package
        run: |
          echo "Compiling binaries for npm package..."
          deno task compile
          echo ""
          echo "Binary sizes:"
          ls -lh dist/binaries/

      - name: Generate SHA256 checksums
        run: |
          cd dist/binaries
          sha256sum * > SHA256SUMS
          cat SHA256SUMS
          cd ../..

      - name: Sync package.json version
        run: |
          VERSION="${{ steps.version.outputs.version_number }}"
          echo "Syncing package.json to version ${VERSION}..."
          npm version ${VERSION} --no-git-tag-version --allow-same-version
          echo "package.json version: $(grep '"version"' package.json | head -1)"

      - name: Create npm package
        run: |
          echo "Creating npm package..."
          npm pack
          echo ""
          echo "Package created:"
          ls -lh *.tgz

      - name: Test local installation
        run: |
          echo "Testing local package installation..."
          PACKAGE_FILE=$(ls *.tgz)
          npm install -g ${PACKAGE_FILE}
          echo ""
          echo "Testing onebox command:"
          onebox --version || echo "Note: Binary execution may fail in CI environment"
          echo ""
          echo "Checking installed files:"
          npm ls -g @serve.zone/onebox || true

      - name: Publish to npm
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: |
          echo "Publishing to npm registry..."
          npm publish --access public
          echo ""
          echo "Successfully published @serve.zone/onebox to npm!"
          echo ""
          echo "Package info:"
          npm view @serve.zone/onebox

      - name: Verify npm package
        run: |
          echo "Waiting for npm propagation..."
          sleep 30
          echo ""
          echo "Verifying published package..."
          npm view @serve.zone/onebox
          echo ""
          echo "Testing installation from npm:"
          npm install -g @serve.zone/onebox
          echo ""
          echo "Package installed successfully!"
          which onebox || echo "Binary location check skipped"

      - name: Publish Summary
        run: |
          echo "================================================"
          echo " npm Publish Complete!"
          echo "================================================"
          echo ""
          echo "Package: @serve.zone/onebox"
          echo "Version: ${{ steps.version.outputs.version }}"
          echo ""
          echo "Installation:"
          echo "  npm install -g @serve.zone/onebox"
          echo ""
          echo "Registry:"
          echo "  https://www.npmjs.com/package/@serve.zone/onebox"
          echo ""

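The tag/`deno.json` version gate used in the workflows above can be exercised standalone; the file content and tag here are fabricated for illustration:

```bash
#!/usr/bin/env bash
# Reproduce the workflow's version check: the pushed tag (vX.Y.Z) must match
# the "version" field in deno.json, or the job fails.
set -e
cd "$(mktemp -d)"
printf '{\n  "version": "1.18.2"\n}\n' > deno.json

GITHUB_REF="refs/tags/v1.18.2"          # as set by the CI runner on a tag push
VERSION=${GITHUB_REF#refs/tags/}
TAG_VERSION=${VERSION#v}
DENO_VERSION=$(grep -o '"version": "[^"]*"' deno.json | cut -d'"' -f4)

if [ "$DENO_VERSION" = "$TAG_VERSION" ]; then
  echo "versions match: $DENO_VERSION"
else
  echo "ERROR: deno.json has $DENO_VERSION but tag is $TAG_VERSION" >&2
  exit 1
fi
```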
211
.gitea/workflows/release.yml
Normal file
@@ -0,0 +1,211 @@
name: Release

on:
  push:
    tags:
      - 'v*'

jobs:
  build-and-release:
    runs-on: ubuntu-latest
    container:
      image: code.foss.global/host.today/ht-docker-node:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Deno
        uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'

      - name: Enable corepack
        run: corepack enable

      - name: Install dependencies
        run: pnpm install --ignore-scripts

      - name: Get version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "version_number=${VERSION#v}" >> $GITHUB_OUTPUT
          echo "Building version: $VERSION"

      - name: Verify deno.json version matches tag
        run: |
          DENO_VERSION=$(grep -o '"version": "[^"]*"' deno.json | cut -d'"' -f4)
          TAG_VERSION="${{ steps.version.outputs.version_number }}"
          echo "deno.json version: $DENO_VERSION"
          echo "Tag version: $TAG_VERSION"
          if [ "$DENO_VERSION" != "$TAG_VERSION" ]; then
            echo "ERROR: Version mismatch!"
            echo "deno.json has version $DENO_VERSION but tag is $TAG_VERSION"
            exit 1
          fi

      - name: Compile binaries for all platforms
        run: mkdir -p dist/binaries && npx tsdeno compile

      - name: Generate SHA256 checksums
        run: |
          cd dist/binaries
          sha256sum * > SHA256SUMS.txt
          cat SHA256SUMS.txt
          cd ../..

      - name: Extract changelog for this version
        id: changelog
        run: |
          VERSION="${{ steps.version.outputs.version }}"

          # Check if CHANGELOG.md exists
          if [ ! -f CHANGELOG.md ] && [ ! -f changelog.md ]; then
            echo "No changelog found, using default release notes"
            cat > /tmp/release_notes.md << EOF
          ## Onebox $VERSION

          Pre-compiled binaries for multiple platforms.

          ### Installation

          Use the installation script:
          \`\`\`bash
          curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash
          \`\`\`

          Or download the binary for your platform and make it executable.

          ### Supported Platforms
          - Linux x86_64 (x64)
          - Linux ARM64 (aarch64)
          - macOS x86_64 (Intel)
          - macOS ARM64 (Apple Silicon)
          - Windows x86_64

          ### Checksums
          SHA256 checksums are provided in SHA256SUMS.txt
          EOF
          else
            CHANGELOG_FILE=$([ -f CHANGELOG.md ] && echo "CHANGELOG.md" || echo "changelog.md")
            awk "/## \[$VERSION\]/,/## \[/" "$CHANGELOG_FILE" | sed '$d' > /tmp/release_notes.md || cat > /tmp/release_notes.md << EOF
          ## Onebox $VERSION

          See changelog.md for full details.

          ### Installation

          Use the installation script:
          \`\`\`bash
          curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash
          \`\`\`
          EOF
          fi

          echo "Release notes:"
          cat /tmp/release_notes.md

      - name: Delete existing release if it exists
        run: |
          VERSION="${{ steps.version.outputs.version }}"

          echo "Checking for existing release $VERSION..."

          # Try to get existing release by tag
          EXISTING_RELEASE_ID=$(curl -s \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases/tags/$VERSION" \
            | jq -r '.id // empty')

          if [ -n "$EXISTING_RELEASE_ID" ]; then
            echo "Found existing release (ID: $EXISTING_RELEASE_ID), deleting..."
            curl -X DELETE -s \
              -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
              "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases/$EXISTING_RELEASE_ID"
            echo "Existing release deleted"
            sleep 2
          else
            echo "No existing release found, proceeding with creation"
          fi

      - name: Create Gitea Release
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          RELEASE_NOTES=$(cat /tmp/release_notes.md)

          # Create the release
          echo "Creating release for $VERSION..."
          RELEASE_ID=$(curl -X POST -s \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            -H "Content-Type: application/json" \
            "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases" \
            -d "{
              \"tag_name\": \"$VERSION\",
              \"name\": \"Onebox $VERSION\",
              \"body\": $(jq -Rs . /tmp/release_notes.md),
              \"draft\": false,
              \"prerelease\": false
            }" | jq -r '.id')

          echo "Release created with ID: $RELEASE_ID"

          # Upload binaries as release assets
          for binary in dist/binaries/*; do
            filename=$(basename "$binary")
            echo "Uploading $filename..."
            curl -X POST -s \
              -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@$binary" \
              "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases/$RELEASE_ID/assets?name=$filename"
          done

          echo "All assets uploaded successfully"

      - name: Clean up old releases
        run: |
          echo "Cleaning up old releases (keeping only last 3)..."

          # Fetch all releases sorted by creation date
          RELEASES=$(curl -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases" | \
            jq -r 'sort_by(.created_at) | reverse | .[3:] | .[].id')

          # Delete old releases
          if [ -n "$RELEASES" ]; then
            echo "Found releases to delete:"
            for release_id in $RELEASES; do
              echo "  Deleting release ID: $release_id"
              curl -X DELETE -s -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                "https://code.foss.global/api/v1/repos/serve.zone/onebox/releases/$release_id"
            done
            echo "Old releases deleted successfully"
          else
            echo "No old releases to delete (less than 4 releases total)"
          fi
          echo ""

      - name: Release Summary
        run: |
          echo "================================================"
          echo " Release ${{ steps.version.outputs.version }} Complete!"
          echo "================================================"
          echo ""
          echo "Binaries published:"
          ls -lh dist/binaries/
          echo ""
          echo "Release URL:"
          echo "https://code.foss.global/serve.zone/onebox/releases/tag/${{ steps.version.outputs.version }}"
          echo ""
          echo "Installation command:"
          echo "curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash"
          echo ""

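The changelog-extraction step above targets `## [vX.Y.Z]` headings, while this repo's changelog.md uses `## DATE - VERSION - scope` headings; here is a hedged standalone sketch of extracting one version's section in the latter style (a hypothetical helper, not the workflow's exact awk):

```bash
#!/usr/bin/env bash
# Print the block belonging to one version from a changelog whose headings
# look like "## 2026-03-16 - 1.18.0 - feat(...)".
cd "$(mktemp -d)"
cat > changelog.md <<'EOF'
# Changelog

## 2026-03-16 - 1.18.0 - feat(platform-services)
add platform service log retrieval

## 2026-03-16 - 1.17.4 - fix(docs)
add screenshot
EOF

# On each "## " heading, toggle printing based on whether it names our version
awk -v ver="1.18.0" '
  /^## / { inblock = ($0 ~ " - " ver " - ") }
  inblock
' changelog.md
```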
239
changelog.md
@@ -1,5 +1,244 @@
|
|||||||
# Changelog
|
# Changelog
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.18.2 - fix(repo)
|
||||||
|
no changes to commit
|
||||||
|
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.18.1 - fix(repo)
|
||||||
|
no changes to commit
|
||||||
|
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.18.0 - feat(platform-services)
|
||||||
|
add platform service log retrieval and display in the services UI
|
||||||
|
|
||||||
|
- add typed request support in the ops server to fetch Docker logs for platform service containers
|
||||||
|
- store fetched platform service logs in web app state and load them when opening platform service details
|
||||||
|
- render platform service logs in the services detail view and add sidebar icons for main navigation tabs
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.17.4 - fix(docs)
|
||||||
|
add hello world running screenshot for documentation
|
||||||
|
|
||||||
|
- Adds a new PNG asset showing the application in a running hello world state.
|
||||||
|
- Supports project documentation or README usage without changing runtime behavior.
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.17.3 - fix(mongodb)
|
||||||
|
downgrade the MongoDB service image to 4.4 and use the legacy mongo shell for container operations
|
||||||
|
|
||||||
|
- changes the default MongoDB container image from mongo:7 to mongo:4.4
|
||||||
|
- replaces mongosh with mongo for health checks, provisioning, and deprovisioning inside the container
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.17.2 - fix(platform-services)
|
||||||
|
provision ClickHouse, MinIO, and MongoDB resources via docker exec instead of host port access
|
||||||
|
|
||||||
|
- switch ClickHouse provisioning and teardown to in-container client commands to avoid host port mapping issues
|
||||||
|
- replace MinIO host-side S3 API calls with in-container mc commands for bucket creation and removal
|
||||||
|
- run MongoDB provisioning and deprovisioning through mongosh inside the container and improve docker exec failure reporting
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.17.1 - fix(repo)
|
||||||
|
no changes to commit
|
||||||
|
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.17.0 - feat(web/services)
|
||||||
|
add deploy service action to the services view
|
||||||
|
|
||||||
|
- Adds a prominent "Deploy Service" button to the services page header.
|
||||||
|
- Routes users into the create service view directly from the services listing.
|
||||||
|
- Includes a new service creation form screenshot asset for the updated interface.
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.16.0 - feat(services)
|
||||||
|
add platform service navigation and stats in the services UI
|
||||||
|
|
||||||
|
- add platform service stats state and fetch action
|
||||||
|
- show platform services in the services list and open a platform detail view
|
||||||
|
- enable dashboard clicks to jump directly to the selected platform service
|
||||||
|
- refresh platform service stats after start and restart actions
|
||||||
|
- bump @serve.zone/catalog to ^2.6.0 for the new platform service UI components
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.15.3 - fix(install)
|
||||||
|
refresh systemd service configuration before restarting previously running installations
|
||||||
|
|
||||||
|
- Re-enable the systemd service during updates so unit file changes are applied before restart
|
||||||
|
- Add a log message indicating the service configuration is being refreshed
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.15.2 - fix(systemd)
|
||||||
|
set HOME and DENO_DIR for the systemd service environment
|
||||||
|
|
||||||
|
- Adds HOME=/root to the generated onebox systemd unit
|
||||||
|
- Adds DENO_DIR=/root/.cache/deno so Deno cache paths are available when running as a service
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.15.1 - fix(systemd)
|
||||||
|
move Docker installation and swarm initialization to systemd enable flow
|
||||||
|
|
||||||
|
- Ensures Docker is installed before writing and enabling the systemd unit that depends on docker.service.
|
||||||
|
- Removes Docker auto-installation from Onebox initialization so setup happens in the service management path.
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.15.0 - feat(systemd)
|
||||||
|
replace smartdaemon-based service management with native systemd commands
|
||||||
|
|
||||||
|
- adds a dedicated OneboxSystemd manager for enabling, disabling, starting, stopping, checking status, and following logs
|
||||||
|
- introduces a new `onebox systemd` CLI command set and updates install and help output to use it
|
||||||
|
- removes the smartdaemon dependency and related service management code
|
||||||
|
|
||||||
|
## 2026-03-16 - 1.14.10 - fix(services)

stop auto-update monitoring during shutdown

- Track the auto-update polling interval in the services manager
- Clear the auto-update interval when Onebox shuts down to prevent background checks after shutdown

## 2026-03-16 - 1.14.9 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.8 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.7 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.6 - fix(project)

no changes to commit

## 2026-03-16 - 1.14.5 - fix(onebox)

move Docker auto-install and swarm initialization into Onebox startup flow

- removes Docker setup from daemon service installation
- ensures Docker is installed before Docker initialization during Onebox startup
- preserves automatic Docker Swarm initialization on fresh servers

## 2026-03-16 - 1.14.4 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.3 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.2 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.1 - fix(repo)

no changes to commit

## 2026-03-16 - 1.14.0 - feat(daemon)

auto-install Docker and initialize Swarm during daemon service setup

- Adds a Docker availability check before installing the Onebox daemon service
- Installs Docker automatically when it is missing using the standard installation script
- Attempts to initialize Docker Swarm after installation and handles already-initialized environments gracefully

## 2026-03-16 - 1.13.17 - fix(ci)

remove forced container image pulling from Gitea workflow jobs

- Drops the `--pull always` container option from CI, npm publish, and release workflows.
- Keeps workflow container images unchanged while avoiding forced pulls on every job run.

## 2026-03-16 - 1.13.16 - fix(ci)

refresh workflow container images on every run and bump @apiclient.xyz/docker to ^5.1.1

- add --pull always to CI, release, and npm publish workflow containers to avoid stale images
- update @apiclient.xyz/docker from ^5.1.0 to ^5.1.1 in deno.json

## 2026-03-15 - 1.13.15 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.14 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.13 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.12 - fix(ci)

run pnpm install with --ignore-scripts in CI and release workflows

- Update CI workflow dependency installation steps to skip lifecycle scripts during builds.
- Apply the same install change to the release workflow for consistent automation behavior.

## 2026-03-15 - 1.13.11 - fix(project)

no changes to commit

## 2026-03-15 - 1.13.10 - fix(deps)

bump @git.zone/tsdeno to ^1.2.0

- Updates the tsdeno development dependency from ^1.1.1 to ^1.2.0.

## 2026-03-15 - 1.13.9 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.8 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.7 - fix(repo)

no changes to commit

## 2026-03-15 - 1.13.6 - fix(ci)

correct workflow container image registry path

- Update Gitea CI, release, and npm publish workflows to use the corrected ht-docker-node image path
- Align all workflow container references from hosttoday to host.today to prevent pipeline image resolution issues

## 2026-03-15 - 1.13.5 - fix(workflows)

switch Gitea workflow containers from ht-docker-dbase to ht-docker-node

- Updates the CI, release, and npm publish workflows to use the Node-focused container image consistently.
- Aligns workflow runtime images with the project's Node and Deno build and publish steps.

## 2026-03-15 - 1.13.4 - fix(ci)

run workflows in the shared build container and enable corepack for pnpm installs

- adds the ht-docker-dbase container image to CI, release, and npm publish workflows
- enables corepack before pnpm install in build and release jobs to ensure package manager availability

## 2026-03-15 - 1.13.3 - fix(build)

replace custom Deno compile scripts with tsdeno-based binary builds in CI and release workflows

- adds @git.zone/tsdeno as a dev dependency and configures compile targets in npmextra.json
- updates CI and release workflows to install Node.js dependencies before running tsdeno compile
- removes the legacy scripts/compile-all.sh script and points the compile task to tsdeno compile

## 2026-03-15 - 1.13.2 - fix(scripts)

install production dependencies before compiling binaries and exclude local node_modules from builds

- Adds a dependency installation step using the application entrypoint before cross-platform compilation
- Updates all deno compile targets to use --node-modules-dir=none to avoid bundling local node_modules

## 2026-03-15 - 1.13.1 - fix(deno)

remove nodeModulesDir from Deno configuration

- Drops the explicit nodeModulesDir setting from deno.json.
- Keeps the package version unchanged at 1.13.0 while simplifying runtime configuration.

## 2026-03-15 - 1.13.0 - feat(install)

improve installer with version selection, service restart handling, and upgrade documentation

- Adds installer command-line options for help, specific version selection, and custom install directory.
- Fetches the latest release from the Gitea API when no version is provided and installs the matching platform binary.
- Preserves Onebox data directories, stops and restarts the systemd service during updates, and refreshes installation instructions in the README including upgrade usage.
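The "matching platform binary" step can be sketched as a condensed version of the installer's uname mapping (the shipped script additionally handles Windows via MINGW/MSYS/CYGWIN detection):

```shell
# Map `uname` output to the release binary name, as install.sh does.
detect_platform() {
  local os arch
  case "$(uname -s)" in
    Linux)  os=linux ;;
    Darwin) os=macos ;;
    *)      echo "unsupported OS" >&2; return 1 ;;
  esac
  case "$(uname -m)" in
    x86_64|amd64)  arch=x64 ;;
    aarch64|arm64) arch=arm64 ;;
    *)             echo "unsupported arch" >&2; return 1 ;;
  esac
  echo "onebox-${os}-${arch}"
}
detect_platform   # e.g. onebox-linux-x64 on a typical x86_64 server
```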
## 2026-03-15 - 1.12.1 - fix(package.json)

update package metadata

- Single metadata-only file changed (+1, -1)
- No source code or runtime behavior modified; safe patch release

## 2026-03-15 - 1.12.0 - feat(cli,release)

add self-upgrade command and automate CI, release, and npm publishing workflows

- adds a new `onebox upgrade` CLI command that checks the latest release and reinstalls the current binary via the installer script
- introduces Gitea CI workflows for type checks, build verification, multi-platform binary compilation, release creation, and npm publishing
- adds a reusable release template describing installation options, supported platforms, and checksum availability
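The upgrade flow described in this entry boils down to resolving a release tag and downloading the matching asset. A sketch of the URL construction, using the layout from this repo's installer; the actual `onebox upgrade` internals may differ, and the version and binary name here are example values:

```shell
GITEA_BASE_URL="https://code.foss.global"
GITEA_REPO="serve.zone/onebox"
VERSION="v1.12.0"              # normally resolved from the Gitea releases API
BINARY_NAME="onebox-linux-x64" # normally resolved from uname
DOWNLOAD_URL="${GITEA_BASE_URL}/${GITEA_REPO}/releases/download/${VERSION}/${BINARY_NAME}"
echo "$DOWNLOAD_URL"
# → https://code.foss.global/serve.zone/onebox/releases/download/v1.12.0/onebox-linux-x64
```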
## 2026-03-03 - 1.11.0 - feat(services)

map backend service data to UI components, add stats & logs parsing, fetch service stats, and fix logs request param
create-service-form.png (new binary file, 59 KiB; not shown)
deno.json

```diff
@@ -1,12 +1,11 @@
 {
   "name": "@serve.zone/onebox",
-  "version": "1.11.0",
+  "version": "1.18.2",
   "exports": "./mod.ts",
-  "nodeModulesDir": "auto",
   "tasks": {
     "test": "deno test --allow-all test/",
     "test:watch": "deno test --allow-all --watch test/",
-    "compile": "bash scripts/compile-all.sh",
+    "compile": "tsdeno compile",
     "dev": "pnpm run watch"
   },
   "imports": {
@@ -16,8 +15,7 @@
     "@std/assert": "jsr:@std/assert@^1.0.15",
     "@std/encoding": "jsr:@std/encoding@^1.0.10",
     "@db/sqlite": "jsr:@db/sqlite@0.12.0",
-    "@push.rocks/smartdaemon": "npm:@push.rocks/smartdaemon@^2.1.0",
-    "@apiclient.xyz/docker": "npm:@apiclient.xyz/docker@^5.1.0",
+    "@apiclient.xyz/docker": "npm:@apiclient.xyz/docker@^5.1.1",
     "@apiclient.xyz/cloudflare": "npm:@apiclient.xyz/cloudflare@6.4.3",
     "@push.rocks/smartacme": "npm:@push.rocks/smartacme@^8.0.0",
     "@push.rocks/smartregistry": "npm:@push.rocks/smartregistry@^2.2.0",
```
hello-world-running.png (new binary file, 49 KiB; not shown)
install.sh (448 changed lines)

```diff
@@ -1,192 +1,310 @@
 #!/bin/bash
 
+# Onebox Installer Script
+# Downloads and installs pre-compiled Onebox binary from Gitea releases
 #
-# Onebox installer script
+# Usage:
+#   Direct piped installation (recommended):
+#   curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash
 #
+#   With version specification:
+#   curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash -s -- --version v1.11.0
+#
+# Options:
+#   -h, --help         Show this help message
+#   --version VERSION  Install specific version (e.g., v1.11.0)
+#   --install-dir DIR  Installation directory (default: /opt/onebox)
 
 set -e
 
-# Configuration
-REPO_URL="https://code.foss.global/serve.zone/onebox"
+# Default values
+SHOW_HELP=0
+SPECIFIED_VERSION=""
 INSTALL_DIR="/opt/onebox"
-BIN_LINK="/usr/local/bin/onebox"
+GITEA_BASE_URL="https://code.foss.global"
+GITEA_REPO="serve.zone/onebox"
+SERVICE_NAME="onebox"
 
-# Colors
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-NC='\033[0m' # No Color
+# Parse command line arguments
+while [[ $# -gt 0 ]]; do
+    case $1 in
+        -h|--help)
+            SHOW_HELP=1
+            shift
+            ;;
+        --version)
+            SPECIFIED_VERSION="$2"
+            shift 2
+            ;;
+        --install-dir)
+            INSTALL_DIR="$2"
+            shift 2
+            ;;
+        *)
+            echo "Unknown option: $1"
+            echo "Use -h or --help for usage information"
+            exit 1
+            ;;
+    esac
+done
 
-# Functions
-error() {
-    echo -e "${RED}Error: $1${NC}" >&2
-    exit 1
-}
-
-info() {
-    echo -e "${GREEN}$1${NC}"
-}
-
-warn() {
-    echo -e "${YELLOW}$1${NC}"
-}
-
-# Detect platform and architecture
-detect_platform() {
-    OS=$(uname -s | tr '[:upper:]' '[:lower:]')
-    ARCH=$(uname -m)
-
-    case "$OS" in
-        linux)
-            PLATFORM="linux"
-            ;;
-        darwin)
-            PLATFORM="macos"
-            ;;
-        *)
-            error "Unsupported operating system: $OS"
-            ;;
-    esac
-
-    case "$ARCH" in
-        x86_64|amd64)
-            ARCH="x64"
-            ;;
-        aarch64|arm64)
-            ARCH="arm64"
-            ;;
-        *)
-            error "Unsupported architecture: $ARCH"
-            ;;
-    esac
-
-    BINARY_NAME="onebox-${PLATFORM}-${ARCH}"
-}
-
-# Get latest version from Gitea API
-get_latest_version() {
-    info "Fetching latest version..."
-    VERSION=$(curl -s "${REPO_URL}/releases" | grep -o '"tag_name":"v[^"]*' | head -1 | cut -d'"' -f4 | cut -c2-)
-
-    if [ -z "$VERSION" ]; then
-        warn "Could not fetch latest version, using 'main' branch"
-        VERSION="main"
-    else
-        info "Latest version: v${VERSION}"
-    fi
-}
+if [ $SHOW_HELP -eq 1 ]; then
+    echo "Onebox Installer Script"
+    echo "Downloads and installs pre-compiled Onebox binary"
+    echo ""
+    echo "Usage: $0 [options]"
+    echo ""
+    echo "Options:"
+    echo "  -h, --help         Show this help message"
+    echo "  --version VERSION  Install specific version (e.g., v1.11.0)"
+    echo "  --install-dir DIR  Installation directory (default: /opt/onebox)"
+    echo ""
+    echo "Examples:"
+    echo "  # Install latest version"
+    echo "  curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash"
+    echo ""
+    echo "  # Install specific version"
+    echo "  curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash -s -- --version v1.11.0"
+    exit 0
+fi
 
 # Check if running as root
-check_root() {
-    if [ "$EUID" -ne 0 ]; then
-        error "This script must be run as root (use sudo)"
-    fi
-}
+if [ "$EUID" -ne 0 ]; then
+    echo "Please run as root (sudo bash install.sh or pipe to sudo bash)"
+    exit 1
+fi
+
+# Helper function to detect OS and architecture
+detect_platform() {
+    local os=$(uname -s)
+    local arch=$(uname -m)
+
+    # Map OS
+    case "$os" in
+        Linux)
+            os_name="linux"
+            ;;
+        Darwin)
+            os_name="macos"
+            ;;
+        MINGW*|MSYS*|CYGWIN*)
+            os_name="windows"
+            ;;
+        *)
+            echo "Error: Unsupported operating system: $os"
+            echo "Supported: Linux, macOS, Windows"
+            exit 1
+            ;;
+    esac
+
+    # Map architecture
+    case "$arch" in
+        x86_64|amd64)
+            arch_name="x64"
+            ;;
+        aarch64|arm64)
+            arch_name="arm64"
+            ;;
+        *)
+            echo "Error: Unsupported architecture: $arch"
+            echo "Supported: x86_64/amd64 (x64), aarch64/arm64 (arm64)"
+            exit 1
+            ;;
+    esac
+
+    # Construct binary name
+    if [ "$os_name" = "windows" ]; then
+        echo "onebox-${os_name}-${arch_name}.exe"
+    else
+        echo "onebox-${os_name}-${arch_name}"
+    fi
+}
+
+# Get latest release version from Gitea API
+get_latest_version() {
+    echo "Fetching latest release version from Gitea..." >&2
+
+    local api_url="${GITEA_BASE_URL}/api/v1/repos/${GITEA_REPO}/releases/latest"
+    local response=$(curl -sSL "$api_url" 2>/dev/null)
+
+    if [ $? -ne 0 ] || [ -z "$response" ]; then
+        echo "Error: Failed to fetch latest release information from Gitea API" >&2
+        echo "URL: $api_url" >&2
+        exit 1
+    fi
+
+    # Extract tag_name from JSON response
+    local version=$(echo "$response" | grep -o '"tag_name":"[^"]*"' | cut -d'"' -f4)
+
+    if [ -z "$version" ]; then
+        echo "Error: Could not determine latest version from API response" >&2
+        exit 1
+    fi
+
+    echo "$version"
+}
+
+# Main installation process
+echo "================================================"
+echo " Onebox Installation Script"
+echo "================================================"
+echo ""
+
+# Detect platform
+BINARY_NAME=$(detect_platform)
+echo "Detected platform: $BINARY_NAME"
+echo ""
+
+# Determine version to install
+if [ -n "$SPECIFIED_VERSION" ]; then
+    VERSION="$SPECIFIED_VERSION"
+    echo "Installing specified version: $VERSION"
+else
+    VERSION=$(get_latest_version)
+    echo "Installing latest version: $VERSION"
+fi
+echo ""
+
+# Construct download URL
+DOWNLOAD_URL="${GITEA_BASE_URL}/${GITEA_REPO}/releases/download/${VERSION}/${BINARY_NAME}"
+echo "Download URL: $DOWNLOAD_URL"
+echo ""
+
+# Check if service is running and stop it
+SERVICE_WAS_RUNNING=0
+if systemctl is-enabled --quiet "$SERVICE_NAME" 2>/dev/null || systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null; then
+    SERVICE_WAS_RUNNING=1
+    if systemctl is-active --quiet "$SERVICE_NAME" 2>/dev/null; then
+        echo "Stopping Onebox service..."
+        systemctl stop "$SERVICE_NAME"
+    fi
+fi
+
+# Clean installation directory - ensure only binary exists
+if [ -d "$INSTALL_DIR" ]; then
+    echo "Cleaning installation directory: $INSTALL_DIR"
+    rm -rf "$INSTALL_DIR"
+fi
+
+# Create fresh installation directory
+echo "Creating installation directory: $INSTALL_DIR"
+mkdir -p "$INSTALL_DIR"
 
 # Download binary
-download_binary() {
-    info "Downloading Onebox ${VERSION} for ${PLATFORM}-${ARCH}..."
-
-    # Create temp directory
-    TMP_DIR=$(mktemp -d)
-    TMP_FILE="${TMP_DIR}/${BINARY_NAME}"
-
-    # Try release download first
-    if [ "$VERSION" != "main" ]; then
-        DOWNLOAD_URL="${REPO_URL}/releases/download/v${VERSION}/${BINARY_NAME}"
-    else
-        DOWNLOAD_URL="${REPO_URL}/raw/branch/main/dist/binaries/${BINARY_NAME}"
-    fi
-
-    if ! curl -L -f -o "$TMP_FILE" "$DOWNLOAD_URL"; then
-        error "Failed to download binary from $DOWNLOAD_URL"
-    fi
-
-    # Verify download
-    if [ ! -f "$TMP_FILE" ] || [ ! -s "$TMP_FILE" ]; then
-        error "Downloaded file is empty or missing"
-    fi
-
-    info "✓ Download complete"
-}
-
-# Install binary
-install_binary() {
-    info "Installing Onebox to ${INSTALL_DIR}..."
-
-    # Create install directory
-    mkdir -p "$INSTALL_DIR"
-
-    # Copy binary
-    cp "$TMP_FILE" "${INSTALL_DIR}/onebox"
-    chmod +x "${INSTALL_DIR}/onebox"
-
-    # Create symlink
-    ln -sf "${INSTALL_DIR}/onebox" "$BIN_LINK"
-
-    # Cleanup temp files
-    rm -rf "$TMP_DIR"
-
-    info "✓ Installation complete"
-}
-
-# Initialize database and config
-initialize() {
-    info "Initializing Onebox..."
-
-    # Create data directory
-    mkdir -p /var/lib/onebox
-
-    # Create certbot directory for ACME challenges
-    mkdir -p /var/www/certbot
-
-    info "✓ Initialization complete"
-}
-
-# Print success message
-print_success() {
-    echo ""
-    info "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-    info " Onebox installed successfully!"
-    info "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-    echo ""
-    echo "Next steps:"
-    echo ""
-    echo "1. Configure Cloudflare (optional):"
-    echo "   onebox config set cloudflareAPIKey <key>"
-    echo "   onebox config set cloudflareEmail <email>"
-    echo "   onebox config set cloudflareZoneID <zone-id>"
-    echo "   onebox config set serverIP <your-server-ip>"
-    echo ""
-    echo "2. Configure ACME email:"
-    echo "   onebox config set acmeEmail <your@email.com>"
-    echo ""
-    echo "3. Install daemon:"
-    echo "   onebox daemon install"
-    echo ""
-    echo "4. Start daemon:"
-    echo "   onebox daemon start"
-    echo ""
-    echo "5. Deploy your first service:"
-    echo "   onebox service add myapp --image nginx:latest --domain app.example.com"
-    echo ""
-    echo "Web UI: http://localhost:3000"
-    echo "Default credentials: admin / admin"
-    echo ""
-}
-
-# Main installation flow
-main() {
-    info "Onebox Installer"
-    echo ""
-
-    check_root
-    detect_platform
-    get_latest_version
-    download_binary
-    install_binary
-    initialize
-    print_success
-}
-
-# Run main function
-main
+echo "Downloading Onebox binary..."
+TEMP_FILE="$INSTALL_DIR/onebox.download"
+curl -sSL "$DOWNLOAD_URL" -o "$TEMP_FILE"
+
+if [ $? -ne 0 ]; then
+    echo "Error: Failed to download binary from $DOWNLOAD_URL"
+    echo ""
+    echo "Please check:"
+    echo "  1. Your internet connection"
+    echo "  2. The specified version exists: ${GITEA_BASE_URL}/${GITEA_REPO}/releases"
+    echo "  3. The platform binary is available for this release"
+    rm -f "$TEMP_FILE"
+    exit 1
+fi
+
+# Check if download was successful (file exists and not empty)
+if [ ! -s "$TEMP_FILE" ]; then
+    echo "Error: Downloaded file is empty or does not exist"
+    rm -f "$TEMP_FILE"
+    exit 1
+fi
+
+# Move to final location
+BINARY_PATH="$INSTALL_DIR/onebox"
+mv "$TEMP_FILE" "$BINARY_PATH"
+
+if [ $? -ne 0 ] || [ ! -f "$BINARY_PATH" ]; then
+    echo "Error: Failed to move binary to $BINARY_PATH"
+    rm -f "$TEMP_FILE" 2>/dev/null
+    exit 1
+fi
+
+# Make executable
+chmod +x "$BINARY_PATH"
+
+if [ $? -ne 0 ]; then
+    echo "Error: Failed to make binary executable"
+    exit 1
+fi
+
+echo "Binary installed successfully to: $BINARY_PATH"
+echo ""
+
+# Check if /usr/local/bin is in PATH
+if [[ ":$PATH:" == *":/usr/local/bin:"* ]]; then
+    BIN_DIR="/usr/local/bin"
+else
+    BIN_DIR="/usr/bin"
+fi
+
+# Create symlink for global access
+ln -sf "$BINARY_PATH" "$BIN_DIR/onebox"
+echo "Symlink created: $BIN_DIR/onebox -> $BINARY_PATH"
+echo ""
+
+# Create data directories
+mkdir -p /var/lib/onebox
+mkdir -p /var/www/certbot
+
+# Re-enable and restart service if it was previously running (refreshes unit file)
+if [ $SERVICE_WAS_RUNNING -eq 1 ]; then
+    echo "Refreshing systemd service..."
+    onebox systemd enable
+    echo "Restarting Onebox service..."
+    systemctl restart "$SERVICE_NAME"
+    echo "Service restarted successfully."
+    echo ""
+fi
+
+echo "================================================"
+echo " Onebox Installation Complete!"
+echo "================================================"
+echo ""
+echo "Installation details:"
+echo "  Binary location: $BINARY_PATH"
+echo "  Symlink location: $BIN_DIR/onebox"
+echo "  Version: $VERSION"
+echo ""
+
+# Check if database exists (indicates existing installation)
+if [ -f "/var/lib/onebox/onebox.db" ]; then
+    echo "Data directory: /var/lib/onebox (preserved)"
+    echo ""
+    echo "Your existing data has been preserved."
+    if [ $SERVICE_WAS_RUNNING -eq 1 ]; then
+        echo "The service has been restarted with your current settings."
+    else
+        echo "Start the service with: onebox systemd start"
+    fi
+else
+    echo "Get started:"
+    echo ""
+    echo "  onebox --version"
+    echo "  onebox --help"
+    echo ""
+    echo "  1. Configure Cloudflare (optional):"
+    echo "     onebox config set cloudflareAPIKey <key>"
+    echo "     onebox config set cloudflareEmail <email>"
+    echo "     onebox config set cloudflareZoneID <zone-id>"
+    echo "     onebox config set serverIP <your-server-ip>"
+    echo ""
+    echo "  2. Configure ACME email:"
+    echo "     onebox config set acmeEmail <your@email.com>"
+    echo ""
+    echo "  3. Enable systemd service:"
+    echo "     onebox systemd enable"
+    echo ""
+    echo "  4. Start service:"
+    echo "     onebox systemd start"
+    echo ""
+    echo "  5. Deploy your first service:"
+    echo "     onebox service add myapp --image nginx:latest --domain app.example.com"
+    echo ""
+    echo "  Web UI: http://localhost:3000"
+    echo "  Default credentials: admin / admin"
+fi
+echo ""
```
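The tag-extraction pipeline that the new `get_latest_version` relies on can be exercised offline against a canned API response (the payload below is a made-up example, not real release data):

```shell
# Same grep/cut pipeline as install.sh, fed a canned Gitea API payload
# instead of the live /releases/latest endpoint.
response='{"id":1,"tag_name":"v1.18.2","name":"v1.18.2","draft":false}'
version=$(echo "$response" | grep -o '"tag_name":"[^"]*"' | cut -d'"' -f4)
echo "$version"   # → v1.18.2
```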
npmextra.json

```diff
@@ -11,6 +11,26 @@
       }
     ]
   },
+  "@git.zone/tsdeno": {
+    "compileTargets": [
+      {
+        "name": "onebox-linux-x64",
+        "entryPoint": "mod.ts",
+        "outDir": "dist/binaries",
+        "target": "x86_64-unknown-linux-gnu",
+        "permissions": ["--allow-all"],
+        "noCheck": true
+      },
+      {
+        "name": "onebox-linux-arm64",
+        "entryPoint": "mod.ts",
+        "outDir": "dist/binaries",
+        "target": "aarch64-unknown-linux-gnu",
+        "permissions": ["--allow-all"],
+        "noCheck": true
+      }
+    ]
+  },
   "@git.zone/tswatch": {
     "bundles": [
       {
```
package.json

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@serve.zone/onebox",
-  "version": "1.11.0",
+  "version": "1.18.2",
   "description": "Self-hosted container platform with automatic SSL and DNS - a mini Heroku for single servers",
   "main": "mod.ts",
   "type": "module",
@@ -57,10 +57,11 @@
     "@api.global/typedrequest-interfaces": "^3.0.19",
     "@design.estate/dees-catalog": "^3.43.3",
     "@design.estate/dees-element": "^2.1.6",
-    "@serve.zone/catalog": "^2.5.0"
+    "@serve.zone/catalog": "^2.6.0"
   },
   "devDependencies": {
     "@git.zone/tsbundle": "^2.9.0",
+    "@git.zone/tsdeno": "^1.2.0",
     "@git.zone/tswatch": "^3.2.0"
   }
 }
```
pnpm-lock.yaml (939 changed lines, generated): diff suppressed because it is too large.
readme.md (16 changed lines)

````diff
@@ -47,10 +47,11 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
 ### Installation
 
 ```bash
-# Download the latest release for your platform
-curl -sSL https://code.foss.global/serve.zone/onebox/releases/latest/download/onebox-linux-x64 -o onebox
-chmod +x onebox
-sudo mv onebox /usr/local/bin/
+# One-line install (recommended)
+curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash
+
+# Install a specific version
+curl -sSL https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh | sudo bash -s -- --version v1.11.0
 
 # Or install from npm
 pnpm install -g @serve.zone/onebox
@@ -242,6 +243,13 @@ onebox config set cloudflareZoneID your-zone-id
 onebox status
 ```
 
+### Upgrade
+
+```bash
+# Upgrade to the latest version (requires root)
+sudo onebox upgrade
+```
+
 ## Configuration 🔧
 
 ### System Requirements
````
scripts/compile-all.sh (deleted)

```diff
@@ -1,56 +0,0 @@
-#!/bin/bash
-#
-# Compile Onebox for all platforms
-#
-
-set -e
-
-VERSION=$(grep '"version"' deno.json | cut -d'"' -f4)
-echo "Compiling Onebox v${VERSION} for all platforms..."
-
-# Create dist directory
-mkdir -p dist/binaries
-
-# Compile for each platform
-echo "Compiling for Linux x64..."
-deno compile --allow-all --no-check \
-    --output "dist/binaries/onebox-linux-x64" \
-    --target x86_64-unknown-linux-gnu \
-    mod.ts
-
-echo "Compiling for Linux ARM64..."
-deno compile --allow-all --no-check \
-    --output "dist/binaries/onebox-linux-arm64" \
-    --target aarch64-unknown-linux-gnu \
-    mod.ts
-
-echo "Compiling for macOS x64..."
-deno compile --allow-all --no-check \
-    --output "dist/binaries/onebox-macos-x64" \
-    --target x86_64-apple-darwin \
-    mod.ts
-
-echo "Compiling for macOS ARM64..."
-deno compile --allow-all --no-check \
-    --output "dist/binaries/onebox-macos-arm64" \
-    --target aarch64-apple-darwin \
-    mod.ts
-
-echo "Compiling for Windows x64..."
-deno compile --allow-all --no-check \
-    --output "dist/binaries/onebox-windows-x64.exe" \
-    --target x86_64-pc-windows-msvc \
-    mod.ts
-
-echo ""
-echo "✓ Compilation complete!"
-echo ""
-echo "Binaries:"
-ls -lh dist/binaries/
-echo ""
-echo "Next steps:"
-echo "1. Test binaries on their respective platforms"
-echo "2. Create git tag: git tag v${VERSION}"
-echo "3. Push tag: git push origin v${VERSION}"
-echo "4. Upload binaries to Gitea release"
-echo "5. Publish to npm: pnpm publish"
```
sidebar-icons.png (new binary file, 52 KiB; not shown)
```diff
@@ -3,6 +3,6 @@
  */
 export const commitinfo = {
   name: '@serve.zone/onebox',
-  version: '1.11.0',
+  version: '1.18.2',
   description: 'Self-hosted container platform with automatic SSL and DNS - a mini Heroku for single servers'
 }
```
|||||||
@@ -4,9 +4,7 @@
|
|||||||
* Handles background monitoring, metrics collection, and automatic tasks
|
* Handles background monitoring, metrics collection, and automatic tasks
|
||||||
*/
|
*/
|
||||||
|
|
||||||
import * as plugins from '../plugins.ts';
|
|
||||||
import { logger } from '../logging.ts';
|
import { logger } from '../logging.ts';
|
||||||
import { projectInfo } from '../info.ts';
|
|
||||||
import { getErrorMessage } from '../utils/error.ts';
|
import { getErrorMessage } from '../utils/error.ts';
|
||||||
import type { Onebox } from './onebox.ts';
|
import type { Onebox } from './onebox.ts';
|
||||||
|
|
||||||
@@ -18,7 +16,6 @@ const FALLBACK_PID_FILE = `${FALLBACK_PID_DIR}/onebox.pid`;
|
|||||||
|
|
||||||
export class OneboxDaemon {
|
export class OneboxDaemon {
|
||||||
private oneboxRef: Onebox;
|
private oneboxRef: Onebox;
|
||||||
private smartdaemon: plugins.smartdaemon.SmartDaemon | null = null;
|
|
||||||
private running = false;
|
private running = false;
|
||||||
private monitoringInterval: number | null = null;
|
private monitoringInterval: number | null = null;
|
||||||
private statsInterval: number | null = null;
|
private statsInterval: number | null = null;
|
||||||
@@ -46,68 +43,6 @@ export class OneboxDaemon {
     }
   }
 
-  /**
-   * Install systemd service
-   */
-  async installService(): Promise<void> {
-    try {
-      logger.info('Installing Onebox daemon service...');
-
-      // Initialize smartdaemon if needed
-      if (!this.smartdaemon) {
-        this.smartdaemon = new plugins.smartdaemon.SmartDaemon();
-      }
-
-      // Get installation directory
-      const execPath = Deno.execPath();
-
-      const service = await this.smartdaemon.addService({
-        name: 'onebox',
-        version: projectInfo.version,
-        command: `${execPath} run --allow-all ${Deno.cwd()}/mod.ts daemon start`,
-        description: 'Onebox - Self-hosted container platform',
-        workingDir: Deno.cwd(),
-      });
-
-      await service.save();
-      await service.enable();
-
-      logger.success('Onebox daemon service installed');
-      logger.info('Start with: sudo systemctl start smartdaemon_onebox');
-    } catch (error) {
-      logger.error(`Failed to install daemon service: ${getErrorMessage(error)}`);
-      throw error;
-    }
-  }
-
-  /**
-   * Uninstall systemd service
-   */
-  async uninstallService(): Promise<void> {
-    try {
-      logger.info('Uninstalling Onebox daemon service...');
-
-      // Initialize smartdaemon if needed
-      if (!this.smartdaemon) {
-        this.smartdaemon = new plugins.smartdaemon.SmartDaemon();
-      }
-
-      const services = await this.smartdaemon.systemdManager.getServices();
-      const service = services.find(s => s.name === 'onebox');
-
-      if (service) {
-        await service.stop();
-        await service.disable();
-        await service.delete();
-      }
-
-      logger.success('Onebox daemon service uninstalled');
-    } catch (error) {
-      logger.error(`Failed to uninstall daemon service: ${getErrorMessage(error)}`);
-      throw error;
-    }
-  }
-
   /**
    * Start daemon mode (background monitoring)
    */
@@ -482,36 +417,7 @@ export class OneboxDaemon {
   static async ensureNoDaemon(): Promise<void> {
     const running = await OneboxDaemon.isDaemonRunning();
     if (running) {
-      throw new Error('Daemon is already running. Please stop it first with: onebox daemon stop');
+      throw new Error('Daemon is already running. Please stop it first with: onebox systemd stop');
-    }
-  }
-
-  /**
-   * Get service status from systemd
-   */
-  async getServiceStatus(): Promise<string> {
-    try {
-      // Don't need smartdaemon to check status, just use systemctl directly
-      const command = new Deno.Command('systemctl', {
-        args: ['status', 'smartdaemon_onebox'],
-        stdout: 'piped',
-        stderr: 'piped',
-      });
-
-      const { code, stdout } = await command.output();
-      const output = new TextDecoder().decode(stdout);
-
-      if (code === 0 || output.includes('active (running)')) {
-        return 'running';
-      } else if (output.includes('inactive') || output.includes('dead')) {
-        return 'stopped';
-      } else if (output.includes('failed')) {
-        return 'failed';
-      } else {
-        return 'unknown';
-      }
-    } catch (error) {
-      return 'not-installed';
     }
   }
 }
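The removed `getServiceStatus()` above maps `systemctl status` output to a status string. A minimal sketch of that mapping as a pure function, so the precedence of the checks is explicit; `parseServiceStatus` is a hypothetical name, not part of the Onebox API:

```typescript
// Hypothetical helper mirroring the removed getServiceStatus() logic.
// A zero exit code or an "active (running)" marker wins; the remaining
// checks are only reached when the unit is not running.
function parseServiceStatus(code: number, output: string): string {
  if (code === 0 || output.includes('active (running)')) return 'running';
  if (output.includes('inactive') || output.includes('dead')) return 'stopped';
  if (output.includes('failed')) return 'failed';
  return 'unknown';
}
```

Note the ordering matters: `systemctl` prints `Active: inactive (dead)` for stopped units, so the `inactive`/`dead` check must come after the running check.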
@@ -881,12 +881,12 @@ export class OneboxDockerManager {
       ]);
 
       const execInfo = await inspect();
-      const exitCode = execInfo.ExitCode || 0;
+      const exitCode = execInfo.ExitCode ?? -1;
 
       return { stdout, stderr, exitCode };
     } catch (error) {
       logger.error(`Failed to exec in container ${containerID}: ${getErrorMessage(error)}`);
-      throw error;
+      return { stdout: '', stderr: getErrorMessage(error), exitCode: -1 };
     }
   }
 
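The hunk above changes the exec error contract: failures are now reported in-band via `exitCode: -1` instead of a thrown exception, so callers must check the exit code. A sketch of that contract with illustrative names (not the actual Onebox types):

```typescript
// Illustrative shape of the exec result after this change.
interface ExecResult {
  stdout: string;
  stderr: string;
  exitCode: number;
}

// Wrap an exec attempt so that a thrown error is converted into an
// in-band failure result, matching the behavior introduced above.
async function safeExec(run: () => Promise<ExecResult>): Promise<ExecResult> {
  try {
    return await run();
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    return { stdout: '', stderr: message, exitCode: -1 };
  }
}
```

The upside is that one failed exec no longer aborts a whole provisioning flow; the downside is that callers that forget to inspect `exitCode` will silently treat failures as empty output.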
@@ -14,6 +14,7 @@ import { OneboxReverseProxy } from './reverseproxy.ts';
 import { OneboxDnsManager } from './dns.ts';
 import { OneboxSslManager } from './ssl.ts';
 import { OneboxDaemon } from './daemon.ts';
+import { OneboxSystemd } from './systemd.ts';
 import { OneboxHttpServer } from './httpserver.ts';
 import { CloudflareDomainSync } from './cloudflare-sync.ts';
 import { CertRequirementManager } from './cert-requirement-manager.ts';
@@ -33,6 +34,7 @@ export class Onebox {
   public dns: OneboxDnsManager;
   public ssl: OneboxSslManager;
   public daemon: OneboxDaemon;
+  public systemd: OneboxSystemd;
   public httpServer: OneboxHttpServer;
   public cloudflareDomainSync: CloudflareDomainSync;
   public certRequirementManager: CertRequirementManager;
@@ -57,6 +59,7 @@ export class Onebox {
     this.dns = new OneboxDnsManager(this);
     this.ssl = new OneboxSslManager(this);
     this.daemon = new OneboxDaemon(this);
+    this.systemd = new OneboxSystemd();
     this.httpServer = new OneboxHttpServer(this);
     this.registry = new RegistryManager({
       dataDir: './.nogit/registry-data',
@@ -320,20 +323,6 @@ export class Onebox {
     }
   }
 
-  /**
-   * Start daemon mode
-   */
-  async startDaemon(): Promise<void> {
-    await this.daemon.start();
-  }
-
-  /**
-   * Stop daemon mode
-   */
-  async stopDaemon(): Promise<void> {
-    await this.daemon.stop();
-  }
-
   /**
    * Start OpsServer (TypedRequest-based, serves new UI)
    */
@@ -355,6 +344,9 @@ export class Onebox {
     try {
       logger.info('Shutting down Onebox...');
 
+      // Stop auto-update monitoring
+      this.services.stopAutoUpdateMonitoring();
+
       // Stop backup scheduler
       await this.backupScheduler.stop();
 
@@ -194,12 +194,6 @@ export class ClickHouseProvider extends BasePlatformServiceProvider {
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
     const containerName = this.getContainerName();
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 8123);
-    if (!hostPort) {
-      throw new Error('Could not get ClickHouse container host port');
-    }
-
     // Generate resource names and credentials
     const dbName = this.generateResourceName(userService.name);
     const username = this.generateResourceName(userService.name);
@@ -207,35 +201,16 @@ export class ClickHouseProvider extends BasePlatformServiceProvider {
 
     logger.info(`Provisioning ClickHouse database '${dbName}' for service '${userService.name}'...`);
 
-    // Connect to ClickHouse via localhost and the mapped host port
-    const baseUrl = `http://127.0.0.1:${hostPort}`;
-
-    // Create database
-    await this.executeQuery(
-      baseUrl,
-      adminCreds.username,
-      adminCreds.password,
-      `CREATE DATABASE IF NOT EXISTS ${dbName}`
-    );
-    logger.info(`Created ClickHouse database '${dbName}'`);
-
-    // Create user with access to this database
-    await this.executeQuery(
-      baseUrl,
-      adminCreds.username,
-      adminCreds.password,
-      `CREATE USER IF NOT EXISTS ${username} IDENTIFIED BY '${password}'`
-    );
-    logger.info(`Created ClickHouse user '${username}'`);
-
-    // Grant permissions on the database
-    await this.executeQuery(
-      baseUrl,
-      adminCreds.username,
-      adminCreds.password,
-      `GRANT ALL ON ${dbName}.* TO ${username}`
-    );
-    logger.info(`Granted permissions to user '${username}' on database '${dbName}'`);
+    // Use docker exec to provision inside the container (avoids host port mapping issues)
+    const queries = [
+      `CREATE DATABASE IF NOT EXISTS ${dbName}`,
+      `CREATE USER IF NOT EXISTS ${username} IDENTIFIED BY '${password}'`,
+      `GRANT ALL ON ${dbName}.* TO ${username}`,
+    ];
+
+    for (const query of queries) {
+      await this.execClickHouseQuery(platformService.containerId, adminCreds, query);
+    }
 
     logger.success(`ClickHouse database '${dbName}' provisioned with user '${username}'`);
 
@@ -274,37 +249,11 @@ export class ClickHouseProvider extends BasePlatformServiceProvider {
 
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 8123);
-    if (!hostPort) {
-      throw new Error('Could not get ClickHouse container host port');
-    }
-
     logger.info(`Deprovisioning ClickHouse database '${resource.resourceName}'...`);
 
-    const baseUrl = `http://127.0.0.1:${hostPort}`;
-
     try {
-      // Drop the user
-      try {
-        await this.executeQuery(
-          baseUrl,
-          adminCreds.username,
-          adminCreds.password,
-          `DROP USER IF EXISTS ${credentials.username}`
-        );
-        logger.info(`Dropped ClickHouse user '${credentials.username}'`);
-      } catch (e) {
-        logger.warn(`Could not drop ClickHouse user: ${getErrorMessage(e)}`);
-      }
-
-      // Drop the database
-      await this.executeQuery(
-        baseUrl,
-        adminCreds.username,
-        adminCreds.password,
-        `DROP DATABASE IF EXISTS ${resource.resourceName}`
-      );
+      await this.execClickHouseQuery(platformService.containerId, adminCreds, `DROP USER IF EXISTS ${credentials.username}`);
+      await this.execClickHouseQuery(platformService.containerId, adminCreds, `DROP DATABASE IF EXISTS ${resource.resourceName}`);
       logger.success(`ClickHouse database '${resource.resourceName}' dropped`);
     } catch (e) {
       logger.error(`Failed to deprovision ClickHouse database: ${getErrorMessage(e)}`);
@@ -313,26 +262,27 @@ export class ClickHouseProvider extends BasePlatformServiceProvider {
   }
 
   /**
-   * Execute a ClickHouse SQL query via HTTP interface
+   * Execute a ClickHouse SQL query via docker exec inside the container
    */
-  private async executeQuery(
-    baseUrl: string,
-    username: string,
-    password: string,
+  private async execClickHouseQuery(
+    containerId: string,
+    adminCreds: { username: string; password: string },
     query: string
   ): Promise<string> {
-    const url = `${baseUrl}/?user=${encodeURIComponent(username)}&password=${encodeURIComponent(password)}`;
-
-    const response = await fetch(url, {
-      method: 'POST',
-      body: query,
-    });
-
-    if (!response.ok) {
-      const errorText = await response.text();
-      throw new Error(`ClickHouse query failed: ${errorText}`);
+    const result = await this.oneboxRef.docker.execInContainer(
+      containerId,
+      [
+        'clickhouse-client',
+        '--user', adminCreds.username,
+        '--password', adminCreds.password,
+        '--query', query,
+      ]
+    );
+
+    if (result.exitCode !== 0) {
+      throw new Error(`ClickHouse query failed (exit ${result.exitCode}): ${result.stderr.substring(0, 200)}`);
     }
 
-    return await response.text();
+    return result.stdout;
   }
 }
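One property of the docker-exec form above: credentials and SQL travel as separate argv entries, so no URL-encoding or shell quoting of the query is needed. A sketch with a hypothetical helper mirroring the argv assembled in `execClickHouseQuery`:

```typescript
// Hypothetical helper: build the clickhouse-client argv used by the
// exec-based query path. Each value is its own argv element, so quotes
// and special characters in the query pass through unmodified.
function buildClickHouseArgv(user: string, password: string, query: string): string[] {
  return ['clickhouse-client', '--user', user, '--password', password, '--query', query];
}
```

Contrast with the removed HTTP path, which had to `encodeURIComponent` the credentials and ship the SQL as a request body.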
@@ -196,84 +196,28 @@ export class MinioProvider extends BasePlatformServiceProvider {
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
     const containerName = this.getContainerName();
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 9000);
-    if (!hostPort) {
-      throw new Error('Could not get MinIO container host port');
-    }
-
-    // Generate bucket name and credentials
+    // Generate bucket name
     const bucketName = this.generateBucketName(userService.name);
-    const accessKey = credentialEncryption.generateAccessKey(20);
-    const secretKey = credentialEncryption.generateSecretKey(40);
 
     logger.info(`Provisioning MinIO bucket '${bucketName}' for service '${userService.name}'...`);
 
-    // Connect to MinIO via localhost and the mapped host port (for provisioning from host)
-    const provisioningEndpoint = `http://127.0.0.1:${hostPort}`;
-
-    // Import AWS S3 client
-    const { S3Client, CreateBucketCommand, PutBucketPolicyCommand } = await import('npm:@aws-sdk/client-s3@3');
-
-    // Create S3 client with admin credentials - connect via host port
-    const s3Client = new S3Client({
-      endpoint: provisioningEndpoint,
-      region: 'us-east-1',
-      credentials: {
-        accessKeyId: adminCreds.username,
-        secretAccessKey: adminCreds.password,
-      },
-      forcePathStyle: true,
-    });
+    // Use docker exec with mc (MinIO Client) inside the container
+    // First configure mc alias for local server
+    await this.execMc(platformService.containerId, [
+      'alias', 'set', 'local', 'http://localhost:9000',
+      adminCreds.username, adminCreds.password,
+    ]);
 
     // Create the bucket
-    try {
-      await s3Client.send(new CreateBucketCommand({
-        Bucket: bucketName,
-      }));
-      logger.info(`Created MinIO bucket '${bucketName}'`);
-    } catch (e: any) {
-      if (e.name !== 'BucketAlreadyOwnedByYou' && e.name !== 'BucketAlreadyExists') {
-        throw e;
-      }
-      logger.warn(`Bucket '${bucketName}' already exists`);
-    }
-
-    // Create service account/access key using MinIO Admin API
-    // MinIO Admin API requires mc client or direct API calls
-    // For simplicity, we'll use root credentials and bucket policy isolation
-    // In production, you'd use MinIO's Admin API to create service accounts
-
-    // Set bucket policy to allow access only with this bucket's credentials
-    const bucketPolicy = {
-      Version: '2012-10-17',
-      Statement: [
-        {
-          Effect: 'Allow',
-          Principal: { AWS: ['*'] },
-          Action: ['s3:GetObject', 's3:PutObject', 's3:DeleteObject', 's3:ListBucket'],
-          Resource: [
-            `arn:aws:s3:::${bucketName}`,
-            `arn:aws:s3:::${bucketName}/*`,
-          ],
-        },
-      ],
-    };
-
-    try {
-      await s3Client.send(new PutBucketPolicyCommand({
-        Bucket: bucketName,
-        Policy: JSON.stringify(bucketPolicy),
-      }));
-      logger.info(`Set bucket policy for '${bucketName}'`);
-    } catch (e) {
-      logger.warn(`Could not set bucket policy: ${getErrorMessage(e)}`);
-    }
-
-    // Note: For proper per-service credentials, MinIO Admin API should be used
-    // For now, we're providing the bucket with root access
-    // TODO: Implement MinIO service account creation
-    logger.warn('Using root credentials for MinIO access. Consider implementing service accounts for production.');
+    const mbResult = await this.execMc(platformService.containerId, [
+      'mb', '--ignore-existing', `local/${bucketName}`,
+    ]);
+    logger.info(`Created MinIO bucket '${bucketName}'`);
+
+    // Set bucket policy to allow public read/write (services on the same network use root creds)
+    await this.execMc(platformService.containerId, [
+      'anonymous', 'set', 'none', `local/${bucketName}`,
+    ]);
 
     // Use container name for the endpoint in credentials (user services run in same network)
     const serviceEndpoint = `http://${containerName}:9000`;
@@ -281,7 +225,7 @@ export class MinioProvider extends BasePlatformServiceProvider {
     const credentials: Record<string, string> = {
       endpoint: serviceEndpoint,
       bucket: bucketName,
-      accessKey: adminCreds.username, // Using root for now
+      accessKey: adminCreds.username,
       secretKey: adminCreds.password,
       region: 'us-east-1',
     };
@@ -312,57 +256,37 @@ export class MinioProvider extends BasePlatformServiceProvider {
 
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 9000);
-    if (!hostPort) {
-      throw new Error('Could not get MinIO container host port');
-    }
-
     logger.info(`Deprovisioning MinIO bucket '${resource.resourceName}'...`);
 
-    const { S3Client, DeleteBucketCommand, ListObjectsV2Command, DeleteObjectsCommand } = await import('npm:@aws-sdk/client-s3@3');
-
-    const s3Client = new S3Client({
-      endpoint: `http://127.0.0.1:${hostPort}`,
-      region: 'us-east-1',
-      credentials: {
-        accessKeyId: adminCreds.username,
-        secretAccessKey: adminCreds.password,
-      },
-      forcePathStyle: true,
-    });
+    // Configure mc alias
+    await this.execMc(platformService.containerId, [
+      'alias', 'set', 'local', 'http://localhost:9000',
+      adminCreds.username, adminCreds.password,
+    ]);
 
     try {
-      // First, delete all objects in the bucket
-      let continuationToken: string | undefined;
-      do {
-        const listResponse = await s3Client.send(new ListObjectsV2Command({
-          Bucket: resource.resourceName,
-          ContinuationToken: continuationToken,
-        }));
-
-        if (listResponse.Contents && listResponse.Contents.length > 0) {
-          await s3Client.send(new DeleteObjectsCommand({
-            Bucket: resource.resourceName,
-            Delete: {
-              Objects: listResponse.Contents.map(obj => ({ Key: obj.Key! })),
-            },
-          }));
-          logger.info(`Deleted ${listResponse.Contents.length} objects from bucket`);
-        }
-
-        continuationToken = listResponse.IsTruncated ? listResponse.NextContinuationToken : undefined;
-      } while (continuationToken);
-
-      // Now delete the bucket
-      await s3Client.send(new DeleteBucketCommand({
-        Bucket: resource.resourceName,
-      }));
-
+      // Remove all objects and the bucket
+      await this.execMc(platformService.containerId, [
+        'rb', '--force', `local/${resource.resourceName}`,
+      ]);
       logger.success(`MinIO bucket '${resource.resourceName}' deleted`);
     } catch (e) {
       logger.error(`Failed to delete MinIO bucket: ${getErrorMessage(e)}`);
       throw e;
     }
   }
 
+  /**
+   * Execute mc (MinIO Client) command inside the container
+   */
+  private async execMc(
+    containerId: string,
+    args: string[],
+  ): Promise<{ stdout: string; stderr: string }> {
+    const result = await this.oneboxRef.docker.execInContainer(containerId, ['mc', ...args]);
+    if (result.exitCode !== 0) {
+      throw new Error(`mc command failed (exit ${result.exitCode}): ${result.stderr.substring(0, 200)}`);
+    }
+    return result;
+  }
 }
@@ -28,7 +28,7 @@ export class MongoDBProvider extends BasePlatformServiceProvider {
 
   getDefaultConfig(): IPlatformServiceConfig {
     return {
-      image: 'mongo:7',
+      image: 'mongo:4.4',
       port: 27017,
       volumes: ['/var/lib/onebox/mongodb:/data/db'],
       environment: {
@@ -165,7 +165,7 @@ export class MongoDBProvider extends BasePlatformServiceProvider {
     // This avoids network issues with overlay networks
     const result = await this.oneboxRef.docker.execInContainer(
       platformService.containerId,
-      ['mongosh', '--eval', 'db.adminCommand("ping")', '--username', adminCreds.username, '--password', adminCreds.password, '--authenticationDatabase', 'admin', '--quiet']
+      ['mongo', '--eval', 'db.adminCommand("ping")', '--username', adminCreds.username, '--password', adminCreds.password, '--authenticationDatabase', 'admin', '--quiet']
     );
 
     if (result.exitCode === 0) {
@@ -190,12 +190,6 @@ export class MongoDBProvider extends BasePlatformServiceProvider {
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
     const containerName = this.getContainerName();
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 27017);
-    if (!hostPort) {
-      throw new Error('Could not get MongoDB container host port');
-    }
-
     // Generate resource names and credentials
     const dbName = this.generateResourceName(userService.name);
     const username = this.generateResourceName(userService.name);
@@ -203,32 +197,40 @@ export class MongoDBProvider extends BasePlatformServiceProvider {
 
     logger.info(`Provisioning MongoDB database '${dbName}' for service '${userService.name}'...`);
 
-    // Connect to MongoDB via localhost and the mapped host port
-    const { MongoClient } = await import('npm:mongodb@6');
-    const adminUri = `mongodb://${adminCreds.username}:${adminCreds.password}@127.0.0.1:${hostPort}/?authSource=admin`;
-
-    const client = new MongoClient(adminUri);
-    await client.connect();
-
-    try {
-      // Create the database by switching to it (MongoDB creates on first write)
-      const db = client.db(dbName);
-
-      // Create a collection to ensure the database exists
-      await db.createCollection('_onebox_init');
-
-      // Create user with readWrite access to this database
-      await db.command({
-        createUser: username,
-        pwd: password,
-        roles: [{ role: 'readWrite', db: dbName }],
-      });
-
-      logger.success(`MongoDB database '${dbName}' provisioned with user '${username}'`);
-    } finally {
-      await client.close();
-    }
+    // Use docker exec to provision inside the container (avoids host port mapping issues)
+    const escapedPassword = password.replace(/'/g, "'\\''");
+    const escapedAdminPassword = adminCreds.password.replace(/'/g, "'\\''");
+
+    // Create database and user via mongo inside the container
+    const mongoScript = `
+      db = db.getSiblingDB('${dbName}');
+      db.createCollection('_onebox_init');
+      db.createUser({
+        user: '${username}',
+        pwd: '${escapedPassword}',
+        roles: [{ role: 'readWrite', db: '${dbName}' }]
+      });
+      print('PROVISION_SUCCESS');
+    `;
+
+    const result = await this.oneboxRef.docker.execInContainer(
+      platformService.containerId,
+      [
+        'mongo',
+        '--username', adminCreds.username,
+        '--password', escapedAdminPassword,
+        '--authenticationDatabase', 'admin',
+        '--quiet',
+        '--eval', mongoScript,
+      ]
+    );
+
+    if (result.exitCode !== 0 || !result.stdout.includes('PROVISION_SUCCESS')) {
+      throw new Error(`Failed to provision MongoDB database: exit code ${result.exitCode}, output: ${result.stdout.substring(0, 200)} ${result.stderr.substring(0, 200)}`);
+    }
+
+    logger.success(`MongoDB database '${dbName}' provisioned with user '${username}'`);
 
     // Build the credentials and env vars
     const credentials: Record<string, string> = {
       host: containerName,
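The password escaping in the hunk above uses the POSIX single-quote trick: close the quoted string, emit an escaped quote, and reopen it (`'\''`). A sketch of that transform in isolation, with a hypothetical helper name:

```typescript
// Hypothetical helper mirroring the inline .replace() used above.
// Inside a single-quoted shell string, a literal ' cannot appear, so each
// one is rewritten as '\'' (close quote, escaped quote, reopen quote).
function escapeSingleQuotes(value: string): string {
  return value.replace(/'/g, "'\\''");
}
```

Usage: `escapeSingleQuotes("pa'ss")` yields the text `pa'\''ss`, which is safe to splice between single quotes in a shell command.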
@@ -262,37 +264,33 @@ export class MongoDBProvider extends BasePlatformServiceProvider {
     }
 
     const adminCreds = await credentialEncryption.decrypt(platformService.adminCredentialsEncrypted);
+    const escapedAdminPassword = adminCreds.password.replace(/'/g, "'\\''");
 
-    // Get container host port for connection from host (overlay network IPs not accessible from host)
-    const hostPort = await this.oneboxRef.docker.getContainerHostPort(platformService.containerId, 27017);
-    if (!hostPort) {
-      throw new Error('Could not get MongoDB container host port');
-    }
-
     logger.info(`Deprovisioning MongoDB database '${resource.resourceName}'...`);
 
-    const { MongoClient } = await import('npm:mongodb@6');
-    const adminUri = `mongodb://${adminCreds.username}:${adminCreds.password}@127.0.0.1:${hostPort}/?authSource=admin`;
-
-    const client = new MongoClient(adminUri);
-    await client.connect();
-
-    try {
-      const db = client.db(resource.resourceName);
-
-      // Drop the user
-      try {
-        await db.command({ dropUser: credentials.username });
-        logger.info(`Dropped MongoDB user '${credentials.username}'`);
-      } catch (e) {
-        logger.warn(`Could not drop MongoDB user: ${getErrorMessage(e)}`);
-      }
-
-      // Drop the database
-      await db.dropDatabase();
-      logger.success(`MongoDB database '${resource.resourceName}' dropped`);
-    } finally {
-      await client.close();
-    }
+    const mongoScript = `
+      db = db.getSiblingDB('${resource.resourceName}');
+      try { db.dropUser('${credentials.username}'); } catch(e) { print('User drop failed: ' + e); }
+      db.dropDatabase();
+      print('DEPROVISION_SUCCESS');
+    `;
+
+    const result = await this.oneboxRef.docker.execInContainer(
+      platformService.containerId,
+      [
+        'mongo',
+        '--username', adminCreds.username,
+        '--password', escapedAdminPassword,
+        '--authenticationDatabase', 'admin',
+        '--quiet',
+        '--eval', mongoScript,
+      ]
+    );
+
+    if (result.exitCode !== 0) {
+      logger.warn(`MongoDB deprovision returned exit code ${result.exitCode}: ${result.stderr.substring(0, 200)}`);
+    }
+
+    logger.success(`MongoDB database '${resource.resourceName}' dropped`);
   }
 }
@@ -15,6 +15,7 @@ export class OneboxServicesManager {
   private oneboxRef: any; // Will be Onebox instance
   private database: OneboxDatabase;
   private docker: OneboxDockerManager;
+  private autoUpdateIntervalId: number | null = null;
 
   constructor(oneboxRef: any) {
     this.oneboxRef = oneboxRef;
@@ -681,7 +682,7 @@ export class OneboxServicesManager {
    */
   startAutoUpdateMonitoring(): void {
     // Check every 30 seconds
-    setInterval(async () => {
+    this.autoUpdateIntervalId = setInterval(async () => {
       try {
         await this.checkForRegistryUpdates();
       } catch (error) {
@@ -692,6 +693,17 @@ export class OneboxServicesManager {
|
|||||||
logger.info('Auto-update monitoring started (30s interval)');
|
logger.info('Auto-update monitoring started (30s interval)');
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Stop auto-update monitoring
|
||||||
|
*/
|
||||||
|
stopAutoUpdateMonitoring(): void {
|
||||||
|
if (this.autoUpdateIntervalId !== null) {
|
||||||
|
clearInterval(this.autoUpdateIntervalId);
|
||||||
|
this.autoUpdateIntervalId = null;
|
||||||
|
logger.debug('Auto-update monitoring stopped');
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Check all services using onebox registry for updates
|
* Check all services using onebox registry for updates
|
||||||
*/
|
*/
|
||||||
|
|||||||
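The change above keeps the interval handle so monitoring can later be torn down instead of leaking a timer. The same pattern in isolation (an illustrative class, not the repo's code):

```typescript
// Minimal sketch of the store-the-handle pattern: start() records the
// interval id, stop() clears it and resets the field so stop is idempotent.
class Monitor {
  private intervalId: ReturnType<typeof setInterval> | null = null;

  start(task: () => void, ms: number): void {
    this.intervalId = setInterval(task, ms);
  }

  stop(): void {
    if (this.intervalId !== null) {
      clearInterval(this.intervalId);
      this.intervalId = null;
    }
  }

  get running(): boolean {
    return this.intervalId !== null;
  }
}
```

Without the stored handle, the original `setInterval(...)` call had no way to be cancelled, which is exactly what the new `stopAutoUpdateMonitoring()` needs.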
ts/classes/systemd.ts (new file, 243 lines)
@@ -0,0 +1,243 @@
/**
 * Systemd Service Manager for Onebox
 *
 * Handles systemd unit file installation, enabling, starting, stopping,
 * and status checking. Modeled on nupst's direct systemctl approach —
 * no external library dependencies.
 */

import { logger } from '../logging.ts';
import { getErrorMessage } from '../utils/error.ts';

const SERVICE_NAME = 'onebox';
const SERVICE_FILE_PATH = '/etc/systemd/system/onebox.service';

const SERVICE_UNIT_TEMPLATE = `[Unit]
Description=Onebox - Self-hosted container platform
After=network-online.target docker.service
Wants=network-online.target
Requires=docker.service

[Service]
Type=simple
ExecStart=/usr/local/bin/onebox systemd start-daemon
Restart=always
RestartSec=10
WorkingDirectory=/var/lib/onebox
Environment=PATH=/usr/bin:/usr/local/bin
Environment=HOME=/root
Environment=DENO_DIR=/root/.cache/deno

[Install]
WantedBy=multi-user.target
`;

export class OneboxSystemd {
  /**
   * Install and enable the systemd service
   */
  async enable(): Promise<void> {
    try {
      // Ensure Docker is installed before writing unit file (it requires docker.service)
      await this.ensureDocker();

      // Write the unit file
      logger.info('Writing systemd unit file...');
      await Deno.writeTextFile(SERVICE_FILE_PATH, SERVICE_UNIT_TEMPLATE);
      logger.info(`Unit file written to ${SERVICE_FILE_PATH}`);

      // Reload systemd daemon
      await this.runSystemctl(['daemon-reload']);

      // Enable the service
      const result = await this.runSystemctl(['enable', `${SERVICE_NAME}.service`]);
      if (!result.success) {
        throw new Error(`Failed to enable service: ${result.stderr}`);
      }

      logger.success('Onebox systemd service enabled');
      logger.info('Start with: onebox systemd start');
    } catch (error) {
      logger.error(`Failed to enable service: ${getErrorMessage(error)}`);
      throw error;
    }
  }

  /**
   * Stop, disable, and remove the systemd service
   */
  async disable(): Promise<void> {
    try {
      // Stop the service (ignore errors if not running)
      await this.runSystemctl(['stop', `${SERVICE_NAME}.service`]);

      // Disable the service
      await this.runSystemctl(['disable', `${SERVICE_NAME}.service`]);

      // Remove the unit file
      try {
        await Deno.remove(SERVICE_FILE_PATH);
        logger.info(`Removed ${SERVICE_FILE_PATH}`);
      } catch {
        // File might not exist
      }

      // Reload systemd daemon
      await this.runSystemctl(['daemon-reload']);

      logger.success('Onebox systemd service disabled and removed');
    } catch (error) {
      logger.error(`Failed to disable service: ${getErrorMessage(error)}`);
      throw error;
    }
  }

  /**
   * Start the service via systemctl
   */
  async start(): Promise<void> {
    const result = await this.runSystemctl(['start', `${SERVICE_NAME}.service`]);
    if (!result.success) {
      logger.error(`Failed to start service: ${result.stderr}`);
      throw new Error('Failed to start onebox service');
    }
    logger.success('Onebox service started');
  }

  /**
   * Stop the service via systemctl
   */
  async stop(): Promise<void> {
    const result = await this.runSystemctl(['stop', `${SERVICE_NAME}.service`]);
    if (!result.success) {
      logger.error(`Failed to stop service: ${result.stderr}`);
      throw new Error('Failed to stop onebox service');
    }
    logger.success('Onebox service stopped');
  }

  /**
   * Get and display service status
   */
  async getStatus(): Promise<string> {
    const result = await this.runSystemctl(['status', `${SERVICE_NAME}.service`]);
    const output = result.stdout;

    let status: string;
    if (output.includes('active (running)')) {
      status = 'running';
    } else if (output.includes('inactive') || output.includes('dead')) {
      status = 'stopped';
    } else if (output.includes('failed')) {
      status = 'failed';
    } else if (!result.success && result.stderr.includes('could not be found')) {
      status = 'not-installed';
    } else {
      status = 'unknown';
    }

    // Print the raw systemctl output for full details
    if (output.trim()) {
      console.log(output);
    }

    return status;
  }

  /**
   * Show service logs via journalctl
   */
  async showLogs(): Promise<void> {
    const cmd = new Deno.Command('journalctl', {
      args: ['-u', `${SERVICE_NAME}.service`, '-f'],
      stdout: 'inherit',
      stderr: 'inherit',
    });
    await cmd.output();
  }

  /**
   * Check if the service unit file is installed
   */
  async isInstalled(): Promise<boolean> {
    try {
      await Deno.stat(SERVICE_FILE_PATH);
      return true;
    } catch {
      return false;
    }
  }

  /**
   * Ensure Docker is installed, installing it if necessary
   */
  private async ensureDocker(): Promise<void> {
    try {
      const cmd = new Deno.Command('docker', {
        args: ['--version'],
        stdout: 'piped',
        stderr: 'piped',
      });
      const result = await cmd.output();
      if (result.success) {
        const version = new TextDecoder().decode(result.stdout).trim();
        logger.info(`Docker found: ${version}`);
        return;
      }
    } catch {
      // docker command not found
    }

    logger.info('Docker not found. Installing Docker...');
    const installCmd = new Deno.Command('bash', {
      args: ['-c', 'curl -fsSL https://get.docker.com | sh'],
      stdin: 'inherit',
      stdout: 'inherit',
      stderr: 'inherit',
    });
    const installResult = await installCmd.output();
    if (!installResult.success) {
      throw new Error('Failed to install Docker. Please install it manually: curl -fsSL https://get.docker.com | sh');
    }
    logger.success('Docker installed successfully');

    // Initialize Docker Swarm
    logger.info('Initializing Docker Swarm...');
    const swarmCmd = new Deno.Command('docker', {
      args: ['swarm', 'init'],
      stdout: 'piped',
      stderr: 'piped',
    });
    const swarmResult = await swarmCmd.output();
    if (swarmResult.success) {
      logger.success('Docker Swarm initialized');
    } else {
      const stderr = new TextDecoder().decode(swarmResult.stderr);
      if (stderr.includes('already part of a swarm')) {
        logger.info('Docker Swarm already initialized');
      } else {
        logger.warn(`Docker Swarm init warning: ${stderr.trim()}`);
      }
    }
  }

  /**
   * Run a systemctl command and return results
   */
  private async runSystemctl(
    args: string[]
  ): Promise<{ success: boolean; stdout: string; stderr: string }> {
    const cmd = new Deno.Command('systemctl', {
      args,
      stdout: 'piped',
      stderr: 'piped',
    });

    const result = await cmd.output();
    return {
      success: result.success,
      stdout: new TextDecoder().decode(result.stdout),
      stderr: new TextDecoder().decode(result.stderr),
    };
  }
}
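`getStatus()` above reduces raw `systemctl status` output to a status string via substring checks. The same mapping as a pure function, a sketch for illustration (the method additionally prints the raw output):

```typescript
// Classify systemctl output the same way getStatus() does: stdout
// substrings first, then a not-installed check on stderr, else unknown.
function classifyStatus(stdout: string, stderr: string, success: boolean): string {
  if (stdout.includes('active (running)')) return 'running';
  if (stdout.includes('inactive') || stdout.includes('dead')) return 'stopped';
  if (stdout.includes('failed')) return 'failed';
  if (!success && stderr.includes('could not be found')) return 'not-installed';
  return 'unknown';
}
```

Keeping the parsing separate from the `Deno.Command` invocation makes this branch logic trivially testable without a live systemd.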
ts/cli.ts (160 lines changed)
@@ -7,6 +7,7 @@ import { projectInfo } from './info.ts';
 import { getErrorMessage } from './utils/error.ts';
 import { Onebox } from './classes/onebox.ts';
 import { OneboxDaemon } from './classes/daemon.ts';
+import { OneboxSystemd } from './classes/systemd.ts';

 export async function runCli(): Promise<void> {
   const args = Deno.args;
@@ -25,6 +26,19 @@ export async function runCli(): Promise<void> {
   const subcommand = args[1];

   try {
+    // === LIGHTWEIGHT COMMANDS (no init()) ===
+    if (command === 'systemd') {
+      await handleSystemdCommand(subcommand, args.slice(2));
+      return;
+    }
+
+    if (command === 'upgrade') {
+      await handleUpgradeCommand();
+      return;
+    }
+
+    // === HEAVY COMMANDS (require full init()) ===
+
     // Server command has special handling (doesn't shut down)
     if (command === 'server') {
       const onebox = new Onebox();
@@ -60,10 +74,6 @@ export async function runCli(): Promise<void> {
         await handleNginxCommand(onebox, subcommand, args.slice(2));
         break;

-      case 'daemon':
-        await handleDaemonCommand(onebox, subcommand, args.slice(2));
-        break;
-
       case 'config':
         await handleConfigCommand(onebox, subcommand, args.slice(2));
         break;
@@ -278,7 +288,7 @@ async function handleServerCommand(onebox: Onebox, args: string[]) {
       await OneboxDaemon.ensureNoDaemon();
     } catch (error) {
       logger.error('Cannot start in ephemeral mode: Daemon is already running');
-      logger.info('Stop the daemon first: onebox daemon stop');
+      logger.info('Stop the daemon first: onebox systemd stop');
       logger.info('Or run without --ephemeral to use the existing daemon');
       Deno.exit(1);
     }
@@ -322,39 +332,49 @@ async function handleServerCommand(onebox: Onebox, args: string[]) {
   }
 }

-// Daemon commands
-async function handleDaemonCommand(onebox: Onebox, subcommand: string, _args: string[]) {
+// Systemd service commands (lightweight — no Onebox init)
+async function handleSystemdCommand(subcommand: string, _args: string[]) {
+  const systemd = new OneboxSystemd();
+
   switch (subcommand) {
-    case 'install':
-      await onebox.daemon.installService();
+    case 'enable':
+      await systemd.enable();
+      break;
+
+    case 'disable':
+      await systemd.disable();
       break;

     case 'start':
-      await onebox.startDaemon();
+      await systemd.start();
       break;

     case 'stop':
-      await onebox.stopDaemon();
+      await systemd.stop();
       break;

-    case 'logs': {
-      const command = new Deno.Command('journalctl', {
-        args: ['-u', 'smartdaemon_onebox', '-f'],
-        stdout: 'inherit',
-        stderr: 'inherit',
-      });
-      await command.output();
-      break;
-    }
-
-    case 'status': {
-      const status = await onebox.daemon.getServiceStatus();
-      logger.info(`Daemon status: ${status}`);
+    case 'status': {
+      const status = await systemd.getStatus();
+      logger.info(`Service status: ${status}`);
+      break;
+    }
+
+    case 'logs':
+      await systemd.showLogs();
+      break;
+
+    case 'start-daemon': {
+      // This is what systemd's ExecStart calls — full init + daemon loop
+      const onebox = new Onebox();
+      await onebox.init();
+      await onebox.daemon.start();
+      // start() blocks (keepAlive loop) until SIGTERM/SIGINT
       break;
     }

     default:
-      logger.error(`Unknown daemon subcommand: ${subcommand}`);
+      logger.error(`Unknown systemd subcommand: ${subcommand}`);
+      logger.info('Available: enable, disable, start, stop, status, logs');
   }
 }

@@ -386,6 +406,78 @@ async function handleStatusCommand(onebox: Onebox) {
   console.log(JSON.stringify(status, null, 2));
 }

+// Upgrade command - self-update onebox to latest version
+async function handleUpgradeCommand(): Promise<void> {
+  // Check if running as root
+  if (Deno.uid() !== 0) {
+    logger.error('This command must be run as root to upgrade Onebox.');
+    logger.info('Try: sudo onebox upgrade');
+    Deno.exit(1);
+  }
+
+  logger.info('Checking for updates...');
+
+  try {
+    // Get current version
+    const currentVersion = projectInfo.version;
+
+    // Fetch latest version from Gitea API
+    const apiUrl = 'https://code.foss.global/api/v1/repos/serve.zone/onebox/releases/latest';
+    const curlCmd = new Deno.Command('curl', {
+      args: ['-sSL', apiUrl],
+      stdout: 'piped',
+      stderr: 'piped',
+    });
+    const curlResult = await curlCmd.output();
+    const response = new TextDecoder().decode(curlResult.stdout);
+    const release = JSON.parse(response);
+    const latestVersion = release.tag_name as string; // e.g., "v1.11.0"
+
+    // Normalize versions for comparison (ensure both have "v" prefix)
+    const normalizedCurrent = currentVersion.startsWith('v')
+      ? currentVersion
+      : `v${currentVersion}`;
+    const normalizedLatest = latestVersion.startsWith('v')
+      ? latestVersion
+      : `v${latestVersion}`;
+
+    console.log(`  Current version: ${normalizedCurrent}`);
+    console.log(`  Latest version:  ${normalizedLatest}`);
+    console.log('');
+
+    // Compare normalized versions
+    if (normalizedCurrent === normalizedLatest) {
+      logger.success('Already up to date!');
+      return;
+    }
+
+    logger.info(`New version available: ${latestVersion}`);
+    logger.info('Downloading and installing...');
+    console.log('');
+
+    // Download and run the install script
+    const installUrl = 'https://code.foss.global/serve.zone/onebox/raw/branch/main/install.sh';
+    const installCmd = new Deno.Command('bash', {
+      args: ['-c', `curl -sSL ${installUrl} | bash`],
+      stdin: 'inherit',
+      stdout: 'inherit',
+      stderr: 'inherit',
+    });
+    const installResult = await installCmd.output();
+
+    if (!installResult.success) {
+      logger.error('Upgrade failed');
+      Deno.exit(1);
+    }
+
+    console.log('');
+    logger.success(`Upgraded to ${latestVersion}`);
+  } catch (error) {
+    logger.error(`Upgrade failed: ${getErrorMessage(error)}`);
+    Deno.exit(1);
+  }
+}
+
 // Helpers
 function getArg(args: string[], flag: string): string {
   const arg = args.find((a) => a.startsWith(`${flag}=`));
@@ -430,17 +522,21 @@ Commands:
   nginx test
   nginx status

-  daemon install
-  daemon start
-  daemon stop
-  daemon logs
-  daemon status
+  systemd enable     Install and enable systemd service
+  systemd disable    Stop, disable, and remove systemd service
+  systemd start      Start onebox via systemctl
+  systemd stop       Stop onebox via systemctl
+  systemd status     Show systemd service status
+  systemd logs       Follow service logs (journalctl)

   config show
   config set <key> <value>

   status

+  upgrade            Upgrade Onebox to the latest version (requires root)

 Options:
   --help, -h         Show this help message
   --version, -v      Show version
@@ -451,15 +547,15 @@ Development Workflow:
   onebox service add ...        # In another terminal

 Production Workflow:
-  onebox daemon install         # Install systemd service
-  onebox daemon start           # Start daemon
-  onebox service add ...        # CLI uses daemon
+  onebox systemd enable         # Install and enable systemd service
+  onebox systemd start          # Start via systemctl
+  onebox service add ...        # CLI manages services

 Examples:
   onebox server --ephemeral     # Start dev server
   onebox service add myapp --image nginx:latest --domain app.example.com --port 80
   onebox registry add --url registry.example.com --username user --password pass
-  onebox daemon install
-  onebox daemon start
+  onebox systemd enable
+  onebox systemd start
 `);
 }
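The version check in `handleUpgradeCommand` boils down to normalizing the `v` prefix before comparing strings. As a standalone helper (illustrative, not an export of the repo):

```typescript
// Same comparison the upgrade command performs: prepend "v" to either side
// if missing, then compare for exact equality.
function isUpToDate(current: string, latest: string): boolean {
  const norm = (v: string) => (v.startsWith('v') ? v : `v${v}`);
  return norm(current) === norm(latest);
}
```

Note this is an equality check, not a semver ordering: a locally newer build than the latest release would still be reported as "new version available".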
@@ -25,6 +25,7 @@ import type {
 import type { TBindValue } from './types.ts';
 import { logger } from '../logging.ts';
 import { getErrorMessage } from '../utils/error.ts';
+import { MigrationRunner } from './migrations/index.ts';

 // Import repositories
 import {
@@ -71,7 +72,8 @@ export class OneboxDatabase {
     await this.createTables();

     // Run migrations if needed
-    await this.runMigrations();
+    const runner = new MigrationRunner(this.query.bind(this));
+    runner.run();

     // Initialize repositories with bound query function
     const queryFn = this.query.bind(this);
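The diff replaces the inline `runMigrations()` below with a `MigrationRunner` from `./migrations/index.ts`, whose contents are not shown here. A minimal sketch of what such a versioned runner might look like (hypothetical; names and shape are assumptions, not the actual module):

```typescript
// Each migration knows its target version; the runner applies, in order,
// only those whose version exceeds the current schema version.
type QueryFn = (sql: string, params?: unknown[]) => unknown[];

interface Migration {
  version: number;
  up: (query: QueryFn) => void;
}

class MigrationRunner {
  constructor(private query: QueryFn, private migrations: Migration[] = []) {}

  run(currentVersion = 0): number {
    let version = currentVersion;
    for (const m of [...this.migrations].sort((a, b) => a.version - b.version)) {
      if (m.version > version) {
        m.up(this.query); // apply the migration
        version = m.version; // advance the recorded schema version
      }
    }
    return version;
  }
}
```

Factoring migrations out this way is what lets the ~720-line inline method in the following hunk be deleted.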
@@ -241,724 +243,6 @@ export class OneboxDatabase {
|
|||||||
/**
|
/**
|
||||||
* Run database migrations
|
* Run database migrations
|
||||||
*/
|
*/
|
||||||
private async runMigrations(): Promise<void> {
|
|
||||||
if (!this.db) throw new Error('Database not initialized');
|
|
||||||
|
|
||||||
try {
|
|
||||||
const currentVersion = this.getMigrationVersion();
|
|
||||||
logger.info(`Current database migration version: ${currentVersion}`);
|
|
||||||
|
|
||||||
// Migration 1: Initial schema
|
|
||||||
if (currentVersion === 0) {
|
|
||||||
logger.info('Setting initial migration version to 1');
|
|
||||||
this.setMigrationVersion(1);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Migration 2: Convert timestamp columns from INTEGER to REAL
|
|
||||||
const updatedVersion = this.getMigrationVersion();
|
|
||||||
if (updatedVersion < 2) {
|
|
||||||
logger.info('Running migration 2: Converting timestamps to REAL...');
|
|
||||||
|
|
||||||
// SSL certificates
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE ssl_certificates_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
domain TEXT NOT NULL UNIQUE,
|
|
||||||
cert_path TEXT NOT NULL,
|
|
||||||
key_path TEXT NOT NULL,
|
|
||||||
full_chain_path TEXT NOT NULL,
|
|
||||||
expiry_date REAL NOT NULL,
|
|
||||||
issuer TEXT NOT NULL,
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO ssl_certificates_new SELECT * FROM ssl_certificates`);
|
|
||||||
this.query(`DROP TABLE ssl_certificates`);
|
|
||||||
this.query(`ALTER TABLE ssl_certificates_new RENAME TO ssl_certificates`);
|
|
||||||
|
|
||||||
// Services
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE services_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
name TEXT NOT NULL UNIQUE,
|
|
||||||
image TEXT NOT NULL,
|
|
||||||
registry TEXT,
|
|
||||||
env_vars TEXT NOT NULL,
|
|
||||||
port INTEGER NOT NULL,
|
|
||||||
domain TEXT,
|
|
||||||
container_id TEXT,
|
|
||||||
status TEXT NOT NULL DEFAULT 'stopped',
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO services_new SELECT * FROM services`);
|
|
||||||
this.query(`DROP TABLE services`);
|
|
||||||
this.query(`ALTER TABLE services_new RENAME TO services`);
|
|
||||||
|
|
||||||
// Registries
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE registries_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
url TEXT NOT NULL UNIQUE,
|
|
||||||
username TEXT NOT NULL,
|
|
||||||
password_encrypted TEXT NOT NULL,
|
|
||||||
created_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO registries_new SELECT * FROM registries`);
|
|
||||||
this.query(`DROP TABLE registries`);
|
|
||||||
this.query(`ALTER TABLE registries_new RENAME TO registries`);
|
|
||||||
|
|
||||||
// Nginx configs
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE nginx_configs_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
service_id INTEGER NOT NULL,
|
|
||||||
domain TEXT NOT NULL,
|
|
||||||
port INTEGER NOT NULL,
|
|
||||||
ssl_enabled INTEGER NOT NULL DEFAULT 0,
|
|
||||||
config_template TEXT NOT NULL,
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL,
|
|
||||||
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO nginx_configs_new SELECT * FROM nginx_configs`);
|
|
||||||
this.query(`DROP TABLE nginx_configs`);
|
|
||||||
this.query(`ALTER TABLE nginx_configs_new RENAME TO nginx_configs`);
|
|
||||||
|
|
||||||
// DNS records
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE dns_records_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
domain TEXT NOT NULL UNIQUE,
|
|
||||||
type TEXT NOT NULL,
|
|
||||||
value TEXT NOT NULL,
|
|
||||||
cloudflare_id TEXT,
|
|
||||||
zone_id TEXT,
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO dns_records_new SELECT * FROM dns_records`);
|
|
||||||
this.query(`DROP TABLE dns_records`);
|
|
||||||
this.query(`ALTER TABLE dns_records_new RENAME TO dns_records`);
|
|
||||||
|
|
||||||
// Metrics
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE metrics_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
service_id INTEGER NOT NULL,
|
|
||||||
timestamp REAL NOT NULL,
|
|
||||||
cpu_percent REAL NOT NULL,
|
|
||||||
memory_used INTEGER NOT NULL,
|
|
||||||
memory_limit INTEGER NOT NULL,
|
|
||||||
network_rx_bytes INTEGER NOT NULL,
|
|
||||||
network_tx_bytes INTEGER NOT NULL,
|
|
||||||
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO metrics_new SELECT * FROM metrics`);
|
|
||||||
this.query(`DROP TABLE metrics`);
|
|
||||||
this.query(`ALTER TABLE metrics_new RENAME TO metrics`);
|
|
||||||
this.query(`CREATE INDEX IF NOT EXISTS idx_metrics_service_timestamp ON metrics(service_id, timestamp DESC)`);
|
|
||||||
|
|
||||||
// Logs
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE logs_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
service_id INTEGER NOT NULL,
|
|
||||||
timestamp REAL NOT NULL,
|
|
||||||
message TEXT NOT NULL,
|
|
||||||
level TEXT NOT NULL,
|
|
||||||
source TEXT NOT NULL,
|
|
||||||
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO logs_new SELECT * FROM logs`);
|
|
||||||
this.query(`DROP TABLE logs`);
|
|
||||||
this.query(`ALTER TABLE logs_new RENAME TO logs`);
|
|
||||||
this.query(`CREATE INDEX IF NOT EXISTS idx_logs_service_timestamp ON logs(service_id, timestamp DESC)`);
|
|
||||||
|
|
||||||
// Users
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE users_new (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
username TEXT NOT NULL UNIQUE,
|
|
||||||
password_hash TEXT NOT NULL,
|
|
||||||
role TEXT NOT NULL DEFAULT 'user',
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO users_new SELECT * FROM users`);
|
|
||||||
this.query(`DROP TABLE users`);
|
|
||||||
this.query(`ALTER TABLE users_new RENAME TO users`);
|
|
||||||
|
|
||||||
// Settings
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE settings_new (
|
|
||||||
key TEXT PRIMARY KEY,
|
|
||||||
value TEXT NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO settings_new SELECT * FROM settings`);
|
|
||||||
this.query(`DROP TABLE settings`);
|
|
||||||
this.query(`ALTER TABLE settings_new RENAME TO settings`);
|
|
||||||
|
|
||||||
// Migrations table itself
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE migrations_new (
|
|
||||||
version INTEGER PRIMARY KEY,
|
|
||||||
applied_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
this.query(`INSERT INTO migrations_new SELECT * FROM migrations`);
|
|
||||||
this.query(`DROP TABLE migrations`);
|
|
||||||
this.query(`ALTER TABLE migrations_new RENAME TO migrations`);
|
|
||||||
|
|
||||||
this.setMigrationVersion(2);
|
|
||||||
logger.success('Migration 2 completed: All timestamps converted to REAL');
|
|
||||||
}
|
|
||||||
|
|
||||||
// Migration 3: Domain management tables
|
|
||||||
const version3 = this.getMigrationVersion();
|
|
||||||
if (version3 < 3) {
|
|
||||||
logger.info('Running migration 3: Creating domain management tables...');
|
|
||||||
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE domains (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
domain TEXT NOT NULL UNIQUE,
|
|
||||||
dns_provider TEXT,
|
|
||||||
cloudflare_zone_id TEXT,
|
|
||||||
is_obsolete INTEGER NOT NULL DEFAULT 0,
|
|
||||||
default_wildcard INTEGER NOT NULL DEFAULT 1,
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE certificates (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
domain_id INTEGER NOT NULL,
|
|
||||||
cert_domain TEXT NOT NULL,
|
|
||||||
is_wildcard INTEGER NOT NULL DEFAULT 0,
|
|
||||||
cert_path TEXT NOT NULL,
|
|
||||||
key_path TEXT NOT NULL,
|
|
||||||
full_chain_path TEXT NOT NULL,
|
|
||||||
expiry_date REAL NOT NULL,
|
|
||||||
issuer TEXT NOT NULL,
|
|
||||||
is_valid INTEGER NOT NULL DEFAULT 1,
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL,
|
|
||||||
FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
|
|
||||||
this.query(`
|
|
||||||
CREATE TABLE cert_requirements (
|
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
|
||||||
service_id INTEGER NOT NULL,
|
|
||||||
domain_id INTEGER NOT NULL,
|
|
||||||
subdomain TEXT NOT NULL,
|
|
||||||
certificate_id INTEGER,
|
|
||||||
status TEXT NOT NULL DEFAULT 'pending',
|
|
||||||
created_at REAL NOT NULL,
|
|
||||||
updated_at REAL NOT NULL,
|
|
||||||
FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE,
|
|
||||||
FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE,
|
|
||||||
FOREIGN KEY (certificate_id) REFERENCES certificates(id) ON DELETE SET NULL
|
|
||||||
)
|
|
||||||
`);
|
|
||||||
|
|
||||||
interface OldSslCert {
|
|
||||||
id?: number;
|
|
||||||
domain?: string;
|
|
||||||
cert_path?: string;
|
|
||||||
key_path?: string;
|
|
||||||
full_chain_path?: string;
|
|
||||||
expiry_date?: number;
|
|
||||||
issuer?: string;
|
|
||||||
created_at?: number;
|
|
||||||
updated_at?: number;
|
|
||||||
[key: number]: unknown;
|
|
||||||
}
|
|
||||||
const existingCerts = this.query<OldSslCert>('SELECT * FROM ssl_certificates');
|
|
||||||
|
|
||||||
const now = Date.now();
|
|
||||||
const domainMap = new Map<string, number>();
|
|
||||||
|
|
||||||
        for (const cert of existingCerts) {
          const domain = String(cert.domain ?? (cert as Record<number, unknown>)[1]);
          if (!domainMap.has(domain)) {
            this.query(
              'INSERT INTO domains (domain, dns_provider, is_obsolete, default_wildcard, created_at, updated_at) VALUES (?, ?, ?, ?, ?, ?)',
              [domain, null, 0, 1, now, now]
            );
            const result = this.query<{ id?: number; [key: number]: unknown }>('SELECT last_insert_rowid() as id');
            const domainId = result[0].id ?? (result[0] as Record<number, unknown>)[0];
            domainMap.set(domain, Number(domainId));
          }
        }

        for (const cert of existingCerts) {
          const domain = String(cert.domain ?? (cert as Record<number, unknown>)[1]);
          const domainId = domainMap.get(domain);

          this.query(
            `INSERT INTO certificates (
              domain_id, cert_domain, is_wildcard, cert_path, key_path, full_chain_path,
              expiry_date, issuer, is_valid, created_at, updated_at
            ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
            [
              domainId,
              domain,
              0,
              String(cert.cert_path ?? (cert as Record<number, unknown>)[2]),
              String(cert.key_path ?? (cert as Record<number, unknown>)[3]),
              String(cert.full_chain_path ?? (cert as Record<number, unknown>)[4]),
              Number(cert.expiry_date ?? (cert as Record<number, unknown>)[5]),
              String(cert.issuer ?? (cert as Record<number, unknown>)[6]),
              1,
              Number(cert.created_at ?? (cert as Record<number, unknown>)[7]),
              Number(cert.updated_at ?? (cert as Record<number, unknown>)[8])
            ]
          );
        }

        this.query('DROP TABLE ssl_certificates');
        this.query('CREATE INDEX IF NOT EXISTS idx_domains_cloudflare_zone ON domains(cloudflare_zone_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_certificates_domain ON certificates(domain_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_certificates_expiry ON certificates(expiry_date)');
        this.query('CREATE INDEX IF NOT EXISTS idx_cert_requirements_service ON cert_requirements(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_cert_requirements_domain ON cert_requirements(domain_id)');

        this.setMigrationVersion(3);
        logger.success('Migration 3 completed: Domain management tables created');
      }

      // Migration 4: Add Onebox Registry support columns
      const version4 = this.getMigrationVersion();
      if (version4 < 4) {
        logger.info('Running migration 4: Adding Onebox Registry columns to services table...');

        this.query(`ALTER TABLE services ADD COLUMN use_onebox_registry INTEGER DEFAULT 0`);
        this.query(`ALTER TABLE services ADD COLUMN registry_repository TEXT`);
        this.query(`ALTER TABLE services ADD COLUMN registry_token TEXT`);
        this.query(`ALTER TABLE services ADD COLUMN registry_image_tag TEXT DEFAULT 'latest'`);
        this.query(`ALTER TABLE services ADD COLUMN auto_update_on_push INTEGER DEFAULT 0`);
        this.query(`ALTER TABLE services ADD COLUMN image_digest TEXT`);

        this.setMigrationVersion(4);
        logger.success('Migration 4 completed: Onebox Registry columns added to services table');
      }

      // Migration 5: Registry tokens table
      const version5 = this.getMigrationVersion();
      if (version5 < 5) {
        logger.info('Running migration 5: Creating registry_tokens table...');

        this.query(`
          CREATE TABLE registry_tokens (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL,
            token_hash TEXT NOT NULL UNIQUE,
            token_type TEXT NOT NULL,
            scope TEXT NOT NULL,
            expires_at REAL,
            created_at REAL NOT NULL,
            last_used_at REAL,
            created_by TEXT NOT NULL
          )
        `);

        this.query('CREATE INDEX IF NOT EXISTS idx_registry_tokens_type ON registry_tokens(token_type)');
        this.query('CREATE INDEX IF NOT EXISTS idx_registry_tokens_hash ON registry_tokens(token_hash)');

        this.setMigrationVersion(5);
        logger.success('Migration 5 completed: Registry tokens table created');
      }

      // Migration 6: Drop registry_token column from services table
      const version6 = this.getMigrationVersion();
      if (version6 < 6) {
        logger.info('Running migration 6: Dropping registry_token column from services table...');

        this.query(`
          CREATE TABLE services_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL UNIQUE,
            image TEXT NOT NULL,
            registry TEXT,
            env_vars TEXT,
            port INTEGER NOT NULL,
            domain TEXT,
            container_id TEXT,
            status TEXT NOT NULL,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL,
            use_onebox_registry INTEGER DEFAULT 0,
            registry_repository TEXT,
            registry_image_tag TEXT DEFAULT 'latest',
            auto_update_on_push INTEGER DEFAULT 0,
            image_digest TEXT
          )
        `);

        this.query(`
          INSERT INTO services_new (
            id, name, image, registry, env_vars, port, domain, container_id, status,
            created_at, updated_at, use_onebox_registry, registry_repository,
            registry_image_tag, auto_update_on_push, image_digest
          )
          SELECT
            id, name, image, registry, env_vars, port, domain, container_id, status,
            created_at, updated_at, use_onebox_registry, registry_repository,
            registry_image_tag, auto_update_on_push, image_digest
          FROM services
        `);

        this.query('DROP TABLE services');
        this.query('ALTER TABLE services_new RENAME TO services');
        this.query('CREATE INDEX IF NOT EXISTS idx_services_name ON services(name)');
        this.query('CREATE INDEX IF NOT EXISTS idx_services_status ON services(status)');

        this.setMigrationVersion(6);
        logger.success('Migration 6 completed: registry_token column dropped from services table');
      }

      // Migration 7: Platform services tables
      const version7 = this.getMigrationVersion();
      if (version7 < 7) {
        logger.info('Running migration 7: Creating platform services tables...');

        this.query(`
          CREATE TABLE platform_services (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name TEXT NOT NULL UNIQUE,
            type TEXT NOT NULL,
            status TEXT NOT NULL DEFAULT 'stopped',
            container_id TEXT,
            config TEXT NOT NULL DEFAULT '{}',
            admin_credentials_encrypted TEXT,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL
          )
        `);

        this.query(`
          CREATE TABLE platform_resources (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            platform_service_id INTEGER NOT NULL,
            service_id INTEGER NOT NULL,
            resource_type TEXT NOT NULL,
            resource_name TEXT NOT NULL,
            credentials_encrypted TEXT NOT NULL,
            created_at REAL NOT NULL,
            FOREIGN KEY (platform_service_id) REFERENCES platform_services(id) ON DELETE CASCADE,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        this.query(`ALTER TABLE services ADD COLUMN platform_requirements TEXT DEFAULT '{}'`);

        this.query('CREATE INDEX IF NOT EXISTS idx_platform_services_type ON platform_services(type)');
        this.query('CREATE INDEX IF NOT EXISTS idx_platform_resources_service ON platform_resources(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_platform_resources_platform ON platform_resources(platform_service_id)');

        this.setMigrationVersion(7);
        logger.success('Migration 7 completed: Platform services tables created');
      }

      // Migration 8: Convert certificates table to store PEM content
      const version8 = this.getMigrationVersion();
      if (version8 < 8) {
        logger.info('Running migration 8: Converting certificates table to store PEM content...');

        this.query(`
          CREATE TABLE certificates_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            domain_id INTEGER NOT NULL,
            cert_domain TEXT NOT NULL,
            is_wildcard INTEGER NOT NULL DEFAULT 0,
            cert_pem TEXT NOT NULL DEFAULT '',
            key_pem TEXT NOT NULL DEFAULT '',
            fullchain_pem TEXT NOT NULL DEFAULT '',
            expiry_date REAL NOT NULL,
            issuer TEXT NOT NULL,
            is_valid INTEGER NOT NULL DEFAULT 1,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL,
            FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE
          )
        `);

        this.query(`
          INSERT INTO certificates_new (id, domain_id, cert_domain, is_wildcard, cert_pem, key_pem, fullchain_pem, expiry_date, issuer, is_valid, created_at, updated_at)
          SELECT id, domain_id, cert_domain, is_wildcard, '', '', '', expiry_date, issuer, 0, created_at, updated_at FROM certificates
        `);

        this.query('DROP TABLE certificates');
        this.query('ALTER TABLE certificates_new RENAME TO certificates');
        this.query('CREATE INDEX IF NOT EXISTS idx_certificates_domain ON certificates(domain_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_certificates_expiry ON certificates(expiry_date)');

        this.setMigrationVersion(8);
        logger.success('Migration 8 completed: Certificates table now stores PEM content');
      }

      // Migration 9: Backup system tables
      const version9 = this.getMigrationVersion();
      if (version9 < 9) {
        logger.info('Running migration 9: Creating backup system tables...');

        // Add include_image_in_backup column to services table
        this.query(`ALTER TABLE services ADD COLUMN include_image_in_backup INTEGER DEFAULT 1`);

        // Create backups table
        this.query(`
          CREATE TABLE backups (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            service_id INTEGER NOT NULL,
            service_name TEXT NOT NULL,
            filename TEXT NOT NULL,
            size_bytes INTEGER NOT NULL,
            created_at REAL NOT NULL,
            includes_image INTEGER NOT NULL,
            platform_resources TEXT NOT NULL DEFAULT '[]',
            checksum TEXT NOT NULL,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        this.query('CREATE INDEX IF NOT EXISTS idx_backups_service ON backups(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backups_created ON backups(created_at DESC)');

        this.setMigrationVersion(9);
        logger.success('Migration 9 completed: Backup system tables created');
      }

      // Migration 10: Backup schedules table and extend backups table
      const version10 = this.getMigrationVersion();
      if (version10 < 10) {
        logger.info('Running migration 10: Creating backup schedules table...');

        // Create backup_schedules table
        this.query(`
          CREATE TABLE backup_schedules (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            service_id INTEGER NOT NULL,
            service_name TEXT NOT NULL,
            cron_expression TEXT NOT NULL,
            retention_tier TEXT NOT NULL,
            enabled INTEGER NOT NULL DEFAULT 1,
            last_run_at REAL,
            next_run_at REAL,
            last_status TEXT,
            last_error TEXT,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)');

        // Extend backups table with retention_tier and schedule_id columns
        this.query('ALTER TABLE backups ADD COLUMN retention_tier TEXT');
        this.query('ALTER TABLE backups ADD COLUMN schedule_id INTEGER REFERENCES backup_schedules(id) ON DELETE SET NULL');

        this.setMigrationVersion(10);
        logger.success('Migration 10 completed: Backup schedules table created');
      }

      // Migration 11: Add scope columns for global/pattern backup schedules
      const version11 = this.getMigrationVersion();
      if (version11 < 11) {
        logger.info('Running migration 11: Adding scope columns to backup_schedules...');

        // Recreate backup_schedules table with nullable service_id/service_name and new scope columns
        this.query(`
          CREATE TABLE backup_schedules_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            scope_type TEXT NOT NULL DEFAULT 'service',
            scope_pattern TEXT,
            service_id INTEGER,
            service_name TEXT,
            cron_expression TEXT NOT NULL,
            retention_tier TEXT NOT NULL,
            enabled INTEGER NOT NULL DEFAULT 1,
            last_run_at REAL,
            next_run_at REAL,
            last_status TEXT,
            last_error TEXT,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        // Copy existing schedules (all are service-specific)
        this.query(`
          INSERT INTO backup_schedules_new (
            id, scope_type, scope_pattern, service_id, service_name, cron_expression,
            retention_tier, enabled, last_run_at, next_run_at, last_status, last_error,
            created_at, updated_at
          )
          SELECT
            id, 'service', NULL, service_id, service_name, cron_expression,
            retention_tier, enabled, last_run_at, next_run_at, last_status, last_error,
            created_at, updated_at
          FROM backup_schedules
        `);

        this.query('DROP TABLE backup_schedules');
        this.query('ALTER TABLE backup_schedules_new RENAME TO backup_schedules');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_scope ON backup_schedules(scope_type)');

        this.setMigrationVersion(11);
        logger.success('Migration 11 completed: Scope columns added to backup_schedules');
      }

      // Migration 12: GFS retention policy - replace retention_tier with per-tier retention counts
      const version12 = this.getMigrationVersion();
      if (version12 < 12) {
        logger.info('Running migration 12: Updating backup system for GFS retention policy...');

        // Recreate backup_schedules table with new retention columns
        this.query(`
          CREATE TABLE backup_schedules_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            scope_type TEXT NOT NULL DEFAULT 'service',
            scope_pattern TEXT,
            service_id INTEGER,
            service_name TEXT,
            cron_expression TEXT NOT NULL,
            retention_hourly INTEGER NOT NULL DEFAULT 0,
            retention_daily INTEGER NOT NULL DEFAULT 7,
            retention_weekly INTEGER NOT NULL DEFAULT 4,
            retention_monthly INTEGER NOT NULL DEFAULT 12,
            enabled INTEGER NOT NULL DEFAULT 1,
            last_run_at REAL,
            next_run_at REAL,
            last_status TEXT,
            last_error TEXT,
            created_at REAL NOT NULL,
            updated_at REAL NOT NULL,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        // Migrate existing data - convert old retention_tier to new format
        // daily -> D:7, weekly -> W:4, monthly -> M:12, yearly -> M:24 (yearly becomes long monthly retention)
        this.query(`
          INSERT INTO backup_schedules_new (
            id, scope_type, scope_pattern, service_id, service_name, cron_expression,
            retention_hourly, retention_daily, retention_weekly, retention_monthly,
            enabled, last_run_at, next_run_at, last_status, last_error, created_at, updated_at
          )
          SELECT
            id, scope_type, scope_pattern, service_id, service_name, cron_expression,
            0, -- retention_hourly
            CASE WHEN retention_tier = 'daily' THEN 7 ELSE 0 END,
            CASE WHEN retention_tier IN ('daily', 'weekly') THEN 4 ELSE 0 END,
            CASE WHEN retention_tier IN ('daily', 'weekly', 'monthly') THEN 12
                 WHEN retention_tier = 'yearly' THEN 24 ELSE 12 END,
            enabled, last_run_at, next_run_at, last_status, last_error, created_at, updated_at
          FROM backup_schedules
        `);

        this.query('DROP TABLE backup_schedules');
        this.query('ALTER TABLE backup_schedules_new RENAME TO backup_schedules');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backup_schedules_scope ON backup_schedules(scope_type)');

        // Recreate backups table without retention_tier column
        this.query(`
          CREATE TABLE backups_new (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            service_id INTEGER NOT NULL,
            service_name TEXT NOT NULL,
            filename TEXT NOT NULL,
            size_bytes INTEGER NOT NULL,
            created_at REAL NOT NULL,
            includes_image INTEGER NOT NULL,
            platform_resources TEXT NOT NULL DEFAULT '[]',
            checksum TEXT NOT NULL,
            schedule_id INTEGER REFERENCES backup_schedules(id) ON DELETE SET NULL,
            FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
          )
        `);

        this.query(`
          INSERT INTO backups_new (
            id, service_id, service_name, filename, size_bytes, created_at,
            includes_image, platform_resources, checksum, schedule_id
          )
          SELECT
            id, service_id, service_name, filename, size_bytes, created_at,
            includes_image, platform_resources, checksum, schedule_id
          FROM backups
        `);

        this.query('DROP TABLE backups');
        this.query('ALTER TABLE backups_new RENAME TO backups');
        this.query('CREATE INDEX IF NOT EXISTS idx_backups_service ON backups(service_id)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backups_created ON backups(created_at DESC)');
        this.query('CREATE INDEX IF NOT EXISTS idx_backups_schedule ON backups(schedule_id)');

        this.setMigrationVersion(12);
        logger.success('Migration 12 completed: GFS retention policy schema updated');
      }
    } catch (error) {
      logger.error(`Migration failed: ${getErrorMessage(error)}`);
      if (error instanceof Error && error.stack) {
        logger.error(`Stack: ${error.stack}`);
      }
      throw error;
    }
  }

  /**
   * Get current migration version
   */
  private getMigrationVersion(): number {
    if (!this.db) throw new Error('Database not initialized');

    try {
      const result = this.query<{ version?: number | null; [key: number]: unknown }>('SELECT MAX(version) as version FROM migrations');
      if (result.length === 0) return 0;

      const versionValue = result[0].version ?? (result[0] as Record<number, unknown>)[0];
      return versionValue !== null && versionValue !== undefined ? Number(versionValue) : 0;
    } catch (error) {
      logger.warn(`Error getting migration version: ${getErrorMessage(error)}, defaulting to 0`);
      return 0;
    }
  }

  /**
   * Set migration version
   */
  private setMigrationVersion(version: number): void {
    if (!this.db) throw new Error('Database not initialized');

    this.query('INSERT INTO migrations (version, applied_at) VALUES (?, ?)', [
      version,
      Date.now(),
    ]);
    logger.debug(`Migration version set to ${version}`);
  }

  /**
   * Close database connection
   */
ts/database/migrations/base-migration.ts (new file, 22 lines)
@@ -0,0 +1,22 @@
/**
 * Abstract base class for database migrations.
 * All migrations must extend this class and implement the abstract members.
 */

import type { TQueryFunction } from '../types.ts';

export abstract class BaseMigration {
  /** The migration version number (must be unique and sequential) */
  abstract readonly version: number;

  /** A short description of what this migration does */
  abstract readonly description: string;

  /** Execute the migration's SQL statements */
  abstract up(query: TQueryFunction): void;

  /** Returns a human-readable name for logging */
  getName(): string {
    return `Migration ${this.version}: ${this.description}`;
  }
}
ts/database/migrations/index.ts (new file, 2 lines)
@@ -0,0 +1,2 @@
export { BaseMigration } from './base-migration.ts';
export { MigrationRunner } from './migration-runner.ts';
ts/database/migrations/migration-001-initial.ts (new file, 12 lines)
@@ -0,0 +1,12 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration001Initial extends BaseMigration {
  readonly version = 1;
  readonly description = 'Initial schema';

  up(_query: TQueryFunction): void {
    // Initial schema is created by createTables() in the database class.
    // This migration just marks the initial version.
  }
}
ts/database/migrations/migration-002-timestamps-to-real.ts (new file, 170 lines)
@@ -0,0 +1,170 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration002TimestampsToReal extends BaseMigration {
  readonly version = 2;
  readonly description = 'Convert timestamp columns from INTEGER to REAL';

  up(query: TQueryFunction): void {
    // SSL certificates
    query(`
      CREATE TABLE ssl_certificates_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        domain TEXT NOT NULL UNIQUE,
        cert_path TEXT NOT NULL,
        key_path TEXT NOT NULL,
        full_chain_path TEXT NOT NULL,
        expiry_date REAL NOT NULL,
        issuer TEXT NOT NULL,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO ssl_certificates_new SELECT * FROM ssl_certificates`);
    query(`DROP TABLE ssl_certificates`);
    query(`ALTER TABLE ssl_certificates_new RENAME TO ssl_certificates`);

    // Services
    query(`
      CREATE TABLE services_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL UNIQUE,
        image TEXT NOT NULL,
        registry TEXT,
        env_vars TEXT NOT NULL,
        port INTEGER NOT NULL,
        domain TEXT,
        container_id TEXT,
        status TEXT NOT NULL DEFAULT 'stopped',
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO services_new SELECT * FROM services`);
    query(`DROP TABLE services`);
    query(`ALTER TABLE services_new RENAME TO services`);

    // Registries
    query(`
      CREATE TABLE registries_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        url TEXT NOT NULL UNIQUE,
        username TEXT NOT NULL,
        password_encrypted TEXT NOT NULL,
        created_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO registries_new SELECT * FROM registries`);
    query(`DROP TABLE registries`);
    query(`ALTER TABLE registries_new RENAME TO registries`);

    // Nginx configs
    query(`
      CREATE TABLE nginx_configs_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        domain TEXT NOT NULL,
        port INTEGER NOT NULL,
        ssl_enabled INTEGER NOT NULL DEFAULT 0,
        config_template TEXT NOT NULL,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);
    query(`INSERT INTO nginx_configs_new SELECT * FROM nginx_configs`);
    query(`DROP TABLE nginx_configs`);
    query(`ALTER TABLE nginx_configs_new RENAME TO nginx_configs`);

    // DNS records
    query(`
      CREATE TABLE dns_records_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        domain TEXT NOT NULL UNIQUE,
        type TEXT NOT NULL,
        value TEXT NOT NULL,
        cloudflare_id TEXT,
        zone_id TEXT,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO dns_records_new SELECT * FROM dns_records`);
    query(`DROP TABLE dns_records`);
    query(`ALTER TABLE dns_records_new RENAME TO dns_records`);

    // Metrics
    query(`
      CREATE TABLE metrics_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        timestamp REAL NOT NULL,
        cpu_percent REAL NOT NULL,
        memory_used INTEGER NOT NULL,
        memory_limit INTEGER NOT NULL,
        network_rx_bytes INTEGER NOT NULL,
        network_tx_bytes INTEGER NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);
    query(`INSERT INTO metrics_new SELECT * FROM metrics`);
    query(`DROP TABLE metrics`);
    query(`ALTER TABLE metrics_new RENAME TO metrics`);
    query(`CREATE INDEX IF NOT EXISTS idx_metrics_service_timestamp ON metrics(service_id, timestamp DESC)`);

    // Logs
    query(`
      CREATE TABLE logs_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        timestamp REAL NOT NULL,
        message TEXT NOT NULL,
        level TEXT NOT NULL,
        source TEXT NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);
    query(`INSERT INTO logs_new SELECT * FROM logs`);
    query(`DROP TABLE logs`);
    query(`ALTER TABLE logs_new RENAME TO logs`);
    query(`CREATE INDEX IF NOT EXISTS idx_logs_service_timestamp ON logs(service_id, timestamp DESC)`);

    // Users
    query(`
      CREATE TABLE users_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        username TEXT NOT NULL UNIQUE,
        password_hash TEXT NOT NULL,
        role TEXT NOT NULL DEFAULT 'user',
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO users_new SELECT * FROM users`);
    query(`DROP TABLE users`);
    query(`ALTER TABLE users_new RENAME TO users`);

    // Settings
    query(`
      CREATE TABLE settings_new (
        key TEXT PRIMARY KEY,
        value TEXT NOT NULL,
        updated_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO settings_new SELECT * FROM settings`);
    query(`DROP TABLE settings`);
    query(`ALTER TABLE settings_new RENAME TO settings`);

    // Migrations table itself
    query(`
      CREATE TABLE migrations_new (
        version INTEGER PRIMARY KEY,
        applied_at REAL NOT NULL
      )
    `);
    query(`INSERT INTO migrations_new SELECT * FROM migrations`);
    query(`DROP TABLE migrations`);
    query(`ALTER TABLE migrations_new RENAME TO migrations`);
  }
}
ts/database/migrations/migration-003-domain-management.ts (new file, 125 lines)
@@ -0,0 +1,125 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration003DomainManagement extends BaseMigration {
  readonly version = 3;
  readonly description = 'Domain management tables';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE domains (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        domain TEXT NOT NULL UNIQUE,
        dns_provider TEXT,
        cloudflare_zone_id TEXT,
        is_obsolete INTEGER NOT NULL DEFAULT 0,
        default_wildcard INTEGER NOT NULL DEFAULT 1,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);

    query(`
      CREATE TABLE certificates (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        domain_id INTEGER NOT NULL,
        cert_domain TEXT NOT NULL,
        is_wildcard INTEGER NOT NULL DEFAULT 0,
        cert_path TEXT NOT NULL,
        key_path TEXT NOT NULL,
        full_chain_path TEXT NOT NULL,
        expiry_date REAL NOT NULL,
        issuer TEXT NOT NULL,
        is_valid INTEGER NOT NULL DEFAULT 1,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE
      )
    `);

    query(`
      CREATE TABLE cert_requirements (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        domain_id INTEGER NOT NULL,
        subdomain TEXT NOT NULL,
        certificate_id INTEGER,
        status TEXT NOT NULL DEFAULT 'pending',
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE,
        FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE,
        FOREIGN KEY (certificate_id) REFERENCES certificates(id) ON DELETE SET NULL
      )
    `);

    // Migrate data from old ssl_certificates table
    interface OldSslCert {
      id?: number;
      domain?: string;
      cert_path?: string;
      key_path?: string;
      full_chain_path?: string;
      expiry_date?: number;
      issuer?: string;
      created_at?: number;
      updated_at?: number;
      [key: number]: unknown;
    }
    const existingCerts = query<OldSslCert>('SELECT * FROM ssl_certificates');

    const now = Date.now();
    const domainMap = new Map<string, number>();

    for (const cert of existingCerts) {
      const domain = String(cert.domain ?? (cert as Record<number, unknown>)[1]);
      if (!domainMap.has(domain)) {
        query(
          'INSERT INTO domains (domain, dns_provider, is_obsolete, default_wildcard, created_at, updated_at) VALUES (?, ?, ?, ?, ?, ?)',
          [domain, null, 0, 1, now, now],
        );
        const result = query<{ id?: number; [key: number]: unknown }>(
          'SELECT last_insert_rowid() as id',
        );
        const domainId = result[0].id ?? (result[0] as Record<number, unknown>)[0];
        domainMap.set(domain, Number(domainId));
      }
    }

    for (const cert of existingCerts) {
      const domain = String(cert.domain ?? (cert as Record<number, unknown>)[1]);
      const domainId = domainMap.get(domain);

      query(
        `INSERT INTO certificates (
          domain_id, cert_domain, is_wildcard, cert_path, key_path, full_chain_path,
          expiry_date, issuer, is_valid, created_at, updated_at
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
        [
          domainId,
          domain,
          0,
          String(cert.cert_path ?? (cert as Record<number, unknown>)[2]),
          String(cert.key_path ?? (cert as Record<number, unknown>)[3]),
|
||||||
|
String(cert.full_chain_path ?? (cert as Record<number, unknown>)[4]),
|
||||||
|
Number(cert.expiry_date ?? (cert as Record<number, unknown>)[5]),
|
||||||
|
String(cert.issuer ?? (cert as Record<number, unknown>)[6]),
|
||||||
|
1,
|
||||||
|
Number(cert.created_at ?? (cert as Record<number, unknown>)[7]),
|
||||||
|
Number(cert.updated_at ?? (cert as Record<number, unknown>)[8]),
|
||||||
|
],
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
query('DROP TABLE ssl_certificates');
|
||||||
|
query('CREATE INDEX IF NOT EXISTS idx_domains_cloudflare_zone ON domains(cloudflare_zone_id)');
|
||||||
|
query('CREATE INDEX IF NOT EXISTS idx_certificates_domain ON certificates(domain_id)');
|
||||||
|
query('CREATE INDEX IF NOT EXISTS idx_certificates_expiry ON certificates(expiry_date)');
|
||||||
|
query(
|
||||||
|
'CREATE INDEX IF NOT EXISTS idx_cert_requirements_service ON cert_requirements(service_id)',
|
||||||
|
);
|
||||||
|
query(
|
||||||
|
'CREATE INDEX IF NOT EXISTS idx_cert_requirements_domain ON cert_requirements(domain_id)',
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
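The migration above repeatedly uses a `cert.domain ?? (cert as Record<number, unknown>)[1]` fallback when reading old rows. A minimal sketch of what that pattern does, assuming the driver may return rows either as keyed objects or as positional arrays (`field` is a hypothetical helper, not in the diff):

```typescript
// Illustrative only: read a column by name when the row is an object,
// falling back to a positional index when the row is an array.
function field(row: unknown, name: string, index: number): unknown {
  const byName = (row as Record<string, unknown>)[name];
  return byName ?? (row as Record<number, unknown>)[index];
}

// Works for both row shapes the migration has to tolerate:
console.log(field({ domain: 'example.com' }, 'domain', 1)); // "example.com"
console.log(field([42, 'example.org'], 'domain', 1));       // "example.org"
```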
ts/database/migrations/migration-004-registry-columns.ts (new file)
@@ -0,0 +1,16 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration004RegistryColumns extends BaseMigration {
  readonly version = 4;
  readonly description = 'Add Onebox Registry columns to services table';

  up(query: TQueryFunction): void {
    query(`ALTER TABLE services ADD COLUMN use_onebox_registry INTEGER DEFAULT 0`);
    query(`ALTER TABLE services ADD COLUMN registry_repository TEXT`);
    query(`ALTER TABLE services ADD COLUMN registry_token TEXT`);
    query(`ALTER TABLE services ADD COLUMN registry_image_tag TEXT DEFAULT 'latest'`);
    query(`ALTER TABLE services ADD COLUMN auto_update_on_push INTEGER DEFAULT 0`);
    query(`ALTER TABLE services ADD COLUMN image_digest TEXT`);
  }
}
ts/database/migrations/migration-005-registry-tokens.ts (new file)
@@ -0,0 +1,30 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration005RegistryTokens extends BaseMigration {
  readonly version = 5;
  readonly description = 'Registry tokens table';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE registry_tokens (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        token_hash TEXT NOT NULL UNIQUE,
        token_type TEXT NOT NULL,
        scope TEXT NOT NULL,
        expires_at REAL,
        created_at REAL NOT NULL,
        last_used_at REAL,
        created_by TEXT NOT NULL
      )
    `);

    query(
      'CREATE INDEX IF NOT EXISTS idx_registry_tokens_type ON registry_tokens(token_type)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_registry_tokens_hash ON registry_tokens(token_hash)',
    );
  }
}
ts/database/migrations/migration-006-drop-registry-token.ts (new file)
@@ -0,0 +1,48 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration006DropRegistryToken extends BaseMigration {
  readonly version = 6;
  readonly description = 'Drop registry_token column from services table';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE services_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL UNIQUE,
        image TEXT NOT NULL,
        registry TEXT,
        env_vars TEXT,
        port INTEGER NOT NULL,
        domain TEXT,
        container_id TEXT,
        status TEXT NOT NULL,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        use_onebox_registry INTEGER DEFAULT 0,
        registry_repository TEXT,
        registry_image_tag TEXT DEFAULT 'latest',
        auto_update_on_push INTEGER DEFAULT 0,
        image_digest TEXT
      )
    `);

    query(`
      INSERT INTO services_new (
        id, name, image, registry, env_vars, port, domain, container_id, status,
        created_at, updated_at, use_onebox_registry, registry_repository,
        registry_image_tag, auto_update_on_push, image_digest
      )
      SELECT
        id, name, image, registry, env_vars, port, domain, container_id, status,
        created_at, updated_at, use_onebox_registry, registry_repository,
        registry_image_tag, auto_update_on_push, image_digest
      FROM services
    `);

    query('DROP TABLE services');
    query('ALTER TABLE services_new RENAME TO services');
    query('CREATE INDEX IF NOT EXISTS idx_services_name ON services(name)');
    query('CREATE INDEX IF NOT EXISTS idx_services_status ON services(status)');
  }
}
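Migration 006 (and migrations 008, 011, and 012 below) use the same create/copy/drop/rename dance instead of dropping a column directly, because SQLite versions before 3.35 have no `ALTER TABLE ... DROP COLUMN`. A rough sketch of that sequence, with `rebuildStatements` as a hypothetical helper (not in the repo) that only assembles the SQL strings in the required order:

```typescript
// Illustrative only: emit the four statements of the SQLite table-rebuild
// pattern, in order. The caller supplies the new CREATE TABLE statement
// (without the dropped column) and the columns to carry over.
function rebuildStatements(table: string, keepColumns: string[], createNewSql: string): string[] {
  const cols = keepColumns.join(', ');
  return [
    createNewSql, // CREATE TABLE <table>_new (...) minus the dropped column
    `INSERT INTO ${table}_new (${cols}) SELECT ${cols} FROM ${table}`,
    `DROP TABLE ${table}`,
    `ALTER TABLE ${table}_new RENAME TO ${table}`,
  ];
}

const stmts = rebuildStatements(
  'services',
  ['id', 'name'],
  'CREATE TABLE services_new (id INTEGER PRIMARY KEY, name TEXT)',
);
console.log(stmts[1]); // INSERT INTO services_new (id, name) SELECT id, name FROM services
```

Note that indexes on the old table die with it, which is why each migration re-creates its indexes after the rename.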
ts/database/migrations/migration-007-platform-services.ts (new file)
@@ -0,0 +1,49 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration007PlatformServices extends BaseMigration {
  readonly version = 7;
  readonly description = 'Platform services tables';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE platform_services (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL UNIQUE,
        type TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'stopped',
        container_id TEXT,
        config TEXT NOT NULL DEFAULT '{}',
        admin_credentials_encrypted TEXT,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL
      )
    `);

    query(`
      CREATE TABLE platform_resources (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        platform_service_id INTEGER NOT NULL,
        service_id INTEGER NOT NULL,
        resource_type TEXT NOT NULL,
        resource_name TEXT NOT NULL,
        credentials_encrypted TEXT NOT NULL,
        created_at REAL NOT NULL,
        FOREIGN KEY (platform_service_id) REFERENCES platform_services(id) ON DELETE CASCADE,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    query(`ALTER TABLE services ADD COLUMN platform_requirements TEXT DEFAULT '{}'`);

    query(
      'CREATE INDEX IF NOT EXISTS idx_platform_services_type ON platform_services(type)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_platform_resources_service ON platform_resources(service_id)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_platform_resources_platform ON platform_resources(platform_service_id)',
    );
  }
}
ts/database/migrations/migration-008-cert-pem-content.ts (new file)
@@ -0,0 +1,41 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration008CertPemContent extends BaseMigration {
  readonly version = 8;
  readonly description = 'Convert certificates table to store PEM content';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE certificates_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        domain_id INTEGER NOT NULL,
        cert_domain TEXT NOT NULL,
        is_wildcard INTEGER NOT NULL DEFAULT 0,
        cert_pem TEXT NOT NULL DEFAULT '',
        key_pem TEXT NOT NULL DEFAULT '',
        fullchain_pem TEXT NOT NULL DEFAULT '',
        expiry_date REAL NOT NULL,
        issuer TEXT NOT NULL,
        is_valid INTEGER NOT NULL DEFAULT 1,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (domain_id) REFERENCES domains(id) ON DELETE CASCADE
      )
    `);

    query(`
      INSERT INTO certificates_new (id, domain_id, cert_domain, is_wildcard, cert_pem, key_pem, fullchain_pem, expiry_date, issuer, is_valid, created_at, updated_at)
      SELECT id, domain_id, cert_domain, is_wildcard, '', '', '', expiry_date, issuer, 0, created_at, updated_at FROM certificates
    `);

    query('DROP TABLE certificates');
    query('ALTER TABLE certificates_new RENAME TO certificates');
    query(
      'CREATE INDEX IF NOT EXISTS idx_certificates_domain ON certificates(domain_id)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_certificates_expiry ON certificates(expiry_date)',
    );
  }
}
ts/database/migrations/migration-009-backup-system.ts (new file)
@@ -0,0 +1,29 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration009BackupSystem extends BaseMigration {
  readonly version = 9;
  readonly description = 'Backup system tables';

  up(query: TQueryFunction): void {
    query(`ALTER TABLE services ADD COLUMN include_image_in_backup INTEGER DEFAULT 1`);

    query(`
      CREATE TABLE backups (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        service_name TEXT NOT NULL,
        filename TEXT NOT NULL,
        size_bytes INTEGER NOT NULL,
        created_at REAL NOT NULL,
        includes_image INTEGER NOT NULL,
        platform_resources TEXT NOT NULL DEFAULT '[]',
        checksum TEXT NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    query('CREATE INDEX IF NOT EXISTS idx_backups_service ON backups(service_id)');
    query('CREATE INDEX IF NOT EXISTS idx_backups_created ON backups(created_at DESC)');
  }
}
ts/database/migrations/migration-010-backup-schedules.ts (new file)
@@ -0,0 +1,39 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration010BackupSchedules extends BaseMigration {
  readonly version = 10;
  readonly description = 'Backup schedules table';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE backup_schedules (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        service_name TEXT NOT NULL,
        cron_expression TEXT NOT NULL,
        retention_tier TEXT NOT NULL,
        enabled INTEGER NOT NULL DEFAULT 1,
        last_run_at REAL,
        next_run_at REAL,
        last_status TEXT,
        last_error TEXT,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)',
    );

    query('ALTER TABLE backups ADD COLUMN retention_tier TEXT');
    query(
      'ALTER TABLE backups ADD COLUMN schedule_id INTEGER REFERENCES backup_schedules(id) ON DELETE SET NULL',
    );
  }
}
ts/database/migrations/migration-011-scope-columns.ts (new file)
@@ -0,0 +1,54 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration011ScopeColumns extends BaseMigration {
  readonly version = 11;
  readonly description = 'Add scope columns to backup_schedules';

  up(query: TQueryFunction): void {
    query(`
      CREATE TABLE backup_schedules_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        scope_type TEXT NOT NULL DEFAULT 'service',
        scope_pattern TEXT,
        service_id INTEGER,
        service_name TEXT,
        cron_expression TEXT NOT NULL,
        retention_tier TEXT NOT NULL,
        enabled INTEGER NOT NULL DEFAULT 1,
        last_run_at REAL,
        next_run_at REAL,
        last_status TEXT,
        last_error TEXT,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    query(`
      INSERT INTO backup_schedules_new (
        id, scope_type, scope_pattern, service_id, service_name, cron_expression,
        retention_tier, enabled, last_run_at, next_run_at, last_status, last_error,
        created_at, updated_at
      )
      SELECT
        id, 'service', NULL, service_id, service_name, cron_expression,
        retention_tier, enabled, last_run_at, next_run_at, last_status, last_error,
        created_at, updated_at
      FROM backup_schedules
    `);

    query('DROP TABLE backup_schedules');
    query('ALTER TABLE backup_schedules_new RENAME TO backup_schedules');
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_scope ON backup_schedules(scope_type)',
    );
  }
}
ts/database/migrations/migration-012-gfs-retention.ts (new file)
@@ -0,0 +1,97 @@
import { BaseMigration } from './base-migration.ts';
import type { TQueryFunction } from '../types.ts';

export class Migration012GfsRetention extends BaseMigration {
  readonly version = 12;
  readonly description = 'GFS retention policy schema';

  up(query: TQueryFunction): void {
    // Recreate backup_schedules with GFS retention columns
    query(`
      CREATE TABLE backup_schedules_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        scope_type TEXT NOT NULL DEFAULT 'service',
        scope_pattern TEXT,
        service_id INTEGER,
        service_name TEXT,
        cron_expression TEXT NOT NULL,
        retention_hourly INTEGER NOT NULL DEFAULT 0,
        retention_daily INTEGER NOT NULL DEFAULT 7,
        retention_weekly INTEGER NOT NULL DEFAULT 4,
        retention_monthly INTEGER NOT NULL DEFAULT 12,
        enabled INTEGER NOT NULL DEFAULT 1,
        last_run_at REAL,
        next_run_at REAL,
        last_status TEXT,
        last_error TEXT,
        created_at REAL NOT NULL,
        updated_at REAL NOT NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    // Migrate existing data - convert old retention_tier to new format
    query(`
      INSERT INTO backup_schedules_new (
        id, scope_type, scope_pattern, service_id, service_name, cron_expression,
        retention_hourly, retention_daily, retention_weekly, retention_monthly,
        enabled, last_run_at, next_run_at, last_status, last_error, created_at, updated_at
      )
      SELECT
        id, scope_type, scope_pattern, service_id, service_name, cron_expression,
        0,
        CASE WHEN retention_tier = 'daily' THEN 7 ELSE 0 END,
        CASE WHEN retention_tier IN ('daily', 'weekly') THEN 4 ELSE 0 END,
        CASE WHEN retention_tier IN ('daily', 'weekly', 'monthly') THEN 12
             WHEN retention_tier = 'yearly' THEN 24 ELSE 12 END,
        enabled, last_run_at, next_run_at, last_status, last_error, created_at, updated_at
      FROM backup_schedules
    `);

    query('DROP TABLE backup_schedules');
    query('ALTER TABLE backup_schedules_new RENAME TO backup_schedules');
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_service ON backup_schedules(service_id)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_enabled ON backup_schedules(enabled)',
    );
    query(
      'CREATE INDEX IF NOT EXISTS idx_backup_schedules_scope ON backup_schedules(scope_type)',
    );

    // Recreate backups table without retention_tier column
    query(`
      CREATE TABLE backups_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        service_id INTEGER NOT NULL,
        service_name TEXT NOT NULL,
        filename TEXT NOT NULL,
        size_bytes INTEGER NOT NULL,
        created_at REAL NOT NULL,
        includes_image INTEGER NOT NULL,
        platform_resources TEXT NOT NULL DEFAULT '[]',
        checksum TEXT NOT NULL,
        schedule_id INTEGER REFERENCES backup_schedules(id) ON DELETE SET NULL,
        FOREIGN KEY (service_id) REFERENCES services(id) ON DELETE CASCADE
      )
    `);

    query(`
      INSERT INTO backups_new (
        id, service_id, service_name, filename, size_bytes, created_at,
        includes_image, platform_resources, checksum, schedule_id
      )
      SELECT
        id, service_id, service_name, filename, size_bytes, created_at,
        includes_image, platform_resources, checksum, schedule_id
      FROM backups
    `);

    query('DROP TABLE backups');
    query('ALTER TABLE backups_new RENAME TO backups');
    query('CREATE INDEX IF NOT EXISTS idx_backups_service ON backups(service_id)');
    query('CREATE INDEX IF NOT EXISTS idx_backups_created ON backups(created_at DESC)');
    query('CREATE INDEX IF NOT EXISTS idx_backups_schedule ON backups(schedule_id)');
  }
}
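The CASE expressions in migration-012 encode how the old single `retention_tier` value maps onto the four new GFS (grandfather-father-son) counts. Restated as a plain function so the mapping is easy to eyeball (`tierToGfs` is a hypothetical name for illustration only, not code from the repo):

```typescript
// Illustrative restatement of migration-012's CASE expressions:
// old retention_tier -> new per-tier retention counts.
interface IGfsCounts {
  hourly: number;
  daily: number;
  weekly: number;
  monthly: number;
}

function tierToGfs(tier: string): IGfsCounts {
  return {
    hourly: 0, // always 0 in the migration
    daily: tier === 'daily' ? 7 : 0,
    weekly: ['daily', 'weekly'].includes(tier) ? 4 : 0,
    monthly: ['daily', 'weekly', 'monthly'].includes(tier)
      ? 12
      : tier === 'yearly'
        ? 24
        : 12, // unknown tiers fall back to 12 monthly
  };
}

console.log(tierToGfs('daily'));  // { hourly: 0, daily: 7, weekly: 4, monthly: 12 }
console.log(tierToGfs('yearly')); // { hourly: 0, daily: 0, weekly: 0, monthly: 24 }
```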
ts/database/migrations/migration-runner.ts (new file)
@@ -0,0 +1,100 @@
/**
 * Migration runner - discovers, orders, and executes database migrations.
 * Mirrors the pattern from @serve.zone/nupst.
 */

import type { TQueryFunction } from '../types.ts';
import { logger } from '../../logging.ts';
import { getErrorMessage } from '../../utils/error.ts';

import { Migration001Initial } from './migration-001-initial.ts';
import { Migration002TimestampsToReal } from './migration-002-timestamps-to-real.ts';
import { Migration003DomainManagement } from './migration-003-domain-management.ts';
import { Migration004RegistryColumns } from './migration-004-registry-columns.ts';
import { Migration005RegistryTokens } from './migration-005-registry-tokens.ts';
import { Migration006DropRegistryToken } from './migration-006-drop-registry-token.ts';
import { Migration007PlatformServices } from './migration-007-platform-services.ts';
import { Migration008CertPemContent } from './migration-008-cert-pem-content.ts';
import { Migration009BackupSystem } from './migration-009-backup-system.ts';
import { Migration010BackupSchedules } from './migration-010-backup-schedules.ts';
import { Migration011ScopeColumns } from './migration-011-scope-columns.ts';
import { Migration012GfsRetention } from './migration-012-gfs-retention.ts';
import type { BaseMigration } from './base-migration.ts';

export class MigrationRunner {
  private query: TQueryFunction;
  private migrations: BaseMigration[];

  constructor(query: TQueryFunction) {
    this.query = query;

    // Register all migrations in order
    this.migrations = [
      new Migration001Initial(),
      new Migration002TimestampsToReal(),
      new Migration003DomainManagement(),
      new Migration004RegistryColumns(),
      new Migration005RegistryTokens(),
      new Migration006DropRegistryToken(),
      new Migration007PlatformServices(),
      new Migration008CertPemContent(),
      new Migration009BackupSystem(),
      new Migration010BackupSchedules(),
      new Migration011ScopeColumns(),
      new Migration012GfsRetention(),
    ].sort((a, b) => a.version - b.version);
  }

  /** Run all pending migrations */
  run(): void {
    try {
      const currentVersion = this.getMigrationVersion();
      logger.info(`Current database migration version: ${currentVersion}`);

      let applied = 0;
      for (const migration of this.migrations) {
        if (migration.version <= currentVersion) continue;

        logger.info(`Running ${migration.getName()}...`);
        migration.up(this.query);
        this.setMigrationVersion(migration.version);
        logger.success(`${migration.getName()} completed`);
        applied++;
      }

      if (applied > 0) {
        logger.success(`Applied ${applied} migration(s)`);
      }
    } catch (error) {
      logger.error(`Migration failed: ${getErrorMessage(error)}`);
      if (error instanceof Error && error.stack) {
        logger.error(`Stack: ${error.stack}`);
      }
      throw error;
    }
  }

  /** Get current migration version from the migrations table */
  private getMigrationVersion(): number {
    try {
      const result = this.query<{ version?: number | null; [key: number]: unknown }>(
        'SELECT MAX(version) as version FROM migrations',
      );
      if (result.length === 0) return 0;

      const versionValue = result[0].version ?? (result[0] as Record<number, unknown>)[0];
      return versionValue !== null && versionValue !== undefined ? Number(versionValue) : 0;
    } catch {
      // Table might not exist yet on fresh databases
      return 0;
    }
  }

  /** Record a migration version as applied */
  private setMigrationVersion(version: number): void {
    this.query('INSERT INTO migrations (version, applied_at) VALUES (?, ?)', [
      version,
      Date.now(),
    ]);
  }
}
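The core of MigrationRunner.run() is a version-gating loop: migrations are sorted by `version`, and anything at or below the recorded version is skipped, so re-running the runner is idempotent. A standalone sketch of that logic with hypothetical names (`IMigrationLike`, `runPending` are not from the repo):

```typescript
// Illustrative only: sort migrations by version and run only those newer
// than the recorded current version, returning the versions applied.
interface IMigrationLike {
  version: number;
  up(): void;
}

function runPending(migrations: IMigrationLike[], currentVersion: number): number[] {
  const applied: number[] = [];
  for (const migration of [...migrations].sort((a, b) => a.version - b.version)) {
    if (migration.version <= currentVersion) continue; // already applied
    migration.up();
    applied.push(migration.version);
  }
  return applied;
}

const ran: number[] = [];
const demoMigrations: IMigrationLike[] = [3, 1, 2].map((v) => ({
  version: v,
  up: () => ran.push(v),
}));
console.log(runPending(demoMigrations, 1)); // [ 2, 3 ] - version 1 is skipped
```

The real runner additionally records each version in the `migrations` table as it goes, so a crash mid-run resumes from the last completed migration.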
@@ -12,6 +12,7 @@ export { OneboxReverseProxy } from './classes/reverseproxy.ts';
 export { OneboxDnsManager } from './classes/dns.ts';
 export { OneboxSslManager } from './classes/ssl.ts';
 export { OneboxDaemon } from './classes/daemon.ts';
+export { OneboxSystemd } from './classes/systemd.ts';
 export { OneboxHttpServer } from './classes/httpserver.ts';
 export { OneboxApiClient } from './classes/apiclient.ts';
@@ -165,5 +165,42 @@ export class PlatformHandler {
         },
       ),
     );
+
+    // Get platform service logs
+    this.typedrouter.addTypedHandler(
+      new plugins.typedrequest.TypedHandler<interfaces.requests.IReq_GetPlatformServiceLogs>(
+        'getPlatformServiceLogs',
+        async (dataArg) => {
+          await requireValidIdentity(this.opsServerRef.adminHandler, dataArg);
+          const service = this.opsServerRef.oneboxRef.database.getPlatformServiceByType(dataArg.serviceType);
+          if (!service || !service.containerId) {
+            throw new plugins.typedrequest.TypedResponseError('Platform service has no container');
+          }
+
+          const tail = dataArg.tail || 100;
+          const rawLogs = await this.opsServerRef.oneboxRef.docker.getContainerLogs(service.containerId, tail);
+
+          // Parse raw log output into structured entries
+          const logLines = (rawLogs.stdout + rawLogs.stderr)
+            .split('\n')
+            .filter((line: string) => line.trim());
+
+          const logs = logLines.map((line: string, index: number) => {
+            const isError = line.toLowerCase().includes('error') || line.toLowerCase().includes('fatal');
+            const isWarn = line.toLowerCase().includes('warn');
+            return {
+              id: index,
+              serviceId: 0,
+              timestamp: Date.now(),
+              message: line,
+              level: (isError ? 'error' : isWarn ? 'warn' : 'info') as 'info' | 'warn' | 'error' | 'debug',
+              source: 'stdout' as const,
+            };
+          });
+
+          return { logs };
+        },
+      ),
+    );
   }
 }
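The handler above assigns a log level to each raw container line with a keyword heuristic. Isolated for clarity (`classifyLine` is a hypothetical name; the diff inlines this logic in the `map()` callback):

```typescript
// Illustrative only: keyword-based log-level classification, mirroring the
// isError/isWarn checks in the getPlatformServiceLogs handler.
function classifyLine(line: string): 'error' | 'warn' | 'info' {
  const lowered = line.toLowerCase();
  if (lowered.includes('error') || lowered.includes('fatal')) return 'error';
  if (lowered.includes('warn')) return 'warn';
  return 'info';
}

console.log(classifyLine('FATAL: connection refused')); // error
console.log(classifyLine('Warning: disk at 90%'));      // warn
console.log(classifyLine('listening on :8080'));        // info
```

Being purely substring-based, it will mislabel lines that merely mention the words (e.g. a request path containing "error"), which is an accepted trade-off for untagged container output.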
@@ -17,10 +17,6 @@ export { path, fs, http, encoding };
 import { Database } from '@db/sqlite';
 export const sqlite = { DB: Database };

-// Systemd Daemon Integration
-import * as smartdaemon from '@push.rocks/smartdaemon';
-export { smartdaemon };
-
 // Docker API Client
 import { DockerHost } from '@apiclient.xyz/docker';
 export const docker = { Docker: DockerHost };
File diff suppressed because one or more lines are too long
@@ -69,3 +69,18 @@ export interface IReq_GetPlatformServiceStats extends plugins.typedrequestInterf
     stats: data.IContainerStats;
   };
 }
+
+export interface IReq_GetPlatformServiceLogs extends plugins.typedrequestInterfaces.implementsTR<
+  plugins.typedrequestInterfaces.ITypedRequest,
+  IReq_GetPlatformServiceLogs
+> {
+  method: 'getPlatformServiceLogs';
+  request: {
+    identity: data.IIdentity;
+    serviceType: data.TPlatformServiceType;
+    tail?: number;
+  };
+  response: {
+    logs: data.ILogEntry[];
+  };
+}
@@ -3,6 +3,6 @@
  */
 export const commitinfo = {
   name: '@serve.zone/onebox',
-  version: '1.11.0',
+  version: '1.18.2',
   description: 'Self-hosted container platform with automatic SSL and DNS - a mini Heroku for single servers'
 }
@@ -26,6 +26,8 @@ export interface IServicesState {
|
|||||||
currentServiceStats: interfaces.data.IContainerStats | null;
|
currentServiceStats: interfaces.data.IContainerStats | null;
|
||||||
platformServices: interfaces.data.IPlatformService[];
|
platformServices: interfaces.data.IPlatformService[];
|
||||||
currentPlatformService: interfaces.data.IPlatformService | null;
|
currentPlatformService: interfaces.data.IPlatformService | null;
|
||||||
|
currentPlatformServiceStats: interfaces.data.IContainerStats | null;
|
||||||
|
currentPlatformServiceLogs: interfaces.data.ILogEntry[];
|
||||||
}
|
}
|
||||||
|
|
||||||
export interface INetworkState {
|
export interface INetworkState {
|
||||||
@@ -88,6 +90,8 @@ export const servicesStatePart = await appState.getStatePart<IServicesState>(
|
|||||||
currentServiceStats: null,
|
currentServiceStats: null,
|
||||||
platformServices: [],
|
platformServices: [],
|
||||||
currentPlatformService: null,
|
currentPlatformService: null,
|
||||||
|
currentPlatformServiceStats: null,
|
||||||
|
currentPlatformServiceLogs: [],
|
||||||
},
|
},
|
||||||
'soft',
|
'soft',
|
||||||
);
|
);
|
||||||
@@ -476,6 +480,46 @@ export const stopPlatformServiceAction = servicesStatePart.createAction<{
   }
 });

+export const fetchPlatformServiceStatsAction = servicesStatePart.createAction<{
+  serviceType: interfaces.data.TPlatformServiceType;
+}>(async (statePartArg, dataArg) => {
+  const context = getActionContext();
+  try {
+    const typedRequest = new plugins.domtools.plugins.typedrequest.TypedRequest<
+      interfaces.requests.IReq_GetPlatformServiceStats
+    >('/typedrequest', 'getPlatformServiceStats');
+    const response = await typedRequest.fire({
+      identity: context.identity!,
+      serviceType: dataArg.serviceType,
+    });
+    return { ...statePartArg.getState(), currentPlatformServiceStats: response.stats };
+  } catch (err) {
+    console.error('Failed to fetch platform service stats:', err);
+    return { ...statePartArg.getState(), currentPlatformServiceStats: null };
+  }
+});
+
+export const fetchPlatformServiceLogsAction = servicesStatePart.createAction<{
+  serviceType: interfaces.data.TPlatformServiceType;
+  tail?: number;
+}>(async (statePartArg, dataArg) => {
+  const context = getActionContext();
+  try {
+    const typedRequest = new plugins.domtools.plugins.typedrequest.TypedRequest<
+      interfaces.requests.IReq_GetPlatformServiceLogs
+    >('/typedrequest', 'getPlatformServiceLogs');
+    const response = await typedRequest.fire({
+      identity: context.identity!,
+      serviceType: dataArg.serviceType,
+      tail: dataArg.tail || 100,
+    });
+    return { ...statePartArg.getState(), currentPlatformServiceLogs: response.logs };
+  } catch (err) {
+    console.error('Failed to fetch platform service logs:', err);
+    return { ...statePartArg.getState(), currentPlatformServiceLogs: [] };
+  }
+});
+
 // ============================================================================
 // Network Actions
 // ============================================================================
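Both new actions follow the same shape: fire a typed request, merge the payload into the current state snapshot, and substitute a safe default on failure. A standalone sketch of that merge-or-fallback pattern (`fetchAndMerge` and `DemoState` are hypothetical illustrations, not part of onebox or its state library):

```typescript
// Run an async fetch, merge its result into a state snapshot under `key`,
// and fall back to a default value when the fetch throws — the same
// success/failure branches as the actions above.
async function fetchAndMerge<S extends object, K extends keyof S>(
  state: S,
  key: K,
  fetcher: () => Promise<S[K]>,
  fallback: S[K],
): Promise<S> {
  try {
    return { ...state, [key]: await fetcher() } as S;
  } catch {
    return { ...state, [key]: fallback } as S;
  }
}

interface DemoState {
  stats: number | null;
  logs: string[];
}
const initial: DemoState = { stats: null, logs: [] };
```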
@@ -37,15 +37,15 @@ export class ObAppShell extends DeesElement {
   accessor loginError: string = '';

   private viewTabs = [
-    { name: 'Dashboard', element: (async () => (await import('./ob-view-dashboard.js')).ObViewDashboard)() },
+    { name: 'Dashboard', iconName: 'lucide:layoutDashboard', element: (async () => (await import('./ob-view-dashboard.js')).ObViewDashboard)() },
-    { name: 'Services', element: (async () => (await import('./ob-view-services.js')).ObViewServices)() },
+    { name: 'Services', iconName: 'lucide:boxes', element: (async () => (await import('./ob-view-services.js')).ObViewServices)() },
-    { name: 'Network', element: (async () => (await import('./ob-view-network.js')).ObViewNetwork)() },
+    { name: 'Network', iconName: 'lucide:network', element: (async () => (await import('./ob-view-network.js')).ObViewNetwork)() },
-    { name: 'Registries', element: (async () => (await import('./ob-view-registries.js')).ObViewRegistries)() },
+    { name: 'Registries', iconName: 'lucide:package', element: (async () => (await import('./ob-view-registries.js')).ObViewRegistries)() },
-    { name: 'Tokens', element: (async () => (await import('./ob-view-tokens.js')).ObViewTokens)() },
+    { name: 'Tokens', iconName: 'lucide:key', element: (async () => (await import('./ob-view-tokens.js')).ObViewTokens)() },
-    { name: 'Settings', element: (async () => (await import('./ob-view-settings.js')).ObViewSettings)() },
+    { name: 'Settings', iconName: 'lucide:settings', element: (async () => (await import('./ob-view-settings.js')).ObViewSettings)() },
   ];

-  private resolvedViewTabs: Array<{ name: string; element: any }> = [];
+  private resolvedViewTabs: Array<{ name: string; iconName?: string; element: any }> = [];

   constructor() {
     super();
@@ -104,6 +104,7 @@ export class ObAppShell extends DeesElement {
     this.resolvedViewTabs = await Promise.all(
       this.viewTabs.map(async (tab) => ({
         name: tab.name,
+        iconName: tab.iconName,
         element: await tab.element,
       })),
     );
@@ -24,6 +24,8 @@ export class ObViewDashboard extends DeesElement {
     currentServiceStats: null,
     platformServices: [],
     currentPlatformService: null,
+    currentPlatformServiceStats: null,
+    currentPlatformServiceLogs: [],
   };

   @state()
@@ -149,6 +151,7 @@ export class ObViewDashboard extends DeesElement {
           ],
         }}
         @action-click=${(e: CustomEvent) => this.handleQuickAction(e)}
+        @service-click=${(e: CustomEvent) => this.handlePlatformServiceClick(e)}
       ></sz-dashboard-view>
     `;
   }
@@ -161,4 +164,21 @@ export class ObViewDashboard extends DeesElement {
       appstate.uiStatePart.dispatchAction(appstate.setActiveViewAction, { view: 'network' });
     }
   }
+
+  private handlePlatformServiceClick(e: CustomEvent) {
+    // Find the platform service type from the click event
+    const name = e.detail?.name;
+    const ps = this.servicesState.platformServices.find(
+      (p) => p.displayName === name,
+    );
+    if (ps) {
+      // Navigate to services tab — the ObViewServices component will pick up the type
+      // Store the selected platform type so the services view can open it
+      appstate.servicesStatePart.setState({
+        ...appstate.servicesStatePart.getState(),
+        currentPlatformService: ps,
+      });
+      appstate.uiStatePart.dispatchAction(appstate.setActiveViewAction, { view: 'services' });
+    }
+  }
 }
@@ -107,6 +107,8 @@ export class ObViewServices extends DeesElement {
     currentServiceStats: null,
     platformServices: [],
     currentPlatformService: null,
+    currentPlatformServiceStats: null,
+    currentPlatformServiceLogs: [],
   };

   @state()
@@ -145,7 +147,37 @@ export class ObViewServices extends DeesElement {
   public static styles = [
     cssManager.defaultStyles,
     shared.viewHostCss,
-    css``,
+    css`
+      .page-actions {
+        display: flex;
+        justify-content: flex-end;
+        margin-bottom: 16px;
+      }
+
+      .deploy-button {
+        display: inline-flex;
+        align-items: center;
+        gap: 8px;
+        padding: 10px 20px;
+        background: ${cssManager.bdTheme('#18181b', '#fafafa')};
+        color: ${cssManager.bdTheme('#fafafa', '#18181b')};
+        border: none;
+        border-radius: 6px;
+        font-size: 14px;
+        font-weight: 500;
+        cursor: pointer;
+        transition: opacity 200ms ease;
+      }
+
+      .deploy-button:hover {
+        opacity: 0.9;
+      }
+
+      .deploy-button svg {
+        width: 16px;
+        height: 16px;
+      }
+    `,
   ];

   async connectedCallback() {
@@ -154,6 +186,18 @@ export class ObViewServices extends DeesElement {
       appstate.servicesStatePart.dispatchAction(appstate.fetchServicesAction, null),
       appstate.servicesStatePart.dispatchAction(appstate.fetchPlatformServicesAction, null),
     ]);
+
+    // If a platform service was selected from the dashboard, navigate to its detail
+    const state = appstate.servicesStatePart.getState();
+    if (state.currentPlatformService) {
+      const type = state.currentPlatformService.type;
+      // Clear the selection so it doesn't persist on next visit
+      appstate.servicesStatePart.setState({
+        ...appstate.servicesStatePart.getState(),
+        currentPlatformService: null,
+      });
+      this.navigateToPlatformDetail(type);
+    }
   }

   public render(): TemplateResult {
@@ -178,8 +222,23 @@ export class ObViewServices extends DeesElement {
       domain: s.domain || null,
       status: mapStatus(s.status),
     }));
+    const mappedPlatformServices = this.servicesState.platformServices.map((ps) => ({
+      name: ps.displayName,
+      status: ps.status === 'running' ? `Running` : ps.status,
+      running: ps.status === 'running',
+      type: ps.type,
+    }));
     return html`
       <ob-sectionheading>Services</ob-sectionheading>
+      <div class="page-actions">
+        <button class="deploy-button" @click=${() => { this.currentView = 'create'; }}>
+          <svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
+            <line x1="12" y1="5" x2="12" y2="19"></line>
+            <line x1="5" y1="12" x2="19" y2="12"></line>
+          </svg>
+          Deploy Service
+        </button>
+      </div>
       <sz-services-list-view
         .services=${mappedServices}
         @service-click=${(e: CustomEvent) => {
@@ -197,6 +256,20 @@ export class ObViewServices extends DeesElement {
         }}
         @service-action=${(e: CustomEvent) => this.handleServiceAction(e)}
       ></sz-services-list-view>
+      <ob-sectionheading style="margin-top: 32px;">Platform Services</ob-sectionheading>
+      <div style="max-width: 500px;">
+        <sz-platform-services-card
+          .services=${mappedPlatformServices}
+          @service-click=${(e: CustomEvent) => {
+            const type = e.detail.type || this.servicesState.platformServices.find(
+              (ps) => ps.displayName === e.detail.name,
+            )?.type;
+            if (type) {
+              this.navigateToPlatformDetail(type);
+            }
+          }}
+        ></sz-platform-services-card>
+      </div>
     `;
   }
@@ -206,8 +279,26 @@ export class ObViewServices extends DeesElement {
       <sz-service-create-view
         .registries=${[]}
         @create-service=${async (e: CustomEvent) => {
+          const formConfig = e.detail;
+          const serviceConfig: interfaces.data.IServiceCreate = {
+            name: formConfig.name,
+            image: formConfig.image,
+            port: formConfig.ports?.[0]?.containerPort
+              ? parseInt(formConfig.ports[0].containerPort, 10)
+              : 80,
+            envVars: formConfig.envVars?.reduce(
+              (acc: Record<string, string>, ev: { key: string; value: string }) => {
+                if (ev.key) acc[ev.key] = ev.value;
+                return acc;
+              },
+              {} as Record<string, string>,
+            ),
+            enableMongoDB: formConfig.enableMongoDB || false,
+            enableS3: formConfig.enableS3 || false,
+            enableClickHouse: formConfig.enableClickHouse || false,
+          };
           await appstate.servicesStatePart.dispatchAction(appstate.createServiceAction, {
-            config: e.detail,
+            config: serviceConfig,
           });
           this.currentView = 'list';
         }}
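The form-to-config mapping in that handler has two small pieces of logic worth checking in isolation: the env-var reduction drops rows with an empty key, and the port falls back to 80 when no container port is declared. A standalone sketch (helper names are illustrative; the row and port shapes follow the field names visible in the diff):

```typescript
interface EnvVarRow {
  key: string;
  value: string;
}

// Collapse env-var form rows into a Record, skipping rows whose key is empty,
// matching the reduce in the create-service handler.
function buildEnvVars(rows: EnvVarRow[] | undefined): Record<string, string> {
  return (rows ?? []).reduce((acc: Record<string, string>, ev) => {
    if (ev.key) acc[ev.key] = ev.value;
    return acc;
  }, {});
}

// First declared container port, parsed as base-10, defaulting to 80.
function resolvePort(ports?: Array<{ containerPort: string }>): number {
  return ports?.[0]?.containerPort ? parseInt(ports[0].containerPort, 10) : 80;
}
```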
@@ -265,10 +356,29 @@ export class ObViewServices extends DeesElement {
     `;
   }

+  private navigateToPlatformDetail(type: string): void {
+    this.selectedPlatformType = type;
+    // Fetch stats and logs for this platform service
+    const serviceType = type as interfaces.data.TPlatformServiceType;
+    appstate.servicesStatePart.dispatchAction(appstate.fetchPlatformServiceStatsAction, { serviceType });
+    appstate.servicesStatePart.dispatchAction(appstate.fetchPlatformServiceLogsAction, { serviceType });
+    this.currentView = 'platform-detail';
+  }
+
   private renderPlatformDetailView(): TemplateResult {
     const platformService = this.servicesState.platformServices.find(
       (ps) => ps.type === this.selectedPlatformType,
     );
+    const stats = this.servicesState.currentPlatformServiceStats;
+    const metrics = stats
+      ? {
+          cpu: Math.round(stats.cpuPercent),
+          memory: Math.round(stats.memoryPercent),
+          storage: 0,
+          connections: 0,
+        }
+      : undefined;

     return html`
       <ob-sectionheading>Platform Service</ob-sectionheading>
       <sz-platform-service-detail-view
@@ -277,22 +387,49 @@ export class ObViewServices extends DeesElement {
             id: platformService.type,
             name: platformService.displayName,
             type: platformService.type,
-            status: platformService.status,
+            status: platformService.status === 'running'
+              ? 'running'
+              : platformService.status === 'failed'
+                ? 'error'
+                : 'stopped',
             version: '',
             host: 'localhost',
             port: 0,
             config: {},
+            metrics,
           }
         : null}
-          .logs=${[]}
-          @start=${() => {
-            appstate.servicesStatePart.dispatchAction(appstate.startPlatformServiceAction, {
-              serviceType: this.selectedPlatformType as any,
+          .logs=${this.servicesState.currentPlatformServiceLogs.map((log) => ({
+            timestamp: new Date(log.timestamp).toLocaleString(),
+            level: log.level,
+            message: log.message,
+          }))}
+          @back=${() => {
+            this.currentView = 'list';
+          }}
+          @start=${async () => {
+            await appstate.servicesStatePart.dispatchAction(appstate.startPlatformServiceAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
+            });
+            // Refresh stats after starting
+            appstate.servicesStatePart.dispatchAction(appstate.fetchPlatformServiceStatsAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
             });
           }}
-          @stop=${() => {
-            appstate.servicesStatePart.dispatchAction(appstate.stopPlatformServiceAction, {
-              serviceType: this.selectedPlatformType as any,
+          @stop=${async () => {
+            await appstate.servicesStatePart.dispatchAction(appstate.stopPlatformServiceAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
+            });
+          }}
+          @restart=${async () => {
+            await appstate.servicesStatePart.dispatchAction(appstate.stopPlatformServiceAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
+            });
+            await appstate.servicesStatePart.dispatchAction(appstate.startPlatformServiceAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
+            });
+            appstate.servicesStatePart.dispatchAction(appstate.fetchPlatformServiceStatsAction, {
+              serviceType: this.selectedPlatformType as interfaces.data.TPlatformServiceType,
+            });
           }}
         ></sz-platform-service-detail-view>
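The detail view's status ternary maps the platform service's status onto the detail component's three display states: 'running' passes through, 'failed' becomes 'error', and anything else collapses to 'stopped'. Extracted as a standalone function (the function name and `DetailStatus` alias are illustrative, not onebox identifiers):

```typescript
type DetailStatus = 'running' | 'error' | 'stopped';

// Same mapping as the template ternary in the diff above.
function toDetailStatus(status: string): DetailStatus {
  return status === 'running'
    ? 'running'
    : status === 'failed'
      ? 'error'
      : 'stopped';
}
```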