fix(smarts3): replace TypeScript server with Rust-powered core and IPC bridge
# @push.rocks/smarts3 🚀

A high-performance, S3-compatible local server powered by a **Rust core** with a clean TypeScript API. Drop-in replacement for AWS S3 during development and testing — no cloud, no Docker, no MinIO. Just `npm install` and go.

## Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.

## 🌟 Why smarts3?

| Feature | smarts3 | MinIO | s3rver |
|---------|---------|-------|--------|
| Install | `pnpm add` | Docker / binary | `npm install` |
| Startup time | ~20ms | seconds | ~200ms |
| Large file uploads | ✅ Streaming, zero-copy | ✅ | ❌ OOM risk |
| Range requests | ✅ Seek-based | ✅ | ❌ Full read |
| Language | Rust + TypeScript | Go | JavaScript |
| Multipart uploads | ✅ Full support | ✅ | ❌ |
| Auth | AWS v2/v4 key extraction | Full IAM | Basic |

### Core Features

- ⚡ **Rust-powered HTTP server** — hyper 1.x with streaming I/O, zero-copy, backpressure
- 🔄 **Full S3 API compatibility** — works with AWS SDK v3, SmartBucket, any S3 client
- 📂 **Filesystem-backed storage** — buckets map to directories, objects to files
- 📤 **Streaming multipart uploads** — large files without memory pressure
- 🎯 **Byte-range requests** — `seek()` directly to the requested byte offset
- 🔐 **Authentication** — AWS v2/v4 signature key extraction
- 🌐 **CORS middleware** — configurable cross-origin support
- 📊 **Structured logging** — tracing-based, error through debug levels
- 🧹 **Clean slate mode** — wipe storage on startup for test isolation
- 🧪 **Test-first design** — start/stop in milliseconds, no port conflicts

## 📦 Installation

Install using your favorite package manager:

```bash
# Using npm
npm install @push.rocks/smarts3 --save-dev

# Using pnpm (recommended)
pnpm add @push.rocks/smarts3 -D

# Using yarn
yarn add @push.rocks/smarts3 --dev
```

> **Note:** The package ships with precompiled Rust binaries for `linux_amd64` and `linux_arm64`. No Rust toolchain is needed on your machine.

## 🚀 Quick Start

```typescript
import { Smarts3 } from '@push.rocks/smarts3';

// Start a local S3 server
const s3 = await Smarts3.createAndStart({
  server: { port: 3000 },
  storage: { cleanSlate: true },
});

// Create a bucket
await s3.createBucket('my-bucket');

// Get connection details for any S3 client
const descriptor = await s3.getS3Descriptor();
// → { endpoint: 'localhost', port: 3000, accessKey: 'S3RVER', accessSecret: 'S3RVER', useSsl: false }

// When done
await s3.stop();
```

## 📖 Configuration

All config fields are optional — sensible defaults are applied automatically.

```typescript
import { Smarts3, ISmarts3Config } from '@push.rocks/smarts3';

const config: ISmarts3Config = {
  // Server configuration
  server: {
    port: 3000,          // Default: 3000
    address: '0.0.0.0',  // Default: '0.0.0.0'
    silent: false,       // Default: false
  },

  // Storage configuration
  storage: {
    directory: './my-data', // Default: .nogit/bucketsDir
    cleanSlate: false,      // Default: false — set true to wipe on start
  },

  // Authentication configuration
  auth: {
    enabled: false, // Default: false
    credentials: [{
      accessKeyId: 'MY_KEY',
      secretAccessKey: 'MY_SECRET',
    }],
  },

  // CORS configuration
  cors: {
    enabled: false, // Default: false
    allowedOrigins: ['*'],
    allowedMethods: ['GET', 'POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS'],
    allowedHeaders: ['*'],
    exposedHeaders: ['ETag', 'x-amz-request-id', 'x-amz-version-id'],
    maxAge: 86400, // Preflight cache duration in seconds
    allowCredentials: false,
  },

  // Logging configuration
  logging: {
    level: 'info',  // 'error' | 'warn' | 'info' | 'debug'
    format: 'text', // 'text' | 'json'
    enabled: true,
  },

  // Request limits
  limits: {
    maxObjectSize: 5 * 1024 * 1024 * 1024, // 5 GB
    maxMetadataSize: 2048,                 // 2 KB
    requestTimeout: 300000,                // 5 minutes
  },

  multipart: {
    expirationDays: 7,
    cleanupIntervalMinutes: 60,
  },
};

const s3 = await Smarts3.createAndStart(config);
```

### Common Configurations

**CI/CD testing** — silent, clean, fast:

```typescript
const s3 = await Smarts3.createAndStart({
  server: { port: 9999, silent: true },
  storage: { cleanSlate: true },
});
```

**Auth enabled:**

```typescript
const s3 = await Smarts3.createAndStart({
  auth: {
    enabled: true,
    credentials: [{ accessKeyId: 'test', secretAccessKey: 'test123' }],
  },
});
```

**CORS for local web dev:**

```typescript
const s3 = await Smarts3.createAndStart({
  cors: {
    enabled: true,
    allowedOrigins: ['http://localhost:5173'],
    allowCredentials: true,
  },
});
```

## 📤 Usage with AWS SDK v3

```typescript
import { S3Client, PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';

const descriptor = await s3.getS3Descriptor();

const client = new S3Client({
  endpoint: `http://${descriptor.endpoint}:${descriptor.port}`,
  region: 'us-east-1',
  credentials: {
    accessKeyId: descriptor.accessKey,
    secretAccessKey: descriptor.accessSecret,
  },
  forcePathStyle: true, // Required for path-style S3
});

// Upload
await client.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Body: 'Hello, S3!',
  ContentType: 'text/plain',
}));

// Download
const { Body } = await client.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
}));
const content = await Body.transformToString(); // "Hello, S3!"

// Delete
await client.send(new DeleteObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
}));
```
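
GetObject also honors the `Range` header (the Rust core seeks straight to the requested offset instead of reading the whole file), so ranged reads use the standard SDK parameter. A minimal, illustrative sketch reusing the `client` from above:

```typescript
import { GetObjectCommand } from '@aws-sdk/client-s3';

// Request only the first 100 bytes of an object (illustrative; reuses `client` from above).
const ranged = await client.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Range: 'bytes=0-99',
}));
console.log(ranged.ContentRange); // e.g. "bytes 0-99/1234"
const firstChunk = await ranged.Body?.transformToString();
```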

## 🪣 Usage with SmartBucket

```typescript
import { SmartBucket } from '@push.rocks/smartbucket';

const smartbucket = new SmartBucket(await s3.getS3Descriptor());
const bucket = await smartbucket.createBucket('my-bucket');
const dir = await bucket.getBaseDirectory();

// Upload
await dir.fastPut({ path: 'docs/readme.txt', contents: 'Hello!' });

// Download
const content = await dir.fastGet('docs/readme.txt');

// List
const files = await dir.listFiles();
```

## 📤 Multipart Uploads

For files larger than 5 MB, use multipart uploads. smarts3 handles them with **streaming I/O** — parts are written directly to disk, never buffered in memory.

```typescript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

// 1. Initiate
const { UploadId } = await client.send(new CreateMultipartUploadCommand({
  Bucket: 'my-bucket',
  Key: 'large-file.bin',
}));

// 2. Upload parts
const parts = [];
for (let i = 0; i < chunks.length; i++) {
  const { ETag } = await client.send(new UploadPartCommand({
    Bucket: 'my-bucket',
    Key: 'large-file.bin',
    UploadId,
    PartNumber: i + 1,
    Body: chunks[i],
  }));
  parts.push({ PartNumber: i + 1, ETag });
}

// 3. Complete
await client.send(new CompleteMultipartUploadCommand({
  Bucket: 'my-bucket',
  Key: 'large-file.bin',
  UploadId,
  MultipartUpload: { Parts: parts },
}));
```

## 🧪 Testing Integration

```typescript
import { Smarts3 } from '@push.rocks/smarts3';
import { tap, expect } from '@git.zone/tstest/tapbundle';

let s3: Smarts3;

tap.test('setup', async () => {
  s3 = await Smarts3.createAndStart({
    server: { port: 4567, silent: true },
    storage: { cleanSlate: true },
  });
});

tap.test('should store and retrieve objects', async () => {
  await s3.createBucket('test');
  // ... your test logic using AWS SDK or SmartBucket
});

tap.test('teardown', async () => {
  await s3.stop();
});

export default tap.start();
```

## 🔧 API Reference

### `Smarts3` Class

#### `static createAndStart(config?: ISmarts3Config): Promise<Smarts3>`

Create and start a server in one call.

#### `start(): Promise<void>`

Spawn the Rust binary and start the HTTP server.

#### `stop(): Promise<void>`

Gracefully stop the server and kill the Rust process.

#### `createBucket(name: string): Promise<{ name: string }>`

Create an S3 bucket.

#### `getS3Descriptor(options?): Promise<IS3Descriptor>`

Get connection details for S3 clients. Returns:

| Field | Type | Description |
|-------|------|-------------|
| `endpoint` | `string` | Server hostname (`localhost` by default) |
| `port` | `number` | Server port |
| `accessKey` | `string` | Access key from the first configured credential |
| `accessSecret` | `string` | Secret key from the first configured credential |
| `useSsl` | `boolean` | Always `false` (plain HTTP) |
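
The `options` argument is a partial descriptor merged over the defaults, which is useful when a client needs, for example, `127.0.0.1` instead of `localhost`. A small, illustrative sketch:

```typescript
// Override individual descriptor fields; anything not specified keeps its default.
const descriptor = await s3.getS3Descriptor({ endpoint: '127.0.0.1' });
// descriptor.port, accessKey, accessSecret, and useSsl still reflect the running server.
```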

## 🏗️ Architecture

smarts3 uses a **hybrid Rust + TypeScript** architecture:

```
┌─────────────────────────────────┐
│  Your Code (AWS SDK, etc.)      │
│    ↕ HTTP (localhost:3000)      │
├─────────────────────────────────┤
│  rusts3 binary (Rust)           │
│   ├─ hyper 1.x HTTP server      │
│   ├─ S3 path-style routing      │
│   ├─ Streaming storage layer    │
│   ├─ Multipart manager          │
│   ├─ CORS / Auth middleware     │
│   └─ S3 XML response builder    │
├─────────────────────────────────┤
│  TypeScript (thin IPC wrapper)  │
│   ├─ Smarts3 class              │
│   ├─ RustBridge (stdin/stdout)  │
│   └─ Config & S3 descriptor     │
└─────────────────────────────────┘
```

**Why Rust?** The TypeScript implementation had critical performance issues: OOM on multipart uploads (parts buffered in memory), double stream copying, file descriptor leaks on HEAD requests, full-file reads for range requests, and no backpressure. The Rust binary solves all of these with streaming I/O, zero-copy, and direct `seek()` for range requests.

**IPC Protocol:** TypeScript spawns the `rusts3` binary with `--management` and communicates via newline-delimited JSON over stdin/stdout. Commands: `start`, `stop`, `createBucket`.
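
For illustration, here is a stripped-down sketch of that management channel from the TypeScript side: one JSON command per line on stdin, one JSON reply per line on stdout. The message shapes and reply handling below are assumptions for the sake of the example, not the exact `RustBridge` implementation:

```typescript
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';

// Spawn the bundled binary in management mode (illustrative sketch).
const child = spawn('./rusts3', ['--management'], { stdio: ['pipe', 'pipe', 'inherit'] });
const replies = createInterface({ input: child.stdout! });

// One newline-delimited JSON command per line on stdin.
function send(command: object): void {
  child.stdin!.write(JSON.stringify(command) + '\n');
}

// One newline-delimited JSON reply per line on stdout.
replies.on('line', (line) => {
  console.log('rusts3 replied:', JSON.parse(line));
});

send({ cmd: 'start', config: { server: { port: 3000 } } }); // command shape is assumed
send({ cmd: 'createBucket', name: 'my-bucket' });           // command shape is assumed
```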

### S3 Operations Supported

| Operation | Request | Notes |
|-----------|---------|-------|
| ListBuckets | `GET /` | |
| CreateBucket | `PUT /{bucket}` | |
| DeleteBucket | `DELETE /{bucket}` | |
| HeadBucket | `HEAD /{bucket}` | |
| ListObjects (v1/v2) | `GET /{bucket}` | `?list-type=2` for v2 |
| PutObject | `PUT /{bucket}/{key}` | |
| GetObject | `GET /{bucket}/{key}` | Supports `Range` header |
| HeadObject | `HEAD /{bucket}/{key}` | |
| DeleteObject | `DELETE /{bucket}/{key}` | |
| CopyObject | `PUT /{bucket}/{key}` | `x-amz-copy-source` header |
| InitiateMultipartUpload | `POST /{bucket}/{key}?uploads` | |
| UploadPart | `PUT /{bucket}/{key}?partNumber&uploadId` | |
| CompleteMultipartUpload | `POST /{bucket}/{key}?uploadId` | |
| AbortMultipartUpload | `DELETE /{bucket}/{key}?uploadId` | |
| ListMultipartUploads | `GET /{bucket}?uploads` | |
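
Two of the less obvious entries from the table, exercised with AWS SDK v3 (an illustrative sketch that reuses the `client` from the usage section above):

```typescript
import { ListObjectsV2Command, CopyObjectCommand } from '@aws-sdk/client-s3';

// ListObjects v2 (`GET /{bucket}?list-type=2`) with a prefix filter.
const listing = await client.send(new ListObjectsV2Command({
  Bucket: 'my-bucket',
  Prefix: 'docs/',
}));
console.log(listing.Contents?.map((object) => object.Key));

// CopyObject (`PUT /{bucket}/{key}`); the SDK sets the x-amz-copy-source header from CopySource.
await client.send(new CopyObjectCommand({
  Bucket: 'my-bucket',
  CopySource: 'my-bucket/docs/readme.txt',
  Key: 'docs/readme-copy.txt',
}));
```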

### On-Disk Format

```
{storage.directory}/
  {bucket}/
    {key}._S3_object                  # Object data
    {key}._S3_object.metadata.json    # Metadata (content-type, x-amz-meta-*, etc.)
    {key}._S3_object.md5              # Cached MD5 hash
  .multipart/
    {upload-id}/
      metadata.json                   # Upload metadata (bucket, key, parts)
      part-1                          # Part data files
      part-2
      ...
```
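
Because objects are plain files, tests can also assert against the filesystem directly. The following is an illustrative sketch based on the layout above; the paths assume a server started with `storage.directory: './my-data'` and an object `docs/readme.txt` uploaded to `my-bucket`:

```typescript
import { promises as fs } from 'node:fs';
import * as path from 'node:path';

// Hypothetical paths, derived from the documented on-disk layout.
const objectPath = path.join('./my-data', 'my-bucket', 'docs/readme.txt._S3_object');
const metadataPath = `${objectPath}.metadata.json`;

const data = await fs.readFile(objectPath, 'utf8');                   // raw object bytes
const metadata = JSON.parse(await fs.readFile(metadataPath, 'utf8')); // content-type, x-amz-meta-*
console.log(data, metadata);
```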

## 🔗 Related Packages

- [`@push.rocks/smartbucket`](https://code.foss.global/push.rocks/smartbucket) — High-level S3 abstraction layer
- [`@push.rocks/smartrust`](https://code.foss.global/push.rocks/smartrust) — TypeScript ↔ Rust IPC bridge
- [`@git.zone/tsrust`](https://code.foss.global/git.zone/tsrust) — Rust cross-compilation for npm packages

## License and Legal Information

This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the [LICENSE](./LICENSE) file.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.

Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and are used only in a descriptive manner, e.g. for an implementation of an API or similar.

### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.