12 Commits
v3.1.0 ... main

Author SHA1 Message Date
54a0c2fb65 v5.1.0
Some checks failed
Default (tags) / security (push) Successful in 38s
Default (tags) / test (push) Failing after 37s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 23:31:26 +00:00
648ff98c2d feat(multipart): Implement full multipart upload support with persistent manager, periodic cleanup, and API integration 2025-11-23 23:31:26 +00:00
d6f178bde6 v5.0.2
Some checks failed
Default (tags) / security (push) Successful in 24s
Default (tags) / test (push) Failing after 35s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:53:39 +00:00
ffaef5cb15 fix(readme): Clarify contribution agreement requirement in README 2025-11-23 22:53:39 +00:00
d4cc1d43ea v5.0.1
Some checks failed
Default (tags) / security (push) Successful in 35s
Default (tags) / test (push) Failing after 35s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:52:19 +00:00
759becdd04 fix(docs): Clarify README wording about S3 compatibility and AWS SDK usage 2025-11-23 22:52:19 +00:00
51e8836227 v5.0.0
Some checks failed
Default (tags) / security (push) Successful in 25s
Default (tags) / test (push) Failing after 35s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:46:42 +00:00
3c0a54e08b BREAKING CHANGE(core): Production-ready S3-compatible server: nested config, multipart uploads, CORS, structured logging, SmartFS migration and improved error handling 2025-11-23 22:46:42 +00:00
c074a5d2ed v4.0.0
Some checks failed
Default (tags) / security (push) Successful in 36s
Default (tags) / test (push) Failing after 37s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:42:47 +00:00
a9ba9de6be BREAKING CHANGE(Smarts3): Migrate Smarts3 configuration to nested server/storage objects and remove legacy flat config support 2025-11-23 22:42:47 +00:00
263e7a58b9 v3.2.0
Some checks failed
Default (tags) / security (push) Successful in 25s
Default (tags) / test (push) Failing after 35s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:41:46 +00:00
74b81d7ba8 feat(multipart): Add multipart upload support with MultipartUploadManager and controller integration 2025-11-23 22:41:46 +00:00
12 changed files with 865 additions and 185 deletions


@@ -1,5 +1,54 @@
# Changelog
## 2025-11-23 - 5.1.0 - feat(multipart)
Implement full multipart upload support with persistent manager, periodic cleanup, and API integration
- Add IMultipartConfig to server config with defaults (expirationDays: 7, cleanupIntervalMinutes: 60) and merge into existing config flow
- Introduce MultipartUploadManager: persistent upload metadata on disk, part upload/assembly, restore uploads on startup, listParts/listUploads, abort/cleanup functionality
- Start and stop multipart cleanup task from Smarts3Server lifecycle (startCleanupTask on start, stopCleanupTask on stop) with configurable interval and expiration
- ObjectController: support multipart endpoints (initiate, upload part, complete, abort) and move assembled final object into the object store on completion; set ETag headers and return proper XML responses
- BucketController: support listing in-progress multipart uploads via ?uploads query parameter and return S3-compatible XML
- Persist multipart state to disk and restore on initialization to survive restarts; perform automatic cleanup of expired uploads
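The expiration and cleanup defaults above translate directly into the timer math the cleanup task needs. A minimal sketch — the `IMultipartConfig` field names come from this changelog entry, while the helper names (`cleanupIntervalMs`, `expirationCutoff`) are illustrative only, not the package's API:

```typescript
// Field names taken from the changelog; defaults as documented.
interface IMultipartConfig {
  expirationDays: number;         // uploads older than this are expired
  cleanupIntervalMinutes: number; // how often the cleanup task runs
}

const multipartDefaults: IMultipartConfig = {
  expirationDays: 7,
  cleanupIntervalMinutes: 60,
};

// The periodic cleanup task would fire on this interval:
const cleanupIntervalMs = multipartDefaults.cleanupIntervalMinutes * 60 * 1000;

// ...and discard any upload initiated before this cutoff:
const expirationCutoff = (nowMs: number): number =>
  nowMs - multipartDefaults.expirationDays * 24 * 60 * 60 * 1000;
```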
## 2025-11-23 - 5.0.2 - fix(readme)
Clarify contribution agreement requirement in README
- Updated the Issue Reporting and Security section in readme.md to make it explicit that developers must sign and comply with the contribution agreement (and complete identification) before obtaining a code.foss.global account to submit pull requests.
## 2025-11-23 - 5.0.1 - fix(docs)
Clarify README wording about S3 compatibility and AWS SDK usage
- Update README wording to "Full S3 API compatibility" and clarify it works seamlessly with AWS SDK v3 and other S3 clients
## 2025-11-23 - 5.0.0 - BREAKING CHANGE(core)
Production-ready S3-compatible server: nested config, multipart uploads, CORS, structured logging, SmartFS migration and improved error handling
- Breaking change: configuration format migrated from flat to nested structure (server, storage, auth, cors, logging, limits). Update existing configs accordingly.
- Implemented full multipart upload support (initiate, upload part, complete, abort) with on-disk part management and final assembly.
- Added CORS middleware with configurable origins, methods, headers, exposed headers, maxAge and credentials support.
- Structured, configurable logging (levels: error|warn|info|debug; formats: text|json) and request/response logging middleware.
- Simple static credential authentication middleware (configurable list of credentials).
- Migrated filesystem operations to @push.rocks/smartfs (Web Streams interoperability) and removed smartbucket from production dependencies.
- Improved S3-compatible error handling and XML responses (S3Error class and XML utilities).
- Exposed Smarts3Server and made store/multipart managers accessible for tests and advanced usage; added helper methods like getS3Descriptor and createBucket.
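On the error-handling point: S3 clients expect errors as an XML body containing `Code`, `Message`, and `Resource` elements. A hedged sketch of that shape — the real `S3Error` class and XML utilities live in the package; the class and method names below are assumptions for illustration:

```typescript
// Illustrative stand-in for the package's S3Error + XML utilities;
// the XML element names follow the standard S3 error response format.
class S3ErrorSketch {
  constructor(
    public code: string,
    public message: string,
    public statusCode: number,
  ) {}

  // Render the standard S3 error body for a given resource path.
  toXml(resource: string): string {
    return (
      '<?xml version="1.0" encoding="UTF-8"?>' +
      `<Error><Code>${this.code}</Code>` +
      `<Message>${this.message}</Message>` +
      `<Resource>${resource}</Resource></Error>`
    );
  }
}

const err = new S3ErrorSketch('NoSuchKey', 'The specified key does not exist.', 404);
const xml = err.toXml('/my-bucket/missing.txt');
```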
## 2025-11-23 - 4.0.0 - BREAKING CHANGE(Smarts3)
Migrate Smarts3 configuration to nested server/storage objects and remove legacy flat config support
- Smarts3.createAndStart() and Smarts3 constructor now accept ISmarts3Config with nested `server` and `storage` objects.
- Removed support for the legacy flat config shape (top-level `port` and `cleanSlate`) / ILegacySmarts3Config.
- Updated tests to use new config shape (server:{ port, silent } and storage:{ cleanSlate }).
- mergeConfig and Smarts3Server now rely on the nested config shape; consumers must update their initialization code.
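For consumers updating their initialization code, the flat-to-nested mapping can be expressed as a tiny adapter. A sketch under stated assumptions — `migrateConfig` and both interface names below are hypothetical, not part of the package:

```typescript
// Hypothetical shapes mirroring the migration described above:
// legacy flat config (top-level port/cleanSlate) → nested server/storage.
interface ILegacyFlatConfig {
  port?: number;
  cleanSlate?: boolean;
}

interface INestedConfig {
  server: { port?: number };
  storage: { cleanSlate?: boolean };
}

// Move each legacy top-level field into its nested home.
const migrateConfig = (legacy: ILegacyFlatConfig): INestedConfig => ({
  server: { port: legacy.port },
  storage: { cleanSlate: legacy.cleanSlate },
});

const nested = migrateConfig({ port: 3000, cleanSlate: true });
```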
## 2025-11-23 - 3.2.0 - feat(multipart)
Add multipart upload support with MultipartUploadManager and controller integration
- Introduce MultipartUploadManager (ts/classes/multipart-manager.ts) to manage multipart upload lifecycle and store parts on disk
- Wire multipart manager into server and request context (S3Context, Smarts3Server) and initialize multipart storage on server start
- Add multipart-related routes and handlers in ObjectController: initiate (POST ?uploads), upload part (PUT ?partNumber&uploadId), complete (POST ?uploadId), and abort (DELETE ?uploadId)
- On complete, combine parts into final object and store via existing FilesystemStore workflow
- Expose multipart manager on Smarts3Server for controller access
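The four multipart routes above are distinguished purely by HTTP method and query parameters, as in the real S3 API. An illustrative dispatch sketch — the function and type names are assumptions, not the package's internals:

```typescript
type MultipartAction = 'initiate' | 'uploadPart' | 'complete' | 'abort' | null;

// Map method + query parameters onto the multipart actions listed above.
// Note: POST ?uploads must be checked before POST ?uploadId, since a
// complete request also carries uploadId but not the uploads marker.
const routeMultipart = (
  method: string,
  query: Record<string, string>,
): MultipartAction => {
  if (method === 'POST' && 'uploads' in query) return 'initiate';
  if (method === 'PUT' && query.partNumber && query.uploadId) return 'uploadPart';
  if (method === 'POST' && query.uploadId) return 'complete';
  if (method === 'DELETE' && query.uploadId) return 'abort';
  return null; // fall through to regular object handling
};
```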
## 2025-11-23 - 3.1.0 - feat(logging)
Add structured Logger and integrate into Smarts3Server; pass full config to server


@@ -1,6 +1,6 @@
{
  "name": "@push.rocks/smarts3",
-  "version": "3.1.0",
+  "version": "5.1.0",
  "private": false,
  "description": "A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.",
  "main": "dist_ts/index.js",

readme.md

@@ -1,21 +1,25 @@
# @push.rocks/smarts3 🚀
-**Mock S3 made simple** - A powerful Node.js TypeScript package for creating a local S3 endpoint that simulates AWS S3 operations using mapped local directories. Perfect for development and testing!
+**Production-ready S3-compatible server** - A powerful, lightweight Node.js TypeScript package that brings full S3 API compatibility to your local filesystem. Perfect for development, testing, and scenarios where running MinIO is out of scope!
## 🌟 Features
- 🏃 **Lightning-fast local S3 simulation** - No more waiting for cloud operations during development
-- **Native custom S3 server** - Built on Node.js http module with zero framework dependencies
+- **Production-ready architecture** - Built on Node.js http module with zero framework dependencies
-- 🔄 **Full AWS S3 API compatibility** - Drop-in replacement for AWS SDK v3 and other S3 clients
+- 🔄 **Full S3 API compatibility** - Works seamlessly with AWS SDK v3 and any other S3 client
-- 📂 **Local directory mapping** - Your buckets live right on your filesystem with Windows-compatible encoding
+- 📂 **Local directory mapping** - Your buckets live right on your filesystem
+- 🔐 **Simple authentication** - Static credential-based auth for secure access
+- 🌐 **CORS support** - Configurable cross-origin resource sharing
+- 📊 **Structured logging** - Multiple levels (error/warn/info/debug) and formats (text/JSON)
+- 📤 **Multipart uploads** - Full support for large file uploads (>5MB)
- 🧪 **Perfect for testing** - Reliable, repeatable tests without cloud dependencies
- 🎯 **TypeScript-first** - Built with TypeScript for excellent type safety and IDE support
-- 🔧 **Zero configuration** - Works out of the box with sensible defaults
+- 🔧 **Flexible configuration** - Comprehensive config system with sensible defaults
- 🧹 **Clean slate mode** - Start fresh on every test run
## Issue Reporting and Security
-For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who want to sign a contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
+For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
## 📦 Installation
@@ -39,10 +43,15 @@ Get up and running in seconds:
```typescript
import { Smarts3 } from '@push.rocks/smarts3';
-// Start your local S3 server
+// Start your local S3 server with minimal config
const s3Server = await Smarts3.createAndStart({
+  server: {
    port: 3000,
+    silent: false,
+  },
+  storage: {
    cleanSlate: true, // Start with empty buckets
+  },
});
// Create a bucket
@@ -55,44 +64,165 @@ const s3Config = await s3Server.getS3Descriptor();
await s3Server.stop();
```
-## 📖 Detailed Usage Guide
-### 🏗️ Setting Up Your S3 Server
-The `Smarts3` class provides a simple interface for managing your local S3 server:
+## 📖 Configuration Guide
+### Complete Configuration Options
+Smarts3 uses a comprehensive nested configuration structure:
```typescript
-import { Smarts3 } from '@push.rocks/smarts3';
-// Configuration options
-const config = {
-  port: 3000, // Port to run the server on (default: 3000)
-  cleanSlate: true, // Clear all data on start (default: false)
+import { Smarts3, ISmarts3Config } from '@push.rocks/smarts3';
+const config: ISmarts3Config = {
+  // Server configuration
+  server: {
+    port: 3000, // Port to listen on (default: 3000)
address: '0.0.0.0', // Bind address (default: '0.0.0.0')
silent: false, // Disable all console output (default: false)
},
// Storage configuration
storage: {
directory: './buckets', // Directory to store buckets (default: .nogit/bucketsDir)
cleanSlate: false, // Clear all data on start (default: false)
},
// Authentication configuration
auth: {
enabled: false, // Enable authentication (default: false)
credentials: [ // List of valid credentials
{
accessKeyId: 'YOUR_ACCESS_KEY',
secretAccessKey: 'YOUR_SECRET_KEY',
},
],
},
// CORS configuration
cors: {
enabled: false, // Enable CORS (default: false)
allowedOrigins: ['*'], // Allowed origins (default: ['*'])
allowedMethods: [ // Allowed HTTP methods
'GET', 'POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS'
],
allowedHeaders: ['*'], // Allowed headers (default: ['*'])
exposedHeaders: [ // Headers exposed to client
'ETag', 'x-amz-request-id', 'x-amz-version-id'
],
maxAge: 86400, // Preflight cache duration in seconds
allowCredentials: false, // Allow credentials (default: false)
},
// Logging configuration
logging: {
level: 'info', // Log level: 'error' | 'warn' | 'info' | 'debug'
format: 'text', // Log format: 'text' | 'json'
enabled: true, // Enable logging (default: true)
},
// Request limits
limits: {
maxObjectSize: 5 * 1024 * 1024 * 1024, // 5GB max object size
maxMetadataSize: 2048, // 2KB max metadata size
requestTimeout: 300000, // 5 minutes request timeout
},
};
+// Create and start in one go
const s3Server = await Smarts3.createAndStart(config);
+// Or create and start separately
+const s3Server = new Smarts3(config);
+await s3Server.start();
```
-### 🪣 Working with Buckets
-Creating and managing buckets is straightforward:
+### Simple Configuration Examples
+**Development Mode (Default)**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
storage: { cleanSlate: true },
});
```
**Production Mode with Auth**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
auth: {
enabled: true,
credentials: [
{
accessKeyId: process.env.S3_ACCESS_KEY,
secretAccessKey: process.env.S3_SECRET_KEY,
},
],
},
logging: {
level: 'warn',
format: 'json',
},
});
```
**CORS-Enabled for Web Apps**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
cors: {
enabled: true,
allowedOrigins: ['http://localhost:8080', 'https://app.example.com'],
allowCredentials: true,
},
});
```
## 🪣 Working with Buckets
### Creating Buckets
```typescript
// Create a new bucket
const bucket = await s3Server.createBucket('my-bucket');
-// The bucket is now ready to use!
console.log(`Created bucket: ${bucket.name}`);
```
-### 📤 Uploading Files
-Use the powerful `SmartBucket` integration for file operations:
+## 📤 File Operations
+### Using AWS SDK v3
```typescript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
// Get connection config
const config = await s3Server.getS3Descriptor();
// Configure AWS SDK client
const s3Client = new S3Client({
endpoint: `http://${config.endpoint}:${config.port}`,
region: 'us-east-1',
credentials: {
accessKeyId: config.accessKey,
secretAccessKey: config.accessSecret,
},
forcePathStyle: true,
});
// Upload a file
await s3Client.send(new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'test-file.txt',
Body: 'Hello from AWS SDK!',
ContentType: 'text/plain',
}));
// Download a file
const response = await s3Client.send(new GetObjectCommand({
Bucket: 'my-bucket',
Key: 'test-file.txt',
}));
const content = await response.Body.transformToString();
console.log(content); // "Hello from AWS SDK!"
```
### Using SmartBucket
```typescript
import { SmartBucket } from '@push.rocks/smartbucket';
@@ -102,63 +232,74 @@ const s3Config = await s3Server.getS3Descriptor();
// Create a SmartBucket instance
const smartbucket = new SmartBucket(s3Config);
-// Get your bucket
const bucket = await smartbucket.getBucket('my-bucket');
-// Upload a file
const baseDir = await bucket.getBaseDirectory();
-await baseDir.fastStore('path/to/file.txt', 'Hello, S3! 🎉');
-// Upload with more control
+// Upload files
+await baseDir.fastStore('path/to/file.txt', 'Hello, S3! 🎉');
await baseDir.fastPut({
  path: 'documents/important.pdf',
  contents: Buffer.from(yourPdfData),
});
-```
-### 📥 Downloading Files
-Retrieve your files easily:
-```typescript
-// Get file contents as string
+// Download files
const content = await baseDir.fastGet('path/to/file.txt');
-console.log(content); // "Hello, S3! 🎉"
-// Get file as Buffer
const buffer = await baseDir.fastGetBuffer('documents/important.pdf');
-```
-### 📋 Listing Files
-Browse your bucket contents:
-```typescript
-// List all files in the bucket
+// List files
const files = await baseDir.listFiles();
files.forEach((file) => {
  console.log(`📄 ${file.name} (${file.size} bytes)`);
});
-// List files with a specific prefix
-const docs = await baseDir.listFiles('documents/');
+// Delete files
+await baseDir.fastDelete('old-file.txt');
```
-### 🗑️ Deleting Files
-Clean up when needed:
+## 📤 Multipart Uploads
+Smarts3 supports multipart uploads for large files (>5MB):
```typescript
-// Delete a single file
-await baseDir.fastDelete('old-file.txt');
-// Delete multiple files
-const filesToDelete = ['temp1.txt', 'temp2.txt', 'temp3.txt'];
-for (const file of filesToDelete) {
-  await baseDir.fastDelete(file);
-}
+import {
+  S3Client,
+  CreateMultipartUploadCommand,
+  UploadPartCommand,
+  CompleteMultipartUploadCommand
+} from '@aws-sdk/client-s3';
+const s3Client = new S3Client(/* ... */);
+// 1. Initiate multipart upload
+const { UploadId } = await s3Client.send(new CreateMultipartUploadCommand({
+  Bucket: 'my-bucket',
+  Key: 'large-file.bin',
+}));
+// 2. Upload parts (in parallel if desired)
+const parts = [];
+for (let i = 0; i < numParts; i++) {
+  const part = await s3Client.send(new UploadPartCommand({
+    Bucket: 'my-bucket',
+    Key: 'large-file.bin',
+    UploadId,
+    PartNumber: i + 1,
+    Body: partData[i],
+  }));
+  parts.push({
+    PartNumber: i + 1,
+    ETag: part.ETag,
+  });
+}
+// 3. Complete the upload
+await s3Client.send(new CompleteMultipartUploadCommand({
+  Bucket: 'my-bucket',
+  Key: 'large-file.bin',
+  UploadId,
+  MultipartUpload: { Parts: parts },
+}));
```
## 🧪 Testing Integration
@@ -173,8 +314,8 @@ describe('S3 Operations', () => {
beforeAll(async () => {
  s3Server = await Smarts3.createAndStart({
-    port: 9999,
-    cleanSlate: true,
+    server: { port: 9999, silent: true },
+    storage: { cleanSlate: true },
  });
});
@@ -200,8 +341,8 @@ describe('S3 Operations', () => {
before(async () => {
  s3Server = await Smarts3.createAndStart({
-    port: 9999,
-    cleanSlate: true,
+    server: { port: 9999, silent: true },
+    storage: { cleanSlate: true },
  });
});
@@ -216,40 +357,7 @@ describe('S3 Operations', () => {
});
```
-## 🔌 AWS SDK Integration
+## 🎯 Real-World Use Cases
-Use `smarts3` with the official AWS SDK:
-```typescript
-import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
-import { Smarts3 } from '@push.rocks/smarts3';
-// Start local S3
-const s3Server = await Smarts3.createAndStart({ port: 3000 });
-const config = await s3Server.getS3Descriptor();
-// Configure AWS SDK
-const s3Client = new S3Client({
-  endpoint: `http://${config.endpoint}:${config.port}`,
-  region: 'us-east-1',
-  credentials: {
-    accessKeyId: config.accessKey,
-    secretAccessKey: config.accessSecret,
-  },
-  forcePathStyle: true,
-});
-// Use AWS SDK as normal
-const command = new PutObjectCommand({
-  Bucket: 'my-bucket',
-  Key: 'test-file.txt',
-  Body: 'Hello from AWS SDK!',
-});
-await s3Client.send(command);
-```
-## 🎯 Real-World Examples
### CI/CD Pipeline Testing
@@ -258,10 +366,13 @@ await s3Client.send(command);
import { Smarts3 } from '@push.rocks/smarts3';
export async function setupTestEnvironment() {
-  // Start S3 server for CI tests
  const s3 = await Smarts3.createAndStart({
+    server: {
      port: process.env.S3_PORT || 3000,
-      cleanSlate: true,
+      silent: true,
+    },
+    storage: { cleanSlate: true },
+    logging: { level: 'error' }, // Only log errors in CI
  });
  // Create test buckets
@@ -281,8 +392,15 @@ import { Smarts3 } from '@push.rocks/smarts3';
import express from 'express';
async function startDevelopmentServer() {
-  // Start local S3
-  const s3 = await Smarts3.createAndStart({ port: 3000 });
+  // Start local S3 with CORS for local development
+  const s3 = await Smarts3.createAndStart({
+    server: { port: 3000 },
+    cors: {
+      enabled: true,
+      allowedOrigins: ['http://localhost:8080'],
+    },
+  });
  await s3.createBucket('user-uploads');
  // Start your API server
@@ -302,13 +420,16 @@ async function startDevelopmentServer() {
```typescript
import { Smarts3 } from '@push.rocks/smarts3';
+import { SmartBucket } from '@push.rocks/smartbucket';
async function testDataMigration() {
-  const s3 = await Smarts3.createAndStart({ cleanSlate: true });
+  const s3 = await Smarts3.createAndStart({
+    storage: { cleanSlate: true },
+  });
  // Create source and destination buckets
-  const sourceBucket = await s3.createBucket('legacy-data');
-  const destBucket = await s3.createBucket('new-data');
+  await s3.createBucket('legacy-data');
+  await s3.createBucket('new-data');
  // Populate source with test data
  const config = await s3.getS3Descriptor();
@@ -316,15 +437,8 @@ async function testDataMigration() {
  const source = await smartbucket.getBucket('legacy-data');
  const sourceDir = await source.getBaseDirectory();
-  // Add test files
-  await sourceDir.fastStore(
-    'user-1.json',
-    JSON.stringify({ id: 1, name: 'Alice' }),
-  );
-  await sourceDir.fastStore(
-    'user-2.json',
-    JSON.stringify({ id: 2, name: 'Bob' }),
-  );
+  await sourceDir.fastStore('user-1.json', JSON.stringify({ id: 1, name: 'Alice' }));
+  await sourceDir.fastStore('user-2.json', JSON.stringify({ id: 2, name: 'Bob' }));
  // Run your migration logic
  await runMigration(config);
@@ -338,77 +452,105 @@ async function testDataMigration() {
}
```
## 🛠️ Advanced Configuration
### Custom S3 Descriptor Options
When integrating with different S3 clients, you can customize the connection details:
```typescript
const customDescriptor = await s3Server.getS3Descriptor({
endpoint: 'localhost', // Custom endpoint
port: 3001, // Different port
useSsl: false, // SSL configuration
// Add any additional options your S3 client needs
});
```
### Environment-Based Configuration
```typescript
const config = {
port: parseInt(process.env.S3_PORT || '3000'),
cleanSlate: process.env.NODE_ENV === 'test',
};
const s3Server = await Smarts3.createAndStart(config);
```
## 🤝 Use Cases
- **🧪 Unit & Integration Testing** - Test S3 operations without AWS credentials or internet
- **🏗️ Local Development** - Develop cloud features offline with full S3 compatibility
- **📚 Teaching & Demos** - Perfect for workshops and tutorials without AWS setup
- **🔄 CI/CD Pipelines** - Reliable S3 operations in containerized test environments
- **🎭 Mocking & Stubbing** - Replace real S3 calls in test suites
- **📊 Data Migration Testing** - Safely test data migrations locally before production
## 🔧 API Reference
### Smarts3 Class
-#### Constructor Options
-```typescript
-interface ISmarts3ContructorOptions {
-  port?: number; // Server port (default: 3000)
-  cleanSlate?: boolean; // Clear storage on start (default: false)
-}
-```
-#### Methods
-- `static createAndStart(options)` - Create and start server in one call
-- `start()` - Start the S3 server
-- `stop()` - Stop the S3 server
-- `createBucket(name)` - Create a new bucket
-- `getS3Descriptor(options?)` - Get S3 connection configuration
+#### Static Methods
+##### `createAndStart(config?: ISmarts3Config): Promise<Smarts3>`
+Create and start a Smarts3 instance in one call.
+**Parameters:**
+- `config` - Optional configuration object (see Configuration Guide above)
+**Returns:** Promise that resolves to a running Smarts3 instance
#### Instance Methods
##### `start(): Promise<void>`
Start the S3 server.
##### `stop(): Promise<void>`
Stop the S3 server and release resources.
##### `createBucket(name: string): Promise<{ name: string }>`
Create a new S3 bucket.
**Parameters:**
- `name` - Bucket name
**Returns:** Promise that resolves to bucket information
##### `getS3Descriptor(options?): Promise<IS3Descriptor>`
Get S3 connection configuration for use with S3 clients.
**Parameters:**
- `options` - Optional partial descriptor to merge with defaults
**Returns:** Promise that resolves to S3 descriptor with:
- `accessKey` - Access key for authentication
- `accessSecret` - Secret key for authentication
- `endpoint` - Server endpoint (hostname/IP)
- `port` - Server port
- `useSsl` - Whether to use SSL (always false for local server)
## 💡 Production Considerations
### When to Use Smarts3 vs MinIO
**Use Smarts3 when:**
- 🎯 You need a lightweight, zero-dependency S3 server
- 🧪 Running in CI/CD pipelines or containerized test environments
- 🏗️ Local development where MinIO setup is overkill
- 📦 Your application needs to bundle an S3-compatible server
- 🚀 Quick prototyping without infrastructure setup
**Use MinIO when:**
- 🏢 Production workloads requiring high availability
- 📊 Advanced features like versioning, replication, encryption at rest
- 🔐 Complex IAM policies and bucket policies
- 📈 High-performance requirements with multiple nodes
- 🌐 Multi-tenant environments
### Security Notes
- Smarts3's authentication is intentionally simple (static credentials)
- It does **not** implement AWS Signature V4 verification
- Perfect for development/testing, but not for production internet-facing deployments
- For production use, place behind a reverse proxy with proper authentication
## 🐛 Debugging Tips
-1. **Enable verbose logging** - The server logs all operations by default
+1. **Enable debug logging**
+   ```typescript
+   const s3 = await Smarts3.createAndStart({
+     logging: { level: 'debug', format: 'json' },
+   });
+   ```
-2. **Check the buckets directory** - Find your data in `.nogit/bucketsDir/`
+2. **Check the buckets directory** - Find your data in `.nogit/bucketsDir/` by default
3. **Use the correct endpoint** - Remember to use `127.0.0.1` or `localhost`
-4. **Force path style** - Always use path-style URLs with local S3
+4. **Force path style** - Always use `forcePathStyle: true` with local S3
+5. **Inspect requests** - All requests are logged when `silent: false`
## 📈 Performance
-`@push.rocks/smarts3` is optimized for development and testing:
+Smarts3 is optimized for development and testing scenarios:
- ⚡ **Instant operations** - No network latency
-- 💾 **Low memory footprint** - Efficient file system usage
+- 💾 **Low memory footprint** - Efficient filesystem operations with streams
- 🔄 **Fast cleanup** - Clean slate mode for quick test resets
-- 🚀 **Parallel operations** - Handle multiple requests simultaneously
+- 🚀 **Parallel operations** - Handle multiple concurrent requests
+- 📤 **Streaming uploads/downloads** - Low memory usage for large files
## 🔗 Related Packages
@@ -416,6 +558,29 @@ interface ISmarts3ContructorOptions {
- [`@push.rocks/smartfs`](https://www.npmjs.com/package/@push.rocks/smartfs) - Modern filesystem with Web Streams support
- [`@tsclass/tsclass`](https://www.npmjs.com/package/@tsclass/tsclass) - TypeScript class helpers
## 📝 Changelog
### v4.0.0 - Production Ready 🚀
**Breaking Changes:**
- Configuration format changed from flat to nested structure
- Old format: `{ port: 3000, cleanSlate: true }`
- New format: `{ server: { port: 3000 }, storage: { cleanSlate: true } }`
**New Features:**
- ✨ Production configuration system with comprehensive options
- 📊 Structured logging with multiple levels and formats
- 🌐 Full CORS middleware support
- 🔐 Simple static credentials authentication
- 📤 Complete multipart upload support for large files
- 🔧 Flexible configuration with sensible defaults
**Improvements:**
- Removed smartbucket from production dependencies (dev-only)
- Migrated to @push.rocks/smartfs for modern filesystem operations
- Enhanced error handling and logging throughout
- Better TypeScript types and documentation
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.


@@ -18,9 +18,13 @@ async function streamToString(stream: Readable): Promise<string> {
tap.test('should start the S3 server and configure client', async () => {
  testSmarts3Instance = await smarts3.Smarts3.createAndStart({
+    server: {
      port: 3337,
-      cleanSlate: true,
      silent: true,
+    },
+    storage: {
+      cleanSlate: true,
+    },
  });
const descriptor = await testSmarts3Instance.getS3Descriptor();


@@ -7,8 +7,12 @@ let testSmarts3Instance: smarts3.Smarts3;
tap.test('should create a smarts3 instance and run it', async (toolsArg) => {
  testSmarts3Instance = await smarts3.Smarts3.createAndStart({
+    server: {
      port: 3333,
+    },
+    storage: {
      cleanSlate: true,
+    },
  });
console.log(`Let the instance run for 2 seconds`);
await toolsArg.delayFor(2000);


@@ -3,6 +3,6 @@
 */
export const commitinfo = {
  name: '@push.rocks/smarts3',
-  version: '3.1.0',
+  version: '5.1.0',
  description: 'A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.'
}

View File

@@ -2,6 +2,7 @@ import * as plugins from '../plugins.js';
 import { S3Error } from './s3-error.js';
 import { createXml } from '../utils/xml.utils.js';
 import type { FilesystemStore } from './filesystem-store.js';
+import type { MultipartUploadManager } from './multipart-manager.js';
 import type { Readable } from 'stream';

 /**
@@ -14,6 +15,7 @@ export class S3Context {
   public params: Record<string, string> = {};
   public query: Record<string, string> = {};
   public store: FilesystemStore;
+  public multipart: MultipartUploadManager;

   private req: plugins.http.IncomingMessage;
   private res: plugins.http.ServerResponse;
@@ -23,11 +25,13 @@ export class S3Context {
   constructor(
     req: plugins.http.IncomingMessage,
     res: plugins.http.ServerResponse,
-    store: FilesystemStore
+    store: FilesystemStore,
+    multipart: MultipartUploadManager
   ) {
     this.req = req;
     this.res = res;
     this.store = store;
+    this.multipart = multipart;
     this.method = req.method || 'GET';
     this.headers = req.headers;

View File

@@ -23,15 +23,41 @@ export interface IPartInfo {
   lastModified: Date;
 }

+/**
+ * Serializable version of upload metadata for disk persistence
+ */
+interface ISerializableUpload {
+  uploadId: string;
+  bucket: string;
+  key: string;
+  initiated: string; // ISO date string
+  metadata: Record<string, string>;
+  parts: Array<{
+    partNumber: number;
+    etag: string;
+    size: number;
+    lastModified: string; // ISO date string
+  }>;
+}
+
 /**
  * Manages multipart upload state and storage
  */
 export class MultipartUploadManager {
   private uploads: Map<string, IMultipartUpload> = new Map();
   private uploadDir: string;
+  private cleanupInterval: NodeJS.Timeout | null = null;
+  private expirationDays: number;
+  private cleanupIntervalMinutes: number;

-  constructor(private rootDir: string) {
+  constructor(
+    private rootDir: string,
+    expirationDays: number = 7,
+    cleanupIntervalMinutes: number = 60
+  ) {
     this.uploadDir = plugins.path.join(rootDir, '.multipart');
+    this.expirationDays = expirationDays;
+    this.cleanupIntervalMinutes = cleanupIntervalMinutes;
   }

   /**
@@ -39,6 +65,97 @@ export class MultipartUploadManager {
    */
   public async initialize(): Promise<void> {
     await plugins.smartfs.directory(this.uploadDir).recursive().create();
+    await this.restoreUploadsFromDisk();
+  }
+
+  /**
+   * Save upload metadata to disk for persistence
+   */
+  private async saveUploadMetadata(uploadId: string): Promise<void> {
+    const upload = this.uploads.get(uploadId);
+    if (!upload) {
+      return;
+    }
+    const metadataPath = plugins.path.join(this.uploadDir, uploadId, 'metadata.json');
+    const serializable: ISerializableUpload = {
+      uploadId: upload.uploadId,
+      bucket: upload.bucket,
+      key: upload.key,
+      initiated: upload.initiated.toISOString(),
+      metadata: upload.metadata,
+      parts: Array.from(upload.parts.values()).map(part => ({
+        partNumber: part.partNumber,
+        etag: part.etag,
+        size: part.size,
+        lastModified: part.lastModified.toISOString(),
+      })),
+    };
+    await plugins.smartfs.file(metadataPath).write(JSON.stringify(serializable, null, 2));
+  }
+
+  /**
+   * Restore uploads from disk on initialization
+   */
+  private async restoreUploadsFromDisk(): Promise<void> {
+    const uploadDirExists = await plugins.smartfs.directory(this.uploadDir).exists();
+    if (!uploadDirExists) {
+      return;
+    }
+    const entries = await plugins.smartfs.directory(this.uploadDir).includeStats().list();
+    for (const entry of entries) {
+      if (!entry.isDirectory) {
+        continue;
+      }
+      const uploadId = entry.name;
+      const metadataPath = plugins.path.join(this.uploadDir, uploadId, 'metadata.json');
+
+      // Check if metadata.json exists
+      const metadataExists = await plugins.smartfs.file(metadataPath).exists();
+      if (!metadataExists) {
+        // Orphaned upload directory - clean it up
+        console.warn(`Orphaned multipart upload directory found: ${uploadId}, cleaning up`);
+        await plugins.smartfs.directory(plugins.path.join(this.uploadDir, uploadId)).recursive().delete();
+        continue;
+      }
+
+      try {
+        // Read and parse metadata
+        const metadataContent = await plugins.smartfs.file(metadataPath).read();
+        const serialized: ISerializableUpload = JSON.parse(metadataContent as string);
+
+        // Restore to memory
+        const parts = new Map<number, IPartInfo>();
+        for (const part of serialized.parts) {
+          parts.set(part.partNumber, {
+            partNumber: part.partNumber,
+            etag: part.etag,
+            size: part.size,
+            lastModified: new Date(part.lastModified),
+          });
+        }
+
+        this.uploads.set(uploadId, {
+          uploadId: serialized.uploadId,
+          bucket: serialized.bucket,
+          key: serialized.key,
+          initiated: new Date(serialized.initiated),
+          parts,
+          metadata: serialized.metadata,
+        });
+
+        console.log(`Restored multipart upload: ${uploadId} (${serialized.bucket}/${serialized.key})`);
+      } catch (error) {
+        // Corrupted metadata - clean up
+        console.error(`Failed to restore multipart upload ${uploadId}:`, error);
+        await plugins.smartfs.directory(plugins.path.join(this.uploadDir, uploadId)).recursive().delete();
+      }
+    }
   }

   /**
@@ -71,6 +188,9 @@ export class MultipartUploadManager {
     const uploadPath = plugins.path.join(this.uploadDir, uploadId);
     await plugins.smartfs.directory(uploadPath).recursive().create();

+    // Persist metadata to disk
+    await this.saveUploadMetadata(uploadId);
+
     return uploadId;
   }

@@ -116,6 +236,9 @@ export class MultipartUploadManager {
     upload.parts.set(partNumber, partInfo);

+    // Persist updated metadata
+    await this.saveUploadMetadata(uploadId);
+
     return partInfo;
   }

@@ -235,4 +358,73 @@ export class MultipartUploadManager {
     }
     return Array.from(upload.parts.values()).sort((a, b) => a.partNumber - b.partNumber);
   }
+
+  /**
+   * Start automatic cleanup task for expired uploads
+   */
+  public startCleanupTask(): void {
+    if (this.cleanupInterval) {
+      console.warn('Cleanup task is already running');
+      return;
+    }
+
+    // Run cleanup immediately on start
+    this.performCleanup().catch(err => {
+      console.error('Failed to perform initial multipart cleanup:', err);
+    });
+
+    // Then schedule periodic cleanup
+    const intervalMs = this.cleanupIntervalMinutes * 60 * 1000;
+    this.cleanupInterval = setInterval(() => {
+      this.performCleanup().catch(err => {
+        console.error('Failed to perform scheduled multipart cleanup:', err);
+      });
+    }, intervalMs);
+
+    console.log(`Multipart cleanup task started (interval: ${this.cleanupIntervalMinutes} minutes, expiration: ${this.expirationDays} days)`);
+  }
+
+  /**
+   * Stop automatic cleanup task
+   */
+  public stopCleanupTask(): void {
+    if (this.cleanupInterval) {
+      clearInterval(this.cleanupInterval);
+      this.cleanupInterval = null;
+      console.log('Multipart cleanup task stopped');
+    }
+  }
+
+  /**
+   * Perform cleanup of expired uploads
+   */
+  private async performCleanup(): Promise<void> {
+    const now = Date.now();
+    const expirationMs = this.expirationDays * 24 * 60 * 60 * 1000;
+    const expiredUploads: string[] = [];
+
+    // Find expired uploads
+    for (const [uploadId, upload] of this.uploads.entries()) {
+      const age = now - upload.initiated.getTime();
+      if (age > expirationMs) {
+        expiredUploads.push(uploadId);
+      }
+    }
+
+    if (expiredUploads.length === 0) {
+      return;
+    }
+
+    console.log(`Cleaning up ${expiredUploads.length} expired multipart upload(s)`);
+
+    // Delete expired uploads
+    for (const uploadId of expiredUploads) {
+      try {
+        await this.abortUpload(uploadId);
+        console.log(`Deleted expired multipart upload: ${uploadId}`);
+      } catch (err) {
+        console.error(`Failed to delete expired upload ${uploadId}:`, err);
+      }
+    }
+  }
 }
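The expiration check in `performCleanup` above is plain timestamp arithmetic: an upload expires once `now - initiated` exceeds `expirationDays` converted to milliseconds. A self-contained sketch of just that check (the `findExpired` name is illustrative, not an export of this module):

```typescript
// Returns the uploadIds whose initiated time is older than expirationDays,
// mirroring the age comparison performCleanup makes over its uploads map.
function findExpired(
  uploads: Map<string, { initiated: Date }>,
  expirationDays: number,
  now: number = Date.now(),
): string[] {
  const expirationMs = expirationDays * 24 * 60 * 60 * 1000;
  const expired: string[] = [];
  for (const [uploadId, upload] of uploads.entries()) {
    // Age is measured from the upload's initiation timestamp.
    if (now - upload.initiated.getTime() > expirationMs) {
      expired.push(uploadId);
    }
  }
  return expired;
}
```

With the default settings (7-day expiration, 60-minute interval), an upload initiated 8 days ago is collected on the next cleanup pass while a fresh one survives.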

View File

@@ -5,6 +5,7 @@ import { S3Context } from './context.js';
 import { FilesystemStore } from './filesystem-store.js';
 import { S3Error } from './s3-error.js';
 import { Logger } from './logger.js';
+import { MultipartUploadManager } from './multipart-manager.js';
 import { ServiceController } from '../controllers/service.controller.js';
 import { BucketController } from '../controllers/bucket.controller.js';
 import { ObjectController } from '../controllers/object.controller.js';
@@ -28,6 +29,7 @@ export class Smarts3Server {
   private router: S3Router;
   private middlewares: MiddlewareStack;
   public store: FilesystemStore; // Made public for direct access from Smarts3 class
+  public multipart: MultipartUploadManager; // Made public for controller access
   private options: Required<Omit<ISmarts3ServerOptions, 'config'>>;
   private config: Required<ISmarts3Config>;
   private logger: Logger;
@@ -76,10 +78,19 @@ export class Smarts3Server {
         maxMetadataSize: 2048,
         requestTimeout: 300000,
       },
+      multipart: {
+        expirationDays: 7,
+        cleanupIntervalMinutes: 60,
+      },
     };

     this.logger = new Logger(this.config.logging);
     this.store = new FilesystemStore(this.options.directory);
+    this.multipart = new MultipartUploadManager(
+      this.options.directory,
+      this.config.multipart.expirationDays,
+      this.config.multipart.cleanupIntervalMinutes
+    );
     this.router = new S3Router();
     this.middlewares = new MiddlewareStack();
@@ -220,6 +231,7 @@ export class Smarts3Server {
     // Object level (/:bucket/:key*)
     this.router.put('/:bucket/:key*', ObjectController.putObject);
+    this.router.post('/:bucket/:key*', ObjectController.postObject); // For multipart operations
     this.router.get('/:bucket/:key*', ObjectController.getObject);
     this.router.head('/:bucket/:key*', ObjectController.headObject);
     this.router.delete('/:bucket/:key*', ObjectController.deleteObject);
@@ -232,7 +244,7 @@ export class Smarts3Server {
     req: plugins.http.IncomingMessage,
     res: plugins.http.ServerResponse
   ): Promise<void> {
-    const context = new S3Context(req, res, this.store);
+    const context = new S3Context(req, res, this.store, this.multipart);

     try {
       // Execute middleware stack
@@ -290,6 +302,12 @@ export class Smarts3Server {
     // Initialize store
     await this.store.initialize();

+    // Initialize multipart upload manager
+    await this.multipart.initialize();
+
+    // Start multipart cleanup task
+    this.multipart.startCleanupTask();
+
     // Clean slate if requested
     if (this.options.cleanSlate) {
       await this.store.reset();
@@ -330,6 +348,9 @@ export class Smarts3Server {
       return;
     }

+    // Stop multipart cleanup task
+    this.multipart.stopCleanupTask();
+
     await new Promise<void>((resolve, reject) => {
       this.httpServer!.close((err?: Error) => {
         if (err) {

View File

@@ -54,8 +54,9 @@ export class BucketController {
   }

   /**
-   * GET /:bucket - List objects
+   * GET /:bucket - List objects or multipart uploads
    * Supports both V1 and V2 listing (V2 uses list-type=2 query param)
+   * Multipart uploads listing is triggered by ?uploads query parameter
    */
   public static async listObjects(
     req: plugins.http.IncomingMessage,
@@ -64,6 +65,12 @@ export class BucketController {
     params: Record<string, string>
   ): Promise<void> {
     const { bucket } = params;
+
+    // Check if this is a ListMultipartUploads request
+    if (ctx.query.uploads !== undefined) {
+      return BucketController.listMultipartUploads(req, res, ctx, params);
+    }
+
     const isV2 = ctx.query['list-type'] === '2';

     const result = await ctx.store.listObjects(bucket, {
@@ -127,4 +134,47 @@ export class BucketController {
       });
     }
   }
+
+  /**
+   * GET /:bucket?uploads - List multipart uploads
+   */
+  private static async listMultipartUploads(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    const { bucket } = params;
+
+    // Get all multipart uploads for this bucket
+    const uploads = ctx.multipart.listUploads(bucket);
+
+    // Build XML response
+    await ctx.sendXML({
+      ListMultipartUploadsResult: {
+        '@_xmlns': 'http://s3.amazonaws.com/doc/2006-03-01/',
+        Bucket: bucket,
+        KeyMarker: '',
+        UploadIdMarker: '',
+        MaxUploads: 1000,
+        IsTruncated: false,
+        ...(uploads.length > 0 && {
+          Upload: uploads.map((upload) => ({
+            Key: upload.key,
+            UploadId: upload.uploadId,
+            Initiator: {
+              ID: 'S3RVER',
+              DisplayName: 'S3RVER',
+            },
+            Owner: {
+              ID: 'S3RVER',
+              DisplayName: 'S3RVER',
+            },
+            StorageClass: 'STANDARD',
+            Initiated: upload.initiated.toISOString(),
+          })),
+        }),
+      },
+    });
+  }
 }

View File

@@ -6,7 +6,7 @@ import type { S3Context } from '../classes/context.js';
  */
 export class ObjectController {
   /**
-   * PUT /:bucket/:key* - Upload object or copy object
+   * PUT /:bucket/:key* - Upload object, copy object, or upload part
    */
   public static async putObject(
     req: plugins.http.IncomingMessage,
@@ -16,6 +16,11 @@ export class ObjectController {
   ): Promise<void> {
     const { bucket, key } = params;

+    // Check if this is a multipart upload part
+    if (ctx.query.partNumber && ctx.query.uploadId) {
+      return ObjectController.uploadPart(req, res, ctx, params);
+    }
+
     // Check if this is a COPY operation
     const copySource = ctx.headers['x-amz-copy-source'] as string | undefined;
     if (copySource) {
@@ -133,7 +138,7 @@ export class ObjectController {
   }

   /**
-   * DELETE /:bucket/:key* - Delete object
+   * DELETE /:bucket/:key* - Delete object or abort multipart upload
    */
   public static async deleteObject(
     req: plugins.http.IncomingMessage,
@@ -143,6 +148,11 @@ export class ObjectController {
   ): Promise<void> {
     const { bucket, key } = params;

+    // Check if this is an abort multipart upload
+    if (ctx.query.uploadId) {
+      return ObjectController.abortMultipartUpload(req, res, ctx, params);
+    }
+
     await ctx.store.deleteObject(bucket, key);
     ctx.status(204).send('');
   }
@@ -201,4 +211,168 @@ export class ObjectController {
       },
     });
   }
+
+  /**
+   * POST /:bucket/:key* - Initiate or complete multipart upload
+   */
+  public static async postObject(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    // Check if this is initiate multipart upload
+    if (ctx.query.uploads !== undefined) {
+      return ObjectController.initiateMultipartUpload(req, res, ctx, params);
+    }
+
+    // Check if this is complete multipart upload
+    if (ctx.query.uploadId) {
+      return ObjectController.completeMultipartUpload(req, res, ctx, params);
+    }
+
+    ctx.throw('InvalidRequest', 'Invalid POST request');
+  }
+
+  /**
+   * Initiate Multipart Upload (POST with ?uploads)
+   */
+  private static async initiateMultipartUpload(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    const { bucket, key } = params;
+
+    // Extract metadata from headers
+    const metadata: Record<string, string> = {};
+    for (const [header, value] of Object.entries(ctx.headers)) {
+      if (header.startsWith('x-amz-meta-')) {
+        metadata[header] = value as string;
+      }
+      if (header === 'content-type' && value) {
+        metadata['content-type'] = value as string;
+      }
+    }
+
+    // Initiate upload
+    const uploadId = await ctx.multipart.initiateUpload(bucket, key, metadata);
+
+    // Send XML response
+    await ctx.sendXML({
+      InitiateMultipartUploadResult: {
+        Bucket: bucket,
+        Key: key,
+        UploadId: uploadId,
+      },
+    });
+  }
+
+  /**
+   * Upload Part (PUT with ?partNumber&uploadId)
+   */
+  private static async uploadPart(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    const uploadId = ctx.query.uploadId!;
+    const partNumber = parseInt(ctx.query.partNumber!);
+
+    if (isNaN(partNumber) || partNumber < 1 || partNumber > 10000) {
+      ctx.throw('InvalidPartNumber', 'Part number must be between 1 and 10000');
+    }
+
+    // Upload the part
+    const partInfo = await ctx.multipart.uploadPart(
+      uploadId,
+      partNumber,
+      ctx.getRequestStream() as any as import('stream').Readable
+    );
+
+    // Set ETag header (part ETag)
+    ctx.setHeader('ETag', `"${partInfo.etag}"`);
+    ctx.status(200).send('');
+  }
+
+  /**
+   * Complete Multipart Upload (POST with ?uploadId)
+   */
+  private static async completeMultipartUpload(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    const { bucket, key } = params;
+    const uploadId = ctx.query.uploadId!;
+
+    // Read and parse request body (XML with part list)
+    const body = await ctx.readBody();
+
+    // Parse XML to extract parts
+    // Expected format: <CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>"etag"</ETag></Part>...</CompleteMultipartUpload>
+    const partMatches = body.matchAll(/<Part>.*?<PartNumber>(\d+)<\/PartNumber>.*?<ETag>(.*?)<\/ETag>.*?<\/Part>/gs);
+    const parts: Array<{ PartNumber: number; ETag: string }> = [];
+    for (const match of partMatches) {
+      parts.push({
+        PartNumber: parseInt(match[1]),
+        ETag: match[2],
+      });
+    }
+
+    // Complete the upload
+    const result = await ctx.multipart.completeUpload(uploadId, parts);
+
+    // Get upload metadata
+    const upload = ctx.multipart.getUpload(uploadId);
+    if (!upload) {
+      ctx.throw('NoSuchUpload', 'The specified upload does not exist');
+    }
+
+    // Read the assembled file and create a readable stream from the buffer
+    const finalPath = ctx.multipart.getFinalPath(uploadId);
+    const finalContent = await plugins.smartfs.file(finalPath).read();
+    const { Readable } = await import('stream');
+    const finalReadableStream = Readable.from([finalContent]);
+
+    // Store the final object
+    await ctx.store.putObject(bucket, key, finalReadableStream, upload.metadata);
+
+    // Clean up multipart upload data
+    await ctx.multipart.cleanupUpload(uploadId);
+
+    // Send XML response
+    await ctx.sendXML({
+      CompleteMultipartUploadResult: {
+        Location: `/${bucket}/${key}`,
+        Bucket: bucket,
+        Key: key,
+        ETag: `"${result.etag}"`,
+      },
+    });
+  }
+
+  /**
+   * Abort Multipart Upload (DELETE with ?uploadId)
+   */
+  private static async abortMultipartUpload(
+    req: plugins.http.IncomingMessage,
+    res: plugins.http.ServerResponse,
+    ctx: S3Context,
+    params: Record<string, string>
+  ): Promise<void> {
+    const uploadId = ctx.query.uploadId!;
+
+    // Abort and cleanup
+    await ctx.multipart.abortUpload(uploadId);
+
+    ctx.status(204).send('');
+  }
 }
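Rather than pulling in a full XML parser, `completeMultipartUpload` above extracts the part list with a regex over the request body. The same pattern can be exercised in isolation (the `parsePartList` helper is illustrative, not part of the module):

```typescript
interface ICompletedPart {
  PartNumber: number;
  ETag: string;
}

// Extracts <Part> entries from a CompleteMultipartUpload request body,
// using the same regex as the controller hunk above.
function parsePartList(body: string): ICompletedPart[] {
  const partMatches = body.matchAll(
    /<Part>.*?<PartNumber>(\d+)<\/PartNumber>.*?<ETag>(.*?)<\/ETag>.*?<\/Part>/gs,
  );
  const parts: ICompletedPart[] = [];
  for (const match of partMatches) {
    parts.push({ PartNumber: parseInt(match[1], 10), ETag: match[2] });
  }
  return parts;
}
```

Note the regex assumes `<PartNumber>` precedes `<ETag>` inside each `<Part>`; AWS SDK clients emit that order, but a real XML parse would tolerate either ordering and namespaced tags.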

View File

@@ -44,6 +44,14 @@ export interface ILimitsConfig {
   requestTimeout?: number;
 }

+/**
+ * Multipart upload configuration
+ */
+export interface IMultipartConfig {
+  expirationDays?: number;
+  cleanupIntervalMinutes?: number;
+}
+
 /**
  * Server configuration
  */
@@ -71,6 +79,7 @@ export interface ISmarts3Config {
   cors?: ICorsConfig;
   logging?: ILoggingConfig;
   limits?: ILimitsConfig;
+  multipart?: IMultipartConfig;
 }

 /**
@@ -114,6 +123,10 @@ const DEFAULT_CONFIG: ISmarts3Config = {
     maxMetadataSize: 2048,
     requestTimeout: 300000, // 5 minutes
   },
+  multipart: {
+    expirationDays: 7,
+    cleanupIntervalMinutes: 60,
+  },
 };

 /**
@@ -145,6 +158,10 @@ function mergeConfig(userConfig: ISmarts3Config): Required<ISmarts3Config> {
     ...DEFAULT_CONFIG.limits!,
     ...(userConfig.limits || {}),
   },
+  multipart: {
+    ...DEFAULT_CONFIG.multipart!,
+    ...(userConfig.multipart || {}),
+  },
 };
 }