18 Commits

Author SHA1 Message Date
d4cc1d43ea v5.0.1
Some checks failed
Default (tags) / security (push) Successful in 35s
Default (tags) / test (push) Failing after 35s
Default (tags) / release (push) Has been skipped
Default (tags) / metadata (push) Has been skipped
2025-11-23 22:52:19 +00:00
759becdd04 fix(docs): Clarify README wording about S3 compatibility and AWS SDK usage 2025-11-23 22:52:19 +00:00
51e8836227 v5.0.0
2025-11-23 22:46:42 +00:00
3c0a54e08b BREAKING CHANGE(core): Production-ready S3-compatible server: nested config, multipart uploads, CORS, structured logging, SmartFS migration and improved error handling 2025-11-23 22:46:42 +00:00
c074a5d2ed v4.0.0
2025-11-23 22:42:47 +00:00
a9ba9de6be BREAKING CHANGE(Smarts3): Migrate Smarts3 configuration to nested server/storage objects and remove legacy flat config support 2025-11-23 22:42:47 +00:00
263e7a58b9 v3.2.0
2025-11-23 22:41:46 +00:00
74b81d7ba8 feat(multipart): Add multipart upload support with MultipartUploadManager and controller integration 2025-11-23 22:41:46 +00:00
0d4837184f v3.1.0
2025-11-23 22:37:32 +00:00
7f3de92961 feat(logging): Add structured Logger and integrate into Smarts3Server; pass full config to server 2025-11-23 22:37:32 +00:00
a7bc902dd0 v3.0.4
2025-11-23 22:31:44 +00:00
95d78d0d08 fix(smarts3): Use filesystem store for bucket creation and remove smartbucket runtime dependency 2025-11-23 22:31:44 +00:00
b62cb0bc97 v3.0.3
2025-11-23 22:12:29 +00:00
32346636e0 fix(filesystem): Migrate filesystem implementation to @push.rocks/smartfs and add Web Streams handling 2025-11-23 22:12:29 +00:00
415ba3e76d v3.0.2
2025-11-21 18:36:27 +00:00
6594f67d3e fix(smarts3): Prepare patch release 3.0.2 — no code changes detected 2025-11-21 18:36:27 +00:00
61974e0b54 v3.0.1
2025-11-21 17:09:16 +00:00
fc845956fa fix(readme): Add Issue Reporting and Security section to README 2025-11-21 17:09:16 +00:00
17 changed files with 1982 additions and 308 deletions


@@ -1,5 +1,77 @@
# Changelog
## 2025-11-23 - 5.0.1 - fix(docs)
Clarify README wording about S3 compatibility and AWS SDK usage
- Update README wording to "Full S3 API compatibility" and clarify it works seamlessly with AWS SDK v3 and other S3 clients
## 2025-11-23 - 5.0.0 - BREAKING CHANGE(core)
Production-ready S3-compatible server: nested config, multipart uploads, CORS, structured logging, SmartFS migration and improved error handling
- Breaking change: configuration format migrated from flat to nested structure (server, storage, auth, cors, logging, limits). Update existing configs accordingly.
- Implemented full multipart upload support (initiate, upload part, complete, abort) with on-disk part management and final assembly.
- Added CORS middleware with configurable origins, methods, headers, exposed headers, maxAge and credentials support.
- Structured, configurable logging (levels: error|warn|info|debug; formats: text|json) and request/response logging middleware.
- Simple static credential authentication middleware (configurable list of credentials).
- Migrated filesystem operations to @push.rocks/smartfs (Web Streams interoperability) and removed smartbucket from production dependencies.
- Improved S3-compatible error handling and XML responses (S3Error class and XML utilities).
- Exposed Smarts3Server and made store/multipart managers accessible for tests and advanced usage; added helper methods like getS3Descriptor and createBucket.
## 2025-11-23 - 4.0.0 - BREAKING CHANGE(Smarts3)
Migrate Smarts3 configuration to nested server/storage objects and remove legacy flat config support
- Smarts3.createAndStart() and Smarts3 constructor now accept ISmarts3Config with nested `server` and `storage` objects.
- Removed support for the legacy flat config shape (top-level `port` and `cleanSlate`) / ILegacySmarts3Config.
- Updated tests to use new config shape (server:{ port, silent } and storage:{ cleanSlate }).
- mergeConfig and Smarts3Server now rely on the nested config shape; consumers must update their initialization code.
## 2025-11-23 - 3.2.0 - feat(multipart)
Add multipart upload support with MultipartUploadManager and controller integration
- Introduce MultipartUploadManager (ts/classes/multipart-manager.ts) to manage multipart upload lifecycle and store parts on disk
- Wire multipart manager into server and request context (S3Context, Smarts3Server) and initialize multipart storage on server start
- Add multipart-related routes and handlers in ObjectController: initiate (POST ?uploads), upload part (PUT ?partNumber&uploadId), complete (POST ?uploadId), and abort (DELETE ?uploadId)
- On complete, combine parts into final object and store via existing FilesystemStore workflow
- Expose multipart manager on Smarts3Server for controller access
## 2025-11-23 - 3.1.0 - feat(logging)
Add structured Logger and integrate into Smarts3Server; pass full config to server
- Introduce a new Logger class (ts/classes/logger.ts) providing leveled logging (error, warn, info, debug), text/json formats and an enable flag.
- Integrate Logger into Smarts3Server: use structured logging for server lifecycle events, HTTP request/response logging and S3 errors instead of direct console usage.
- Smarts3 now passes the full merged configuration into Smarts3Server (config.logging can control logging behavior).
- Server start/stop messages and internal request/error logs are emitted via the Logger and respect the configured logging level/format and silent option.
## 2025-11-23 - 3.0.4 - fix(smarts3)
Use filesystem store for bucket creation and remove smartbucket runtime dependency
- Switched createBucket to call the internal FilesystemStore.createBucket instead of using @push.rocks/smartbucket
- Made Smarts3Server.store public so Smarts3 can access the filesystem store directly
- Removed runtime import/export of @push.rocks/smartbucket from plugins and moved @push.rocks/smartbucket to devDependencies in package.json
- Updated createBucket to return a simple { name } object after creating the bucket via the filesystem store
## 2025-11-23 - 3.0.3 - fix(filesystem)
Migrate filesystem implementation to @push.rocks/smartfs and add Web Streams handling
- Replace dependency @push.rocks/smartfile with @push.rocks/smartfs and update README references
- plugins: instantiate SmartFs with SmartFsProviderNode and export smartfs (remove direct fs export)
- Refactor FilesystemStore to use smartfs directory/file APIs for initialize, reset, list, read, write, copy and delete
- Implement Web Stream ↔ Node.js stream conversion for uploads/downloads (Readable.fromWeb and writer.write with Uint8Array)
- Persist and read metadata (.metadata.json) and cached MD5 (.md5) via smartfs APIs
- Update readme.hints and documentation to note successful migration and next steps
## 2025-11-21 - 3.0.2 - fix(smarts3)
Prepare patch release 3.0.2 — no code changes detected
- No source changes in the diff
- Bump patch version from 3.0.1 to 3.0.2 for maintenance/release bookkeeping
## 2025-11-21 - 3.0.1 - fix(readme)
Add Issue Reporting and Security section to README
- Add guidance to report bugs, issues, and security vulnerabilities via community.foss.global
- Inform developers how to sign a contribution agreement and get a code.foss.global account to submit pull requests
## 2025-11-21 - 3.0.0 - BREAKING CHANGE(Smarts3)
Remove legacy s3rver backend, simplify Smarts3 server API, and bump dependencies


@@ -1,6 +1,6 @@
{
"name": "@push.rocks/smarts3",
-  "version": "3.0.0",
+  "version": "5.0.1",
"private": false,
"description": "A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.",
"main": "dist_ts/index.js",
@@ -19,6 +19,7 @@
"@git.zone/tsbundle": "^2.5.2",
"@git.zone/tsrun": "^2.0.0",
"@git.zone/tstest": "^3.1.0",
+    "@push.rocks/smartbucket": "^4.3.0",
"@types/node": "^22.9.0"
},
"browserslist": [
@@ -37,8 +38,7 @@
"readme.md"
],
"dependencies": {
-    "@push.rocks/smartbucket": "^4.3.0",
-    "@push.rocks/smartfile": "^11.2.7",
+    "@push.rocks/smartfs": "^1.1.0",
"@push.rocks/smartpath": "^6.0.0",
"@push.rocks/smartxml": "^2.0.0",
"@tsclass/tsclass": "^9.3.0"

pnpm-lock.yaml generated

@@ -8,12 +8,9 @@ importers:
.:
dependencies:
-      '@push.rocks/smartbucket':
-        specifier: ^4.3.0
-        version: 4.3.0
-      '@push.rocks/smartfile':
-        specifier: ^11.2.7
-        version: 11.2.7
+      '@push.rocks/smartfs':
+        specifier: ^1.1.0
+        version: 1.1.0
'@push.rocks/smartpath':
specifier: ^6.0.0
version: 6.0.0
@@ -39,6 +36,9 @@ importers:
'@git.zone/tstest':
specifier: ^3.1.0
version: 3.1.0(socks@2.8.7)(typescript@5.9.3)
+      '@push.rocks/smartbucket':
+        specifier: ^4.3.0
+        version: 4.3.0
'@types/node':
specifier: ^22.9.0
version: 22.19.1
@@ -644,6 +644,9 @@ packages:
'@push.rocks/smartfile@11.2.7':
resolution: {integrity: sha512-8Yp7/sAgPpWJBHohV92ogHWKzRomI5MEbSG6b5W2n18tqwfAmjMed0rQvsvGrSBlnEWCKgoOrYIIZbLO61+J0Q==}
+  '@push.rocks/smartfs@1.1.0':
+    resolution: {integrity: sha512-fg8JIjFUPPX5laRoBpTaGwhMfZ3Y8mFT4fUaW54Y4J/BfOBa/y0+rIFgvgvqcOZgkQlyZU+FIfL8Z6zezqxyTg==}
'@push.rocks/smartguard@3.1.0':
resolution: {integrity: sha512-J23q84f1O+TwFGmd4lrO9XLHUh2DaLXo9PN/9VmTWYzTkQDv5JehmifXVI0esophXcCIfbdIu6hbt7/aHlDF4A==}
@@ -4920,6 +4923,10 @@ snapshots:
glob: 11.1.0
js-yaml: 4.1.1
+  '@push.rocks/smartfs@1.1.0':
+    dependencies:
+      '@push.rocks/smartpath': 6.0.0
'@push.rocks/smartguard@3.1.0':
dependencies:
'@push.rocks/smartpromise': 4.2.3

production-readiness.md

@@ -0,0 +1,438 @@
# Production-Readiness Plan for smarts3
**Goal:** Make smarts3 production-ready as a MinIO alternative for use cases where:
- Running MinIO is out of scope
- You have a program written for S3 and want to use the local filesystem
- You need a lightweight, zero-dependency S3-compatible server
---
## 🔍 Current State Analysis
### ✅ What's Working
- **Native S3 server** with zero framework dependencies
- **Core S3 operations:** PUT, GET, HEAD, DELETE (objects & buckets)
- **List buckets and objects** (V1 and V2 API)
- **Object copy** with metadata handling
- **Range requests** for partial downloads
- **MD5 checksums** and ETag support
- **Custom metadata** (x-amz-meta-*)
- **Filesystem-backed storage** with Windows compatibility
- **S3-compatible XML error responses**
- **Middleware system** and routing
- **AWS SDK v3 compatibility** (tested)
### ❌ Production Gaps Identified
---
## 🎯 Critical Features (Required for Production)
### 1. Multipart Upload Support 🚀 **HIGHEST PRIORITY**
**Why:** Essential for uploading files >5MB efficiently. Without this, smarts3 can't handle real-world production workloads.
**Implementation Required:**
- `POST /:bucket/:key?uploads` - CreateMultipartUpload
- `PUT /:bucket/:key?partNumber=X&uploadId=Y` - UploadPart
- `POST /:bucket/:key?uploadId=X` - CompleteMultipartUpload
- `DELETE /:bucket/:key?uploadId=X` - AbortMultipartUpload
- `GET /:bucket/:key?uploadId=X` - ListParts
- Multipart state management (temp storage for parts)
- Part ETag tracking and validation
- Automatic cleanup of abandoned uploads
**Files to Create/Modify:**
- `ts/controllers/multipart.controller.ts` (new)
- `ts/classes/filesystem-store.ts` (add multipart methods)
- `ts/classes/smarts3-server.ts` (add multipart routes)
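The core of CompleteMultipartUpload is assembling the stored parts in order. A minimal sketch of that step — the names `IUploadedPart` and `assembleParts` are illustrative, not the package's actual API:

```typescript
interface IUploadedPart {
  partNumber: number;
  etag: string;
  data: Buffer;
}

// Validate contiguous 1-based part numbering, then concatenate in order.
// A real implementation would stream parts from disk instead of buffering.
function assembleParts(parts: IUploadedPart[]): Buffer {
  const sorted = [...parts].sort((a, b) => a.partNumber - b.partNumber);
  sorted.forEach((part, index) => {
    if (part.partNumber !== index + 1) {
      throw new Error(
        `InvalidPartOrder: expected part ${index + 1}, got ${part.partNumber}`,
      );
    }
  });
  return Buffer.concat(sorted.map((p) => p.data));
}
```

On complete, the assembled buffer would be handed to the existing FilesystemStore write path and the part files deleted.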
---
### 2. Configurable Authentication 🔐
**Why:** Currently hardcoded credentials ('S3RVER'/'S3RVER'). Production needs custom credentials.
**Implementation Required:**
- Support custom access keys and secrets via configuration
- Implement AWS Signature V4 verification
- Support multiple credential pairs (IAM-like users)
- Optional: Disable authentication for local dev use
**Configuration Example:**
```typescript
interface IAuthConfig {
enabled: boolean;
credentials: Array<{
accessKeyId: string;
secretAccessKey: string;
}>;
signatureVersion: 'v4' | 'none';
}
```
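For the simpler static-credential variant, the middleware's check reduces to a lookup over the configured pairs. Sketch only — a production version should use constant-time comparison to avoid timing side channels:

```typescript
interface ICredential {
  accessKeyId: string;
  secretAccessKey: string;
}

// Plain comparison is used here for brevity; real code should compare
// secrets in constant time (e.g. crypto.timingSafeEqual).
function checkCredentials(
  credentials: ICredential[],
  accessKeyId: string,
  secretAccessKey: string,
): boolean {
  return credentials.some(
    (c) => c.accessKeyId === accessKeyId && c.secretAccessKey === secretAccessKey,
  );
}
```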
**Files to Create/Modify:**
- `ts/classes/auth-middleware.ts` (new)
- `ts/classes/signature-validator.ts` (new)
- `ts/classes/smarts3-server.ts` (integrate auth middleware)
- `ts/index.ts` (add auth config options)
---
### 3. CORS Support 🌐
**Why:** Required for browser-based uploads and modern web apps.
**Implementation Required:**
- Add CORS middleware
- Support preflight OPTIONS requests
- Configurable CORS origins, methods, headers
- Per-bucket CORS configuration (optional)
**Configuration Example:**
```typescript
interface ICorsConfig {
enabled: boolean;
allowedOrigins: string[]; // ['*'] or ['https://example.com']
allowedMethods: string[]; // ['GET', 'POST', 'PUT', 'DELETE']
allowedHeaders: string[]; // ['*'] or specific headers
exposedHeaders: string[]; // ['ETag', 'x-amz-*']
maxAge: number; // 3600 (seconds)
allowCredentials: boolean;
}
```
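One way the middleware could compute response headers from this config (sketch; `buildCorsHeaders` is a hypothetical name):

```typescript
interface ICorsConfig {
  enabled: boolean;
  allowedOrigins: string[];
  allowedMethods: string[];
  allowedHeaders: string[];
  exposedHeaders: string[];
  maxAge: number;
  allowCredentials: boolean;
}

function buildCorsHeaders(
  config: ICorsConfig,
  requestOrigin: string,
): Record<string, string> {
  const headers: Record<string, string> = {};
  if (!config.enabled) return headers;
  const wildcard = config.allowedOrigins.includes('*');
  if (!wildcard && !config.allowedOrigins.includes(requestOrigin)) return headers;
  // Credentialed responses must echo a concrete origin, never '*'.
  headers['Access-Control-Allow-Origin'] =
    wildcard && !config.allowCredentials ? '*' : requestOrigin;
  headers['Access-Control-Allow-Methods'] = config.allowedMethods.join(', ');
  headers['Access-Control-Allow-Headers'] = config.allowedHeaders.join(', ');
  headers['Access-Control-Expose-Headers'] = config.exposedHeaders.join(', ');
  headers['Access-Control-Max-Age'] = String(config.maxAge);
  if (config.allowCredentials) headers['Access-Control-Allow-Credentials'] = 'true';
  return headers;
}
```

Preflight OPTIONS requests would answer with these headers and a 204; disallowed origins get no CORS headers at all, so the browser blocks the response.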
**Files to Create/Modify:**
- `ts/classes/cors-middleware.ts` (new)
- `ts/classes/smarts3-server.ts` (integrate CORS middleware)
- `ts/index.ts` (add CORS config options)
---
### 4. SSL/TLS Support 🔒
**Why:** Production systems require encrypted connections.
**Implementation Required:**
- HTTPS server option with cert/key configuration
- Auto-redirect HTTP to HTTPS (optional)
- Support for self-signed certs in dev mode
**Configuration Example:**
```typescript
interface ISslConfig {
enabled: boolean;
cert: string; // Path to certificate file or cert content
key: string; // Path to key file or key content
ca?: string; // Optional CA cert
redirectHttp?: boolean; // Redirect HTTP to HTTPS
}
```
**Files to Create/Modify:**
- `ts/classes/smarts3-server.ts` (add HTTPS server creation)
- `ts/index.ts` (add SSL config options)
---
### 5. Production Configuration System ⚙️
**Why:** Production needs flexible configuration, not just constructor options.
**Implementation Required:**
- Support configuration file (JSON/YAML)
- Environment variable support
- Configuration validation
- Sensible production defaults
- Example configurations for common use cases
**Configuration File Example (`smarts3.config.json`):**
```json
{
"server": {
"port": 3000,
"address": "0.0.0.0",
"ssl": {
"enabled": true,
"cert": "./certs/server.crt",
"key": "./certs/server.key"
}
},
"storage": {
"directory": "./s3-data",
"cleanSlate": false
},
"auth": {
"enabled": true,
"credentials": [
{
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
]
},
"cors": {
"enabled": true,
"allowedOrigins": ["*"],
"allowedMethods": ["GET", "POST", "PUT", "DELETE", "HEAD"],
"allowedHeaders": ["*"]
},
"limits": {
"maxObjectSize": 5368709120,
"maxMetadataSize": 2048,
"requestTimeout": 300000
},
"logging": {
"level": "info",
"format": "json",
"accessLog": {
"enabled": true,
"path": "./logs/access.log"
},
"errorLog": {
"enabled": true,
"path": "./logs/error.log"
}
}
}
```
**Files to Create/Modify:**
- `ts/classes/config-loader.ts` (new)
- `ts/classes/config-validator.ts` (new)
- `ts/index.ts` (use config loader)
- Create example config files in root
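The environment-variable pass of the config loader could map prefixed variables onto the nested structure. The `SMARTS3_` prefix and the underscore-to-nesting convention below are assumptions for illustration, not a settled interface:

```typescript
// SMARTS3_SERVER_PORT=4000 -> config.server.port = 4000, etc.
// (camelCase keys like cleanSlate would need a smarter mapping; this
// sketch only handles single-word path segments.)
function applyEnvOverrides(
  config: Record<string, any>,
  env: Record<string, string | undefined>,
  prefix = 'SMARTS3_',
): Record<string, any> {
  const result = JSON.parse(JSON.stringify(config)); // deep copy, leave input intact
  for (const [name, raw] of Object.entries(env)) {
    if (!name.startsWith(prefix) || raw === undefined) continue;
    const path = name.slice(prefix.length).toLowerCase().split('_');
    let cursor: any = result;
    for (const segment of path.slice(0, -1)) {
      cursor[segment] = cursor[segment] ?? {};
      cursor = cursor[segment];
    }
    const leaf = path[path.length - 1];
    const asNumber = Number(raw);
    cursor[leaf] =
      raw === 'true' ? true
      : raw === 'false' ? false
      : Number.isNaN(asNumber) || raw.trim() === '' ? raw
      : asNumber; // coerce numeric strings
  }
  return result;
}
```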
---
### 6. Production Logging 📝
**Why:** Console logs aren't suitable for production monitoring.
**Implementation Required:**
- Structured logging (JSON format option)
- Log levels (ERROR, WARN, INFO, DEBUG)
- File rotation support
- Access logs (S3 standard format)
- Integration with logging library
**Files to Create/Modify:**
- `ts/classes/logger.ts` (new - use @push.rocks/smartlog?)
- `ts/classes/access-logger-middleware.ts` (new)
- `ts/classes/smarts3-server.ts` (replace console.log with logger)
- All controller files (use structured logging)
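Whether this ends up wrapping @push.rocks/smartlog or staying hand-rolled, the core is small. An illustrative sketch with an injectable sink so output can be redirected to a file or rotation handler:

```typescript
type TLogLevel = 'error' | 'warn' | 'info' | 'debug';
const levelRank: Record<TLogLevel, number> = { error: 0, warn: 1, info: 2, debug: 3 };

class Logger {
  constructor(
    private level: TLogLevel = 'info',
    private format: 'text' | 'json' = 'text',
    private sink: (line: string) => void = console.log,
  ) {}

  log(level: TLogLevel, message: string, meta: Record<string, unknown> = {}) {
    if (levelRank[level] > levelRank[this.level]) return; // filtered out
    const timestamp = new Date().toISOString();
    this.sink(
      this.format === 'json'
        ? JSON.stringify({ timestamp, level, message, ...meta })
        : `${timestamp} [${level.toUpperCase()}] ${message}`,
    );
  }
}
```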
---
## 🔧 Important Features (Should Have)
### 7. Health Check & Metrics 💊
**Implementation Required:**
- `GET /_health` endpoint (non-S3, for monitoring)
- `GET /_metrics` endpoint (Prometheus format?)
- Server stats (requests/sec, storage used, uptime)
- Readiness/liveness probes for Kubernetes
**Files to Create/Modify:**
- `ts/controllers/health.controller.ts` (new)
- `ts/classes/metrics-collector.ts` (new)
- `ts/classes/smarts3-server.ts` (add health routes)
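A Prometheus-style `/_metrics` endpoint needs little more than a counter and a start timestamp. Sketch with hypothetical names:

```typescript
class MetricsCollector {
  private requestCount = 0;
  private readonly startedAt = Date.now();

  recordRequest(): void {
    this.requestCount++;
  }

  // Prometheus text exposition format
  toPrometheus(): string {
    const uptimeSeconds = (Date.now() - this.startedAt) / 1000;
    return [
      '# TYPE smarts3_requests_total counter',
      `smarts3_requests_total ${this.requestCount}`,
      '# TYPE smarts3_uptime_seconds gauge',
      `smarts3_uptime_seconds ${uptimeSeconds.toFixed(0)}`,
    ].join('\n');
  }
}
```

`/_health` can simply return 200 once the store is initialized, which is what Kubernetes readiness probes expect.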
---
### 8. Batch Operations 📦
**Implementation Required:**
- `POST /:bucket?delete` - DeleteObjects (delete multiple objects in one request)
- Essential for efficient cleanup operations
**Files to Create/Modify:**
- `ts/controllers/object.controller.ts` (add deleteObjects method)
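For reference, the DeleteObjects request body is a small XML document. A client-side sketch of building it (key values would need XML-escaping in real code):

```typescript
// Builds the body a client POSTs to /:bucket?delete; the server parses
// the <Key> elements out and deletes each object.
function buildDeleteObjectsXml(keys: string[], quiet = false): string {
  const objects = keys.map((k) => `<Object><Key>${k}</Key></Object>`).join('');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>' +
    `<Delete>${quiet ? '<Quiet>true</Quiet>' : ''}${objects}</Delete>`
  );
}
```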
---
### 9. Request Size Limits & Validation 🛡️
**Implementation Required:**
- Max object size configuration
- Max metadata size limits
- Request timeout configuration
- Body size limits
- Bucket name validation (S3 rules)
- Key name validation
**Files to Create/Modify:**
- `ts/classes/validation-middleware.ts` (new)
- `ts/utils/validators.ts` (new)
- `ts/classes/smarts3-server.ts` (integrate validation middleware)
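Bucket-name validation is a good example of the kind of check the middleware would run. A sketch covering a subset of the AWS rules (3-63 chars, lowercase letters/digits/hyphens/dots, starts and ends alphanumeric, not IP-shaped):

```typescript
function isValidBucketName(name: string): boolean {
  if (name.length < 3 || name.length > 63) return false;
  if (!/^[a-z0-9][a-z0-9.-]*[a-z0-9]$/.test(name)) return false;
  if (/\.\./.test(name)) return false;                   // no adjacent dots
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(name)) return false; // IPv4-shaped names forbidden
  return true;
}
```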
---
### 10. Conditional Requests 🔄
**Implementation Required:**
- If-Match / If-None-Match (ETag validation)
- If-Modified-Since / If-Unmodified-Since
- Required for caching and conflict prevention
**Files to Create/Modify:**
- `ts/controllers/object.controller.ts` (add conditional logic to GET/HEAD)
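Evaluation order matters here (If-Match before If-None-Match, per RFC 7232). A simplified, testable sketch of the decision logic — returns a status to short-circuit with, or `null` to serve the object:

```typescript
interface IConditionalHeaders {
  ifMatch?: string;
  ifNoneMatch?: string;
  ifModifiedSince?: Date;
  ifUnmodifiedSince?: Date;
}

function evaluateConditionals(
  headers: IConditionalHeaders,
  etag: string,
  lastModified: Date,
): number | null {
  if (headers.ifMatch !== undefined && headers.ifMatch !== '*' && headers.ifMatch !== etag) {
    return 412; // Precondition Failed
  }
  if (headers.ifUnmodifiedSince && lastModified > headers.ifUnmodifiedSince) {
    return 412;
  }
  if (headers.ifNoneMatch !== undefined && (headers.ifNoneMatch === '*' || headers.ifNoneMatch === etag)) {
    return 304; // Not Modified (for GET/HEAD)
  }
  if (headers.ifModifiedSince && lastModified <= headers.ifModifiedSince) {
    return 304;
  }
  return null; // proceed with the request
}
```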
---
### 11. Graceful Shutdown 👋
**Implementation Required:**
- Drain existing connections
- Reject new connections
- Clean multipart cleanup on shutdown
- SIGTERM/SIGINT handling
**Files to Create/Modify:**
- `ts/classes/smarts3-server.ts` (add graceful shutdown logic)
- `ts/index.ts` (add signal handlers)
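The shutdown sequence can be written against a minimal interface so the logic stays testable; a real Smarts3Server would pass its `http.Server` close and a socket-tracking destroy function here (interface names are hypothetical):

```typescript
interface IShutdownTarget {
  close(callback: () => void): void; // stop accepting, wait for in-flight requests
  destroyOpenSockets(): void;        // force-drain stragglers
}

function gracefulShutdown(target: IShutdownTarget, gracePeriodMs = 10_000): Promise<void> {
  return new Promise((resolve) => {
    // Give in-flight connections a grace period before destroying them.
    const timer = setTimeout(() => target.destroyOpenSockets(), gracePeriodMs);
    target.close(() => {
      clearTimeout(timer);
      resolve();
    });
  });
}

// Wiring (sketch): process.once('SIGTERM', () => gracefulShutdown(server));
```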
---
## 💡 Nice-to-Have Features
### 12. Advanced Features
- Bucket versioning support
- Object tagging
- Lifecycle policies (auto-delete old objects)
- Storage class simulation (STANDARD, GLACIER, etc.)
- Server-side encryption simulation
- Presigned URL support (for time-limited access)
### 13. Performance Optimizations
- Stream optimization for large files
- Optional in-memory caching for small objects
- Parallel upload/download support
- Compression support (gzip)
### 14. Developer Experience
- Docker image for easy deployment
- Docker Compose examples
- Kubernetes manifests
- CLI for server management
- Admin API for bucket management
---
## 📐 Implementation Phases
### Phase 1: Critical Production Features (Priority 1)
**Estimated Effort:** 2-3 weeks
1. ✅ Multipart uploads (biggest technical lift)
2. ✅ Configurable authentication
3. ✅ CORS middleware
4. ✅ Production configuration system
5. ✅ Production logging
**Outcome:** smarts3 can handle real production workloads
---
### Phase 2: Reliability & Operations (Priority 2)
**Estimated Effort:** 1-2 weeks
6. ✅ SSL/TLS support
7. ✅ Health checks & metrics
8. ✅ Request validation & limits
9. ✅ Graceful shutdown
10. ✅ Batch operations
**Outcome:** smarts3 is operationally mature
---
### Phase 3: S3 Compatibility (Priority 3)
**Estimated Effort:** 1-2 weeks
11. ✅ Conditional requests
12. ✅ Additional S3 features as needed
13. ✅ Comprehensive test suite
14. ✅ Documentation updates
**Outcome:** smarts3 has broad S3 API compatibility
---
### Phase 4: Polish (Priority 4)
**Estimated Effort:** As needed
15. ✅ Docker packaging
16. ✅ Performance optimization
17. ✅ Advanced features based on user feedback
**Outcome:** smarts3 is a complete MinIO alternative
---
## 🤔 Open Questions
1. **Authentication:** Do you want full AWS Signature V4 validation, or simpler static credential checking?
2. **Configuration:** Prefer JSON, YAML, or .env file format?
3. **Logging:** Do you have a preferred logging library, or shall I use @push.rocks/smartlog?
4. **Scope:** Should we tackle all of Phase 1, or start with a subset (e.g., just multipart + auth)?
5. **Testing:** Should we add comprehensive tests as we go, or batch them at the end?
6. **Breaking changes:** Can I modify the constructor options interface, or must it remain backward compatible?
---
## 🎯 Target Use Cases
**With this plan implemented, smarts3 will be a solid MinIO alternative for:**
- **Local S3 development** - Fast, simple, no Docker required
- **Testing S3 integrations** - Reliable, repeatable tests
- **Microservices using S3 API** with filesystem backend
- **CI/CD pipelines** - Lightweight S3 for testing
- **Small-to-medium production deployments** where MinIO is overkill
- **Edge computing** - S3 API for local file storage
- **Embedded systems** - Minimal dependencies, small footprint
---
## 📊 Current vs. Production Comparison
| Feature | Current | After Phase 1 | After Phase 2 | Production Ready |
|---------|---------|---------------|---------------|------------------|
| Basic S3 ops | ✅ | ✅ | ✅ | ✅ |
| Multipart upload | ❌ | ✅ | ✅ | ✅ |
| Authentication | ⚠️ (hardcoded) | ✅ | ✅ | ✅ |
| CORS | ❌ | ✅ | ✅ | ✅ |
| SSL/TLS | ❌ | ❌ | ✅ | ✅ |
| Config files | ❌ | ✅ | ✅ | ✅ |
| Production logging | ⚠️ (console) | ✅ | ✅ | ✅ |
| Health checks | ❌ | ❌ | ✅ | ✅ |
| Request limits | ❌ | ❌ | ✅ | ✅ |
| Graceful shutdown | ❌ | ❌ | ✅ | ✅ |
| Conditional requests | ❌ | ❌ | ❌ | ✅ |
| Batch operations | ❌ | ❌ | ✅ | ✅ |
---
## 📝 Notes
- All features should maintain backward compatibility where possible
- Each feature should include comprehensive tests
- Documentation (readme.md) should be updated as features are added
- Consider adding a migration guide for users upgrading from testing to production use
- Performance benchmarks should be established and maintained
---
**Last Updated:** 2025-11-23
**Status:** Planning Phase
**Next Step:** Get approval and prioritize implementation order


@@ -1 +1,74 @@
# Project Hints for smarts3
## Current State (v3.0.0)
- Native custom S3 server implementation (Smarts3Server)
- No longer uses legacy s3rver backend (removed in v3.0.0)
- Core S3 operations working: PUT, GET, HEAD, DELETE for objects and buckets
- Multipart upload NOT yet implemented (critical gap for production)
- Authentication is hardcoded ('S3RVER'/'S3RVER') - not production-ready
- No CORS support yet
- No SSL/TLS support yet
## Production Readiness
See `production-readiness.md` for the complete gap analysis and implementation plan.
**Key Missing Features for Production:**
1. Multipart upload support (HIGHEST PRIORITY)
2. Configurable authentication
3. CORS middleware
4. SSL/TLS support
5. Production configuration system
6. Production logging
## Architecture Notes
### File Structure
- `ts/classes/smarts3-server.ts` - Main server class
- `ts/classes/filesystem-store.ts` - Storage layer (filesystem-backed)
- `ts/classes/router.ts` - URL routing with pattern matching
- `ts/classes/middleware-stack.ts` - Middleware execution
- `ts/classes/context.ts` - Request/response context
- `ts/classes/s3-error.ts` - S3-compatible error handling
- `ts/controllers/` - Service, bucket, and object controllers
- `ts/index.ts` - Main export (Smarts3 class)
### Storage Layout
- Objects stored as: `{bucket}/{encodedKey}._S3_object`
- Metadata stored as: `{bucket}/{encodedKey}._S3_object.metadata.json`
- MD5 stored as: `{bucket}/{encodedKey}._S3_object.md5`
- Keys are encoded for Windows compatibility (hex encoding for invalid chars)
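An illustrative sketch of the hex-encoding idea (not necessarily the exact scheme smarts3 uses): characters invalid in Windows filenames, plus `%` itself, are replaced by `%XX` escapes so the mapping is reversible.

```typescript
const WINDOWS_INVALID = /[<>:"\\|?*%]/g;

function encodeKeySegment(segment: string): string {
  return segment.replace(WINDOWS_INVALID, (ch) =>
    '%' + ch.charCodeAt(0).toString(16).toUpperCase().padStart(2, '0'),
  );
}

function decodeKeySegment(encoded: string): string {
  return encoded.replace(/%([0-9A-F]{2})/g, (_, hex) =>
    String.fromCharCode(parseInt(hex, 16)),
  );
}
```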
### Current Limitations
- Max file size limited by available memory (no streaming multipart)
- Single server instance only (no clustering)
- No versioning support
- No access control beyond basic auth
## Testing
- Main test: `test/test.aws-sdk.node.ts` - Tests AWS SDK v3 compatibility
- Run with: `pnpm test`
- Tests run with cleanSlate mode enabled
## Dependencies
- `@push.rocks/smartbucket` - S3 abstraction layer
- `@push.rocks/smartfs` - Modern filesystem operations with Web Streams API (replaced smartfile)
- `@push.rocks/smartxml` - XML generation/parsing
- `@push.rocks/smartpath` - Path utilities
- `@tsclass/tsclass` - TypeScript utilities
## Migration Notes (2025-11-23)
Successfully migrated from `@push.rocks/smartfile` + native `fs` to `@push.rocks/smartfs`:
- All file/directory operations now use smartfs fluent API
- Web Streams → Node.js Streams conversion for HTTP compatibility
- All tests passing ✅
- Build successful ✅
## Next Steps
Waiting for approval to proceed with production-readiness implementation.
Priority 1 is implementing multipart uploads.

readme.md

@@ -1,18 +1,26 @@
# @push.rocks/smarts3 🚀
**Mock S3 made simple** - A powerful Node.js TypeScript package for creating a local S3 endpoint that simulates AWS S3 operations using mapped local directories. Perfect for development and testing!
**Production-ready S3-compatible server** - A powerful, lightweight Node.js TypeScript package that brings full S3 API compatibility to your local filesystem. Perfect for development, testing, and scenarios where running MinIO is out of scope!
## 🌟 Features
- 🏃 **Lightning-fast local S3 simulation** - No more waiting for cloud operations during development
-**Native custom S3 server** - Built on Node.js http module with zero framework dependencies
- 🔄 **Full AWS S3 API compatibility** - Drop-in replacement for AWS SDK v3 and other S3 clients
- 📂 **Local directory mapping** - Your buckets live right on your filesystem with Windows-compatible encoding
-**Production-ready architecture** - Built on Node.js http module with zero framework dependencies
- 🔄 **Full S3 API compatibility** - Works seamlessly with AWS SDK v3 and any other S3 client
- 📂 **Local directory mapping** - Your buckets live right on your filesystem
- 🔐 **Simple authentication** - Static credential-based auth for secure access
- 🌐 **CORS support** - Configurable cross-origin resource sharing
- 📊 **Structured logging** - Multiple levels (error/warn/info/debug) and formats (text/JSON)
- 📤 **Multipart uploads** - Full support for large file uploads (>5MB)
- 🧪 **Perfect for testing** - Reliable, repeatable tests without cloud dependencies
- 🎯 **TypeScript-first** - Built with TypeScript for excellent type safety and IDE support
- 🔧 **Zero configuration** - Works out of the box with sensible defaults
- 🔧 **Flexible configuration** - Comprehensive config system with sensible defaults
- 🧹 **Clean slate mode** - Start fresh on every test run
## Issue Reporting and Security
For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who want to sign a contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
## 📦 Installation
Install using your favorite package manager:
@@ -35,10 +43,15 @@ Get up and running in seconds:
```typescript
import { Smarts3 } from '@push.rocks/smarts3';
// Start your local S3 server
// Start your local S3 server with minimal config
const s3Server = await Smarts3.createAndStart({
server: {
port: 3000,
silent: false,
},
storage: {
cleanSlate: true, // Start with empty buckets
},
});
// Create a bucket
@@ -51,44 +64,165 @@ const s3Config = await s3Server.getS3Descriptor();
await s3Server.stop();
```
## 📖 Detailed Usage Guide
## 📖 Configuration Guide
### 🏗️ Setting Up Your S3 Server
### Complete Configuration Options
The `Smarts3` class provides a simple interface for managing your local S3 server:
Smarts3 uses a comprehensive nested configuration structure:
```typescript
import { Smarts3 } from '@push.rocks/smarts3';
import { Smarts3, ISmarts3Config } from '@push.rocks/smarts3';
// Configuration options
const config = {
port: 3000, // Port to run the server on (default: 3000)
cleanSlate: true, // Clear all data on start (default: false)
const config: ISmarts3Config = {
// Server configuration
server: {
port: 3000, // Port to listen on (default: 3000)
address: '0.0.0.0', // Bind address (default: '0.0.0.0')
silent: false, // Disable all console output (default: false)
},
// Storage configuration
storage: {
directory: './buckets', // Directory to store buckets (default: .nogit/bucketsDir)
cleanSlate: false, // Clear all data on start (default: false)
},
// Authentication configuration
auth: {
enabled: false, // Enable authentication (default: false)
credentials: [ // List of valid credentials
{
accessKeyId: 'YOUR_ACCESS_KEY',
secretAccessKey: 'YOUR_SECRET_KEY',
},
],
},
// CORS configuration
cors: {
enabled: false, // Enable CORS (default: false)
allowedOrigins: ['*'], // Allowed origins (default: ['*'])
allowedMethods: [ // Allowed HTTP methods
'GET', 'POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS'
],
allowedHeaders: ['*'], // Allowed headers (default: ['*'])
exposedHeaders: [ // Headers exposed to client
'ETag', 'x-amz-request-id', 'x-amz-version-id'
],
maxAge: 86400, // Preflight cache duration in seconds
allowCredentials: false, // Allow credentials (default: false)
},
// Logging configuration
logging: {
level: 'info', // Log level: 'error' | 'warn' | 'info' | 'debug'
format: 'text', // Log format: 'text' | 'json'
enabled: true, // Enable logging (default: true)
},
// Request limits
limits: {
maxObjectSize: 5 * 1024 * 1024 * 1024, // 5GB max object size
maxMetadataSize: 2048, // 2KB max metadata size
requestTimeout: 300000, // 5 minutes request timeout
},
};
// Create and start in one go
const s3Server = await Smarts3.createAndStart(config);
// Or create and start separately
const s3Server = new Smarts3(config);
await s3Server.start();
```
### 🪣 Working with Buckets
### Simple Configuration Examples
Creating and managing buckets is straightforward:
**Development Mode (Default)**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
storage: { cleanSlate: true },
});
```
**Production Mode with Auth**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
auth: {
enabled: true,
credentials: [
{
accessKeyId: process.env.S3_ACCESS_KEY,
secretAccessKey: process.env.S3_SECRET_KEY,
},
],
},
logging: {
level: 'warn',
format: 'json',
},
});
```
**CORS-Enabled for Web Apps**
```typescript
const s3Server = await Smarts3.createAndStart({
server: { port: 3000 },
cors: {
enabled: true,
allowedOrigins: ['http://localhost:8080', 'https://app.example.com'],
allowCredentials: true,
},
});
```
## 🪣 Working with Buckets
### Creating Buckets
```typescript
// Create a new bucket
const bucket = await s3Server.createBucket('my-bucket');
// The bucket is now ready to use!
console.log(`Created bucket: ${bucket.name}`);
```
### 📤 Uploading Files
## 📤 File Operations
Use the powerful `SmartBucket` integration for file operations:
### Using AWS SDK v3
```typescript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
// Get connection config
const config = await s3Server.getS3Descriptor();
// Configure AWS SDK client
const s3Client = new S3Client({
endpoint: `http://${config.endpoint}:${config.port}`,
region: 'us-east-1',
credentials: {
accessKeyId: config.accessKey,
secretAccessKey: config.accessSecret,
},
forcePathStyle: true,
});
// Upload a file
await s3Client.send(new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'test-file.txt',
Body: 'Hello from AWS SDK!',
ContentType: 'text/plain',
}));
// Download a file
const response = await s3Client.send(new GetObjectCommand({
Bucket: 'my-bucket',
Key: 'test-file.txt',
}));
const content = await response.Body.transformToString();
console.log(content); // "Hello from AWS SDK!"
```
### Using SmartBucket
```typescript
import { SmartBucket } from '@push.rocks/smartbucket';
// Get connection config
const s3Config = await s3Server.getS3Descriptor();
// Create a SmartBucket instance
const smartbucket = new SmartBucket(s3Config);
// Get your bucket
const bucket = await smartbucket.getBucket('my-bucket');
const baseDir = await bucket.getBaseDirectory();

// Upload files
await baseDir.fastStore('path/to/file.txt', 'Hello, S3! 🎉');
await baseDir.fastPut({
path: 'documents/important.pdf',
contents: Buffer.from(yourPdfData),
});

// Download files
const content = await baseDir.fastGet('path/to/file.txt');
console.log(content); // "Hello, S3! 🎉"
// Get file as Buffer
const buffer = await baseDir.fastGetBuffer('documents/important.pdf');

// List files
const files = await baseDir.listFiles();
files.forEach((file) => {
console.log(`📄 ${file.name} (${file.size} bytes)`);
});
// List files with a specific prefix
const docs = await baseDir.listFiles('documents/');
// Delete files
await baseDir.fastDelete('old-file.txt');
```
## 📤 Multipart Uploads

Smarts3 supports multipart uploads for large files (>5MB):

```typescript
import {
S3Client,
CreateMultipartUploadCommand,
UploadPartCommand,
CompleteMultipartUploadCommand
} from '@aws-sdk/client-s3';
const s3Client = new S3Client(/* ... */);
// 1. Initiate multipart upload
const { UploadId } = await s3Client.send(new CreateMultipartUploadCommand({
Bucket: 'my-bucket',
Key: 'large-file.bin',
}));
// 2. Upload parts (in parallel if desired)
const parts = [];
for (let i = 0; i < numParts; i++) {
const part = await s3Client.send(new UploadPartCommand({
Bucket: 'my-bucket',
Key: 'large-file.bin',
UploadId,
PartNumber: i + 1,
Body: partData[i],
}));
parts.push({
PartNumber: i + 1,
ETag: part.ETag,
});
}
// 3. Complete the upload
await s3Client.send(new CompleteMultipartUploadCommand({
Bucket: 'my-bucket',
Key: 'large-file.bin',
UploadId,
MultipartUpload: { Parts: parts },
}));
```
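Part boundaries are up to the client. As a sketch, here is one way to plan the byte ranges for the upload loop above (`planParts` is an illustrative helper, not part of smarts3; the S3 API requires every part except the last to be at least 5 MB):

```typescript
interface PartRange {
  partNumber: number; // 1-based, as S3 expects
  start: number; // inclusive byte offset
  end: number; // exclusive byte offset
}

// Compute byte ranges for a multipart upload from a total size and part size.
function planParts(totalSize: number, partSize = 5 * 1024 * 1024): PartRange[] {
  if (totalSize < 0 || partSize <= 0) throw new Error('invalid sizes');
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < totalSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, totalSize) });
  }
  return parts;
}
```

Each range can then be sliced from the source buffer (or read as a file stream) and passed as `Body` to `UploadPartCommand`.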
## 🧪 Testing Integration

Works with any test runner. With Jest or Vitest:

```typescript
import { Smarts3 } from '@push.rocks/smarts3';

describe('S3 Operations', () => {
  let s3Server: Smarts3;

  beforeAll(async () => {
    s3Server = await Smarts3.createAndStart({
      server: { port: 9999, silent: true },
      storage: { cleanSlate: true },
    });
  });

  afterAll(async () => {
    await s3Server.stop();
  });

  // ... your tests ...
});
```

With Mocha, use `before`/`after` instead:

```typescript
describe('S3 Operations', () => {
  let s3Server: Smarts3;

  before(async () => {
    s3Server = await Smarts3.createAndStart({
      server: { port: 9999, silent: true },
      storage: { cleanSlate: true },
    });
  });

  after(async () => {
    await s3Server.stop();
  });

  // ... your tests ...
});
```
## 🎯 Real-World Use Cases
### CI/CD Pipeline Testing
```typescript
import { Smarts3 } from '@push.rocks/smarts3';

export async function setupTestEnvironment() {
  // Start S3 server for CI tests
  const s3 = await Smarts3.createAndStart({
    server: {
      port: parseInt(process.env.S3_PORT || '3000', 10),
      silent: true,
    },
    storage: { cleanSlate: true },
    logging: { level: 'error' }, // Only log errors in CI
  });

  // Create test buckets
  // ...

  return s3;
}
```

### Local Development

```typescript
import { Smarts3 } from '@push.rocks/smarts3';
import express from 'express';
async function startDevelopmentServer() {
// Start local S3 with CORS for local development
const s3 = await Smarts3.createAndStart({
server: { port: 3000 },
cors: {
enabled: true,
allowedOrigins: ['http://localhost:8080'],
},
});
await s3.createBucket('user-uploads');
// Start your API server
  // ...
}
```

### Data Migration Testing
```typescript
import { Smarts3 } from '@push.rocks/smarts3';
import { SmartBucket } from '@push.rocks/smartbucket';
async function testDataMigration() {
const s3 = await Smarts3.createAndStart({
storage: { cleanSlate: true },
});
// Create source and destination buckets
await s3.createBucket('legacy-data');
await s3.createBucket('new-data');
// Populate source with test data
const config = await s3.getS3Descriptor();
const smartbucket = new SmartBucket(config);
const source = await smartbucket.getBucket('legacy-data');
const sourceDir = await source.getBaseDirectory();
// Add test files
await sourceDir.fastStore('user-1.json', JSON.stringify({ id: 1, name: 'Alice' }));
await sourceDir.fastStore('user-2.json', JSON.stringify({ id: 2, name: 'Bob' }));
// Run your migration logic
await runMigration(config);
}
```
## 🛠️ Advanced Configuration
### Custom S3 Descriptor Options
When integrating with different S3 clients, you can customize the connection details:
```typescript
const customDescriptor = await s3Server.getS3Descriptor({
endpoint: 'localhost', // Custom endpoint
port: 3001, // Different port
useSsl: false, // SSL configuration
// Add any additional options your S3 client needs
});
```
### Environment-Based Configuration
```typescript
const config = {
  server: { port: parseInt(process.env.S3_PORT || '3000', 10) },
  storage: { cleanSlate: process.env.NODE_ENV === 'test' },
};
const s3Server = await Smarts3.createAndStart(config);
```
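If this mapping grows, it can be factored into a small, testable helper. A sketch under the nested-config format above (the validation rules here are illustrative, not part of the package):

```typescript
interface Smarts3ConfigSketch {
  server: { port: number };
  storage: { cleanSlate: boolean };
}

// Build a nested config from an env-like record, applying defaults and
// failing fast on an unparseable port.
function configFromEnv(env: Record<string, string | undefined>): Smarts3ConfigSketch {
  const port = Number.parseInt(env.S3_PORT ?? '3000', 10);
  if (Number.isNaN(port) || port <= 0 || port > 65535) {
    throw new Error(`Invalid S3_PORT: ${env.S3_PORT}`);
  }
  return {
    server: { port },
    storage: { cleanSlate: env.NODE_ENV === 'test' },
  };
}
```

Passing `process.env` straight in keeps the helper trivially unit-testable with a plain object.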
## 🤝 Use Cases
- **🧪 Unit & Integration Testing** - Test S3 operations without AWS credentials or internet
- **🏗️ Local Development** - Develop cloud features offline with full S3 compatibility
- **📚 Teaching & Demos** - Perfect for workshops and tutorials without AWS setup
- **🔄 CI/CD Pipelines** - Reliable S3 operations in containerized test environments
- **🎭 Mocking & Stubbing** - Replace real S3 calls in test suites
- **📊 Data Migration Testing** - Safely test data migrations locally before production
## 🔧 API Reference
### Smarts3 Class
#### Static Methods

##### `createAndStart(config?: ISmarts3Config): Promise<Smarts3>`

Create and start a Smarts3 instance in one call.

**Parameters:**

- `config` - Optional configuration object (see Configuration Guide above)

**Returns:** Promise that resolves to a running Smarts3 instance
#### Instance Methods
##### `start(): Promise<void>`
Start the S3 server.
##### `stop(): Promise<void>`
Stop the S3 server and release resources.
##### `createBucket(name: string): Promise<{ name: string }>`
Create a new S3 bucket.
**Parameters:**
- `name` - Bucket name
**Returns:** Promise that resolves to bucket information
##### `getS3Descriptor(options?): Promise<IS3Descriptor>`
Get S3 connection configuration for use with S3 clients.
**Parameters:**
- `options` - Optional partial descriptor to merge with defaults
**Returns:** Promise that resolves to S3 descriptor with:
- `accessKey` - Access key for authentication
- `accessSecret` - Secret key for authentication
- `endpoint` - Server endpoint (hostname/IP)
- `port` - Server port
- `useSsl` - Whether to use SSL (always false for local server)
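The descriptor maps mechanically onto AWS SDK client options. A hedged sketch of that translation, assuming only the fields listed above (`toS3ClientOptions` is an illustrative helper, not part of the package):

```typescript
// Mirrors the descriptor fields documented above.
interface IS3DescriptorLike {
  accessKey: string;
  accessSecret: string;
  endpoint: string;
  port: number;
  useSsl: boolean;
}

// Translate a smarts3 descriptor into the options shape accepted by
// `new S3Client(...)` from @aws-sdk/client-s3.
function toS3ClientOptions(d: IS3DescriptorLike) {
  const protocol = d.useSsl ? 'https' : 'http';
  return {
    endpoint: `${protocol}://${d.endpoint}:${d.port}`,
    region: 'us-east-1', // arbitrary; the local server ignores it
    credentials: {
      accessKeyId: d.accessKey,
      secretAccessKey: d.accessSecret,
    },
    forcePathStyle: true, // required for local path-style buckets
  };
}
```

The returned object can be spread directly into the `S3Client` constructor, as in the AWS SDK example earlier.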
## 💡 Production Considerations
### When to Use Smarts3 vs MinIO
**Use Smarts3 when:**
- 🎯 You need a lightweight, zero-dependency S3 server
- 🧪 Running in CI/CD pipelines or containerized test environments
- 🏗️ Local development where MinIO setup is overkill
- 📦 Your application needs to bundle an S3-compatible server
- 🚀 Quick prototyping without infrastructure setup
**Use MinIO when:**
- 🏢 Production workloads requiring high availability
- 📊 Advanced features like versioning, replication, encryption at rest
- 🔐 Complex IAM policies and bucket policies
- 📈 High-performance requirements with multiple nodes
- 🌐 Multi-tenant environments
### Security Notes
- Smarts3's authentication is intentionally simple (static credentials)
- It does **not** implement AWS Signature V4 verification
- Perfect for development/testing, but not for production internet-facing deployments
- For production use, place behind a reverse proxy with proper authentication
## 🐛 Debugging Tips
1. **Enable debug logging**

   ```typescript
   const s3 = await Smarts3.createAndStart({
     logging: { level: 'debug', format: 'json' },
   });
   ```

2. **Check the buckets directory** - Find your data in `.nogit/bucketsDir/` by default
3. **Use the correct endpoint** - Remember to use `127.0.0.1` or `localhost`
4. **Force path style** - Always use `forcePathStyle: true` with local S3
5. **Inspect requests** - All requests are logged when `silent: false`
## 📈 Performance
Smarts3 is optimized for development and testing scenarios:

- ⚡ **Instant operations** - No network latency
- 💾 **Low memory footprint** - Efficient filesystem operations with streams
- 🔄 **Fast cleanup** - Clean slate mode for quick test resets
- 🚀 **Parallel operations** - Handle multiple concurrent requests
- 📤 **Streaming uploads/downloads** - Low memory usage for large files
## 🔗 Related Packages
- [`@push.rocks/smartbucket`](https://www.npmjs.com/package/@push.rocks/smartbucket) - Powerful S3 abstraction layer
- [`@push.rocks/smartfs`](https://www.npmjs.com/package/@push.rocks/smartfs) - Modern filesystem with Web Streams support
- [`@tsclass/tsclass`](https://www.npmjs.com/package/@tsclass/tsclass) - TypeScript class helpers
## 📝 Changelog
### v5.0.0 - Production Ready 🚀
**Breaking Changes:**
- Configuration format changed from flat to nested structure
- Old format: `{ port: 3000, cleanSlate: true }`
- New format: `{ server: { port: 3000 }, storage: { cleanSlate: true } }`
**New Features:**
- ✨ Production configuration system with comprehensive options
- 📊 Structured logging with multiple levels and formats
- 🌐 Full CORS middleware support
- 🔐 Simple static credentials authentication
- 📤 Complete multipart upload support for large files
- 🔧 Flexible configuration with sensible defaults
**Improvements:**
- Removed smartbucket from production dependencies (dev-only)
- Migrated to @push.rocks/smartfs for modern filesystem operations
- Enhanced error handling and logging throughout
- Better TypeScript types and documentation
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.


@@ -18,9 +18,13 @@ async function streamToString(stream: Readable): Promise<string> {
tap.test('should start the S3 server and configure client', async () => {
testSmarts3Instance = await smarts3.Smarts3.createAndStart({
server: {
port: 3337,
cleanSlate: true,
silent: true,
},
storage: {
cleanSlate: true,
},
});
const descriptor = await testSmarts3Instance.getS3Descriptor();


@@ -7,8 +7,12 @@ let testSmarts3Instance: smarts3.Smarts3;
tap.test('should create a smarts3 instance and run it', async (toolsArg) => {
testSmarts3Instance = await smarts3.Smarts3.createAndStart({
server: {
port: 3333,
},
storage: {
cleanSlate: true,
},
});
console.log(`Let the instance run for 2 seconds`);
await toolsArg.delayFor(2000);


@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smarts3',
version: '3.0.0',
version: '5.0.1',
description: 'A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.'
}


@@ -2,6 +2,7 @@ import * as plugins from '../plugins.js';
import { S3Error } from './s3-error.js';
import { createXml } from '../utils/xml.utils.js';
import type { FilesystemStore } from './filesystem-store.js';
import type { MultipartUploadManager } from './multipart-manager.js';
import type { Readable } from 'stream';
/**
@@ -14,6 +15,7 @@ export class S3Context {
public params: Record<string, string> = {};
public query: Record<string, string> = {};
public store: FilesystemStore;
public multipart: MultipartUploadManager;
private req: plugins.http.IncomingMessage;
private res: plugins.http.ServerResponse;
@@ -23,11 +25,13 @@ export class S3Context {
constructor(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
store: FilesystemStore
store: FilesystemStore,
multipart: MultipartUploadManager
) {
this.req = req;
this.res = res;
this.store = store;
this.multipart = multipart;
this.method = req.method || 'GET';
this.headers = req.headers;


@@ -1,6 +1,6 @@
import * as plugins from '../plugins.js';
import { S3Error } from './s3-error.js';
import type { Readable } from 'stream';
import { Readable } from 'stream';
export interface IS3Bucket {
name: string;
@@ -39,7 +39,7 @@ export interface IRangeOptions {
}
/**
* Filesystem-backed storage for S3 objects
* Filesystem-backed storage for S3 objects using smartfs
*/
export class FilesystemStore {
constructor(private rootDir: string) {}
@@ -48,14 +48,19 @@ export class FilesystemStore {
* Initialize store (ensure root directory exists)
*/
public async initialize(): Promise<void> {
await plugins.fs.promises.mkdir(this.rootDir, { recursive: true });
await plugins.smartfs.directory(this.rootDir).recursive().create();
}
/**
* Reset store (delete all buckets)
*/
public async reset(): Promise<void> {
await plugins.smartfile.fs.ensureEmptyDir(this.rootDir);
// Delete directory and recreate it
const exists = await plugins.smartfs.directory(this.rootDir).exists();
if (exists) {
await plugins.smartfs.directory(this.rootDir).recursive().delete();
}
await plugins.smartfs.directory(this.rootDir).recursive().create();
}
// ============================
@@ -66,18 +71,17 @@ export class FilesystemStore {
* List all buckets
*/
public async listBuckets(): Promise<IS3Bucket[]> {
const dirs = await plugins.smartfile.fs.listFolders(this.rootDir);
const entries = await plugins.smartfs.directory(this.rootDir).includeStats().list();
const buckets: IS3Bucket[] = [];
for (const dir of dirs) {
const bucketPath = plugins.path.join(this.rootDir, dir);
const stats = await plugins.smartfile.fs.stat(bucketPath);
for (const entry of entries) {
if (entry.isDirectory && entry.stats) {
buckets.push({
name: dir,
creationDate: stats.birthtime,
name: entry.name,
creationDate: entry.stats.birthtime,
});
}
}
return buckets.sort((a, b) => a.name.localeCompare(b.name));
}
@@ -87,7 +91,7 @@ export class FilesystemStore {
*/
public async bucketExists(bucket: string): Promise<boolean> {
const bucketPath = this.getBucketPath(bucket);
return plugins.smartfile.fs.isDirectory(bucketPath);
return plugins.smartfs.directory(bucketPath).exists();
}
/**
@@ -95,7 +99,7 @@ export class FilesystemStore {
*/
public async createBucket(bucket: string): Promise<void> {
const bucketPath = this.getBucketPath(bucket);
await plugins.fs.promises.mkdir(bucketPath, { recursive: true });
await plugins.smartfs.directory(bucketPath).recursive().create();
}
/**
@@ -110,12 +114,12 @@ export class FilesystemStore {
}
// Check if bucket is empty
const files = await plugins.smartfile.fs.listFileTree(bucketPath, '**/*');
const files = await plugins.smartfs.directory(bucketPath).recursive().list();
if (files.length > 0) {
throw new S3Error('BucketNotEmpty', 'The bucket you tried to delete is not empty');
}
await plugins.smartfile.fs.remove(bucketPath);
await plugins.smartfs.directory(bucketPath).recursive().delete();
}
// ============================
@@ -142,13 +146,16 @@ export class FilesystemStore {
continuationToken,
} = options;
// List all object files
const objectPattern = '**/*._S3_object';
const objectFiles = await plugins.smartfile.fs.listFileTree(bucketPath, objectPattern);
// List all object files recursively with filter
const entries = await plugins.smartfs
.directory(bucketPath)
.recursive()
.filter((entry) => entry.name.endsWith('._S3_object'))
.list();
// Convert file paths to keys
let keys = objectFiles.map((filePath) => {
const relativePath = plugins.path.relative(bucketPath, filePath);
let keys = entries.map((entry) => {
const relativePath = plugins.path.relative(bucketPath, entry.path);
const key = this.decodeKey(relativePath.replace(/\._S3_object$/, ''));
return key;
});
@@ -226,7 +233,7 @@ export class FilesystemStore {
const md5Path = `${objectPath}.md5`;
const [stats, metadata, md5] = await Promise.all([
plugins.smartfile.fs.stat(objectPath),
plugins.smartfs.file(objectPath).stat(),
this.readMetadata(metadataPath),
this.readMD5(objectPath, md5Path),
]);
@@ -245,7 +252,7 @@ export class FilesystemStore {
*/
public async objectExists(bucket: string, key: string): Promise<boolean> {
const objectPath = this.getObjectPath(bucket, key);
return plugins.smartfile.fs.fileExists(objectPath);
return plugins.smartfs.file(objectPath).exists();
}
/**
@@ -265,14 +272,15 @@ export class FilesystemStore {
}
// Ensure parent directory exists
await plugins.fs.promises.mkdir(plugins.path.dirname(objectPath), { recursive: true });
const parentDir = plugins.path.dirname(objectPath);
await plugins.smartfs.directory(parentDir).recursive().create();
// Write with MD5 calculation
const result = await this.writeStreamWithMD5(stream, objectPath);
// Save metadata
const metadataPath = `${objectPath}.metadata.json`;
await plugins.fs.promises.writeFile(metadataPath, JSON.stringify(metadata, null, 2));
await plugins.smartfs.file(metadataPath).write(JSON.stringify(metadata, null, 2));
return result;
}
@@ -293,14 +301,50 @@ export class FilesystemStore {
const info = await this.getObjectInfo(bucket, key);
// Create read stream with optional range (using native fs for range support)
const stream = range
? plugins.fs.createReadStream(objectPath, { start: range.start, end: range.end })
: plugins.fs.createReadStream(objectPath);
// Get Web ReadableStream from smartfs
const webStream = await plugins.smartfs.file(objectPath).readStream();
// Convert Web Stream to Node.js Readable stream
let nodeStream = Readable.fromWeb(webStream as any);
// Handle range requests if needed
if (range) {
// For range requests, we need to skip bytes and limit output
let bytesRead = 0;
const rangeStart = range.start;
const rangeEnd = range.end;
nodeStream = nodeStream.pipe(new (require('stream').Transform)({
transform(chunk: Buffer, encoding, callback) {
const chunkStart = bytesRead;
const chunkEnd = bytesRead + chunk.length - 1;
bytesRead += chunk.length;
// Skip chunks before range
if (chunkEnd < rangeStart) {
callback();
return;
}
// Stop after range
if (chunkStart > rangeEnd) {
this.end();
callback();
return;
}
// Slice chunk to fit range
const sliceStart = Math.max(0, rangeStart - chunkStart);
const sliceEnd = Math.min(chunk.length, rangeEnd - chunkStart + 1);
callback(null, chunk.slice(sliceStart, sliceEnd));
}
}));
}
return {
...info,
content: stream,
content: nodeStream,
};
}
@@ -314,9 +358,9 @@ export class FilesystemStore {
// S3 doesn't throw error if object doesn't exist
await Promise.all([
plugins.smartfile.fs.remove(objectPath).catch(() => {}),
plugins.smartfile.fs.remove(metadataPath).catch(() => {}),
plugins.smartfile.fs.remove(md5Path).catch(() => {}),
plugins.smartfs.file(objectPath).delete().catch(() => {}),
plugins.smartfs.file(metadataPath).delete().catch(() => {}),
plugins.smartfs.file(md5Path).delete().catch(() => {}),
]);
}
@@ -345,30 +389,31 @@ export class FilesystemStore {
}
// Ensure parent directory exists
await plugins.fs.promises.mkdir(plugins.path.dirname(destObjectPath), { recursive: true });
const parentDir = plugins.path.dirname(destObjectPath);
await plugins.smartfs.directory(parentDir).recursive().create();
// Copy object file
await plugins.smartfile.fs.copy(srcObjectPath, destObjectPath);
await plugins.smartfs.file(srcObjectPath).copy(destObjectPath);
// Handle metadata
if (metadataDirective === 'COPY') {
// Copy metadata
const srcMetadataPath = `${srcObjectPath}.metadata.json`;
const destMetadataPath = `${destObjectPath}.metadata.json`;
await plugins.smartfile.fs.copy(srcMetadataPath, destMetadataPath).catch(() => {});
await plugins.smartfs.file(srcMetadataPath).copy(destMetadataPath).catch(() => {});
} else if (newMetadata) {
// Replace with new metadata
const destMetadataPath = `${destObjectPath}.metadata.json`;
await plugins.fs.promises.writeFile(destMetadataPath, JSON.stringify(newMetadata, null, 2));
await plugins.smartfs.file(destMetadataPath).write(JSON.stringify(newMetadata, null, 2));
}
// Copy MD5
const srcMD5Path = `${srcObjectPath}.md5`;
const destMD5Path = `${destObjectPath}.md5`;
await plugins.smartfile.fs.copy(srcMD5Path, destMD5Path).catch(() => {});
await plugins.smartfs.file(srcMD5Path).copy(destMD5Path).catch(() => {});
// Get result info
const stats = await plugins.smartfile.fs.stat(destObjectPath);
const stats = await plugins.smartfs.file(destObjectPath).stat();
const md5 = await this.readMD5(destObjectPath, destMD5Path);
return { size: stats.size, md5 };
@@ -432,25 +477,41 @@ export class FilesystemStore {
const hash = plugins.crypto.createHash('md5');
let totalSize = 0;
return new Promise((resolve, reject) => {
const output = plugins.fs.createWriteStream(destPath);
return new Promise(async (resolve, reject) => {
// Get Web WritableStream from smartfs
const webWriteStream = await plugins.smartfs.file(destPath).writeStream();
const writer = webWriteStream.getWriter();
input.on('data', (chunk: Buffer) => {
// Read from Node.js stream and write to Web stream
input.on('data', async (chunk: Buffer) => {
hash.update(chunk);
totalSize += chunk.length;
try {
await writer.write(new Uint8Array(chunk));
} catch (err) {
reject(err);
}
});
input.on('error', reject);
output.on('error', reject);
input.on('error', (err) => {
writer.abort(err);
reject(err);
});
input.pipe(output).on('finish', async () => {
input.on('end', async () => {
try {
await writer.close();
const md5 = hash.digest('hex');
// Save MD5 to separate file
const md5Path = `${destPath}.md5`;
await plugins.fs.promises.writeFile(md5Path, md5);
await plugins.smartfs.file(md5Path).write(md5);
resolve({ size: totalSize, md5 });
} catch (err) {
reject(err);
}
});
});
}
@@ -461,22 +522,28 @@ export class FilesystemStore {
private async readMD5(objectPath: string, md5Path: string): Promise<string> {
try {
// Try to read cached MD5
const md5 = await plugins.smartfile.fs.toStringSync(md5Path);
const md5 = await plugins.smartfs.file(md5Path).encoding('utf8').read() as string;
return md5.trim();
} catch (err) {
// Calculate MD5 if not cached
return new Promise((resolve, reject) => {
return new Promise(async (resolve, reject) => {
const hash = plugins.crypto.createHash('md5');
const stream = plugins.fs.createReadStream(objectPath);
stream.on('data', (chunk: Buffer) => hash.update(chunk));
stream.on('end', async () => {
try {
const webStream = await plugins.smartfs.file(objectPath).readStream();
const nodeStream = Readable.fromWeb(webStream as any);
nodeStream.on('data', (chunk: Buffer) => hash.update(chunk));
nodeStream.on('end', async () => {
const md5 = hash.digest('hex');
// Cache it
await plugins.fs.promises.writeFile(md5Path, md5);
await plugins.smartfs.file(md5Path).write(md5);
resolve(md5);
});
stream.on('error', reject);
nodeStream.on('error', reject);
} catch (err) {
reject(err);
}
});
}
}
@@ -486,7 +553,7 @@ export class FilesystemStore {
*/
private async readMetadata(metadataPath: string): Promise<Record<string, string>> {
try {
const content = await plugins.smartfile.fs.toStringSync(metadataPath);
const content = await plugins.smartfs.file(metadataPath).encoding('utf8').read() as string;
return JSON.parse(content);
} catch (err) {
return {};

ts/classes/logger.ts (new file)

@@ -0,0 +1,130 @@
import type { ILoggingConfig } from '../index.js';
/**
* Log levels in order of severity
*/
const LOG_LEVELS = {
error: 0,
warn: 1,
info: 2,
debug: 3,
} as const;
type LogLevel = keyof typeof LOG_LEVELS;
/**
* Structured logger with configurable levels and formats
*/
export class Logger {
private config: Required<ILoggingConfig>;
private minLevel: number;
constructor(config: ILoggingConfig) {
// Apply defaults for any missing config
this.config = {
level: config.level ?? 'info',
format: config.format ?? 'text',
enabled: config.enabled ?? true,
};
this.minLevel = LOG_LEVELS[this.config.level];
}
/**
* Check if a log level should be output
*/
private shouldLog(level: LogLevel): boolean {
if (!this.config.enabled) {
return false;
}
return LOG_LEVELS[level] <= this.minLevel;
}
/**
* Format a log message
*/
private format(level: LogLevel, message: string, meta?: Record<string, any>): string {
const timestamp = new Date().toISOString();
if (this.config.format === 'json') {
return JSON.stringify({
timestamp,
level,
message,
...(meta || {}),
});
}
// Text format
const metaStr = meta ? ` ${JSON.stringify(meta)}` : '';
return `[${timestamp}] ${level.toUpperCase()}: ${message}${metaStr}`;
}
/**
* Log at error level
*/
public error(message: string, meta?: Record<string, any>): void {
if (this.shouldLog('error')) {
console.error(this.format('error', message, meta));
}
}
/**
* Log at warn level
*/
public warn(message: string, meta?: Record<string, any>): void {
if (this.shouldLog('warn')) {
console.warn(this.format('warn', message, meta));
}
}
/**
* Log at info level
*/
public info(message: string, meta?: Record<string, any>): void {
if (this.shouldLog('info')) {
console.log(this.format('info', message, meta));
}
}
/**
* Log at debug level
*/
public debug(message: string, meta?: Record<string, any>): void {
if (this.shouldLog('debug')) {
console.log(this.format('debug', message, meta));
}
}
/**
* Log HTTP request
*/
public request(method: string, url: string, meta?: Record<string, any>): void {
this.info(`${method} ${url}`, meta);
}
/**
* Log HTTP response
*/
public response(method: string, url: string, statusCode: number, duration: number): void {
const level: LogLevel = statusCode >= 500 ? 'error' : statusCode >= 400 ? 'warn' : 'info';
if (this.shouldLog(level)) {
const message = `${method} ${url} - ${statusCode} (${duration}ms)`;
if (level === 'error') {
this.error(message, { statusCode, duration });
} else if (level === 'warn') {
this.warn(message, { statusCode, duration });
} else {
this.info(message, { statusCode, duration });
}
}
}
/**
* Log S3 error
*/
public s3Error(code: string, message: string, status: number): void {
this.error(`[S3Error] ${code}: ${message}`, { code, status });
}
}


@@ -0,0 +1,238 @@
import * as plugins from '../plugins.js';
import { Readable } from 'stream';
/**
* Multipart upload metadata
*/
export interface IMultipartUpload {
uploadId: string;
bucket: string;
key: string;
initiated: Date;
parts: Map<number, IPartInfo>;
metadata: Record<string, string>;
}
/**
* Part information
*/
export interface IPartInfo {
partNumber: number;
etag: string;
size: number;
lastModified: Date;
}
/**
* Manages multipart upload state and storage
*/
export class MultipartUploadManager {
private uploads: Map<string, IMultipartUpload> = new Map();
private uploadDir: string;
constructor(private rootDir: string) {
this.uploadDir = plugins.path.join(rootDir, '.multipart');
}
/**
* Initialize multipart uploads directory
*/
public async initialize(): Promise<void> {
await plugins.smartfs.directory(this.uploadDir).recursive().create();
}
/**
* Generate a unique upload ID
*/
private generateUploadId(): string {
return plugins.crypto.randomBytes(16).toString('hex');
}
/**
* Initiate a new multipart upload
*/
public async initiateUpload(
bucket: string,
key: string,
metadata: Record<string, string>
): Promise<string> {
const uploadId = this.generateUploadId();
this.uploads.set(uploadId, {
uploadId,
bucket,
key,
initiated: new Date(),
parts: new Map(),
metadata,
});
// Create directory for this upload's parts
const uploadPath = plugins.path.join(this.uploadDir, uploadId);
await plugins.smartfs.directory(uploadPath).recursive().create();
return uploadId;
}
/**
* Upload a part
*/
public async uploadPart(
uploadId: string,
partNumber: number,
stream: Readable
): Promise<IPartInfo> {
const upload = this.uploads.get(uploadId);
if (!upload) {
throw new Error('No such upload');
}
const partPath = plugins.path.join(this.uploadDir, uploadId, `part-${partNumber}`);
// Write part to disk
const webWriteStream = await plugins.smartfs.file(partPath).writeStream();
const writer = webWriteStream.getWriter();
let size = 0;
const hash = plugins.crypto.createHash('md5');
for await (const chunk of stream) {
const buffer = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk);
await writer.write(new Uint8Array(buffer));
hash.update(buffer);
size += buffer.length;
}
await writer.close();
const etag = hash.digest('hex');
const partInfo: IPartInfo = {
partNumber,
etag,
size,
lastModified: new Date(),
};
upload.parts.set(partNumber, partInfo);
return partInfo;
}
/**
* Complete multipart upload - combine all parts
*/
public async completeUpload(
uploadId: string,
parts: Array<{ PartNumber: number; ETag: string }>
): Promise<{ etag: string; size: number }> {
const upload = this.uploads.get(uploadId);
if (!upload) {
throw new Error('No such upload');
}
// Verify all parts are uploaded
for (const part of parts) {
const uploadedPart = upload.parts.get(part.PartNumber);
if (!uploadedPart) {
throw new Error(`Part ${part.PartNumber} not uploaded`);
}
// Normalize ETag format (remove quotes if present)
const normalizedETag = part.ETag.replace(/"/g, '');
if (uploadedPart.etag !== normalizedETag) {
throw new Error(`Part ${part.PartNumber} ETag mismatch`);
}
}
// Sort parts by part number
const sortedParts = [...parts].sort((a, b) => a.PartNumber - b.PartNumber);
// Combine parts into final object
const finalPath = plugins.path.join(this.uploadDir, uploadId, 'final');
const webWriteStream = await plugins.smartfs.file(finalPath).writeStream();
const writer = webWriteStream.getWriter();
const hash = plugins.crypto.createHash('md5');
let totalSize = 0;
for (const part of sortedParts) {
const partPath = plugins.path.join(this.uploadDir, uploadId, `part-${part.PartNumber}`);
// Read part and write to final file
const partContent = await plugins.smartfs.file(partPath).read();
const buffer = Buffer.isBuffer(partContent) ? partContent : Buffer.from(partContent as string);
await writer.write(new Uint8Array(buffer));
hash.update(buffer);
totalSize += buffer.length;
}
await writer.close();
const etag = hash.digest('hex');
return { etag, size: totalSize };
}
/**
* Get the final combined file path
*/
public getFinalPath(uploadId: string): string {
return plugins.path.join(this.uploadDir, uploadId, 'final');
}
/**
* Get upload metadata
*/
public getUpload(uploadId: string): IMultipartUpload | undefined {
return this.uploads.get(uploadId);
}
/**
* Abort multipart upload - clean up parts
*/
public async abortUpload(uploadId: string): Promise<void> {
const upload = this.uploads.get(uploadId);
if (!upload) {
throw new Error('No such upload');
}
// Delete upload directory
const uploadPath = plugins.path.join(this.uploadDir, uploadId);
await plugins.smartfs.directory(uploadPath).recursive().delete();
// Remove from memory
this.uploads.delete(uploadId);
}
/**
* Clean up upload after completion
*/
public async cleanupUpload(uploadId: string): Promise<void> {
const uploadPath = plugins.path.join(this.uploadDir, uploadId);
await plugins.smartfs.directory(uploadPath).recursive().delete();
this.uploads.delete(uploadId);
}
/**
* List all in-progress uploads for a bucket
*/
public listUploads(bucket?: string): IMultipartUpload[] {
const uploads = Array.from(this.uploads.values());
if (bucket) {
return uploads.filter((u) => u.bucket === bucket);
}
return uploads;
}
/**
* List parts for an upload
*/
public listParts(uploadId: string): IPartInfo[] {
const upload = this.uploads.get(uploadId);
if (!upload) {
throw new Error('No such upload');
}
return Array.from(upload.parts.values()).sort((a, b) => a.partNumber - b.partNumber);
}
}
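The `uploadPart` method above streams each part to disk while incrementally computing an MD5 digest, whose hex form becomes the part's ETag. A minimal standalone sketch of that digest logic (the `partEtag` helper name is illustrative, not part of the class):

```typescript
import { createHash } from 'node:crypto';
import { Readable } from 'node:stream';

// Accumulate MD5 over the raw bytes, chunk by chunk, the same way uploadPart does.
async function partEtag(stream: Readable): Promise<{ etag: string; size: number }> {
  const hash = createHash('md5');
  let size = 0;
  for await (const chunk of stream) {
    const buffer = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk);
    hash.update(buffer);
    size += buffer.length;
  }
  return { etag: hash.digest('hex'), size };
}

const result = await partEtag(Readable.from([Buffer.from('hello '), Buffer.from('world')]));
// result.etag === '5eb63bbbe01eeed093cb22bb8f5acdc3' (md5 of "hello world"), result.size === 11
```

Note that this is a plain MD5 of the part's bytes; real S3 derives a multipart object's final ETag differently (an MD5 of the concatenated part MD5s with a `-N` suffix), which this server does not replicate.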


@@ -4,9 +4,12 @@ import { MiddlewareStack } from './middleware-stack.js';
import { S3Context } from './context.js';
import { FilesystemStore } from './filesystem-store.js';
import { S3Error } from './s3-error.js';
import { Logger } from './logger.js';
import { MultipartUploadManager } from './multipart-manager.js';
import { ServiceController } from '../controllers/service.controller.js';
import { BucketController } from '../controllers/bucket.controller.js';
import { ObjectController } from '../controllers/object.controller.js';
import type { ISmarts3Config } from '../index.js';
export interface ISmarts3ServerOptions {
port?: number;
@@ -14,6 +17,7 @@ export interface ISmarts3ServerOptions {
directory?: string;
cleanSlate?: boolean;
silent?: boolean;
config?: Required<ISmarts3Config>;
}
/**
@@ -24,20 +28,61 @@ export class Smarts3Server {
private httpServer?: plugins.http.Server;
private router: S3Router;
private middlewares: MiddlewareStack;
public store: FilesystemStore; // Made public for direct access from Smarts3 class
public multipart: MultipartUploadManager; // Made public for controller access
private options: Required<Omit<ISmarts3ServerOptions, 'config'>>;
private config: Required<ISmarts3Config>;
private logger: Logger;
constructor(options: ISmarts3ServerOptions = {}) {
this.options = {
port: options.port ?? 3000,
address: options.address ?? '0.0.0.0',
directory: options.directory ?? plugins.path.join(process.cwd(), '.nogit/bucketsDir'),
cleanSlate: options.cleanSlate ?? false,
silent: options.silent ?? false,
};
// Store config for middleware and feature configuration
// If no config provided, create minimal default (for backward compatibility)
this.config = options.config ?? {
server: {
port: this.options.port,
address: this.options.address,
silent: this.options.silent,
},
storage: {
directory: this.options.directory,
cleanSlate: this.options.cleanSlate,
},
auth: {
enabled: false,
credentials: [{ accessKeyId: 'S3RVER', secretAccessKey: 'S3RVER' }],
},
cors: {
enabled: false,
allowedOrigins: ['*'],
allowedMethods: ['GET', 'POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS'],
allowedHeaders: ['*'],
exposedHeaders: ['ETag', 'x-amz-request-id', 'x-amz-version-id'],
maxAge: 86400,
allowCredentials: false,
},
logging: {
level: 'info',
format: 'text',
enabled: true,
},
limits: {
maxObjectSize: 5 * 1024 * 1024 * 1024,
maxMetadataSize: 2048,
requestTimeout: 300000,
},
};
this.logger = new Logger(this.config.logging);
this.store = new FilesystemStore(this.options.directory);
this.multipart = new MultipartUploadManager(this.options.directory);
this.router = new S3Router();
this.middlewares = new MiddlewareStack();
@@ -49,20 +94,118 @@ export class Smarts3Server {
* Setup middleware stack
*/
private setupMiddlewares(): void {
// CORS middleware (must be first to handle preflight requests)
if (this.config.cors.enabled) {
this.middlewares.use(async (req, res, ctx, next) => {
const origin = req.headers.origin || req.headers.referer;
// Check if origin is allowed
const allowedOrigins = this.config.cors.allowedOrigins || ['*'];
const isOriginAllowed =
allowedOrigins.includes('*') ||
(origin && allowedOrigins.includes(origin));
if (isOriginAllowed) {
// Set CORS headers
res.setHeader(
'Access-Control-Allow-Origin',
allowedOrigins.includes('*') ? '*' : origin || '*'
);
if (this.config.cors.allowCredentials) {
res.setHeader('Access-Control-Allow-Credentials', 'true');
}
// Handle preflight OPTIONS request
if (req.method === 'OPTIONS') {
res.setHeader(
'Access-Control-Allow-Methods',
(this.config.cors.allowedMethods || []).join(', ')
);
res.setHeader(
'Access-Control-Allow-Headers',
(this.config.cors.allowedHeaders || []).join(', ')
);
if (this.config.cors.maxAge) {
res.setHeader(
'Access-Control-Max-Age',
String(this.config.cors.maxAge)
);
}
res.writeHead(204);
res.end();
return; // Don't call next() for OPTIONS
}
// Set exposed headers for actual requests
if (this.config.cors.exposedHeaders && this.config.cors.exposedHeaders.length > 0) {
res.setHeader(
'Access-Control-Expose-Headers',
this.config.cors.exposedHeaders.join(', ')
);
}
}
await next();
});
}
// Authentication middleware (simple static credentials)
if (this.config.auth.enabled) {
this.middlewares.use(async (req, res, ctx, next) => {
const authHeader = req.headers.authorization;
// Extract access key from Authorization header
let accessKeyId: string | undefined;
if (authHeader) {
// Support multiple auth formats:
// 1. AWS accessKeyId:signature
// 2. AWS4-HMAC-SHA256 Credential=accessKeyId/date/region/service/aws4_request, ...
if (authHeader.startsWith('AWS ')) {
accessKeyId = authHeader.substring(4).split(':')[0];
} else if (authHeader.startsWith('AWS4-HMAC-SHA256')) {
const credentialMatch = authHeader.match(/Credential=([^/]+)\//);
accessKeyId = credentialMatch ? credentialMatch[1] : undefined;
}
}
// Check if access key is valid
const isValid = this.config.auth.credentials.some(
(cred) => cred.accessKeyId === accessKeyId
);
if (!isValid) {
ctx.throw('AccessDenied', 'Access Denied');
return;
}
await next();
});
}
// Logger middleware
if (!this.options.silent && this.config.logging.enabled) {
this.middlewares.use(async (req, res, ctx, next) => {
const start = Date.now();
// Log request
this.logger.request(req.method || 'UNKNOWN', req.url || '/', {
headers: req.headers,
});
await next();
// Log response
const duration = Date.now() - start;
this.logger.response(
req.method || 'UNKNOWN',
req.url || '/',
res.statusCode || 500,
duration
);
});
}
}
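The authentication middleware above only inspects the access key id, not the signature. Its extraction logic for the two supported Authorization header shapes can be sketched as a standalone helper (the function name is illustrative, not from the server):

```typescript
// Extract the accessKeyId from an S3 Authorization header.
// Supports "AWS <key>:<signature>" (v2) and
// "AWS4-HMAC-SHA256 Credential=<key>/<date>/<region>/<service>/aws4_request, ..." (v4).
function extractAccessKeyId(authHeader: string): string | undefined {
  if (authHeader.startsWith('AWS ')) {
    return authHeader.substring(4).split(':')[0];
  }
  if (authHeader.startsWith('AWS4-HMAC-SHA256')) {
    const match = authHeader.match(/Credential=([^/]+)\//);
    return match ? match[1] : undefined;
  }
  return undefined;
}

const v2 = extractAccessKeyId('AWS S3RVER:frJIUN8DYpKDtOLCwo=');
const v4 = extractAccessKeyId(
  'AWS4-HMAC-SHA256 Credential=S3RVER/20251123/us-east-1/s3/aws4_request, SignedHeaders=host, Signature=abc'
);
// v2 === 'S3RVER', v4 === 'S3RVER'
```

Because only the key id is checked, any request presenting a known access key passes, regardless of whether its signature is valid — acceptable for a local test server, not for production exposure.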
/**
@@ -80,6 +223,7 @@ export class Smarts3Server {
// Object level (/:bucket/:key*)
this.router.put('/:bucket/:key*', ObjectController.putObject);
this.router.post('/:bucket/:key*', ObjectController.postObject); // For multipart operations
this.router.get('/:bucket/:key*', ObjectController.getObject);
this.router.head('/:bucket/:key*', ObjectController.headObject);
this.router.delete('/:bucket/:key*', ObjectController.deleteObject);
@@ -92,7 +236,7 @@ export class Smarts3Server {
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse
): Promise<void> {
const context = new S3Context(req, res, this.store, this.multipart);
try {
// Execute middleware stack
@@ -122,11 +266,14 @@ export class Smarts3Server {
): Promise<void> {
const s3Error = err instanceof S3Error ? err : S3Error.fromError(err);
if (!this.options.silent) {
// Log the error
this.logger.s3Error(s3Error.code, s3Error.message, s3Error.status);
// Log stack trace for server errors
this.logger.debug('Error stack trace', {
stack: err.stack || err.toString(),
});
}
// Send error response
@@ -147,6 +294,9 @@ export class Smarts3Server {
// Initialize store
await this.store.initialize();
// Initialize multipart upload manager
await this.multipart.initialize();
// Clean slate if requested
if (this.options.cleanSlate) {
await this.store.reset();
@@ -155,7 +305,10 @@ export class Smarts3Server {
// Create HTTP server
this.httpServer = plugins.http.createServer((req, res) => {
this.handleRequest(req, res).catch((err) => {
this.logger.error('Fatal error in request handler', {
error: err.message,
stack: err.stack,
});
if (!res.headersSent) {
res.writeHead(500, { 'Content-Type': 'text/plain' });
res.end('Internal Server Error');
@@ -169,9 +322,7 @@ export class Smarts3Server {
if (err) {
reject(err);
} else {
this.logger.info(`S3 server listening on ${this.options.address}:${this.options.port}`);
resolve();
}
});
@@ -191,9 +342,7 @@ export class Smarts3Server {
if (err) {
reject(err);
} else {
this.logger.info('S3 server stopped');
resolve();
}
});


@@ -6,7 +6,7 @@ import type { S3Context } from '../classes/context.js';
*/
export class ObjectController {
/**
* PUT /:bucket/:key* - Upload object, copy object, or upload part
*/
public static async putObject(
req: plugins.http.IncomingMessage,
@@ -16,6 +16,11 @@ export class ObjectController {
): Promise<void> {
const { bucket, key } = params;
// Check if this is a multipart upload part
if (ctx.query.partNumber && ctx.query.uploadId) {
return ObjectController.uploadPart(req, res, ctx, params);
}
// Check if this is a COPY operation
const copySource = ctx.headers['x-amz-copy-source'] as string | undefined;
if (copySource) {
@@ -133,7 +138,7 @@ export class ObjectController {
}
/**
* DELETE /:bucket/:key* - Delete object or abort multipart upload
*/
public static async deleteObject(
req: plugins.http.IncomingMessage,
@@ -143,6 +148,11 @@ export class ObjectController {
): Promise<void> {
const { bucket, key } = params;
// Check if this is an abort multipart upload
if (ctx.query.uploadId) {
return ObjectController.abortMultipartUpload(req, res, ctx, params);
}
await ctx.store.deleteObject(bucket, key);
ctx.status(204).send('');
}
@@ -201,4 +211,168 @@ export class ObjectController {
},
});
}
/**
* POST /:bucket/:key* - Initiate or complete multipart upload
*/
public static async postObject(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
ctx: S3Context,
params: Record<string, string>
): Promise<void> {
// Check if this is initiate multipart upload
if (ctx.query.uploads !== undefined) {
return ObjectController.initiateMultipartUpload(req, res, ctx, params);
}
// Check if this is complete multipart upload
if (ctx.query.uploadId) {
return ObjectController.completeMultipartUpload(req, res, ctx, params);
}
ctx.throw('InvalidRequest', 'Invalid POST request');
}
/**
* Initiate Multipart Upload (POST with ?uploads)
*/
private static async initiateMultipartUpload(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
ctx: S3Context,
params: Record<string, string>
): Promise<void> {
const { bucket, key } = params;
// Extract metadata from headers
const metadata: Record<string, string> = {};
for (const [header, value] of Object.entries(ctx.headers)) {
if (header.startsWith('x-amz-meta-')) {
metadata[header] = value as string;
}
if (header === 'content-type' && value) {
metadata['content-type'] = value as string;
}
}
// Initiate upload
const uploadId = await ctx.multipart.initiateUpload(bucket, key, metadata);
// Send XML response
await ctx.sendXML({
InitiateMultipartUploadResult: {
Bucket: bucket,
Key: key,
UploadId: uploadId,
},
});
}
/**
* Upload Part (PUT with ?partNumber&uploadId)
*/
private static async uploadPart(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
ctx: S3Context,
params: Record<string, string>
): Promise<void> {
const uploadId = ctx.query.uploadId!;
const partNumber = parseInt(ctx.query.partNumber!);
if (isNaN(partNumber) || partNumber < 1 || partNumber > 10000) {
ctx.throw('InvalidPartNumber', 'Part number must be between 1 and 10000');
}
// Upload the part
const partInfo = await ctx.multipart.uploadPart(
uploadId,
partNumber,
ctx.getRequestStream() as any as import('stream').Readable
);
// Set ETag header (part ETag)
ctx.setHeader('ETag', `"${partInfo.etag}"`);
ctx.status(200).send('');
}
/**
* Complete Multipart Upload (POST with ?uploadId)
*/
private static async completeMultipartUpload(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
ctx: S3Context,
params: Record<string, string>
): Promise<void> {
const { bucket, key } = params;
const uploadId = ctx.query.uploadId!;
// Read and parse request body (XML with part list)
const body = await ctx.readBody();
// Parse XML to extract parts
// Expected format: <CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>"etag"</ETag></Part>...</CompleteMultipartUpload>
const partMatches = body.matchAll(/<Part>.*?<PartNumber>(\d+)<\/PartNumber>.*?<ETag>(.*?)<\/ETag>.*?<\/Part>/gs);
const parts: Array<{ PartNumber: number; ETag: string }> = [];
for (const match of partMatches) {
parts.push({
PartNumber: parseInt(match[1]),
ETag: match[2],
});
}
// Complete the upload
const result = await ctx.multipart.completeUpload(uploadId, parts);
// Get upload metadata
const upload = ctx.multipart.getUpload(uploadId);
if (!upload) {
ctx.throw('NoSuchUpload', 'The specified upload does not exist');
}
// Move final file to object store
const finalPath = ctx.multipart.getFinalPath(uploadId);
const finalContent = await plugins.smartfs.file(finalPath).read();
// Create a readable stream from the buffer
const { Readable } = await import('stream');
const finalReadableStream = Readable.from([finalContent]);
// Store the final object
await ctx.store.putObject(bucket, key, finalReadableStream, upload.metadata);
// Clean up multipart upload data
await ctx.multipart.cleanupUpload(uploadId);
// Send XML response
await ctx.sendXML({
CompleteMultipartUploadResult: {
Location: `/${bucket}/${key}`,
Bucket: bucket,
Key: key,
ETag: `"${result.etag}"`,
},
});
}
/**
* Abort Multipart Upload (DELETE with ?uploadId)
*/
private static async abortMultipartUpload(
req: plugins.http.IncomingMessage,
res: plugins.http.ServerResponse,
ctx: S3Context,
params: Record<string, string>
): Promise<void> {
const uploadId = ctx.query.uploadId!;
// Abort and cleanup
await ctx.multipart.abortUpload(uploadId);
ctx.status(204).send('');
}
}
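`completeMultipartUpload` parses the part list with a single lazy regex rather than a full XML parser. That parsing step can be reproduced standalone (the sample part numbers and ETags below are made up):

```typescript
// Parse a CompleteMultipartUpload body with the same regex the controller uses.
const body = `<CompleteMultipartUpload>
  <Part><PartNumber>1</PartNumber><ETag>"aaa"</ETag></Part>
  <Part><PartNumber>2</PartNumber><ETag>"bbb"</ETag></Part>
</CompleteMultipartUpload>`;

const parts: Array<{ PartNumber: number; ETag: string }> = [];
for (const match of body.matchAll(
  /<Part>.*?<PartNumber>(\d+)<\/PartNumber>.*?<ETag>(.*?)<\/ETag>.*?<\/Part>/gs
)) {
  parts.push({ PartNumber: parseInt(match[1], 10), ETag: match[2] });
}
// parts: [{ PartNumber: 1, ETag: '"aaa"' }, { PartNumber: 2, ETag: '"bbb"' }]
```

The captured ETags keep their surrounding quotes, which is why `completeUpload` in the manager strips quotes before comparing against the stored part digests.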


@@ -2,40 +2,186 @@ import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { Smarts3Server } from './classes/smarts3-server.js';
/**
* Authentication configuration
*/
export interface IAuthConfig {
enabled: boolean;
credentials: Array<{
accessKeyId: string;
secretAccessKey: string;
}>;
}
/**
* CORS configuration
*/
export interface ICorsConfig {
enabled: boolean;
allowedOrigins?: string[];
allowedMethods?: string[];
allowedHeaders?: string[];
exposedHeaders?: string[];
maxAge?: number;
allowCredentials?: boolean;
}
/**
* Logging configuration
*/
export interface ILoggingConfig {
level?: 'error' | 'warn' | 'info' | 'debug';
format?: 'text' | 'json';
enabled?: boolean;
}
/**
* Request limits configuration
*/
export interface ILimitsConfig {
maxObjectSize?: number;
maxMetadataSize?: number;
requestTimeout?: number;
}
/**
* Server configuration
*/
export interface IServerConfig {
port?: number;
address?: string;
silent?: boolean;
}
/**
* Storage configuration
*/
export interface IStorageConfig {
directory?: string;
cleanSlate?: boolean;
}
/**
* Complete smarts3 configuration
*/
export interface ISmarts3Config {
server?: IServerConfig;
storage?: IStorageConfig;
auth?: IAuthConfig;
cors?: ICorsConfig;
logging?: ILoggingConfig;
limits?: ILimitsConfig;
}
/**
* Default configuration values
*/
const DEFAULT_CONFIG: ISmarts3Config = {
server: {
port: 3000,
address: '0.0.0.0',
silent: false,
},
storage: {
directory: paths.bucketsDir,
cleanSlate: false,
},
auth: {
enabled: false,
credentials: [
{
accessKeyId: 'S3RVER',
secretAccessKey: 'S3RVER',
},
],
},
cors: {
enabled: false,
allowedOrigins: ['*'],
allowedMethods: ['GET', 'POST', 'PUT', 'DELETE', 'HEAD', 'OPTIONS'],
allowedHeaders: ['*'],
exposedHeaders: ['ETag', 'x-amz-request-id', 'x-amz-version-id'],
maxAge: 86400,
allowCredentials: false,
},
logging: {
level: 'info',
format: 'text',
enabled: true,
},
limits: {
maxObjectSize: 5 * 1024 * 1024 * 1024, // 5GB
maxMetadataSize: 2048,
requestTimeout: 300000, // 5 minutes
},
};
/**
* Merge user config with defaults (deep merge)
*/
function mergeConfig(userConfig: ISmarts3Config): Required<ISmarts3Config> {
return {
server: {
...DEFAULT_CONFIG.server!,
...(userConfig.server || {}),
},
storage: {
...DEFAULT_CONFIG.storage!,
...(userConfig.storage || {}),
},
auth: {
...DEFAULT_CONFIG.auth!,
...(userConfig.auth || {}),
},
cors: {
...DEFAULT_CONFIG.cors!,
...(userConfig.cors || {}),
},
logging: {
...DEFAULT_CONFIG.logging!,
...(userConfig.logging || {}),
},
limits: {
...DEFAULT_CONFIG.limits!,
...(userConfig.limits || {}),
},
};
}
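`mergeConfig` is a shallow merge per top-level section: each user-supplied section is spread over its default, so sibling keys keep their defaults, while arrays and nested values are replaced wholesale. A reduced sketch of that behavior (using a simplified config shape, not the real `ISmarts3Config`):

```typescript
// Per-section shallow merge: each top-level section is spread over its default.
const defaults = {
  cors: { enabled: false, allowedOrigins: ['*'], maxAge: 86400 },
};
const user = {
  cors: { enabled: true, allowedOrigins: ['https://app.example.com'] },
};
const merged = {
  cors: { ...defaults.cors, ...user.cors },
};
// enabled and allowedOrigins come from the user config; maxAge keeps its default.
```

The practical consequence: enabling a feature with `cors: { enabled: true }` inherits all other CORS defaults, but supplying `allowedOrigins` replaces the default list entirely rather than appending to it.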
/**
* Main Smarts3 class - production-ready S3-compatible server
*/
export class Smarts3 {
// STATIC
public static async createAndStart(configArg: ISmarts3Config = {}) {
const smartS3Instance = new Smarts3(configArg);
await smartS3Instance.start();
return smartS3Instance;
}
// INSTANCE
public config: Required<ISmarts3Config>;
public s3Instance: Smarts3Server;
constructor(configArg: ISmarts3Config = {}) {
this.config = mergeConfig(configArg);
}
public async start() {
this.s3Instance = new Smarts3Server({
port: this.config.server.port,
address: this.config.server.address,
directory: this.config.storage.directory,
cleanSlate: this.config.storage.cleanSlate,
silent: this.config.server.silent,
config: this.config, // Pass full config to server
});
await this.s3Instance.start();
if (!this.config.server.silent) {
console.log('s3 server is running');
}
}
public async getS3Descriptor(
optionsArg?: Partial<plugins.tsclass.storage.IS3Descriptor>,
@@ -48,11 +194,9 @@ export class Smarts3 {
}
public async createBucket(bucketNameArg: string) {
// Call the filesystem store directly instead of using the client library
await this.s3Instance.store.createBucket(bucketNameArg);
return { name: bucketNameArg };
}
public async stop() {


@@ -3,17 +3,18 @@ import * as path from 'path';
import * as http from 'http';
import * as crypto from 'crypto';
import * as url from 'url';
export { path, http, crypto, url };
// @push.rocks scope
import * as smartbucket from '@push.rocks/smartbucket';
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';
import * as smartpath from '@push.rocks/smartpath';
import { SmartXml } from '@push.rocks/smartxml';
// Create SmartFs instance with Node.js provider
export const smartfs = new SmartFs(new SmartFsProviderNode());
export { smartpath, SmartXml };
// @tsclass scope
import * as tsclass from '@tsclass/tsclass';