Compare commits


9 Commits

21 changed files with 6018 additions and 5808 deletions

changelog.md

@@ -1,5 +1,47 @@
 # Changelog
+## 2026-03-14 - 4.5.1 - fix(readme)
+refresh documentation wording and simplify README formatting
+- removes emoji-heavy headings and inline log examples for a cleaner presentation
+- clarifies S3-compatible storage wording in the project description
+- streamlines testing and error handling sections without changing library functionality
+## 2026-03-14 - 4.5.0 - feat(storage)
+generalize S3 client and watcher interfaces to storage-oriented naming with backward compatibility
+- rename SmartBucket internals from s3Client to storageClient and normalize configuration using storage descriptor types
+- update watcher and interface types from S3-specific names to storage-oriented equivalents while keeping deprecated aliases
+- refresh tests and documentation comments to reflect S3-compatible object storage terminology
+- bump build, test, AWS SDK, and related dependency versions
+## 2026-01-25 - 4.4.1 - fix(tests)
+add explicit 'as string' type assertions to environment variable retrievals in tests
+- Add 'as string' assertions for S3_ACCESSKEY, S3_SECRETKEY, S3_ENDPOINT, and S3_BUCKET to satisfy TypeScript typings in tests
+- Cast S3_PORT to string before parseInt to ensure correct input type and avoid compiler errors
+- Changes are localized to test/test.listing.node+deno.ts
+## 2026-01-25 - 4.4.0 - feat(watcher)
+add polling-based BucketWatcher to detect add/modify/delete events and expose RxJS Observable and EventEmitter APIs
+- Add new BucketWatcher implementation (ts/classes.watcher.ts) with polling, buffering and change detection by ETag/Size/LastModified
+- Expose createWatcher(...) on Bucket (ts/classes.bucket.ts) and export watcher and interfaces from index (ts/index.ts)
+- Add watcher-related interfaces (IS3ChangeEvent, IS3ObjectState, IBucketWatcherOptions) to ts/interfaces.ts
+- Comprehensive README/docs additions and examples for BucketWatcher, buffering, options and use cases (readme.md, readme.hints.md)
+- Add unit/integration tests for watcher behavior (test/test.watcher.node.ts)
+- Non-breaking API additions only — ready for a minor version bump from 4.3.1 to 4.4.0
+## 2026-01-24 - 4.3.1 - fix(bucket)
+propagate S3 client errors instead of silently logging them; update build script, bump dev/dependencies, and refresh npmextra configuration
+- Remove .catch(...) wrappers around s3Client.send so errors are no longer swallowed and will propagate to callers
+- Update build script to use 'tsbuild tsfolders --allowimplicitany'
+- Bump devDependencies: @git.zone/tsbuild to ^4.1.2, @git.zone/tsrun to ^2.0.1, @git.zone/tstest to ^3.1.6
+- Bump dependency @aws-sdk/client-s3 to ^3.975.0
+- Adjust npmextra.json structure (@git.zone/cli, @git.zone/tsdoc), add release registries and @ship.zone/szci entry
+- Remove pnpm-workspace.yaml onlyBuiltDependencies configuration
 ## 2025-11-20 - 4.3.0 - feat(listing)
 Add memory-efficient listing APIs: async generator, RxJS observable, and cursor pagination; export ListCursor and Minimatch; add minimatch dependency; bump to 4.2.0
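The 4.4.0 entry above describes change detection by comparing ETag/Size/LastModified between polling rounds. As a rough, self-contained sketch of that comparison idea (illustrative only; the type and function names below are hypothetical, not the library's actual `ts/classes.watcher.ts` internals):

```typescript
// Sketch of polling-based change detection: diff two bucket listings keyed
// by object path, comparing ETag/Size/LastModified to classify changes.
interface IObjectState {
  etag: string;
  size: number;
  lastModified: number; // epoch millis for simplicity
}

type Snapshot = Map<string, IObjectState>;

interface IChangeEvent {
  type: 'add' | 'modify' | 'delete';
  key: string;
}

function detectChanges(previous: Snapshot, current: Snapshot): IChangeEvent[] {
  const changes: IChangeEvent[] = [];
  // Keys present now: new keys are adds, differing metadata means modify.
  for (const [key, state] of current) {
    const before = previous.get(key);
    if (!before) {
      changes.push({ type: 'add', key });
    } else if (
      before.etag !== state.etag ||
      before.size !== state.size ||
      before.lastModified !== state.lastModified
    ) {
      changes.push({ type: 'modify', key });
    }
  }
  // Keys that vanished since the last poll are deletes.
  for (const key of previous.keys()) {
    if (!current.has(key)) {
      changes.push({ type: 'delete', key });
    }
  }
  return changes;
}
```

The real watcher additionally pages through listings and emits through RxJS/EventEmitter; this sketch only shows the snapshot-diffing core that the changelog entry names.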

deno.lock (generated)

File diff suppressed because it is too large.

npmextra.json

@@ -1,8 +1,5 @@
 {
-  "npmci": {
-    "npmGlobalTools": []
-  },
-  "gitzone": {
+  "@git.zone/cli": {
     "projectType": "npm",
     "module": {
       "githost": "code.foss.global",
@@ -33,9 +30,19 @@
       "data management",
       "streaming"
     ]
+    },
+    "release": {
+      "registries": [
+        "https://verdaccio.lossless.digital",
+        "https://registry.npmjs.org"
+      ],
+      "accessLevel": "public"
     }
   },
-  "tsdoc": {
+  "@git.zone/tsdoc": {
     "legal": "\n## License and Legal Information\n\nThis repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository. \n\n**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.\n\n### Trademarks\n\nThis project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.\n\n### Company Information\n\nTask Venture Capital GmbH \nRegistered at District court Bremen HRB 35230 HB, Germany\n\nFor any legal inquiries or if you require further information, please contact us via email at hello@task.vc.\n\nBy using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.\n"
+  },
+  "@ship.zone/szci": {
+    "npmGlobalTools": []
   }
 }

package.json

@@ -1,6 +1,6 @@
 {
   "name": "@push.rocks/smartbucket",
-  "version": "4.3.0",
+  "version": "4.5.1",
   "description": "A TypeScript library providing a cloud-agnostic interface for managing object storage with functionalities like bucket management, file and directory operations, and advanced features such as metadata handling and file locking.",
   "main": "dist_ts/index.js",
   "typings": "dist_ts/index.d.ts",
@@ -9,26 +9,27 @@
"license": "MIT", "license": "MIT",
"scripts": { "scripts": {
"test": "(tstest test/ --verbose --logfile --timeout 120)", "test": "(tstest test/ --verbose --logfile --timeout 120)",
"build": "(tsbuild --web --allowimplicitany)" "build": "(tsbuild tsfolders --allowimplicitany)"
}, },
"devDependencies": { "devDependencies": {
"@git.zone/tsbuild": "^3.1.0", "@git.zone/tsbuild": "^4.3.0",
"@git.zone/tsrun": "^2.0.0", "@git.zone/tsrun": "^2.0.1",
"@git.zone/tstest": "^3.0.1", "@git.zone/tstest": "^3.3.2",
"@push.rocks/qenv": "^6.1.3", "@push.rocks/qenv": "^6.1.3",
"@push.rocks/tapbundle": "^6.0.3" "@push.rocks/tapbundle": "^6.0.3",
"@types/node": "^22.15.29"
}, },
"dependencies": { "dependencies": {
"@aws-sdk/client-s3": "^3.936.0", "@aws-sdk/client-s3": "^3.1009.0",
"@push.rocks/smartmime": "^2.0.4", "@push.rocks/smartmime": "^2.0.4",
"@push.rocks/smartpath": "^6.0.0", "@push.rocks/smartpath": "^6.0.0",
"@push.rocks/smartpromise": "^4.2.3", "@push.rocks/smartpromise": "^4.2.3",
"@push.rocks/smartrx": "^3.0.10", "@push.rocks/smartrx": "^3.0.10",
"@push.rocks/smartstream": "^3.2.5", "@push.rocks/smartstream": "^3.4.0",
"@push.rocks/smartstring": "^4.1.0", "@push.rocks/smartstring": "^4.1.0",
"@push.rocks/smartunique": "^3.0.9", "@push.rocks/smartunique": "^3.0.9",
"@tsclass/tsclass": "^9.3.0", "@tsclass/tsclass": "^9.4.0",
"minimatch": "^10.1.1" "minimatch": "^10.2.4"
}, },
"private": false, "private": false,
"files": [ "files": [

pnpm-lock.yaml (generated)

File diff suppressed because it is too large.

pnpm-workspace.yaml (deleted)

@@ -1,4 +0,0 @@
-onlyBuiltDependencies:
-  - esbuild
-  - mongodb-memory-server
-  - puppeteer

readme.hints.md

@@ -3,3 +3,4 @@
 * **Use exists() methods to check before getting**: `bucketExists`, `fileExists`, `directoryExists`, `fastExists`
 * **No *Strict methods**: All removed (fastPutStrict, getBucketByNameStrict, getFileStrict, getSubDirectoryByNameStrict)
 * metadata is handled through the MetaData class. Important!
+* **BucketWatcher** - Polling-based S3 change watcher (`bucket.createWatcher()`). Detects add/modify/delete via ETag/Size/LastModified comparison. Supports RxJS Observable pattern (`changeSubject`) and EventEmitter pattern (`on('change')`). Options: `prefix`, `pollIntervalMs`, `bufferTimeMs`, `includeInitial`, `pageSize`.
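The `bufferTimeMs` option noted above batches change events before emitting them. A minimal sketch of that batching idea, using a hypothetical `ChangeBuffer` helper standing in for the watcher's internal buffering (not the library's actual implementation):

```typescript
// Hypothetical sketch of bufferTimeMs-style batching: changes accumulate
// and are emitted as one array per flush window instead of one at a time.
type TChangeType = 'add' | 'modify' | 'delete';

interface IChange {
  type: TChangeType;
  key: string;
}

class ChangeBuffer {
  private pending: IChange[] = [];

  constructor(private emit: (batch: IChange[]) => void) {}

  // Called once per detected change.
  push(change: IChange): void {
    this.pending.push(change);
  }

  // In the real watcher, a timer would invoke this every bufferTimeMs.
  flush(): void {
    if (this.pending.length === 0) return; // nothing to emit
    this.emit(this.pending);
    this.pending = [];
  }
}
```

Subscribers then receive arrays of events per window, matching the readme's note that buffered watchers emit batches rather than single changes.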

readme.md

@@ -1,20 +1,25 @@
-# @push.rocks/smartbucket 🪣
+# @push.rocks/smartbucket
-> A powerful, cloud-agnostic TypeScript library for object storage that makes S3 feel like a modern filesystem. Built for developers who demand simplicity, type-safety, and advanced features like metadata management, file locking, intelligent trash handling, and memory-efficient streaming.
+A powerful, cloud-agnostic TypeScript library for object storage that makes S3-compatible storage feel like a modern filesystem. Built for developers who demand simplicity, type-safety, and advanced features like real-time bucket watching, metadata management, file locking, intelligent trash handling, and memory-efficient streaming.
-## Why SmartBucket? 🎯
+## Issue Reporting and Security
-- **🌍 Cloud Agnostic** - Write once, run on AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, or any S3-compatible storage
-- **🚀 Modern TypeScript** - First-class TypeScript support with complete type definitions and async/await patterns
-- **💾 Memory Efficient** - Handle millions of files with async generators, RxJS observables, and cursor pagination
-- **🗑️ Smart Trash System** - Recover accidentally deleted files with built-in trash and restore functionality
-- **🔒 File Locking** - Prevent concurrent modifications with built-in locking mechanisms
-- **🏷️ Rich Metadata** - Attach custom metadata to any file for powerful organization and search
-- **🌊 Streaming Support** - Efficient handling of large files with Node.js and Web streams
-- **📁 Directory-like API** - Intuitive filesystem-like operations on object storage
-- **⚡ Fail-Fast** - Strict-by-default API catches errors immediately with precise stack traces
+For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
-## Quick Start 🚀
+## Why SmartBucket?
+- **Cloud Agnostic** - Write once, run on AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, Cloudflare R2, or any S3-compatible storage
+- **Modern TypeScript** - First-class TypeScript support with complete type definitions and async/await patterns
+- **Real-Time Watching** - Monitor bucket changes with polling-based watcher supporting RxJS and EventEmitter patterns
+- **Memory Efficient** - Handle millions of files with async generators, RxJS observables, and cursor pagination
+- **Smart Trash System** - Recover accidentally deleted files with built-in trash and restore functionality
+- **File Locking** - Prevent concurrent modifications with built-in locking mechanisms
+- **Rich Metadata** - Attach custom metadata to any file for powerful organization and search
+- **Streaming Support** - Efficient handling of large files with Node.js and Web streams
+- **Directory-like API** - Intuitive filesystem-like operations on object storage
+- **Fail-Fast** - Strict-by-default API catches errors immediately with precise stack traces
+## Quick Start
 ```typescript
 import { SmartBucket } from '@push.rocks/smartbucket';
@@ -39,15 +44,22 @@ await bucket.fastPut({
 // Download it back
 const data = await bucket.fastGet({ path: 'users/profile.json' });
-console.log('📄', JSON.parse(data.toString()));
+console.log(JSON.parse(data.toString()));
 // List files efficiently (even with millions of objects!)
 for await (const key of bucket.listAllObjects('users/')) {
-  console.log('🔍 Found:', key);
+  console.log('Found:', key);
 }
+// Watch for changes in real-time
+const watcher = bucket.createWatcher({ prefix: 'uploads/', pollIntervalMs: 3000 });
+watcher.changeSubject.subscribe((change) => {
+  console.log('Change detected:', change.type, change.key);
+});
+await watcher.start();
 ```
-## Install 📦
+## Install
 ```bash
 # Using pnpm (recommended)
@@ -57,23 +69,24 @@ pnpm add @push.rocks/smartbucket
 npm install @push.rocks/smartbucket --save
 ```
-## Usage 🚀
+## Usage
 ### Table of Contents
-1. [🏁 Getting Started](#-getting-started)
-2. [🗂️ Working with Buckets](#-working-with-buckets)
-3. [📁 File Operations](#-file-operations)
-4. [📋 Memory-Efficient Listing](#-memory-efficient-listing)
-5. [📂 Directory Management](#-directory-management)
-6. [🌊 Streaming Operations](#-streaming-operations)
-7. [🔒 File Locking](#-file-locking)
-8. [🏷️ Metadata Management](#-metadata-management)
-9. [🗑️ Trash & Recovery](#-trash--recovery)
-10. [⚡ Advanced Features](#-advanced-features)
-11. [☁️ Cloud Provider Support](#-cloud-provider-support)
+1. [Getting Started](#getting-started)
+2. [Working with Buckets](#working-with-buckets)
+3. [File Operations](#file-operations)
+4. [Memory-Efficient Listing](#memory-efficient-listing)
+5. [Bucket Watching](#bucket-watching)
+6. [Directory Management](#directory-management)
+7. [Streaming Operations](#streaming-operations)
+8. [File Locking](#file-locking)
+9. [Metadata Management](#metadata-management)
+10. [Trash & Recovery](#trash--recovery)
+11. [Advanced Features](#advanced-features)
+12. [Cloud Provider Support](#cloud-provider-support)
-### 🏁 Getting Started
+### Getting Started
 First, set up your storage connection:
@@ -91,7 +104,7 @@ const smartBucket = new SmartBucket({
 });
 ```
-**For MinIO or self-hosted S3:**
+**For MinIO or self-hosted storage:**
 ```typescript
 const smartBucket = new SmartBucket({
   accessKey: 'minioadmin',
@@ -102,14 +115,13 @@ const smartBucket = new SmartBucket({
 });
 ```
-### 🗂️ Working with Buckets
+### Working with Buckets
 #### Creating Buckets
 ```typescript
 // Create a new bucket
 const myBucket = await smartBucket.createBucket('my-awesome-bucket');
-console.log(`✅ Bucket created: ${myBucket.name}`);
 ```
 #### Getting Existing Buckets
@@ -121,7 +133,6 @@ const existingBucket = await smartBucket.getBucketByName('existing-bucket');
 // Check first, then get (non-throwing approach)
 if (await smartBucket.bucketExists('maybe-exists')) {
   const bucket = await smartBucket.getBucketByName('maybe-exists');
-  console.log('✅ Found bucket:', bucket.name);
 }
 ```
@@ -130,10 +141,9 @@ if (await smartBucket.bucketExists('maybe-exists')) {
 ```typescript
 // Delete a bucket (must be empty)
 await smartBucket.removeBucket('old-bucket');
-console.log('🗑️ Bucket removed');
 ```
-### 📁 File Operations
+### File Operations
 #### Upload Files
@@ -145,7 +155,6 @@ const file = await bucket.fastPut({
   path: 'documents/report.pdf',
   contents: Buffer.from('Your file content here')
 });
-console.log('✅ Uploaded:', file.path);
 // Upload with string content
 await bucket.fastPut({
@@ -167,9 +176,7 @@ try {
     contents: 'new content'
   });
 } catch (error) {
   console.error('Upload failed:', error.message);
-  // Error: Object already exists at path 'existing-file.txt' in bucket 'my-bucket'.
-  // Set overwrite:true to replace it.
 }
 ```
@@ -180,7 +187,6 @@ try {
 const fileContent = await bucket.fastGet({
   path: 'documents/report.pdf'
 });
-console.log(`📄 File size: ${fileContent.length} bytes`);
 // Get file as string
 const textContent = fileContent.toString('utf-8');
@@ -195,7 +201,6 @@ const jsonData = JSON.parse(fileContent.toString());
 const exists = await bucket.fastExists({
   path: 'documents/report.pdf'
 });
-console.log(`File exists: ${exists ? '✅' : '❌'}`);
 ```
 #### Delete Files
@@ -205,7 +210,6 @@ console.log(`File exists: ${exists ? '✅' : '❌'}`);
 await bucket.fastRemove({
   path: 'old-file.txt'
 });
-console.log('🗑️ File deleted permanently');
 ```
 #### Copy & Move Files
@@ -216,28 +220,34 @@ await bucket.fastCopy({
   sourcePath: 'original/file.txt',
   destinationPath: 'backup/file-copy.txt'
 });
-console.log('📋 File copied');
 // Move file (copy + delete original)
 await bucket.fastMove({
   sourcePath: 'temp/draft.txt',
   destinationPath: 'final/document.txt'
 });
-console.log('📦 File moved');
+// Copy to different bucket
+const targetBucket = await smartBucket.getBucketByName('backup-bucket');
+await bucket.fastCopy({
+  sourcePath: 'important/data.json',
+  destinationPath: 'archived/data.json',
+  targetBucket: targetBucket
+});
 ```
-### 📋 Memory-Efficient Listing
+### Memory-Efficient Listing
 SmartBucket provides three powerful patterns for listing objects, optimized for handling **millions of files** efficiently:
 #### Async Generators (Recommended)
 Memory-efficient streaming using native JavaScript async iteration:
 ```typescript
 // List all objects with prefix - streams one at a time!
 for await (const key of bucket.listAllObjects('documents/')) {
-  console.log(`📄 Found: ${key}`);
+  console.log('Found:', key);
   // Process each file individually (memory efficient!)
   const content = await bucket.fastGet({ path: key });
@@ -247,39 +257,26 @@ for await (const key of bucket.listAllObjects('documents/')) {
   if (shouldStop()) break;
 }
-// List all objects (no prefix)
-const allKeys: string[] = [];
-for await (const key of bucket.listAllObjects()) {
-  allKeys.push(key);
-}
 // Find objects matching glob patterns
 for await (const key of bucket.findByGlob('**/*.json')) {
-  console.log(`📦 JSON file: ${key}`);
+  console.log('JSON file:', key);
 }
 // Complex glob patterns
 for await (const key of bucket.findByGlob('npm/packages/*/index.json')) {
-  // Matches: npm/packages/foo/index.json, npm/packages/bar/index.json
-  console.log(`📦 Package index: ${key}`);
+  console.log('Package index:', key);
 }
-// More glob examples
-for await (const key of bucket.findByGlob('logs/**/*.log')) {
-  console.log('📋 Log file:', key);
-}
 for await (const key of bucket.findByGlob('images/*.{jpg,png,gif}')) {
-  console.log('🖼️ Image:', key);
+  console.log('Image:', key);
 }
 ```
 **Why use async generators?**
 - Processes one item at a time (constant memory usage)
 - Supports early termination with `break`
 - Native JavaScript - no dependencies
 - Perfect for large buckets with millions of objects
-- ✅ Works seamlessly with `for await...of` loops
 #### RxJS Observables
@@ -296,16 +293,9 @@ bucket.listAllObjectsObservable('logs/')
     map(key => ({ key, timestamp: Date.now() }))
   )
   .subscribe({
-    next: (item) => console.log(`📋 Log file: ${item.key}`),
+    next: (item) => console.log('Log file:', item.key),
     error: (err) => console.error('Error:', err),
     complete: () => console.log('Listing complete')
-  });
-// Simple subscription without operators
-bucket.listAllObjectsObservable('data/')
-  .subscribe({
-    next: (key) => processKey(key),
-    complete: () => console.log('✅ Done')
   });
 // Combine with other observables
@@ -316,15 +306,9 @@ const backups$ = bucket.listAllObjectsObservable('backups/');
 merge(logs$, backups$)
   .pipe(filter(key => key.includes('2024')))
-  .subscribe(key => console.log('📅 2024 file:', key));
+  .subscribe(key => console.log('2024 file:', key));
 ```
-**Why use observables?**
-- ✅ Rich operator ecosystem (filter, map, debounce, etc.)
-- ✅ Composable with other RxJS streams
-- ✅ Perfect for reactive architectures
-- ✅ Great for complex transformations
 #### Cursor Pattern
 Explicit pagination control for UI and resumable operations:
@@ -336,16 +320,13 @@ const cursor = bucket.createCursor('uploads/', { pageSize: 100 });
 // Fetch pages manually
 while (cursor.hasMore()) {
   const page = await cursor.next();
-  console.log(`📄 Page has ${page.keys.length} items`);
+  console.log(`Page has ${page.keys.length} items`);
   for (const key of page.keys) {
     console.log(`  - ${key}`);
   }
-  if (page.done) {
-    console.log('✅ Reached end');
-    break;
-  }
+  if (page.done) break;
 }
 // Save and restore cursor state (perfect for resumable operations!)
@@ -355,44 +336,133 @@ const token = cursor.getToken();
 // ... later, in a different request ...
 const newCursor = bucket.createCursor('uploads/', { pageSize: 100 });
 newCursor.setToken(token); // Resume from saved position!
-const nextPage = await cursor.next();
+const nextPage = await newCursor.next();
 // Reset cursor to start over
 cursor.reset();
-const firstPage = await cursor.next(); // Back to the beginning
 ```
-**Why use cursors?**
-- ✅ Perfect for UI pagination (prev/next buttons)
-- ✅ Save/restore state for resumable operations
-- ✅ Explicit control over page fetching
-- ✅ Great for implementing "Load More" buttons
 #### Convenience Methods
 ```typescript
-// Collect all keys into array (⚠️ WARNING: loads everything into memory!)
+// Collect all keys into array (WARNING: loads everything into memory!)
 const allKeys = await bucket.listAllObjectsArray('images/');
-console.log(`📦 Found ${allKeys.length} images`);
+console.log(`Found ${allKeys.length} images`);
-// Only use for small result sets
-const smallList = await bucket.listAllObjectsArray('config/');
-if (smallList.length < 100) {
-  // Safe to process in memory
-  smallList.forEach(key => console.log(key));
-}
 ```
 **Performance Comparison:**
 | Method | Memory Usage | Best For | Supports Early Exit |
 |--------|-------------|----------|-------------------|
 | **Async Generator** | O(1) - constant | Most use cases, large datasets | Yes |
 | **Observable** | O(1) - constant | Reactive pipelines, RxJS apps | Yes |
 | **Cursor** | O(pageSize) | UI pagination, resumable ops | Yes |
 | **Array** | O(n) - grows with results | Small datasets (<10k items) | No |
-### 📂 Directory Management
+### Bucket Watching
+Monitor your storage bucket for changes in real-time with the powerful `BucketWatcher`:
+```typescript
+// Create a watcher for a specific prefix
+const watcher = bucket.createWatcher({
+  prefix: 'uploads/',       // Watch files with this prefix
+  pollIntervalMs: 3000,     // Check every 3 seconds
+  includeInitial: false,    // Don't emit existing files on start
+});
+// RxJS Observable pattern (recommended for reactive apps)
+watcher.changeSubject.subscribe((change) => {
+  if (change.type === 'add') {
+    console.log('New file:', change.key);
+  } else if (change.type === 'modify') {
+    console.log('Modified:', change.key);
+  } else if (change.type === 'delete') {
+    console.log('Deleted:', change.key);
+  }
+});
+// EventEmitter pattern (classic Node.js style)
+watcher.on('change', (change) => {
+  console.log(`${change.type}: ${change.key}`);
+});
+watcher.on('error', (err) => {
+  console.error('Watcher error:', err);
+});
+// Start watching
+await watcher.start();
+// Wait until watcher is ready (initial state built)
+await watcher.readyDeferred.promise;
+console.log('Watcher is now monitoring the bucket');
+// ... your application runs ...
+// Stop watching when done
+await watcher.stop();
+// Or use the alias:
+await watcher.close();
+```
+#### Watcher Options
+```typescript
+interface IBucketWatcherOptions {
+  prefix?: string;          // Filter objects by prefix (default: '' = all)
+  pollIntervalMs?: number;  // Polling interval in ms (default: 5000)
+  bufferTimeMs?: number;    // Buffer events before emitting (for batching)
+  includeInitial?: boolean; // Emit existing files as 'add' on start (default: false)
+  pageSize?: number;        // Objects per page when listing (default: 1000)
+}
+```
+#### Buffered Events
+For high-frequency change environments, buffer events to reduce processing overhead:
+```typescript
+const watcher = bucket.createWatcher({
+  prefix: 'high-traffic/',
+  pollIntervalMs: 1000,
+  bufferTimeMs: 2000,  // Collect events for 2 seconds before emitting
+});
+// Receive batched events as arrays
+watcher.changeSubject.subscribe((changes) => {
+  if (Array.isArray(changes)) {
+    console.log(`Batch of ${changes.length} changes:`);
+    changes.forEach(c => console.log(`  - ${c.type}: ${c.key}`));
+  }
+});
+await watcher.start();
+```
+#### Change Event Structure
+```typescript
+interface IStorageChangeEvent {
+  type: 'add' | 'modify' | 'delete';
+  key: string;             // Object key (path)
+  bucket: string;          // Bucket name
+  size?: number;           // File size (not present for deletes)
+  etag?: string;           // ETag hash (not present for deletes)
+  lastModified?: Date;     // Last modified date (not present for deletes)
+}
+```
+#### Watch Use Cases
+- **Sync systems** - Detect changes to trigger synchronization
+- **Analytics** - Track file uploads/modifications in real-time
+- **Notifications** - Alert users when their files are ready
+- **Processing pipelines** - Trigger workflows on new file uploads
+- **Backup systems** - Detect changes for incremental backups
+- **Audit logs** - Track all bucket activity
+### Directory Management
 SmartBucket provides powerful directory-like operations for organizing your files:
@@ -404,8 +474,8 @@ const baseDir = await bucket.getBaseDirectory();
 const directories = await baseDir.listDirectories();
 const files = await baseDir.listFiles();
-console.log(`📁 Found ${directories.length} directories`);
-console.log(`📄 Found ${files.length} files`);
+console.log(`Found ${directories.length} directories`);
+console.log(`Found ${files.length} files`);
 // Navigate subdirectories
 const subDir = await baseDir.getSubDirectoryByName('projects/2024');
@@ -418,16 +488,19 @@ await subDir.fastPut({
 // Get directory tree structure
 const tree = await subDir.getTreeArray();
-console.log('🌳 Directory tree:', tree);
 // Get directory path
-console.log('📂 Base path:', subDir.getBasePath()); // "projects/2024/"
+console.log('Base path:', subDir.getBasePath()); // "projects/2024/"
 // Create empty file as placeholder
 await subDir.createEmptyFile('placeholder.txt');
+// Check existence
+const fileExists = await subDir.fileExists({ path: 'report.pdf' });
+const dirExists = await baseDir.directoryExists('projects');
 ```
-### 🌊 Streaming Operations
+### Streaming Operations
 Handle large files efficiently with streaming support:
@@ -470,16 +543,12 @@ import * as fs from 'node:fs';
const readStream = fs.createReadStream('big-data.csv');

await bucket.fastPutStream({
  path: 'uploads/big-data.csv',
  readableStream: readStream,
  nativeMetadata: {
    'content-type': 'text/csv',
    'x-custom-header': 'my-value'
  }
});
```
#### Reactive Streams with RxJS
@@ -487,14 +556,13 @@ console.log('✅ Large file uploaded via stream');
```typescript
// Get file as ReplaySubject for reactive programming
const replaySubject = await bucket.fastGetReplaySubject({
  path: 'data/sensor-readings.json'
});

// Multiple subscribers can consume the same data
replaySubject.subscribe({
  next: (chunk) => processChunk(chunk),
  complete: () => console.log('Stream complete')
});

replaySubject.subscribe({
@@ -502,150 +570,141 @@ replaySubject.subscribe({
});
```
### File Locking

Prevent concurrent modifications with built-in file locking:

```typescript
const baseDir = await bucket.getBaseDirectory();
const file = await baseDir.getFile({ path: 'important-config.json' });

// Lock file for 10 minutes
await file.lock({ timeoutMillis: 600000 });

// Check lock status via metadata
const metadata = await file.getMetaData();
const isLocked = await metadata.checkLocked();

// Get lock info
const lockInfo = await metadata.getLockInfo();
console.log(`Lock expires: ${new Date(lockInfo.expires)}`);

// Unlock when done
await file.unlock();

// Force unlock (even if locked by another process)
await file.unlock({ force: true });
```
**Lock use cases:**

- Prevent concurrent writes during critical updates
- Protect configuration files during deployment
- Coordinate distributed workers
- Ensure data consistency
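A common pattern for the coordination cases above is to scope a critical section between `lock()` and `unlock()`, releasing the lock even when the work throws. A sketch assuming only the `lock()`/`unlock()` calls shown above; the `withLock` helper and the `ILockable` stand-in are illustrative, not library APIs:

```typescript
// Local stand-in: any object with the lock()/unlock() shape shown above.
interface ILockable {
  lock(options: { timeoutMillis: number }): Promise<void>;
  unlock(): Promise<void>;
}

// Run a critical section while holding the lock; always unlock, even on error.
async function withLock<T>(
  file: ILockable,
  timeoutMillis: number,
  criticalSection: () => Promise<T>
): Promise<T> {
  await file.lock({ timeoutMillis });
  try {
    return await criticalSection();
  } finally {
    await file.unlock();
  }
}

// Exercised with a stub lockable that records the call order:
const calls: string[] = [];
const stub: ILockable = {
  lock: async () => { calls.push('lock'); },
  unlock: async () => { calls.push('unlock'); },
};
const result = await withLock(stub, 600000, async () => {
  calls.push('work');
  return 42;
});
```

A real SmartBucket `File` would satisfy the same shape, so the helper can wrap the config-update example above.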
### Metadata Management

Attach and manage rich metadata for your files:

```typescript
const baseDir = await bucket.getBaseDirectory();
const file = await baseDir.getFile({ path: 'document.pdf' });

// Get metadata handler
const metadata = await file.getMetaData();

// Store custom metadata (can be any JSON-serializable value)
await metadata.storeCustomMetaData({
  key: 'author',
  value: 'John Doe'
});

await metadata.storeCustomMetaData({
  key: 'tags',
  value: ['important', 'quarterly-report', '2024']
});

await metadata.storeCustomMetaData({
  key: 'workflow',
  value: { status: 'approved', approvedBy: 'jane@company.com' }
});

// Retrieve metadata
const author = await metadata.getCustomMetaData({ key: 'author' });

// Delete metadata
await metadata.deleteCustomMetaData({ key: 'workflow' });

// Check if file has any metadata
const hasMetadata = await file.hasMetaData();

// Get file type detection
const fileType = await metadata.getFileType({ useFileExtension: true });

// Get file type from magic bytes (more accurate)
const detectedType = await metadata.getFileType({ useMagicBytes: true });

// Get file size
const size = await metadata.getSizeInBytes();
```
**Metadata use cases:**

- Track file ownership and authorship
- Add tags and categories for search
- Store processing status or workflow state
- Enable rich querying and filtering
- Maintain audit trails
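For the tagging and filtering cases, client-side filtering over retrieved metadata is often enough. A sketch using plain objects as stand-ins for (path, tags) pairs read back via `getCustomMetaData`; the helper itself is illustrative, not a library API:

```typescript
// Stand-in for a file path paired with its 'tags' metadata entry,
// stored as a string array via storeCustomMetaData as shown above.
interface ITaggedFile {
  path: string;
  tags: string[];
}

// Return paths whose tags include every requested tag.
function filterByTags(files: ITaggedFile[], required: string[]): string[] {
  return files
    .filter((file) => required.every((tag) => file.tags.includes(tag)))
    .map((file) => file.path);
}

const files: ITaggedFile[] = [
  { path: 'q1.pdf', tags: ['important', 'quarterly-report'] },
  { path: 'notes.txt', tags: ['draft'] },
];
const matches = filterByTags(files, ['quarterly-report']);
```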
### Trash & Recovery

SmartBucket includes an intelligent trash system for safe file deletion and recovery:

```typescript
const baseDir = await bucket.getBaseDirectory();
const file = await baseDir.getFile({ path: 'important-data.xlsx' });

// Move to trash instead of permanent deletion
await file.delete({ mode: 'trash' });

// Permanent deletion (use with caution!)
await file.delete({ mode: 'permanent' });

// Access trash
const trash = await bucket.getTrash();
const trashDir = await trash.getTrashDir();
const trashedFiles = await trashDir.listFiles();

// Restore from trash
const trashedFile = await baseDir.getFile({
  path: 'important-data.xlsx',
  getFromTrash: true
});

await trashedFile.restore({ useOriginalPath: true });

// Or restore to a different location
await trashedFile.restore({
  toPath: 'recovered/important-data.xlsx'
});
```

**Trash features:**

- Recover accidentally deleted files
- Preserves original path in metadata
- Tracks deletion timestamp
- List and inspect trashed files
### Advanced Features

#### File Statistics

```typescript
// Get detailed file statistics
const stats = await bucket.fastStat({ path: 'document.pdf' });

console.log(`Size: ${stats.ContentLength} bytes`);
console.log(`Last modified: ${stats.LastModified}`);
console.log(`ETag: ${stats.ETag}`);
console.log(`Content type: ${stats.ContentType}`);
```
#### Magic Bytes Detection
@@ -658,30 +717,28 @@ const magicBytes = await bucket.getMagicBytes({
  path: 'mystery-file',
  length: 16
});

// Or from a File object
const baseDir = await bucket.getBaseDirectory();
const file = await baseDir.getFile({ path: 'image.jpg' });
const magic = await file.getMagicBytes({ length: 4 });

// Check file signatures
if (magic[0] === 0xFF && magic[1] === 0xD8) {
  console.log('This is a JPEG image');
} else if (magic[0] === 0x89 && magic[1] === 0x50) {
  console.log('This is a PNG image');
}
```
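The if/else chain above generalizes to a small signature table. A sketch; the signature bytes are well-known file headers, and the lookup helper is illustrative, not a library API:

```typescript
// Well-known leading-byte signatures; extend as needed.
const SIGNATURES: Array<{ name: string; bytes: number[] }> = [
  { name: 'jpeg', bytes: [0xff, 0xd8] },
  { name: 'png', bytes: [0x89, 0x50, 0x4e, 0x47] },
  { name: 'pdf', bytes: [0x25, 0x50, 0x44, 0x46] }, // "%PDF"
];

// Match the leading bytes of a buffer against the known signatures.
function detectByMagicBytes(magic: Uint8Array): string | undefined {
  return SIGNATURES.find((sig) =>
    sig.bytes.every((byte, index) => magic[index] === byte)
  )?.name;
}

// The buffer returned by getMagicBytes({ length: 4 }) can be passed directly.
const kind = detectByMagicBytes(new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d]));
```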
#### JSON Data Operations

```typescript
const baseDir = await bucket.getBaseDirectory();
const file = await baseDir.getFile({ path: 'config.json' });

// Read JSON data
const config = await file.getJsonData();

// Update JSON data
config.version = '2.0';
@@ -689,7 +746,6 @@ config.updated = new Date().toISOString();
config.features.push('newFeature');

await file.writeJsonData(config);
```
#### Directory & File Type Detection
@@ -700,35 +756,29 @@ const isDir = await bucket.isDirectory({ path: 'uploads/' });
// Check if path is a file
const isFile = await bucket.isFile({ path: 'uploads/document.pdf' });
```
#### Clean Bucket Contents

```typescript
// Remove all files and directories (use with caution!)
await bucket.cleanAllContents();
```
### Cloud Provider Support

SmartBucket works seamlessly with all major S3-compatible providers:

| Provider | Status | Notes |
|----------|--------|-------|
| **AWS S3** | Supported | Native S3 API |
| **MinIO** | Supported | Self-hosted, perfect for development |
| **DigitalOcean Spaces** | Supported | Cost-effective S3-compatible |
| **Backblaze B2** | Supported | Very affordable storage |
| **Wasabi** | Supported | High-performance hot storage |
| **Google Cloud Storage** | Supported | Via S3-compatible API |
| **Cloudflare R2** | Supported | Zero egress fees |
| **Any S3-compatible** | Supported | Works with any S3-compatible provider |

The library automatically handles provider quirks and optimizes operations for each platform while maintaining a consistent API.

**Configuration examples:**
@@ -768,9 +818,18 @@ const b2Storage = new SmartBucket({
  region: 'us-west-002',
  useSsl: true
});

// Cloudflare R2
const r2Storage = new SmartBucket({
  accessKey: process.env.R2_ACCESS_KEY_ID,
  accessSecret: process.env.R2_SECRET_ACCESS_KEY,
  endpoint: `${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  region: 'auto',
  useSsl: true
});
```
### Advanced Configuration

```typescript
// Environment-based configuration with @push.rocks/qenv
@@ -779,42 +838,40 @@ import { Qenv } from '@push.rocks/qenv';
const qenv = new Qenv('./', './.nogit/');

const smartBucket = new SmartBucket({
  accessKey: await qenv.getEnvVarOnDemand('S3_ACCESS_KEY'),
  accessSecret: await qenv.getEnvVarOnDemand('S3_SECRET'),
  endpoint: await qenv.getEnvVarOnDemand('S3_ENDPOINT'),
  port: parseInt(await qenv.getEnvVarOnDemand('S3_PORT')),
  useSsl: await qenv.getEnvVarOnDemand('S3_USE_SSL') === 'true',
  region: await qenv.getEnvVarOnDemand('S3_REGION')
});
```
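When configuration comes from the environment, it helps to fail fast on missing values before constructing the client. An illustrative sketch; the field names mirror the constructor options used throughout this README, but the validation helper is not part of the library:

```typescript
// Raw environment-style values, all possibly undefined.
interface IStorageEnv {
  accessKey?: string;
  accessSecret?: string;
  endpoint?: string;
  port?: string;
}

// Validate required fields and normalize types; throws listing what is missing.
function parseStorageConfig(env: IStorageEnv) {
  const missing = (['accessKey', 'accessSecret', 'endpoint'] as const)
    .filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing storage config: ${missing.join(', ')}`);
  }
  return {
    accessKey: env.accessKey!,
    accessSecret: env.accessSecret!,
    endpoint: env.endpoint!,
    port: env.port ? parseInt(env.port, 10) : 443, // assumed default
  };
}

const config = parseStorageConfig({
  accessKey: 'key',
  accessSecret: 'secret',
  endpoint: 's3.example.com',
});
```

The returned object can then be spread into the `SmartBucket` constructor shown above.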
### Testing

```bash
# Run all tests
pnpm test

# Run specific test file
pnpm tstest test/test.watcher.node.ts --verbose

# Run tests with log file
pnpm test --logfile
```
### Error Handling Best Practices

SmartBucket uses a **strict-by-default** approach - methods throw errors instead of returning null:

```typescript
// Check existence first
if (await bucket.fastExists({ path: 'file.txt' })) {
  const content = await bucket.fastGet({ path: 'file.txt' });
  process(content);
}

// Try/catch for expected failures
try {
  const file = await bucket.fastGet({ path: 'might-not-exist.txt' });
  process(file);
@@ -823,7 +880,7 @@ try {
  useDefault();
}

// Explicit overwrite control
try {
  await bucket.fastPut({
    path: 'existing-file.txt',
@@ -833,12 +890,9 @@ try {
} catch (error) {
  console.log('File already exists');
}
```
### Best Practices

1. **Always use strict mode** for critical operations to catch errors early
2. **Check existence first** with `fastExists()`, `bucketExists()`, etc. before operations
@@ -849,9 +903,9 @@ const content = await bucket.fastGet({ path: 'file.txt' }); // May throw!
7. **Lock files** during critical operations to prevent race conditions
8. **Use async generators** for listing large buckets to avoid memory issues
9. **Set explicit overwrite flags** to prevent accidental file overwrites
10. **Use the watcher** for real-time synchronization and event-driven architectures
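Practices 2 and 9 can be combined into one guarded write. A sketch exercised against an in-memory stub; the `IMiniBucket` interface only mirrors the `fastExists`/`fastPut` calls shown earlier, and the `safePut` helper is illustrative, not a library API:

```typescript
// Local stand-in mirroring the fastExists/fastPut shapes used in this README.
interface IMiniBucket {
  fastExists(args: { path: string }): Promise<boolean>;
  fastPut(args: { path: string; contents: string; overwrite?: boolean }): Promise<void>;
}

// Write only when the path is free, unless overwrite is explicitly requested.
async function safePut(
  bucket: IMiniBucket,
  path: string,
  contents: string,
  overwrite = false
): Promise<boolean> {
  if (!overwrite && (await bucket.fastExists({ path }))) {
    return false; // refuse silent overwrites
  }
  await bucket.fastPut({ path, contents, overwrite });
  return true;
}

// In-memory stub for demonstration:
const store = new Map<string, string>();
const stubBucket: IMiniBucket = {
  fastExists: async ({ path }) => store.has(path),
  fastPut: async ({ path, contents }) => { store.set(path, contents); },
};
const first = await safePut(stubBucket, 'a.txt', 'v1');
const second = await safePut(stubBucket, 'a.txt', 'v2');
```

A real `Bucket` satisfies the same two calls, so the helper transfers directly.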
### Performance Tips

- **Listing**: Use async generators or cursors for buckets with >10,000 objects
- **Uploads**: Use streams for files >100MB
@@ -859,22 +913,25 @@ const content = await bucket.fastGet({ path: 'file.txt' }); // May throw!
- **Metadata**: Cache metadata when reading frequently
- **Locking**: Keep lock durations as short as possible
- **Glob patterns**: Be specific to reduce objects scanned
- **Watching**: Use appropriate `pollIntervalMs` based on your change frequency
## License and Legal Information

This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the [LICENSE](./LICENSE) file.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

### Trademarks

This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.

Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.

### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany

For any legal inquiries or further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.


@@ -14,16 +14,16 @@ let testSmartbucket: smartbucket.SmartBucket;
// Setup: Create test bucket and populate with test data
tap.test('should create valid smartbucket and bucket', async () => {
  testSmartbucket = new smartbucket.SmartBucket({
    accessKey: await testQenv.getEnvVarOnDemand('S3_ACCESSKEY') as string,
    accessSecret: await testQenv.getEnvVarOnDemand('S3_SECRETKEY') as string,
    endpoint: await testQenv.getEnvVarOnDemand('S3_ENDPOINT') as string,
    port: parseInt(await testQenv.getEnvVarOnDemand('S3_PORT') as string),
    useSsl: false,
  });
  testBucket = await smartbucket.Bucket.getBucketByName(
    testSmartbucket,
    await testQenv.getEnvVarOnDemand('S3_BUCKET') as string
  );
  expect(testBucket).toBeInstanceOf(smartbucket.Bucket);
});


@@ -3,7 +3,7 @@ import { expect, tap } from '@git.zone/tstest/tapbundle';
import * as plugins from '../ts/plugins.js';
import * as smartbucket from '../ts/index.js';

class FakeStorageClient {
  private callIndex = 0;

  constructor(private readonly pages: Array<Partial<plugins.s3.ListObjectsV2Output>>) {}
@@ -30,7 +30,7 @@ tap.test('MetaData.hasMetaData should return false when metadata file does not e
});

tap.test('getSubDirectoryByName should create correct parent chain for new nested directories', async () => {
  const fakeSmartbucket = { storageClient: new FakeStorageClient([{ Contents: [], CommonPrefixes: [] }]) } as unknown as smartbucket.SmartBucket;
  const bucket = new smartbucket.Bucket(fakeSmartbucket, 'test-bucket');
  const baseDirectory = new smartbucket.Directory(bucket, null as any, '');
@@ -51,7 +51,7 @@ tap.test('listFiles should aggregate results across paginated ListObjectsV2 resp
    Contents: Array.from({ length: 200 }, (_, index) => ({ Key: `file-${1000 + index}` })),
    IsTruncated: false,
  };
  const fakeSmartbucket = { storageClient: new FakeStorageClient([firstPage, secondPage]) } as unknown as smartbucket.SmartBucket;
  const bucket = new smartbucket.Bucket(fakeSmartbucket, 'test-bucket');
  const baseDirectory = new smartbucket.Directory(bucket, null as any, '');
@@ -61,7 +61,7 @@ tap.test('listFiles should aggregate results across paginated ListObjectsV2 resp
tap.test('listDirectories should aggregate CommonPrefixes across pagination', async () => {
  const fakeSmartbucket = {
    storageClient: new FakeStorageClient([
      { CommonPrefixes: [{ Prefix: 'dirA/' }], IsTruncated: true, NextContinuationToken: 'token-1' },
      { CommonPrefixes: [{ Prefix: 'dirB/' }], IsTruncated: false },
    ]),

test/test.watcher.node.ts Normal file

@@ -0,0 +1,410 @@
// test.watcher.node.ts - Tests for BucketWatcher
import { tap, expect } from '@git.zone/tstest/tapbundle';
import * as smartbucket from '../ts/index.js';
import type { IStorageChangeEvent } from '../ts/interfaces.js';
// Get test configuration
import * as qenv from '@push.rocks/qenv';
const testQenv = new qenv.Qenv('./', './.nogit/');
// Test bucket reference
let testBucket: smartbucket.Bucket;
let testSmartbucket: smartbucket.SmartBucket;
// Setup: Create test bucket
tap.test('should create valid smartbucket and bucket', async () => {
testSmartbucket = new smartbucket.SmartBucket({
accessKey: await testQenv.getEnvVarOnDemand('S3_ACCESSKEY'),
accessSecret: await testQenv.getEnvVarOnDemand('S3_SECRETKEY'),
endpoint: await testQenv.getEnvVarOnDemand('S3_ENDPOINT'),
port: parseInt(await testQenv.getEnvVarOnDemand('S3_PORT')),
useSsl: false,
});
testBucket = await smartbucket.Bucket.getBucketByName(
testSmartbucket,
await testQenv.getEnvVarOnDemand('S3_BUCKET')
);
expect(testBucket).toBeInstanceOf(smartbucket.Bucket);
});
tap.test('should clean bucket for watcher tests', async () => {
await testBucket.cleanAllContents();
});
// ==========================
// Basic Watcher Tests
// ==========================
tap.test('should create watcher with default options', async () => {
const watcher = testBucket.createWatcher();
expect(watcher).toBeInstanceOf(smartbucket.BucketWatcher);
expect(watcher.changeSubject).toBeDefined();
});
tap.test('should create watcher with custom options', async () => {
const watcher = testBucket.createWatcher({
prefix: 'test/',
pollIntervalMs: 2000,
includeInitial: true,
});
expect(watcher).toBeInstanceOf(smartbucket.BucketWatcher);
});
// ==========================
// Add Event Detection Tests
// ==========================
tap.test('should detect add events for new files', async () => {
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-test/',
pollIntervalMs: 500,
});
watcher.changeSubject.subscribe((event) => {
if (!Array.isArray(event)) {
events.push(event);
}
});
await watcher.start();
await watcher.readyDeferred.promise;
// Create a new file
await testBucket.fastPut({
path: 'watcher-test/new-file.txt',
contents: 'test content',
});
// Wait for poll to detect the change
await new Promise((resolve) => setTimeout(resolve, 1200));
await watcher.stop();
expect(events.length).toBeGreaterThanOrEqual(1);
const addEvent = events.find((e) => e.type === 'add' && e.key === 'watcher-test/new-file.txt');
expect(addEvent).toBeDefined();
expect(addEvent!.bucket).toEqual(testBucket.name);
});
// ==========================
// Modify Event Detection Tests
// ==========================
tap.test('should detect modify events for changed files', async () => {
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-test/',
pollIntervalMs: 500,
});
watcher.changeSubject.subscribe((event) => {
if (!Array.isArray(event)) {
events.push(event);
}
});
await watcher.start();
await watcher.readyDeferred.promise;
// Modify the file
await testBucket.fastPut({
path: 'watcher-test/new-file.txt',
contents: 'modified content with different size',
overwrite: true,
});
// Wait for poll to detect the change
await new Promise((resolve) => setTimeout(resolve, 1200));
await watcher.stop();
expect(events.length).toBeGreaterThanOrEqual(1);
const modifyEvent = events.find((e) => e.type === 'modify' && e.key === 'watcher-test/new-file.txt');
expect(modifyEvent).toBeDefined();
});
// ==========================
// Delete Event Detection Tests
// ==========================
tap.test('should detect delete events for removed files', async () => {
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-test/',
pollIntervalMs: 500,
});
watcher.changeSubject.subscribe((event) => {
if (!Array.isArray(event)) {
events.push(event);
}
});
await watcher.start();
await watcher.readyDeferred.promise;
// Delete the file
await testBucket.fastRemove({ path: 'watcher-test/new-file.txt' });
// Wait for poll to detect the change
await new Promise((resolve) => setTimeout(resolve, 1200));
await watcher.stop();
expect(events.length).toBeGreaterThanOrEqual(1);
const deleteEvent = events.find((e) => e.type === 'delete' && e.key === 'watcher-test/new-file.txt');
expect(deleteEvent).toBeDefined();
});
// ==========================
// Initial State Tests
// ==========================
tap.test('should emit initial state as add events when includeInitial is true', async () => {
// First, create some test files
await testBucket.fastPut({
path: 'watcher-initial/file1.txt',
contents: 'content 1',
});
await testBucket.fastPut({
path: 'watcher-initial/file2.txt',
contents: 'content 2',
});
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-initial/',
pollIntervalMs: 10000, // Long interval - we only care about initial events
includeInitial: true,
});
watcher.changeSubject.subscribe((event) => {
if (!Array.isArray(event)) {
events.push(event);
}
});
await watcher.start();
await watcher.readyDeferred.promise;
// Give a moment for events to propagate
await new Promise((resolve) => setTimeout(resolve, 100));
await watcher.stop();
expect(events.length).toEqual(2);
expect(events.every((e) => e.type === 'add')).toBeTrue();
expect(events.some((e) => e.key === 'watcher-initial/file1.txt')).toBeTrue();
expect(events.some((e) => e.key === 'watcher-initial/file2.txt')).toBeTrue();
});
// ==========================
// EventEmitter Pattern Tests
// ==========================
tap.test('should emit events via EventEmitter pattern', async () => {
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-emitter/',
pollIntervalMs: 500,
});
watcher.on('change', (event: IStorageChangeEvent) => {
events.push(event);
});
await watcher.start();
await watcher.readyDeferred.promise;
// Create a new file
await testBucket.fastPut({
path: 'watcher-emitter/emitter-test.txt',
contents: 'test content',
});
// Wait for poll to detect the change
await new Promise((resolve) => setTimeout(resolve, 1200));
await watcher.stop();
expect(events.length).toBeGreaterThanOrEqual(1);
expect(events[0].type).toEqual('add');
});
// ==========================
// Buffered Events Tests
// ==========================
tap.test('should buffer events when bufferTimeMs is set', async () => {
const bufferedEvents: (IStorageChangeEvent | IStorageChangeEvent[])[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-buffer/',
pollIntervalMs: 200,
bufferTimeMs: 1000,
});
watcher.changeSubject.subscribe((event) => {
bufferedEvents.push(event);
});
await watcher.start();
await watcher.readyDeferred.promise;
// Create multiple files quickly
await testBucket.fastPut({
path: 'watcher-buffer/file1.txt',
contents: 'content 1',
});
await testBucket.fastPut({
path: 'watcher-buffer/file2.txt',
contents: 'content 2',
});
await testBucket.fastPut({
path: 'watcher-buffer/file3.txt',
contents: 'content 3',
});
// Wait for buffer to emit
await new Promise((resolve) => setTimeout(resolve, 1500));
await watcher.stop();
// With buffering, events should come as arrays
expect(bufferedEvents.length).toBeGreaterThanOrEqual(1);
  // At least one emission should arrive as a buffered array (each array holds one or more events)
const hasBufferedArray = bufferedEvents.some(
(e) => Array.isArray(e) && e.length >= 1
);
expect(hasBufferedArray).toBeTrue();
});
// ==========================
// Error Handling Tests
// ==========================
tap.test('should emit error events on error', async () => {
const errors: Error[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-error/',
pollIntervalMs: 500,
});
watcher.on('error', (err: Error) => {
errors.push(err);
});
await watcher.start();
await watcher.readyDeferred.promise;
// Note: Triggering actual S3 errors is tricky in tests.
// We just verify the error handler is properly attached.
await watcher.stop();
// No errors expected in normal operation
expect(errors.length).toEqual(0);
});
// ==========================
// Graceful Stop Tests
// ==========================
tap.test('should stop gracefully with stop()', async () => {
const watcher = testBucket.createWatcher({
prefix: 'watcher-stop/',
pollIntervalMs: 100,
});
await watcher.start();
await watcher.readyDeferred.promise;
// Let it poll a few times
await new Promise((resolve) => setTimeout(resolve, 300));
// Stop should complete without errors
await watcher.stop();
// Watcher should not poll after stop
const eventsCaptured: IStorageChangeEvent[] = [];
watcher.on('change', (event: IStorageChangeEvent) => {
eventsCaptured.push(event);
});
// Create a file after stop
await testBucket.fastPut({
path: 'watcher-stop/after-stop.txt',
contents: 'should not be detected',
});
await new Promise((resolve) => setTimeout(resolve, 200));
// No events should be captured after stop
expect(eventsCaptured.length).toEqual(0);
});
tap.test('should stop gracefully with close() alias', async () => {
const watcher = testBucket.createWatcher({
prefix: 'watcher-close/',
pollIntervalMs: 100,
});
await watcher.start();
await watcher.readyDeferred.promise;
// close() should work as alias for stop()
await watcher.close();
});
// ==========================
// Prefix Filtering Tests
// ==========================
tap.test('should only detect changes within specified prefix', async () => {
const events: IStorageChangeEvent[] = [];
const watcher = testBucket.createWatcher({
prefix: 'watcher-prefix-a/',
pollIntervalMs: 500,
});
watcher.changeSubject.subscribe((event) => {
if (!Array.isArray(event)) {
events.push(event);
}
});
await watcher.start();
await watcher.readyDeferred.promise;
// Create file in watched prefix
await testBucket.fastPut({
path: 'watcher-prefix-a/watched.txt',
contents: 'watched content',
});
// Create file outside watched prefix
await testBucket.fastPut({
path: 'watcher-prefix-b/unwatched.txt',
contents: 'unwatched content',
});
// Wait for poll
await new Promise((resolve) => setTimeout(resolve, 1200));
await watcher.stop();
// Should only see the file in watcher-prefix-a/
expect(events.length).toEqual(1);
expect(events[0].key).toEqual('watcher-prefix-a/watched.txt');
});
// ==========================
// Cleanup
// ==========================
tap.test('should clean up test data', async () => {
await testBucket.cleanAllContents();
});
export default tap.start();


@@ -3,6 +3,6 @@
  */
 export const commitinfo = {
   name: '@push.rocks/smartbucket',
-  version: '4.3.0',
+  version: '4.5.1',
   description: 'A TypeScript library providing a cloud-agnostic interface for managing object storage with functionalities like bucket management, file and directory operations, and advanced features such as metadata handling and file locking.'
 }


@@ -8,16 +8,17 @@ import { Directory } from './classes.directory.js';
 import { File } from './classes.file.js';
 import { Trash } from './classes.trash.js';
 import { ListCursor, type IListCursorOptions } from './classes.listcursor.js';
+import { BucketWatcher } from './classes.watcher.js';

 /**
  * The bucket class exposes the basic functionality of a bucket.
  * The functions of the bucket alone are enough to
- * operate in S3 basic fashion on blobs of data.
+ * operate on blobs of data in an S3-compatible object store.
  */
 export class Bucket {
   public static async getBucketByName(smartbucketRef: SmartBucket, bucketNameArg: string): Promise<Bucket> {
     const command = new plugins.s3.ListBucketsCommand({});
-    const buckets = await smartbucketRef.s3Client.send(command);
+    const buckets = await smartbucketRef.storageClient.send(command);
     const foundBucket = buckets.Buckets!.find((bucket) => bucket.Name === bucketNameArg);

     if (foundBucket) {
@@ -31,13 +32,13 @@
   public static async createBucketByName(smartbucketRef: SmartBucket, bucketName: string) {
     const command = new plugins.s3.CreateBucketCommand({ Bucket: bucketName });
-    await smartbucketRef.s3Client.send(command).catch((e) => console.log(e));
+    await smartbucketRef.storageClient.send(command);
     return new Bucket(smartbucketRef, bucketName);
   }

   public static async removeBucketByName(smartbucketRef: SmartBucket, bucketName: string) {
     const command = new plugins.s3.DeleteBucketCommand({ Bucket: bucketName });
-    await smartbucketRef.s3Client.send(command).catch((e) => console.log(e));
+    await smartbucketRef.storageClient.send(command);
   }

   public smartbucketRef: SmartBucket;
@@ -111,7 +112,7 @@
       Key: reducedPath,
       Body: optionsArg.contents,
     });
-    await this.smartbucketRef.s3Client.send(command);
+    await this.smartbucketRef.storageClient.send(command);
     console.log(`Object '${reducedPath}' has been successfully stored in bucket '${this.name}'.`);

     const parsedPath = plugins.path.parse(reducedPath);
@@ -171,7 +172,7 @@
       Bucket: this.name,
       Key: optionsArg.path,
     });
-    const response = await this.smartbucketRef.s3Client.send(command);
+    const response = await this.smartbucketRef.storageClient.send(command);
     const replaySubject = new plugins.smartrx.rxjs.ReplaySubject<Buffer>();

     // Convert the stream to a format that supports piping
@@ -215,7 +216,7 @@
       Bucket: this.name,
       Key: optionsArg.path,
     });
-    const response = await this.smartbucketRef.s3Client.send(command);
+    const response = await this.smartbucketRef.storageClient.send(command);
     const stream = response.Body as any; // SdkStreamMixin includes readable stream

     const duplexStream = new plugins.smartstream.SmartDuplex<Buffer, Buffer>({
@@ -271,7 +272,7 @@
       Body: optionsArg.readableStream,
       Metadata: optionsArg.nativeMetadata,
     });
-    await this.smartbucketRef.s3Client.send(command);
+    await this.smartbucketRef.storageClient.send(command);
     console.log(
       `Object '${optionsArg.path}' has been successfully stored in bucket '${this.name}'.`
@@ -296,7 +297,7 @@
     const targetBucketName = optionsArg.targetBucket ? optionsArg.targetBucket.name : this.name;

     // Retrieve current object information to use in copy conditions
-    const currentObjInfo = await this.smartbucketRef.s3Client.send(
+    const currentObjInfo = await this.smartbucketRef.storageClient.send(
       new plugins.s3.HeadObjectCommand({
         Bucket: this.name,
         Key: optionsArg.sourcePath,
@@ -318,7 +319,7 @@
         Metadata: newNativeMetadata,
         MetadataDirective: optionsArg.deleteExistingNativeMetadata ? 'REPLACE' : 'COPY',
       });
-      await this.smartbucketRef.s3Client.send(command);
+      await this.smartbucketRef.storageClient.send(command);
     } catch (err) {
       console.error('Error updating metadata:', err);
       throw err; // rethrow to allow caller to handle
@@ -378,7 +379,7 @@
       Bucket: this.name,
       Key: optionsArg.path,
     });
-    await this.smartbucketRef.s3Client.send(command);
+    await this.smartbucketRef.storageClient.send(command);
   }

   /**
@@ -392,7 +393,7 @@
         Bucket: this.name,
         Key: optionsArg.path,
       });
-      await this.smartbucketRef.s3Client.send(command);
+      await this.smartbucketRef.storageClient.send(command);
       console.log(`Object '${optionsArg.path}' exists in bucket '${this.name}'.`);
       return true;
     } catch (error: any) {
@@ -410,7 +411,7 @@
    * deletes this bucket
    */
   public async delete() {
-    await this.smartbucketRef.s3Client.send(
+    await this.smartbucketRef.storageClient.send(
       new plugins.s3.DeleteBucketCommand({ Bucket: this.name })
     );
   }
@@ -421,7 +422,7 @@
       Bucket: this.name,
       Key: checkPath,
     });
-    return this.smartbucketRef.s3Client.send(command);
+    return this.smartbucketRef.storageClient.send(command);
   }

   public async isDirectory(pathDescriptor: interfaces.IPathDecriptor): Promise<boolean> {
@@ -431,7 +432,7 @@
       Prefix: checkPath,
       Delimiter: '/',
     });
-    const { CommonPrefixes } = await this.smartbucketRef.s3Client.send(command);
+    const { CommonPrefixes } = await this.smartbucketRef.storageClient.send(command);
     return !!CommonPrefixes && CommonPrefixes.length > 0;
   }
@@ -442,7 +443,7 @@
       Prefix: checkPath,
       Delimiter: '/',
     });
-    const { Contents } = await this.smartbucketRef.s3Client.send(command);
+    const { Contents } = await this.smartbucketRef.storageClient.send(command);
     return !!Contents && Contents.length > 0;
   }
@@ -453,7 +454,7 @@
       Key: optionsArg.path,
       Range: `bytes=0-${optionsArg.length - 1}`,
     });
-    const response = await this.smartbucketRef.s3Client.send(command);
+    const response = await this.smartbucketRef.storageClient.send(command);
     const chunks: Buffer[] = [];
     const stream = response.Body as any; // SdkStreamMixin includes readable stream
@@ -496,7 +497,7 @@
         ContinuationToken: continuationToken,
       });

-      const response = await this.smartbucketRef.s3Client.send(command);
+      const response = await this.smartbucketRef.storageClient.send(command);

       for (const obj of response.Contents || []) {
         if (obj.Key) yield obj.Key;
@@ -530,7 +531,7 @@
             ContinuationToken: token,
           });

-          const response = await this.smartbucketRef.s3Client.send(command);
+          const response = await this.smartbucketRef.storageClient.send(command);

           for (const obj of response.Contents || []) {
             if (obj.Key) subscriber.next(obj.Key);
@@ -568,6 +569,23 @@
     return new ListCursor(this, prefix, options);
   }

+  /**
+   * Create a watcher for monitoring bucket changes (add/modify/delete)
+   * @param options - Watcher options (prefix, pollIntervalMs, etc.)
+   * @returns BucketWatcher instance
+   * @example
+   * ```ts
+   * const watcher = bucket.createWatcher({ prefix: 'uploads/', pollIntervalMs: 3000 });
+   * watcher.changeSubject.subscribe((change) => console.log('Change:', change));
+   * await watcher.start();
+   * // ... later
+   * await watcher.stop();
+   * ```
+   */
+  public createWatcher(options?: interfaces.IBucketWatcherOptions): BucketWatcher {
+    return new BucketWatcher(this, options);
+  }
+
   // ==========================================
   // High-Level Listing Helpers (Phase 2)
   // ==========================================
@@ -628,7 +646,7 @@
       // Explicitly type the response
       const response: plugins.s3.ListObjectsV2Output =
-        await this.smartbucketRef.s3Client.send(listCommand);
+        await this.smartbucketRef.storageClient.send(listCommand);

       console.log(`Cleaning contents of bucket '${this.name}': Now deleting ${response.Contents?.length} items...`);
@@ -642,7 +660,7 @@
         },
       });
-      await this.smartbucketRef.s3Client.send(deleteCommand);
+      await this.smartbucketRef.storageClient.send(deleteCommand);
     }

     // Update continuation token and truncation status


@@ -135,7 +135,7 @@ export class Directory {
         Delimiter: delimiter,
         ContinuationToken: continuationToken,
       });
-      const response = await this.bucketRef.smartbucketRef.s3Client.send(command);
+      const response = await this.bucketRef.smartbucketRef.storageClient.send(command);

       if (response.Contents) {
         allContents.push(...response.Contents);
@@ -213,7 +213,7 @@ export class Directory {
       Prefix: this.getBasePath(),
       Delimiter: '/',
     });
-    const response = await this.bucketRef.smartbucketRef.s3Client.send(command);
+    const response = await this.bucketRef.smartbucketRef.storageClient.send(command);
     return response.Contents;
   }
@@ -222,12 +222,12 @@
    */
   public async getSubDirectoryByName(dirNameArg: string, optionsArg: {
     /**
-     * in s3 a directory does not exist if it is empty
+     * in object storage a directory does not exist if it is empty
      * this option returns a directory even if it is empty
      */
     getEmptyDirectory?: boolean;
     /**
-     * in s3 a directory does not exist if it is empty
+     * in object storage a directory does not exist if it is empty
      * this option creates a directory even if it is empty using a initializer file
      */
     createWithInitializerFile?: boolean;


@@ -12,7 +12,7 @@ export class File {
   /**
    * creates a file in draft mode
-   * you need to call .save() to store it in s3
+   * you need to call .save() to store it in object storage
    * @param optionsArg
    */
   public static async create(optionsArg: {


@@ -45,7 +45,7 @@ export class ListCursor {
       ContinuationToken: this.continuationToken,
     });

-    const response = await this.bucket.smartbucketRef.s3Client.send(command);
+    const response = await this.bucket.smartbucketRef.storageClient.send(command);

    const keys = (response.Contents || [])
      .map((obj) => obj.Key)


@@ -2,26 +2,28 @@
 import * as plugins from './plugins.js';
 import { Bucket } from './classes.bucket.js';
-import { normalizeS3Descriptor } from './helpers.js';
+import { normalizeStorageDescriptor } from './helpers.js';

 export class SmartBucket {
-  public config: plugins.tsclass.storage.IS3Descriptor;
-  public s3Client: plugins.s3.S3Client;
+  public config: plugins.tsclass.storage.IStorageDescriptor;
+  public storageClient: plugins.s3.S3Client;
+
+  /** @deprecated Use storageClient instead */
+  public get s3Client(): plugins.s3.S3Client {
+    return this.storageClient;
+  }

   /**
    * the constructor of SmartBucket
    */
-  /**
-   * the constructor of SmartBucket
-   */
-  constructor(configArg: plugins.tsclass.storage.IS3Descriptor) {
+  constructor(configArg: plugins.tsclass.storage.IStorageDescriptor) {
     this.config = configArg;

     // Use the normalizer to handle various endpoint formats
-    const { normalized } = normalizeS3Descriptor(configArg);
-    this.s3Client = new plugins.s3.S3Client({
+    const { normalized } = normalizeStorageDescriptor(configArg);
+    this.storageClient = new plugins.s3.S3Client({
       endpoint: normalized.endpointUrl,
       region: normalized.region,
       credentials: normalized.credentials,
@@ -47,7 +49,7 @@ export class SmartBucket {
    */
   public async bucketExists(bucketNameArg: string): Promise<boolean> {
     const command = new plugins.s3.ListBucketsCommand({});
-    const buckets = await this.s3Client.send(command);
+    const buckets = await this.storageClient.send(command);
     return buckets.Buckets?.some(bucket => bucket.Name === bucketNameArg) ?? false;
   }
 }
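The deprecated `s3Client` getter above is a back-compat shim: existing call sites keep compiling while `storageClient` becomes the canonical name. A standalone sketch of this rename-with-alias pattern (illustrative class only, not the library's actual client):

```typescript
// Minimal model of a field rename kept backward-compatible via a getter alias.
class ClientHolder {
  // New canonical field
  public storageClient = { endpoint: 'https://example.invalid' };

  /** @deprecated Use storageClient instead */
  public get s3Client() {
    // Forwarding getter: both names always refer to the same object.
    return this.storageClient;
  }
}

const holder = new ClientHolder();
console.log(holder.s3Client === holder.storageClient); // → true
```

Because the getter forwards rather than copying, replacing the client at runtime is observed through both names, so no state can drift between old and new call sites.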

ts/classes.watcher.ts (new file, 289 lines)

@@ -0,0 +1,289 @@
// classes.watcher.ts
import * as plugins from './plugins.js';
import * as interfaces from './interfaces.js';
import type { Bucket } from './classes.bucket.js';
import { EventEmitter } from 'node:events';
/**
* BucketWatcher monitors a storage bucket for changes (add/modify/delete)
* using a polling-based approach. Designed to follow the SmartdataDbWatcher pattern.
*
* @example
* ```typescript
* const watcher = bucket.createWatcher({ prefix: 'uploads/', pollIntervalMs: 3000 });
*
* // RxJS Observable pattern
* watcher.changeSubject.subscribe((change) => {
* console.log('Change:', change);
* });
*
* // EventEmitter pattern
* watcher.on('change', (change) => console.log(change));
* watcher.on('error', (err) => console.error(err));
*
* await watcher.start();
* await watcher.readyDeferred.promise; // Wait for initial state
*
* // Later...
* await watcher.stop();
* ```
*/
export class BucketWatcher extends EventEmitter {
/** Deferred that resolves when initial state is built and watcher is ready */
public readyDeferred = plugins.smartpromise.defer();
/** Observable for receiving change events (supports RxJS operators) */
public changeSubject: plugins.smartrx.rxjs.Observable<interfaces.IStorageChangeEvent | interfaces.IStorageChangeEvent[]>;
// Internal subjects and state
private rawSubject: plugins.smartrx.rxjs.Subject<interfaces.IStorageChangeEvent>;
private previousState: Map<string, interfaces.IStorageObjectState>;
private pollIntervalId: ReturnType<typeof setInterval> | null = null;
private isPolling = false;
private isStopped = false;
// Configuration
private readonly bucketRef: Bucket;
private readonly prefix: string;
private readonly pollIntervalMs: number;
private readonly bufferTimeMs?: number;
private readonly includeInitial: boolean;
private readonly pageSize: number;
constructor(bucketRef: Bucket, options: interfaces.IBucketWatcherOptions = {}) {
super();
this.bucketRef = bucketRef;
this.prefix = options.prefix ?? '';
this.pollIntervalMs = options.pollIntervalMs ?? 5000;
this.bufferTimeMs = options.bufferTimeMs;
this.includeInitial = options.includeInitial ?? false;
this.pageSize = options.pageSize ?? 1000;
// Initialize state tracking
this.previousState = new Map();
// Initialize raw subject for emitting changes
this.rawSubject = new plugins.smartrx.rxjs.Subject<interfaces.IStorageChangeEvent>();
// Configure the public observable with optional buffering
if (this.bufferTimeMs && this.bufferTimeMs > 0) {
this.changeSubject = this.rawSubject.pipe(
plugins.smartrx.rxjs.ops.bufferTime(this.bufferTimeMs),
plugins.smartrx.rxjs.ops.filter((events: interfaces.IStorageChangeEvent[]) => events.length > 0)
);
} else {
this.changeSubject = this.rawSubject.asObservable();
}
}
/**
* Start watching the bucket for changes
*/
public async start(): Promise<void> {
if (this.pollIntervalId !== null) {
console.log('BucketWatcher is already running');
return;
}
this.isStopped = false;
// Build initial state
await this.buildInitialState();
// Emit initial state as 'add' events if configured
if (this.includeInitial) {
for (const state of this.previousState.values()) {
this.emitChange({
type: 'add',
key: state.key,
size: state.size,
etag: state.etag,
lastModified: state.lastModified,
bucket: this.bucketRef.name,
});
}
}
// Mark as ready
this.readyDeferred.resolve();
// Start polling loop
this.pollIntervalId = setInterval(() => {
this.poll().catch((err) => {
this.emit('error', err);
});
}, this.pollIntervalMs);
}
/**
* Stop watching the bucket
*/
public async stop(): Promise<void> {
this.isStopped = true;
if (this.pollIntervalId !== null) {
clearInterval(this.pollIntervalId);
this.pollIntervalId = null;
}
// Wait for any in-progress poll to complete
while (this.isPolling) {
await new Promise<void>((resolve) => setTimeout(resolve, 50));
}
this.rawSubject.complete();
}
/**
* Alias for stop() - for consistency with other APIs
*/
public async close(): Promise<void> {
return this.stop();
}
/**
* Build the initial state by listing all objects with metadata
*/
private async buildInitialState(): Promise<void> {
this.previousState.clear();
for await (const obj of this.listObjectsWithMetadata()) {
if (obj.Key) {
this.previousState.set(obj.Key, {
key: obj.Key,
etag: obj.ETag ?? '',
size: obj.Size ?? 0,
lastModified: obj.LastModified ?? new Date(0),
});
}
}
}
/**
* Poll for changes by comparing current state against previous state
*/
private async poll(): Promise<void> {
// Guard against overlapping polls
if (this.isPolling || this.isStopped) {
return;
}
this.isPolling = true;
try {
// Build current state
const currentState = new Map<string, interfaces.IStorageObjectState>();
for await (const obj of this.listObjectsWithMetadata()) {
if (this.isStopped) {
break;
}
if (obj.Key) {
currentState.set(obj.Key, {
key: obj.Key,
etag: obj.ETag ?? '',
size: obj.Size ?? 0,
lastModified: obj.LastModified ?? new Date(0),
});
}
}
if (!this.isStopped) {
this.detectChanges(currentState);
this.previousState = currentState;
}
} catch (err) {
this.emit('error', err);
} finally {
this.isPolling = false;
}
}
/**
* Detect changes between current and previous state
*/
private detectChanges(currentState: Map<string, interfaces.IStorageObjectState>): void {
// Detect added and modified objects
for (const [key, current] of currentState) {
const previous = this.previousState.get(key);
if (!previous) {
// New object - emit 'add' event
this.emitChange({
type: 'add',
key: current.key,
size: current.size,
etag: current.etag,
lastModified: current.lastModified,
bucket: this.bucketRef.name,
});
} else if (
previous.etag !== current.etag ||
previous.size !== current.size ||
previous.lastModified.getTime() !== current.lastModified.getTime()
) {
// Object modified - emit 'modify' event
this.emitChange({
type: 'modify',
key: current.key,
size: current.size,
etag: current.etag,
lastModified: current.lastModified,
bucket: this.bucketRef.name,
});
}
}
// Detect deleted objects
for (const [key, previous] of this.previousState) {
if (!currentState.has(key)) {
// Object deleted - emit 'delete' event
this.emitChange({
type: 'delete',
key: previous.key,
bucket: this.bucketRef.name,
});
}
}
}
/**
* Emit a change event via both RxJS Subject and EventEmitter
*/
private emitChange(event: interfaces.IStorageChangeEvent): void {
this.rawSubject.next(event);
this.emit('change', event);
}
/**
* List objects with full metadata (ETag, Size, LastModified)
* This is a private method that yields full _Object data, not just keys
*/
private async *listObjectsWithMetadata(): AsyncIterableIterator<plugins.s3._Object> {
let continuationToken: string | undefined;
do {
if (this.isStopped) {
return;
}
const command = new plugins.s3.ListObjectsV2Command({
Bucket: this.bucketRef.name,
Prefix: this.prefix,
MaxKeys: this.pageSize,
ContinuationToken: continuationToken,
});
const response = await this.bucketRef.smartbucketRef.storageClient.send(command);
for (const obj of response.Contents || []) {
yield obj;
}
continuationToken = response.NextContinuationToken;
} while (continuationToken);
}
}
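At its core, each poll above reduces to diffing two key→state maps on ETag, size, and last-modified time. A self-contained sketch of that comparison step (hypothetical `ObjState` shape, no storage calls):

```typescript
// Standalone sketch of BucketWatcher's change detection between two snapshots.
interface ObjState {
  key: string;
  etag: string;
  size: number;
  lastModified: Date;
}

function diffStates(
  previous: Map<string, ObjState>,
  current: Map<string, ObjState>,
): Array<{ type: 'add' | 'modify' | 'delete'; key: string }> {
  const changes: Array<{ type: 'add' | 'modify' | 'delete'; key: string }> = [];
  for (const [key, cur] of current) {
    const prev = previous.get(key);
    if (!prev) {
      changes.push({ type: 'add', key }); // key is new
    } else if (
      prev.etag !== cur.etag ||
      prev.size !== cur.size ||
      prev.lastModified.getTime() !== cur.lastModified.getTime()
    ) {
      changes.push({ type: 'modify', key }); // metadata differs
    }
  }
  for (const key of previous.keys()) {
    if (!current.has(key)) changes.push({ type: 'delete', key }); // key vanished
  }
  return changes;
}

// One modified, one added, one deleted key:
const prev = new Map<string, ObjState>([
  ['a.txt', { key: 'a.txt', etag: '"1"', size: 3, lastModified: new Date(0) }],
  ['b.txt', { key: 'b.txt', etag: '"2"', size: 3, lastModified: new Date(0) }],
]);
const cur = new Map<string, ObjState>([
  ['a.txt', { key: 'a.txt', etag: '"1b"', size: 4, lastModified: new Date(1000) }],
  ['c.txt', { key: 'c.txt', etag: '"3"', size: 3, lastModified: new Date(0) }],
]);
console.log(diffStates(prev, cur)); // modify a.txt, add c.txt, delete b.txt
```

Note the asymmetry mirrored in `IStorageChangeEvent`: a delete only carries the key (plus bucket), since size and ETag no longer exist for a removed object.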


@@ -21,13 +21,13 @@ export const reducePathDescriptorToPath = async (pathDescriptorArg: interfaces.I
   return returnPath;
 }

-// S3 Descriptor Normalization
-export interface IS3Warning {
+// Storage Descriptor Normalization
+export interface IStorageWarning {
   code: string;
   message: string;
 }

-export interface INormalizedS3Config {
+export interface INormalizedStorageConfig {
   endpointUrl: string;
   host: string;
   protocol: 'http' | 'https';
@@ -40,7 +40,7 @@ export interface INormalizedS3Config {
   forcePathStyle: boolean;
 }

-function coerceBooleanMaybe(value: unknown): { value: boolean | undefined; warning?: IS3Warning } {
+function coerceBooleanMaybe(value: unknown): { value: boolean | undefined; warning?: IStorageWarning } {
   if (typeof value === 'boolean') return { value };
   if (typeof value === 'string') {
     const v = value.trim().toLowerCase();
@@ -66,7 +66,7 @@ function coerceBooleanMaybe(value: unknown): { value: boolean | undefined; warni
   return { value: undefined };
 }

-function coercePortMaybe(port: unknown): { value: number | undefined; warning?: IS3Warning } {
+function coercePortMaybe(port: unknown): { value: number | undefined; warning?: IStorageWarning } {
   if (port === undefined || port === null || port === '') return { value: undefined };
   const n = typeof port === 'number' ? port : Number(String(port).trim());
   if (!Number.isFinite(n) || !Number.isInteger(n) || n <= 0 || n > 65535) {
@@ -81,8 +81,8 @@ function coercePortMaybe(port: unknown): { value: number | undefined; warning?:
   return { value: n };
 }

-function sanitizeEndpointString(raw: unknown): { value: string; warnings: IS3Warning[] } {
-  const warnings: IS3Warning[] = [];
+function sanitizeEndpointString(raw: unknown): { value: string; warnings: IStorageWarning[] } {
+  const warnings: IStorageWarning[] = [];
   let s = String(raw ?? '').trim();
   if (s !== String(raw ?? '')) {
     warnings.push({
@@ -138,17 +138,17 @@ function parseEndpointHostPort(
   return { hadScheme, host, port, extras };
 }

-export function normalizeS3Descriptor(
-  input: plugins.tsclass.storage.IS3Descriptor,
+export function normalizeStorageDescriptor(
+  input: plugins.tsclass.storage.IStorageDescriptor,
   logger?: { warn: (msg: string) => void }
-): { normalized: INormalizedS3Config; warnings: IS3Warning[] } {
-  const warnings: IS3Warning[] = [];
-  const logWarn = (w: IS3Warning) => {
+): { normalized: INormalizedStorageConfig; warnings: IStorageWarning[] } {
+  const warnings: IStorageWarning[] = [];
+  const logWarn = (w: IStorageWarning) => {
     warnings.push(w);
     if (logger) {
-      logger.warn(`[SmartBucket S3] ${w.code}: ${w.message}`);
+      logger.warn(`[SmartBucket] ${w.code}: ${w.message}`);
     } else {
-      console.warn(`[SmartBucket S3] ${w.code}: ${w.message}`);
+      console.warn(`[SmartBucket] ${w.code}: ${w.message}`);
     }
   };
@@ -163,7 +163,7 @@ export function normalizeS3Descriptor(
   endpointSanWarnings.forEach(logWarn);

   if (!endpointStr) {
-    throw new Error('S3 endpoint is required (got empty string). Provide hostname or URL.');
+    throw new Error('Storage endpoint is required (got empty string). Provide hostname or URL.');
   }

   // Provisional protocol selection for parsing host:port forms


@@ -5,3 +5,5 @@ export * from './classes.file.js';
 export * from './classes.listcursor.js';
 export * from './classes.metadata.js';
 export * from './classes.trash.js';
+export * from './classes.watcher.js';
+export * from './interfaces.js';


@@ -3,4 +3,65 @@ import type { Directory } from "./classes.directory.js";
 export interface IPathDecriptor {
   path?: string;
   directory?: Directory;
 }
+
+// ================================
+// Bucket Watcher Interfaces
+// ================================
+
+/**
+ * Internal state tracking for a storage object
+ */
+export interface IStorageObjectState {
+  key: string;
+  etag: string;
+  size: number;
+  lastModified: Date;
+}
+
+/**
+ * Change event emitted by BucketWatcher
+ */
+export interface IStorageChangeEvent {
+  type: 'add' | 'modify' | 'delete';
+  key: string;
+  size?: number;
+  etag?: string;
+  lastModified?: Date;
+  bucket: string;
+}
+
+/**
+ * Watcher mode - 'poll' is the default, 'websocket' reserved for future implementation
+ */
+export type TBucketWatcherMode = 'poll' | 'websocket';
+
+/**
+ * Options for creating a BucketWatcher
+ */
+export interface IBucketWatcherOptions {
+  /** Watcher mode: 'poll' (default) or 'websocket' (future) */
+  mode?: TBucketWatcherMode;
+  /** Prefix to filter objects (default: '' for all objects) */
+  prefix?: string;
+  /** Polling interval in milliseconds (default: 5000, poll mode only) */
+  pollIntervalMs?: number;
+  /** Optional RxJS buffering time in milliseconds */
+  bufferTimeMs?: number;
+  /** Emit initial state as 'add' events (default: false) */
+  includeInitial?: boolean;
+  /** Page size for listing operations (default: 1000) */
+  pageSize?: number;
+  // Future websocket options will be added here
+  // websocketUrl?: string;
+}
+
+// ================================
+// Deprecated aliases
+// ================================
+
+/** @deprecated Use IStorageObjectState instead */
+export type IS3ObjectState = IStorageObjectState;
+
+/** @deprecated Use IStorageChangeEvent instead */
+export type IS3ChangeEvent = IStorageChangeEvent;