fix(readme): refresh documentation wording and simplify README formatting
@@ -1,5 +1,12 @@
 # Changelog

+## 2026-03-14 - 4.5.1 - fix(readme)
+refresh documentation wording and simplify README formatting
+
+- removes emoji-heavy headings and inline log examples for a cleaner presentation
+- clarifies S3-compatible storage wording in the project description
+- streamlines testing and error handling sections without changing library functionality
+
 ## 2026-03-14 - 4.5.0 - feat(storage)
 generalize S3 client and watcher interfaces to storage-oriented naming with backward compatibility
deno.lock — 2 changed lines (generated)
@@ -15,6 +15,7 @@
    "npm:@push.rocks/smartunique@^3.0.9": "3.0.9",
    "npm:@push.rocks/tapbundle@^6.0.3": "6.0.3",
    "npm:@tsclass/tsclass@^9.4.0": "9.4.0",
    "npm:@types/node@^22.15.29": "22.19.15",
    "npm:minimatch@^10.2.4": "10.2.4"
  },
  "npm": {
@@ -8368,6 +8369,7 @@
    "npm:@push.rocks/smartunique@^3.0.9",
    "npm:@push.rocks/tapbundle@^6.0.3",
    "npm:@tsclass/tsclass@^9.4.0",
    "npm:@types/node@^22.15.29",
    "npm:minimatch@^10.2.4"
  ]
}
pnpm-lock.yaml — 1941 changed lines (generated; diff suppressed because it is too large)
readme.md — 332 changed lines
@@ -1,25 +1,25 @@
-# @push.rocks/smartbucket 🪣
+# @push.rocks/smartbucket

-A powerful, cloud-agnostic TypeScript library for object storage that makes S3 feel like a modern filesystem. Built for developers who demand simplicity, type-safety, and advanced features like real-time bucket watching, metadata management, file locking, intelligent trash handling, and memory-efficient streaming.
+A powerful, cloud-agnostic TypeScript library for object storage that makes S3-compatible storage feel like a modern filesystem. Built for developers who demand simplicity, type-safety, and advanced features like real-time bucket watching, metadata management, file locking, intelligent trash handling, and memory-efficient streaming.

## Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.

-## Why SmartBucket? 🎯
+## Why SmartBucket?

-- **🌍 Cloud Agnostic** - Write once, run on AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, Cloudflare R2, or any S3-compatible storage
-- **🚀 Modern TypeScript** - First-class TypeScript support with complete type definitions and async/await patterns
-- **👀 Real-Time Watching** - Monitor bucket changes with polling-based watcher supporting RxJS and EventEmitter patterns
-- **💾 Memory Efficient** - Handle millions of files with async generators, RxJS observables, and cursor pagination
-- **🗑️ Smart Trash System** - Recover accidentally deleted files with built-in trash and restore functionality
-- **🔒 File Locking** - Prevent concurrent modifications with built-in locking mechanisms
-- **🏷️ Rich Metadata** - Attach custom metadata to any file for powerful organization and search
-- **🌊 Streaming Support** - Efficient handling of large files with Node.js and Web streams
-- **📁 Directory-like API** - Intuitive filesystem-like operations on object storage
-- **⚡ Fail-Fast** - Strict-by-default API catches errors immediately with precise stack traces
+- **Cloud Agnostic** - Write once, run on AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, Cloudflare R2, or any S3-compatible storage
+- **Modern TypeScript** - First-class TypeScript support with complete type definitions and async/await patterns
+- **Real-Time Watching** - Monitor bucket changes with polling-based watcher supporting RxJS and EventEmitter patterns
+- **Memory Efficient** - Handle millions of files with async generators, RxJS observables, and cursor pagination
+- **Smart Trash System** - Recover accidentally deleted files with built-in trash and restore functionality
+- **File Locking** - Prevent concurrent modifications with built-in locking mechanisms
+- **Rich Metadata** - Attach custom metadata to any file for powerful organization and search
+- **Streaming Support** - Efficient handling of large files with Node.js and Web streams
+- **Directory-like API** - Intuitive filesystem-like operations on object storage
+- **Fail-Fast** - Strict-by-default API catches errors immediately with precise stack traces

-## Quick Start 🚀
+## Quick Start

```typescript
import { SmartBucket } from '@push.rocks/smartbucket';
@@ -44,22 +44,22 @@ await bucket.fastPut({

// Download it back
const data = await bucket.fastGet({ path: 'users/profile.json' });
-console.log('📄', JSON.parse(data.toString()));
+console.log(JSON.parse(data.toString()));

// List files efficiently (even with millions of objects!)
for await (const key of bucket.listAllObjects('users/')) {
-  console.log('🔍 Found:', key);
+  console.log('Found:', key);
}

// Watch for changes in real-time
const watcher = bucket.createWatcher({ prefix: 'uploads/', pollIntervalMs: 3000 });
watcher.changeSubject.subscribe((change) => {
-  console.log('🔔 Change detected:', change.type, change.key);
+  console.log('Change detected:', change.type, change.key);
});
await watcher.start();
```
-## Install 📦
+## Install

```bash
# Using pnpm (recommended)
@@ -69,24 +69,24 @@ pnpm add @push.rocks/smartbucket
npm install @push.rocks/smartbucket --save
```
-## Usage 🚀
+## Usage

### Table of Contents

-1. [🏁 Getting Started](#-getting-started)
-2. [🗂️ Working with Buckets](#️-working-with-buckets)
-3. [📁 File Operations](#-file-operations)
-4. [📋 Memory-Efficient Listing](#-memory-efficient-listing)
-5. [👀 Bucket Watching](#-bucket-watching)
-6. [📂 Directory Management](#-directory-management)
-7. [🌊 Streaming Operations](#-streaming-operations)
-8. [🔒 File Locking](#-file-locking)
-9. [🏷️ Metadata Management](#️-metadata-management)
-10. [🗑️ Trash & Recovery](#️-trash--recovery)
-11. [⚡ Advanced Features](#-advanced-features)
-12. [☁️ Cloud Provider Support](#️-cloud-provider-support)
+1. [Getting Started](#getting-started)
+2. [Working with Buckets](#working-with-buckets)
+3. [File Operations](#file-operations)
+4. [Memory-Efficient Listing](#memory-efficient-listing)
+5. [Bucket Watching](#bucket-watching)
+6. [Directory Management](#directory-management)
+7. [Streaming Operations](#streaming-operations)
+8. [File Locking](#file-locking)
+9. [Metadata Management](#metadata-management)
+10. [Trash & Recovery](#trash--recovery)
+11. [Advanced Features](#advanced-features)
+12. [Cloud Provider Support](#cloud-provider-support)

-### 🏁 Getting Started
+### Getting Started

First, set up your storage connection:
@@ -104,7 +104,7 @@ const smartBucket = new SmartBucket({
});
```

-**For MinIO or self-hosted S3:**
+**For MinIO or self-hosted storage:**
```typescript
const smartBucket = new SmartBucket({
  accessKey: 'minioadmin',
@@ -115,14 +115,13 @@ const smartBucket = new SmartBucket({
});
```

-### 🗂️ Working with Buckets
+### Working with Buckets

#### Creating Buckets

```typescript
// Create a new bucket
const myBucket = await smartBucket.createBucket('my-awesome-bucket');
-console.log(`✅ Bucket created: ${myBucket.name}`);
```

#### Getting Existing Buckets

@@ -134,7 +133,6 @@ const existingBucket = await smartBucket.getBucketByName('existing-bucket');
// Check first, then get (non-throwing approach)
if (await smartBucket.bucketExists('maybe-exists')) {
  const bucket = await smartBucket.getBucketByName('maybe-exists');
-  console.log('✅ Found bucket:', bucket.name);
}
```
@@ -143,10 +141,9 @@ if (await smartBucket.bucketExists('maybe-exists')) {
```typescript
// Delete a bucket (must be empty)
await smartBucket.removeBucket('old-bucket');
-console.log('🗑️ Bucket removed');
```

-### 📁 File Operations
+### File Operations

#### Upload Files
@@ -158,7 +155,6 @@ const file = await bucket.fastPut({
  path: 'documents/report.pdf',
  contents: Buffer.from('Your file content here')
});
-console.log('✅ Uploaded:', file.getBasePath());

// Upload with string content
await bucket.fastPut({
@@ -180,9 +176,7 @@ try {
    contents: 'new content'
  });
} catch (error) {
-  console.error('❌ Upload failed:', error.message);
-  // Error: Object already exists at path 'existing-file.txt' in bucket 'my-bucket'.
-  // Set overwrite:true to replace it.
+  console.error('Upload failed:', error.message);
}
```
@@ -193,7 +187,6 @@ try {
const fileContent = await bucket.fastGet({
  path: 'documents/report.pdf'
});
-console.log(`📄 File size: ${fileContent.length} bytes`);

// Get file as string
const textContent = fileContent.toString('utf-8');
@@ -208,7 +201,6 @@ const jsonData = JSON.parse(fileContent.toString());
const exists = await bucket.fastExists({
  path: 'documents/report.pdf'
});
-console.log(`File exists: ${exists ? '✅' : '❌'}`);
```

#### Delete Files
@@ -218,7 +210,6 @@ console.log(`File exists: ${exists ? '✅' : '❌'}`);
await bucket.fastRemove({
  path: 'old-file.txt'
});
-console.log('🗑️ File deleted permanently');
```

#### Copy & Move Files

@@ -229,14 +220,12 @@ await bucket.fastCopy({
  sourcePath: 'original/file.txt',
  destinationPath: 'backup/file-copy.txt'
});
-console.log('📋 File copied');

// Move file (copy + delete original)
await bucket.fastMove({
  sourcePath: 'temp/draft.txt',
  destinationPath: 'final/document.txt'
});
-console.log('📦 File moved');

// Copy to different bucket
const targetBucket = await smartBucket.getBucketByName('backup-bucket');
@@ -247,18 +236,18 @@ await bucket.fastCopy({
});
```
-### 📋 Memory-Efficient Listing
+### Memory-Efficient Listing

SmartBucket provides three powerful patterns for listing objects, optimized for handling **millions of files** efficiently:

-#### Async Generators (Recommended) ⭐
+#### Async Generators (Recommended)

Memory-efficient streaming using native JavaScript async iteration:

```typescript
// List all objects with prefix - streams one at a time!
for await (const key of bucket.listAllObjects('documents/')) {
-  console.log(`📄 Found: ${key}`);
+  console.log('Found:', key);

  // Process each file individually (memory efficient!)
  const content = await bucket.fastGet({ path: key });
@@ -268,39 +257,26 @@ for await (const key of bucket.listAllObjects('documents/')) {
  if (shouldStop()) break;
}

// List all objects (no prefix)
const allKeys: string[] = [];
for await (const key of bucket.listAllObjects()) {
  allKeys.push(key);
}

// Find objects matching glob patterns
for await (const key of bucket.findByGlob('**/*.json')) {
-  console.log(`📦 JSON file: ${key}`);
+  console.log('JSON file:', key);
}

// Complex glob patterns
for await (const key of bucket.findByGlob('npm/packages/*/index.json')) {
  // Matches: npm/packages/foo/index.json, npm/packages/bar/index.json
-  console.log(`📦 Package index: ${key}`);
-}
-
-// More glob examples
-for await (const key of bucket.findByGlob('logs/**/*.log')) {
-  console.log('📋 Log file:', key);
+  console.log('Package index:', key);
}

for await (const key of bucket.findByGlob('images/*.{jpg,png,gif}')) {
-  console.log('🖼️ Image:', key);
+  console.log('Image:', key);
}
```

**Why use async generators?**
-- ✅ Processes one item at a time (constant memory usage)
-- ✅ Supports early termination with `break`
-- ✅ Native JavaScript - no dependencies
-- ✅ Perfect for large buckets with millions of objects
-- ✅ Works seamlessly with `for await...of` loops
+- Processes one item at a time (constant memory usage)
+- Supports early termination with `break`
+- Native JavaScript - no dependencies
+- Perfect for large buckets with millions of objects
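As an aside on the async-generator listing the diff keeps intact: the reason memory stays constant is that the generator pulls one page at a time and yields keys lazily. A rough, library-independent sketch (the `fetchPage` stub and its token shape are hypothetical, not part of smartbucket's API):

```typescript
// Sketch: stream keys from a paged listing API one at a time.
// `fetchPage` stands in for an S3 ListObjectsV2-style call; it is a
// hypothetical stub, not part of @push.rocks/smartbucket.
type Page = { keys: string[]; nextToken?: string };

const fakeStore = ['a.json', 'b.json', 'c.json', 'd.json'];

function fetchPage(token?: string, pageSize = 2): Page {
  const start = token ? Number(token) : 0;
  const keys = fakeStore.slice(start, start + pageSize);
  const next = start + pageSize;
  return { keys, nextToken: next < fakeStore.length ? String(next) : undefined };
}

// The generator yields one key at a time, fetching the next page lazily,
// so memory stays O(pageSize) no matter how many objects exist.
async function* listAllObjects(): AsyncGenerator<string> {
  let token: string | undefined = undefined;
  do {
    const page = fetchPage(token);
    for (const key of page.keys) yield key;
    token = page.nextToken;
  } while (token !== undefined);
}

const seen: string[] = [];
for await (const key of listAllObjects()) {
  seen.push(key);
  if (seen.length === 3) break; // early termination stops further fetches
}
```

Breaking out of the `for await` loop abandons the generator, so no further pages are requested — the property the README's "supports early termination" bullet refers to.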
#### RxJS Observables

@@ -317,16 +293,9 @@ bucket.listAllObjectsObservable('logs/')
    map(key => ({ key, timestamp: Date.now() }))
  )
  .subscribe({
-    next: (item) => console.log(`📋 Log file: ${item.key}`),
-    error: (err) => console.error('❌ Error:', err),
-    complete: () => console.log('✅ Listing complete')
-  });
-
-// Simple subscription without operators
-bucket.listAllObjectsObservable('data/')
-  .subscribe({
-    next: (key) => processKey(key),
-    complete: () => console.log('✅ Done')
+    next: (item) => console.log('Log file:', item.key),
+    error: (err) => console.error('Error:', err),
+    complete: () => console.log('Listing complete')
  });

// Combine with other observables
@@ -337,15 +306,9 @@ const backups$ = bucket.listAllObjectsObservable('backups/');

merge(logs$, backups$)
  .pipe(filter(key => key.includes('2024')))
-  .subscribe(key => console.log('📅 2024 file:', key));
+  .subscribe(key => console.log('2024 file:', key));
```

**Why use observables?**
-- ✅ Rich operator ecosystem (filter, map, debounce, etc.)
-- ✅ Composable with other RxJS streams
-- ✅ Perfect for reactive architectures
-- ✅ Great for complex transformations
#### Cursor Pattern

Explicit pagination control for UI and resumable operations:

@@ -357,16 +320,13 @@ const cursor = bucket.createCursor('uploads/', { pageSize: 100 });
// Fetch pages manually
while (cursor.hasMore()) {
  const page = await cursor.next();
-  console.log(`📄 Page has ${page.keys.length} items`);
+  console.log(`Page has ${page.keys.length} items`);

  for (const key of page.keys) {
    console.log(`  - ${key}`);
  }

-  if (page.done) {
-    console.log('✅ Reached end');
-    break;
-  }
+  if (page.done) break;
}

// Save and restore cursor state (perfect for resumable operations!)
@@ -380,42 +340,28 @@ const nextPage = await newCursor.next();

// Reset cursor to start over
cursor.reset();
const firstPage = await cursor.next(); // Back to the beginning
```

**Why use cursors?**
-- ✅ Perfect for UI pagination (prev/next buttons)
-- ✅ Save/restore state for resumable operations
-- ✅ Explicit control over page fetching
-- ✅ Great for implementing "Load More" buttons

#### Convenience Methods
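The save/restore behaviour the cursor section describes boils down to serializable pagination state. A minimal sketch, assuming an in-memory key list and a simple offset-based state — names and state shape are illustrative, not smartbucket's actual implementation:

```typescript
// Sketch: explicit-pagination cursor with save/restore, mirroring the
// createCursor usage shown in the diff (hypothetical internals).
class ListCursor {
  private offset = 0;
  constructor(private allKeys: string[], private pageSize: number) {}

  hasMore(): boolean {
    return this.offset < this.allKeys.length;
  }

  next(): { keys: string[]; done: boolean } {
    const keys = this.allKeys.slice(this.offset, this.offset + this.pageSize);
    this.offset += keys.length;
    return { keys, done: !this.hasMore() };
  }

  // Serializable state is what makes the cursor resumable across processes.
  saveState(): string {
    return JSON.stringify({ offset: this.offset });
  }

  restoreState(state: string): void {
    this.offset = JSON.parse(state).offset;
  }

  reset(): void {
    this.offset = 0;
  }
}

const cursor = new ListCursor(['k1', 'k2', 'k3', 'k4', 'k5'], 2);
const first = cursor.next();       // page one
const saved = cursor.saveState();  // resume point after page one
cursor.next();                     // page two
cursor.restoreState(saved);
const replayed = cursor.next();    // page two again, replayed from saved state
```

Because the state is just a JSON string, it can be stashed in a database or a job queue and handed to a fresh cursor later — the "resumable operations" use case above.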
```typescript
-// Collect all keys into array (⚠️ WARNING: loads everything into memory!)
+// Collect all keys into array (WARNING: loads everything into memory!)
const allKeys = await bucket.listAllObjectsArray('images/');
-console.log(`📦 Found ${allKeys.length} images`);
-
-// Only use for small result sets
-const smallList = await bucket.listAllObjectsArray('config/');
-if (smallList.length < 100) {
-  // Safe to process in memory
-  smallList.forEach(key => console.log(key));
-}
+console.log(`Found ${allKeys.length} images`);
```

**Performance Comparison:**

| Method | Memory Usage | Best For | Supports Early Exit |
|--------|-------------|----------|-------------------|
-| **Async Generator** | O(1) - constant | Most use cases, large datasets | ✅ Yes |
-| **Observable** | O(1) - constant | Reactive pipelines, RxJS apps | ✅ Yes |
-| **Cursor** | O(pageSize) | UI pagination, resumable ops | ✅ Yes |
-| **Array** | O(n) - grows with results | Small datasets (<10k items) | ❌ No |
+| **Async Generator** | O(1) - constant | Most use cases, large datasets | Yes |
+| **Observable** | O(1) - constant | Reactive pipelines, RxJS apps | Yes |
+| **Cursor** | O(pageSize) | UI pagination, resumable ops | Yes |
+| **Array** | O(n) - grows with results | Small datasets (<10k items) | No |
-### 👀 Bucket Watching
+### Bucket Watching

-Monitor your S3 bucket for changes in real-time with the powerful `BucketWatcher`:
+Monitor your storage bucket for changes in real-time with the powerful `BucketWatcher`:

```typescript
// Create a watcher for a specific prefix
@@ -428,21 +374,21 @@ const watcher = bucket.createWatcher({
// RxJS Observable pattern (recommended for reactive apps)
watcher.changeSubject.subscribe((change) => {
  if (change.type === 'add') {
-    console.log('📥 New file:', change.key);
+    console.log('New file:', change.key);
  } else if (change.type === 'modify') {
-    console.log('✏️ Modified:', change.key);
+    console.log('Modified:', change.key);
  } else if (change.type === 'delete') {
-    console.log('🗑️ Deleted:', change.key);
+    console.log('Deleted:', change.key);
  }
});

// EventEmitter pattern (classic Node.js style)
watcher.on('change', (change) => {
-  console.log(`🔔 ${change.type}: ${change.key}`);
+  console.log(`${change.type}: ${change.key}`);
});

watcher.on('error', (err) => {
-  console.error('❌ Watcher error:', err);
+  console.error('Watcher error:', err);
});

// Start watching
@@ -450,7 +396,7 @@ await watcher.start();

// Wait until watcher is ready (initial state built)
await watcher.readyDeferred.promise;
-console.log('👀 Watcher is now monitoring the bucket');
+console.log('Watcher is now monitoring the bucket');

// ... your application runs ...
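The add/modify/delete classification a polling watcher performs can be sketched independently of the library: take two listings (key mapped to ETag), and diff them. The snapshot shape here is an assumption for illustration; smartbucket's watcher internals may differ:

```typescript
// Sketch: the core of a polling watcher — diff two bucket listings
// (key -> etag) and classify each difference. Hypothetical internals,
// not smartbucket's actual implementation.
type Snapshot = Map<string, string>; // key -> etag

interface ChangeEvent {
  type: 'add' | 'modify' | 'delete';
  key: string;
}

function diffSnapshots(prev: Snapshot, next: Snapshot): ChangeEvent[] {
  const events: ChangeEvent[] = [];
  // Keys present now: new key => add, changed etag => modify.
  for (const [key, etag] of next) {
    const old = prev.get(key);
    if (old === undefined) events.push({ type: 'add', key });
    else if (old !== etag) events.push({ type: 'modify', key });
  }
  // Keys that vanished since the last poll => delete.
  for (const key of prev.keys()) {
    if (!next.has(key)) events.push({ type: 'delete', key });
  }
  return events;
}

const before: Snapshot = new Map([['a.txt', 'v1'], ['b.txt', 'v1']]);
const after: Snapshot = new Map([['a.txt', 'v2'], ['c.txt', 'v1']]);
const changes = diffSnapshots(before, after);
// a.txt modified, c.txt added, b.txt deleted
```

A real watcher would run this diff once per `pollIntervalMs` tick and push the resulting events into the subject/emitter shown above.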
@@ -486,7 +432,7 @@ const watcher = bucket.createWatcher({
// Receive batched events as arrays
watcher.changeSubject.subscribe((changes) => {
  if (Array.isArray(changes)) {
-    console.log(`📦 Batch of ${changes.length} changes:`);
+    console.log(`Batch of ${changes.length} changes:`);
    changes.forEach(c => console.log(`  - ${c.type}: ${c.key}`));
  }
});
@@ -497,7 +443,7 @@ await watcher.start();
#### Change Event Structure

```typescript
-interface IS3ChangeEvent {
+interface IStorageChangeEvent {
  type: 'add' | 'modify' | 'delete';
  key: string;    // Object key (path)
  bucket: string; // Bucket name
@@ -509,14 +455,14 @@ interface IS3ChangeEvent {

#### Watch Use Cases

-- 🔄 **Sync systems** - Detect changes to trigger synchronization
-- 📊 **Analytics** - Track file uploads/modifications in real-time
-- 🔔 **Notifications** - Alert users when their files are ready
-- 🔄 **Processing pipelines** - Trigger workflows on new file uploads
-- 🗄️ **Backup systems** - Detect changes for incremental backups
-- 📝 **Audit logs** - Track all bucket activity
+- **Sync systems** - Detect changes to trigger synchronization
+- **Analytics** - Track file uploads/modifications in real-time
+- **Notifications** - Alert users when their files are ready
+- **Processing pipelines** - Trigger workflows on new file uploads
+- **Backup systems** - Detect changes for incremental backups
+- **Audit logs** - Track all bucket activity
-### 📂 Directory Management
+### Directory Management

SmartBucket provides powerful directory-like operations for organizing your files:

@@ -528,8 +474,8 @@ const baseDir = await bucket.getBaseDirectory();
const directories = await baseDir.listDirectories();
const files = await baseDir.listFiles();

-console.log(`📁 Found ${directories.length} directories`);
-console.log(`📄 Found ${files.length} files`);
+console.log(`Found ${directories.length} directories`);
+console.log(`Found ${files.length} files`);

// Navigate subdirectories
const subDir = await baseDir.getSubDirectoryByName('projects/2024');
@@ -542,10 +488,9 @@ await subDir.fastPut({

// Get directory tree structure
const tree = await subDir.getTreeArray();
-console.log('🌳 Directory tree:', tree);

// Get directory path
-console.log('📂 Base path:', subDir.getBasePath()); // "projects/2024/"
+console.log('Base path:', subDir.getBasePath()); // "projects/2024/"

// Create empty file as placeholder
await subDir.createEmptyFile('placeholder.txt');
@@ -555,7 +500,7 @@ const fileExists = await subDir.fileExists({ path: 'report.pdf' });
const dirExists = await baseDir.directoryExists('projects');
```
-### 🌊 Streaming Operations
+### Streaming Operations

Handle large files efficiently with streaming support:

@@ -604,7 +549,6 @@ await bucket.fastPutStream({
    'x-custom-header': 'my-value'
  }
});
-console.log('✅ Large file uploaded via stream');
```

#### Reactive Streams with RxJS

@@ -618,7 +562,7 @@ const replaySubject = await bucket.fastGetReplaySubject({
// Multiple subscribers can consume the same data
replaySubject.subscribe({
  next: (chunk) => processChunk(chunk),
-  complete: () => console.log('✅ Stream complete')
+  complete: () => console.log('Stream complete')
});

replaySubject.subscribe({
@@ -626,7 +570,7 @@ replaySubject.subscribe({
});
```
-### 🔒 File Locking
+### File Locking

Prevent concurrent modifications with built-in file locking:

@@ -636,19 +580,10 @@ const file = await baseDir.getFile({ path: 'important-config.json' });

// Lock file for 10 minutes
await file.lock({ timeoutMillis: 600000 });
-console.log('🔒 File locked');
-
-// Try to modify locked file (will throw error)
-try {
-  await file.delete();
-} catch (error) {
-  console.log('❌ Cannot delete locked file');
-}

// Check lock status via metadata
const metadata = await file.getMetaData();
const isLocked = await metadata.checkLocked();
-console.log(`Lock status: ${isLocked ? '🔒 Locked' : '🔓 Unlocked'}`);

// Get lock info
const lockInfo = await metadata.getLockInfo();
@@ -656,19 +591,18 @@ console.log(`Lock expires: ${new Date(lockInfo.expires)}`);

// Unlock when done
await file.unlock();
-console.log('🔓 File unlocked');

// Force unlock (even if locked by another process)
await file.unlock({ force: true });
```

**Lock use cases:**
-- 🔄 Prevent concurrent writes during critical updates
-- 🔐 Protect configuration files during deployment
-- 🚦 Coordinate distributed workers
-- 🛡️ Ensure data consistency
+- Prevent concurrent writes during critical updates
+- Protect configuration files during deployment
+- Coordinate distributed workers
+- Ensure data consistency
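The timeout-based lock in this section implies an expiry check somewhere: a lock with `timeoutMillis` is only honoured until its deadline passes. A minimal sketch of that decision, assuming lock info carries an `expires` epoch-millisecond field as the `getLockInfo` example suggests (the shape is inferred, not a confirmed smartbucket type):

```typescript
// Sketch: deciding whether a metadata-stored lock is still active.
// The { expires } shape is an assumption inferred from the getLockInfo
// example above, not smartbucket's confirmed type.
interface LockInfo {
  expires: number; // epoch milliseconds
}

function isLockActive(lock: LockInfo | null, now: number = Date.now()): boolean {
  if (lock === null) return false;  // no lock info => unlocked
  return lock.expires > now;        // expired locks are treated as unlocked
}

const tenMinutes = 600_000;
const lock: LockInfo = { expires: 1_000_000 + tenMinutes };
const stillLocked = isLockActive(lock, 1_000_000);
const expiredNow = isLockActive(lock, 1_000_000 + tenMinutes + 1);
```

Treating an expired lock as unlocked is what keeps a crashed process from wedging the file forever — the same reason the README pairs `lock()` with a timeout rather than an indefinite hold.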
-### 🏷️ Metadata Management
+### Metadata Management

Attach and manage rich metadata for your files:

@@ -697,36 +631,31 @@ await metadata.storeCustomMetaData({

// Retrieve metadata
const author = await metadata.getCustomMetaData({ key: 'author' });
-console.log(`📝 Author: ${author}`);

// Delete metadata
await metadata.deleteCustomMetaData({ key: 'workflow' });

// Check if file has any metadata
const hasMetadata = await file.hasMetaData();
-console.log(`Has metadata: ${hasMetadata ? '✅' : '❌'}`);

// Get file type detection
const fileType = await metadata.getFileType({ useFileExtension: true });
-console.log(`📄 MIME type: ${fileType?.mime}`);

// Get file type from magic bytes (more accurate)
const detectedType = await metadata.getFileType({ useMagicBytes: true });
-console.log(`🔮 Detected type: ${detectedType?.mime}`);

// Get file size
const size = await metadata.getSizeInBytes();
-console.log(`📊 Size: ${size} bytes`);
```

**Metadata use cases:**
-- 👤 Track file ownership and authorship
-- 🏷️ Add tags and categories for search
-- 📊 Store processing status or workflow state
-- 🔍 Enable rich querying and filtering
-- 📝 Maintain audit trails
+- Track file ownership and authorship
+- Add tags and categories for search
+- Store processing status or workflow state
+- Enable rich querying and filtering
+- Maintain audit trails
-### 🗑️ Trash & Recovery
+### Trash & Recovery

SmartBucket includes an intelligent trash system for safe file deletion and recovery:

@@ -736,17 +665,14 @@ const file = await baseDir.getFile({ path: 'important-data.xlsx' });

// Move to trash instead of permanent deletion
await file.delete({ mode: 'trash' });
-console.log('🗑️ File moved to trash (can be restored!)');

// Permanent deletion (use with caution!)
await file.delete({ mode: 'permanent' });
-console.log('💀 File permanently deleted (cannot be recovered)');

// Access trash
const trash = await bucket.getTrash();
const trashDir = await trash.getTrashDir();
const trashedFiles = await trashDir.listFiles();
-console.log(`📦 ${trashedFiles.length} files in trash`);

// Restore from trash
const trashedFile = await baseDir.getFile({
@@ -755,32 +681,30 @@ const trashedFile = await baseDir.getFile({
});

await trashedFile.restore({ useOriginalPath: true });
-console.log('♻️ File restored to original location');

// Or restore to a different location
await trashedFile.restore({
  toPath: 'recovered/important-data.xlsx'
});
-console.log('♻️ File restored to new location');
```

**Trash features:**
-- ♻️ Recover accidentally deleted files
-- 🏷️ Preserves original path in metadata
-- ⏰ Tracks deletion timestamp
-- 🔍 List and inspect trashed files
+- Recover accidentally deleted files
+- Preserves original path in metadata
+- Tracks deletion timestamp
+- List and inspect trashed files
-### ⚡ Advanced Features
+### Advanced Features

#### File Statistics

```typescript
// Get detailed file statistics
const stats = await bucket.fastStat({ path: 'document.pdf' });
-console.log(`📊 Size: ${stats.ContentLength} bytes`);
-console.log(`📅 Last modified: ${stats.LastModified}`);
-console.log(`🏷️ ETag: ${stats.ETag}`);
-console.log(`🗂️ Content type: ${stats.ContentType}`);
+console.log(`Size: ${stats.ContentLength} bytes`);
+console.log(`Last modified: ${stats.LastModified}`);
+console.log(`ETag: ${stats.ETag}`);
+console.log(`Content type: ${stats.ContentType}`);
```

#### Magic Bytes Detection

@@ -793,7 +717,6 @@ const magicBytes = await bucket.getMagicBytes({
  path: 'mystery-file',
  length: 16
});
-console.log(`🔮 Magic bytes: ${magicBytes.toString('hex')}`);

// Or from a File object
const baseDir = await bucket.getBaseDirectory();
@@ -802,9 +725,9 @@ const magic = await file.getMagicBytes({ length: 4 });

// Check file signatures
if (magic[0] === 0xFF && magic[1] === 0xD8) {
-  console.log('📸 This is a JPEG image');
+  console.log('This is a JPEG image');
} else if (magic[0] === 0x89 && magic[1] === 0x50) {
-  console.log('🖼️ This is a PNG image');
+  console.log('This is a PNG image');
}
```
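The JPEG/PNG signature check in the magic-bytes hunk generalizes into a tiny classifier. The signature values (`FF D8` for JPEG, `89 50 4E 47` for PNG) are the standard file headers; the helper function itself is illustrative, not a smartbucket API:

```typescript
// Sketch: classify a file by its leading magic bytes, as in the
// JPEG/PNG check above. The helper is illustrative, not a smartbucket API.
function sniffImageType(magic: Uint8Array): 'jpeg' | 'png' | 'unknown' {
  // JPEG files start with the SOI marker FF D8.
  if (magic.length >= 2 && magic[0] === 0xff && magic[1] === 0xd8) return 'jpeg';
  // PNG files start with 89 50 4E 47 ("\x89PNG").
  if (
    magic.length >= 4 &&
    magic[0] === 0x89 && magic[1] === 0x50 &&
    magic[2] === 0x4e && magic[3] === 0x47
  ) return 'png';
  return 'unknown';
}

const jpegHeader = new Uint8Array([0xff, 0xd8, 0xff, 0xe0]);
const pngHeader = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
```

Sniffing magic bytes is more reliable than trusting file extensions, which is why the metadata section offers `useMagicBytes` as the "more accurate" detection mode.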
@@ -816,7 +739,6 @@ const file = await baseDir.getFile({ path: 'config.json' });

// Read JSON data
const config = await file.getJsonData();
-console.log('⚙️ Config loaded:', config);

// Update JSON data
config.version = '2.0';
@@ -824,7 +746,6 @@ config.updated = new Date().toISOString();
config.features.push('newFeature');

await file.writeJsonData(config);
-console.log('💾 Config updated');
```

#### Directory & File Type Detection

@@ -835,35 +756,29 @@ const isDir = await bucket.isDirectory({ path: 'uploads/' });

// Check if path is a file
const isFile = await bucket.isFile({ path: 'uploads/document.pdf' });

-console.log(`Is directory: ${isDir ? '📁' : '❌'}`);
-console.log(`Is file: ${isFile ? '📄' : '❌'}`);
```

#### Clean Bucket Contents

```typescript
-// Remove all files and directories (⚠️ use with caution!)
+// Remove all files and directories (use with caution!)
await bucket.cleanAllContents();
-console.log('🧹 Bucket cleaned');
```
-### ☁️ Cloud Provider Support
+### Cloud Provider Support

SmartBucket works seamlessly with all major S3-compatible providers:

| Provider | Status | Notes |
|----------|--------|-------|
-| **AWS S3** | ✅ Full support | Native S3 API |
-| **MinIO** | ✅ Full support | Self-hosted, perfect for development |
-| **DigitalOcean Spaces** | ✅ Full support | Cost-effective S3-compatible |
-| **Backblaze B2** | ✅ Full support | Very affordable storage |
-| **Wasabi** | ✅ Full support | High-performance hot storage |
-| **Google Cloud Storage** | ✅ Full support | Via S3-compatible API |
-| **Cloudflare R2** | ✅ Full support | Zero egress fees |
-| **Any S3-compatible** | ✅ Full support | Works with any S3-compatible provider |
+| **AWS S3** | Supported | Native S3 API |
+| **MinIO** | Supported | Self-hosted, perfect for development |
+| **DigitalOcean Spaces** | Supported | Cost-effective S3-compatible |
+| **Backblaze B2** | Supported | Very affordable storage |
+| **Wasabi** | Supported | High-performance hot storage |
+| **Google Cloud Storage** | Supported | Via S3-compatible API |
+| **Cloudflare R2** | Supported | Zero egress fees |
+| **Any S3-compatible** | Supported | Works with any S3-compatible provider |

The library automatically handles provider quirks and optimizes operations for each platform while maintaining a consistent API.

**Configuration examples:**

@@ -914,7 +829,7 @@ const r2Storage = new SmartBucket({
});
```
-### 🔧 Advanced Configuration
+### Advanced Configuration

```typescript
// Environment-based configuration with @push.rocks/qenv
@@ -932,9 +847,7 @@ const smartBucket = new SmartBucket({
});
```

-### 🧪 Testing
-
-SmartBucket is thoroughly tested with 97 comprehensive tests covering all features:
+### Testing

```bash
# Run all tests
@@ -947,18 +860,18 @@ pnpm tstest test/test.watcher.node.ts --verbose
pnpm test --logfile
```
-### 🛡️ Error Handling Best Practices
+### Error Handling Best Practices

SmartBucket uses a **strict-by-default** approach - methods throw errors instead of returning null:

```typescript
-// ✅ Good: Check existence first
+// Check existence first
if (await bucket.fastExists({ path: 'file.txt' })) {
  const content = await bucket.fastGet({ path: 'file.txt' });
  process(content);
}

-// ✅ Good: Try/catch for expected failures
+// Try/catch for expected failures
try {
  const file = await bucket.fastGet({ path: 'might-not-exist.txt' });
  process(file);
@@ -967,7 +880,7 @@ try {
  useDefault();
}

-// ✅ Good: Explicit overwrite control
+// Explicit overwrite control
try {
  await bucket.fastPut({
    path: 'existing-file.txt',
@@ -977,12 +890,9 @@ try {
} catch (error) {
  console.log('File already exists');
}

-// ❌ Bad: Assuming file exists without checking
-const content = await bucket.fastGet({ path: 'file.txt' }); // May throw!
```
-### 💡 Best Practices
+### Best Practices

1. **Always use strict mode** for critical operations to catch errors early
2. **Check existence first** with `fastExists()`, `bucketExists()`, etc. before operations
@@ -995,7 +905,7 @@ const content = await bucket.fastGet({ path: 'file.txt' }); // May throw!
9. **Set explicit overwrite flags** to prevent accidental file overwrites
10. **Use the watcher** for real-time synchronization and event-driven architectures

-### 📊 Performance Tips
+### Performance Tips

- **Listing**: Use async generators or cursors for buckets with >10,000 objects
- **Uploads**: Use streams for files >100MB
@@ -3,6 +3,6 @@
 */
export const commitinfo = {
  name: '@push.rocks/smartbucket',
-  version: '4.5.0',
+  version: '4.5.1',
  description: 'A TypeScript library providing a cloud-agnostic interface for managing object storage with functionalities like bucket management, file and directory operations, and advanced features such as metadata handling and file locking.'
}