Compare commits
4 Commits

| Author | SHA1 | Date |
|---|---|---|
| | bd73004bd6 | |
| | 65c7bcf12c | |
| | dd6efa4908 | |
| | 1f4b7319d3 | |

changelog.md (67 lines changed)
@@ -1,5 +1,72 @@
 # Changelog
 
+## 2025-11-20 - 4.3.0 - feat(listing)
+
+Add memory-efficient listing APIs: async generator, RxJS observable, and cursor pagination; export ListCursor and Minimatch; add minimatch dependency; bump to 4.2.0
+
+- Added memory-efficient listing methods on Bucket: listAllObjects (async generator), listAllObjectsObservable (RxJS Observable), createCursor (returns ListCursor) and listAllObjectsArray (convenience array collector).
+- New ListCursor class (ts/classes.listcursor.ts) providing page-based iteration: next(), hasMore(), reset(), getToken()/setToken().
+- Added glob matching helper findByGlob(pattern) using minimatch (exported via plugins.Minimatch).
+- Exported ListCursor from ts/index.ts and exported Minimatch via ts/plugins.ts.
+- Added minimatch dependency in package.json and bumped package version to 4.2.0; increased test timeout to 120s.
+- Updated tests to read S3_SECRETKEY, S3_PORT and to assert bucket name from env (test/test.node+deno.ts, test/test.trash.node+deno.ts).
+- No breaking changes: new APIs are additive and existing behavior preserved.
+
+## 2025-11-20 - 4.2.0 - feat(listing)
+
+Add memory-efficient listing with async generators, RxJS observables, and cursor pagination for huge buckets
+
+**New Memory-Efficient Listing Methods:**
+
+**Async Generator (Recommended for most use cases):**
+- `Bucket.listAllObjects(prefix?)` - Stream object keys one at a time using `for await...of`
+- `Bucket.findByGlob(pattern)` - Find objects matching glob patterns (e.g., `**/*.json`, `npm/packages/*/index.json`)
+- Memory efficient, supports early termination, composable
+
+**RxJS Observable (For complex reactive pipelines):**
+- `Bucket.listAllObjectsObservable(prefix?)` - Emit keys as Observable for use with RxJS operators (filter, map, take, etc.)
+- Perfect for complex data transformations and reactive architectures
+
+**Cursor Pattern (For manual pagination control):**
+- `Bucket.createCursor(prefix?, options?)` - Create cursor for explicit page-by-page iteration
+- `ListCursor.next()` - Fetch next page of results
+- `ListCursor.hasMore()` - Check if more results available
+- `ListCursor.reset()` - Reset to beginning
+- `ListCursor.getToken()` / `ListCursor.setToken()` - Save/restore pagination state
+- Ideal for UI pagination and resumable operations
+
+**Convenience Methods:**
+- `Bucket.listAllObjectsArray(prefix?)` - Collect all keys into array (WARNING: loads all into memory)
+
+**Benefits:**
+- ✅ Memory-efficient streaming for buckets with millions of objects
+- ✅ Three patterns for different use cases (generators, observables, cursors)
+- ✅ Support for early termination and incremental processing
+- ✅ Glob pattern matching with minimatch
+- ✅ Full TypeScript support with proper types
+- ✅ Zero breaking changes - all new methods
+
+**Dependencies:**
+- Added `minimatch` for glob pattern support
+
+**Files Changed:**
+- `ts/classes.bucket.ts` - Added all listing methods
+- `ts/classes.listcursor.ts` - NEW: Cursor implementation
+- `ts/plugins.ts` - Export Minimatch
+- `ts/index.ts` - Export ListCursor
+- `test/test.listing.node+deno.ts` - NEW: Comprehensive listing tests
+- `package.json` - Added minimatch dependency
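The ListCursor semantics listed above (next(), hasMore(), reset(), getToken()/setToken()) can be illustrated with a minimal in-memory analogue. This is a hypothetical sketch over a plain array, not the library's actual ts/classes.listcursor.ts, which pages through S3 with a continuation token:

```typescript
// Toy cursor illustrating the page-based iteration contract (hypothetical).
// A real implementation would call S3 ListObjectsV2 and carry its ContinuationToken.
type Page = { keys: string[]; done: boolean };

class ToyListCursor {
  private token = 0; // stand-in for the S3 continuation token

  constructor(private allKeys: string[], private pageSize: number) {}

  // Fetch the next page of keys and advance the token.
  async next(): Promise<Page> {
    const keys = this.allKeys.slice(this.token, this.token + this.pageSize);
    this.token += keys.length;
    return { keys, done: this.token >= this.allKeys.length };
  }

  hasMore(): boolean {
    return this.token < this.allKeys.length;
  }

  reset(): void {
    this.token = 0;
  }

  getToken(): number {
    return this.token;
  }

  setToken(token: number): void {
    this.token = token;
  }
}
```

Saving getToken() and feeding it to setToken() on a fresh cursor resumes pagination from the same position, which is what makes the pattern suitable for resumable operations.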
+
+## 2025-11-20 - 4.1.0 - feat(core)
+
+Add S3 endpoint normalization, directory pagination, improved metadata checks, trash support, and related tests
+
+- Add normalizeS3Descriptor helper to sanitize and normalize various S3 endpoint formats and emit warnings for mismatches (helpers.ts).
+- Use normalized endpoint and credentials when constructing S3 client in SmartBucket (classes.smartbucket.ts).
+- Implement paginated listing helper listObjectsV2AllPages in Directory and use it for listFiles and listDirectories to aggregate Contents and CommonPrefixes across pages (classes.directory.ts).
+- Improve MetaData.hasMetaData to catch NotFound errors and return false instead of throwing (classes.metadata.ts).
+- Export metadata and trash modules from index (ts/index.ts) and add a Trash class with utilities for trashed files and key encoding (classes.trash.ts).
+- Enhance Bucket operations: fastCopy now preserves or replaces native metadata correctly, cleanAllContents supports paginated deletion, and improved fastExists error handling (classes.bucket.ts).
+- Fix Directory.getSubDirectoryByName to construct new Directory instances with the correct parent directory reference.
+- Add tests covering metadata absence and pagination behavior (test/test.local.node+deno.ts).
+
 ## 2025-11-20 - 4.0.1 - fix(plugins)
 
 Use explicit node: imports for native path and stream modules in ts/plugins.ts

deno.lock (generated, 6 lines changed)
@@ -14,7 +14,8 @@
     "npm:@push.rocks/smartstring@^4.1.0": "4.1.0",
     "npm:@push.rocks/smartunique@^3.0.9": "3.0.9",
     "npm:@push.rocks/tapbundle@^6.0.3": "6.0.3",
-    "npm:@tsclass/tsclass@^9.3.0": "9.3.0"
+    "npm:@tsclass/tsclass@^9.3.0": "9.3.0",
+    "npm:minimatch@^10.1.1": "10.1.1"
   },
   "npm": {
     "@api.global/typedrequest-interfaces@2.0.2": {
@@ -8113,7 +8114,8 @@
       "npm:@push.rocks/smartstring@^4.1.0",
       "npm:@push.rocks/smartunique@^3.0.9",
       "npm:@push.rocks/tapbundle@^6.0.3",
-      "npm:@tsclass/tsclass@^9.3.0"
+      "npm:@tsclass/tsclass@^9.3.0",
+      "npm:minimatch@^10.1.1"
     ]
   }
 }

package.json
@@ -1,6 +1,6 @@
 {
   "name": "@push.rocks/smartbucket",
-  "version": "4.0.1",
+  "version": "4.3.0",
   "description": "A TypeScript library providing a cloud-agnostic interface for managing object storage with functionalities like bucket management, file and directory operations, and advanced features such as metadata handling and file locking.",
   "main": "dist_ts/index.js",
   "typings": "dist_ts/index.d.ts",
@@ -8,7 +8,7 @@
   "author": "Task Venture Capital GmbH",
   "license": "MIT",
   "scripts": {
-    "test": "(tstest test/ --verbose --logfile --timeout 60)",
+    "test": "(tstest test/ --verbose --logfile --timeout 120)",
     "build": "(tsbuild --web --allowimplicitany)"
   },
   "devDependencies": {
@@ -27,7 +27,8 @@
     "@push.rocks/smartstream": "^3.2.5",
     "@push.rocks/smartstring": "^4.1.0",
     "@push.rocks/smartunique": "^3.0.9",
-    "@tsclass/tsclass": "^9.3.0"
+    "@tsclass/tsclass": "^9.3.0",
+    "minimatch": "^10.1.1"
   },
   "private": false,
   "files": [

pnpm-lock.yaml (generated, 15 lines changed)
@@ -35,6 +35,9 @@ importers:
       '@tsclass/tsclass':
         specifier: ^9.3.0
         version: 9.3.0
+      minimatch:
+        specifier: ^10.1.1
+        version: 10.1.1
     devDependencies:
       '@git.zone/tsbuild':
         specifier: ^3.1.0
@@ -3562,10 +3565,6 @@ packages:
   minimalistic-crypto-utils@1.0.1:
     resolution: {integrity: sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo=}
 
-  minimatch@10.0.3:
-    resolution: {integrity: sha512-IPZ167aShDZZUMdRk66cyQAW3qr0WzbHkPdMYa8bzZhlHhO3jALbKdxcaak7W9FfT2rZNpQuUu4Od7ILEpXSaw==}
-    engines: {node: 20 || >=22}
-
   minimatch@10.1.1:
     resolution: {integrity: sha512-enIvLvRAFZYXJzkCYG5RKmPfrFArdLv+R+lbQ53BmIMLIry74bjKzX6iHAm8WYamJkhSSEabrWN5D97XnKObjQ==}
     engines: {node: 20 || >=22}
@@ -8144,7 +8143,7 @@ snapshots:
 
   '@types/minimatch@6.0.0':
     dependencies:
-      minimatch: 10.0.3
+      minimatch: 10.1.1
 
   '@types/ms@2.1.0': {}
 
@@ -9315,7 +9314,7 @@ snapshots:
     dependencies:
       foreground-child: 3.3.1
       jackspeak: 4.1.1
-      minimatch: 10.0.3
+      minimatch: 10.1.1
       minipass: 7.1.2
       package-json-from-dist: 1.0.1
       path-scurry: 2.0.0
@@ -10257,10 +10256,6 @@ snapshots:
 
   minimalistic-crypto-utils@1.0.1: {}
 
-  minimatch@10.0.3:
-    dependencies:
-      '@isaacs/brace-expansion': 5.0.0
-
   minimatch@10.1.1:
     dependencies:
       '@isaacs/brace-expansion': 5.0.0

readme.md (543 lines changed)
@@ -1,39 +1,77 @@
 # @push.rocks/smartbucket 🪣
 
-> A powerful, cloud-agnostic TypeScript library for object storage with advanced features like file locking, metadata management, and intelligent trash handling.
+> A powerful, cloud-agnostic TypeScript library for object storage that makes S3 feel like a modern filesystem. Built for developers who demand simplicity, type-safety, and advanced features like metadata management, file locking, intelligent trash handling, and memory-efficient streaming.
+
+## Why SmartBucket? 🎯
+
+- **🌍 Cloud Agnostic** - Write once, run on AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, or any S3-compatible storage
+- **🚀 Modern TypeScript** - First-class TypeScript support with complete type definitions and async/await patterns
+- **💾 Memory Efficient** - Handle millions of files with async generators, RxJS observables, and cursor pagination
+- **🗑️ Smart Trash System** - Recover accidentally deleted files with built-in trash and restore functionality
+- **🔒 File Locking** - Prevent concurrent modifications with built-in locking mechanisms
+- **🏷️ Rich Metadata** - Attach custom metadata to any file for powerful organization and search
+- **🌊 Streaming Support** - Efficient handling of large files with Node.js and Web streams
+- **📁 Directory-like API** - Intuitive filesystem-like operations on object storage
+- **⚡ Fail-Fast** - Strict-by-default API catches errors immediately with precise stack traces
+
+## Quick Start 🚀
+
+```typescript
+import { SmartBucket } from '@push.rocks/smartbucket';
+
+// Connect to your storage
+const storage = new SmartBucket({
+  accessKey: 'your-access-key',
+  accessSecret: 'your-secret-key',
+  endpoint: 's3.amazonaws.com',
+  port: 443,
+  useSsl: true
+});
+
+// Get or create a bucket
+const bucket = await storage.getBucketByName('my-app-data');
+
+// Upload a file
+await bucket.fastPut({
+  path: 'users/profile.json',
+  contents: JSON.stringify({ name: 'Alice', role: 'admin' })
+});
+
+// Download it back
+const data = await bucket.fastGet({ path: 'users/profile.json' });
+console.log('📄', JSON.parse(data.toString()));
+
+// List files efficiently (even with millions of objects!)
+for await (const key of bucket.listAllObjects('users/')) {
+  console.log('🔍 Found:', key);
+}
+```
 
 ## Install 📦
 
-To install `@push.rocks/smartbucket`, run:
-
 ```bash
+# Using pnpm (recommended)
+pnpm add @push.rocks/smartbucket
+
+# Using npm
 npm install @push.rocks/smartbucket --save
 ```
 
-Or if you're using pnpm (recommended):
-
-```bash
-pnpm add @push.rocks/smartbucket
-```
-
 ## Usage 🚀
 
-### Introduction
-
-`@push.rocks/smartbucket` provides a unified, cloud-agnostic API for object storage operations across major providers like AWS S3, Google Cloud Storage, MinIO, and more. It abstracts away provider-specific complexities while offering advanced features like metadata management, file locking, streaming operations, and intelligent trash management.
-
 ### Table of Contents
 
 1. [🏁 Getting Started](#-getting-started)
 2. [🗂️ Working with Buckets](#️-working-with-buckets)
 3. [📁 File Operations](#-file-operations)
-4. [📂 Directory Management](#-directory-management)
-5. [🌊 Streaming Operations](#-streaming-operations)
-6. [🔒 File Locking](#-file-locking)
-7. [🏷️ Metadata Management](#️-metadata-management)
-8. [🗑️ Trash & Recovery](#️-trash--recovery)
-9. [⚡ Advanced Features](#-advanced-features)
-10. [☁️ Cloud Provider Support](#️-cloud-provider-support)
+4. [📋 Memory-Efficient Listing](#-memory-efficient-listing)
+5. [📂 Directory Management](#-directory-management)
+6. [🌊 Streaming Operations](#-streaming-operations)
+7. [🔒 File Locking](#-file-locking)
+8. [🏷️ Metadata Management](#️-metadata-management)
+9. [🗑️ Trash & Recovery](#️-trash--recovery)
+10. [⚡ Advanced Features](#-advanced-features)
+11. [☁️ Cloud Provider Support](#️-cloud-provider-support)
 
 ### 🏁 Getting Started
@@ -53,6 +91,17 @@ const smartBucket = new SmartBucket({
 });
 ```
 
+**For MinIO or self-hosted S3:**
+```typescript
+const smartBucket = new SmartBucket({
+  accessKey: 'minioadmin',
+  accessSecret: 'minioadmin',
+  endpoint: 'localhost',
+  port: 9000,
+  useSsl: false // MinIO often runs without SSL locally
+});
+```
+
 ### 🗂️ Working with Buckets
 
 #### Creating Buckets
@@ -66,11 +115,14 @@ console.log(`✅ Bucket created: ${myBucket.name}`);
 #### Getting Existing Buckets
 
 ```typescript
-// Get a bucket reference
+// Get a bucket reference (throws if not found - strict by default!)
 const existingBucket = await smartBucket.getBucketByName('existing-bucket');
 
-// Or use strict mode (throws if bucket doesn't exist)
-const bucketStrict = await smartBucket.getBucketByNameStrict('must-exist-bucket');
+// Check first, then get (non-throwing approach)
+if (await smartBucket.bucketExists('maybe-exists')) {
+  const bucket = await smartBucket.getBucketByName('maybe-exists');
+  console.log('✅ Found bucket:', bucket.name);
+}
 ```
 
 #### Removing Buckets
@@ -93,6 +145,7 @@ const file = await bucket.fastPut({
   path: 'documents/report.pdf',
   contents: Buffer.from('Your file content here')
 });
+console.log('✅ Uploaded:', file.path);
 
 // Upload with string content
 await bucket.fastPut({
@@ -114,8 +167,9 @@ try {
     contents: 'new content'
   });
 } catch (error) {
-  console.error('Upload failed:', error.message);
-  // Error: Object already exists at path 'existing-file.txt' in bucket 'my-bucket'. Set overwrite:true to replace it.
+  console.error('❌ Upload failed:', error.message);
+  // Error: Object already exists at path 'existing-file.txt' in bucket 'my-bucket'.
+  // Set overwrite:true to replace it.
 }
 ```
@@ -130,6 +184,9 @@ console.log(`📄 File size: ${fileContent.length} bytes`);
 
 // Get file as string
 const textContent = fileContent.toString('utf-8');
+
+// Parse JSON files directly
+const jsonData = JSON.parse(fileContent.toString());
 ```
 
 #### Check File Existence
@@ -148,6 +205,7 @@ console.log(`File exists: ${exists ? '✅' : '❌'}`);
 await bucket.fastRemove({
   path: 'old-file.txt'
 });
+console.log('🗑️ File deleted permanently');
 ```
 
 #### Copy & Move Files
@@ -158,14 +216,182 @@ await bucket.fastCopy({
   sourcePath: 'original/file.txt',
   destinationPath: 'backup/file-copy.txt'
 });
+console.log('📋 File copied');
 
 // Move file (copy + delete original)
 await bucket.fastMove({
   sourcePath: 'temp/draft.txt',
   destinationPath: 'final/document.txt'
 });
+console.log('📦 File moved');
 ```
+
+### 📋 Memory-Efficient Listing
+
+SmartBucket provides three powerful patterns for listing objects, optimized for handling **millions of files** efficiently:
+
+#### Async Generators (Recommended) ⭐
+
+Memory-efficient streaming using native JavaScript async iteration:
+
+```typescript
+// List all objects with prefix - streams one at a time!
+for await (const key of bucket.listAllObjects('documents/')) {
+  console.log(`📄 Found: ${key}`);
+
+  // Process each file individually (memory efficient!)
+  const content = await bucket.fastGet({ path: key });
+  processFile(content);
+
+  // Early termination support
+  if (shouldStop()) break;
+}
+
+// List all objects (no prefix)
+const allKeys: string[] = [];
+for await (const key of bucket.listAllObjects()) {
+  allKeys.push(key);
+}
+
+// Find objects matching glob patterns
+for await (const key of bucket.findByGlob('**/*.json')) {
+  console.log(`📦 JSON file: ${key}`);
+}
+
+// Complex glob patterns
+for await (const key of bucket.findByGlob('npm/packages/*/index.json')) {
+  // Matches: npm/packages/foo/index.json, npm/packages/bar/index.json
+  console.log(`📦 Package index: ${key}`);
+}
+
+// More glob examples
+for await (const key of bucket.findByGlob('logs/**/*.log')) {
+  console.log('📋 Log file:', key);
+}
+
+for await (const key of bucket.findByGlob('images/*.{jpg,png,gif}')) {
+  console.log('🖼️ Image:', key);
+}
+```
+
+**Why use async generators?**
+- ✅ Processes one item at a time (constant memory usage)
+- ✅ Supports early termination with `break`
+- ✅ Native JavaScript - no dependencies
+- ✅ Perfect for large buckets with millions of objects
+- ✅ Works seamlessly with `for await...of` loops
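Under the hood, an async-generator listing like the one above can wrap a paginated fetch loop so that each page is requested lazily. A minimal self-contained sketch, with a hypothetical `fetchPage` standing in for S3's ListObjectsV2 (not the library's actual implementation):

```typescript
// Sketch: stream keys one at a time through an async generator over a
// paginated source. fetchPage is a hypothetical stand-in for ListObjectsV2.
type KeyPage = { keys: string[]; nextToken?: number };

async function fetchPage(source: string[], token = 0, size = 2): Promise<KeyPage> {
  const keys = source.slice(token, token + size);
  const next = token + size;
  return { keys, nextToken: next < source.length ? next : undefined };
}

async function* listAll(source: string[]): AsyncGenerator<string> {
  let token: number | undefined = 0;
  while (token !== undefined) {
    const page = await fetchPage(source, token);
    for (const key of page.keys) yield key; // one key at a time: O(1) memory
    token = page.nextToken; // fetch the next page only if the consumer asks
  }
}
```

Because pages are fetched lazily, `break`ing out of the `for await` loop stops any further requests, which is what makes early termination cheap.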
+
+#### RxJS Observables
+
+Perfect for reactive pipelines and complex data transformations:
+
+```typescript
+import { filter, take, map } from 'rxjs/operators';
+
+// Stream keys as Observable with powerful operators
+bucket.listAllObjectsObservable('logs/')
+  .pipe(
+    filter(key => key.endsWith('.log')),
+    take(100),
+    map(key => ({ key, timestamp: Date.now() }))
+  )
+  .subscribe({
+    next: (item) => console.log(`📋 Log file: ${item.key}`),
+    error: (err) => console.error('❌ Error:', err),
+    complete: () => console.log('✅ Listing complete')
+  });
+
+// Simple subscription without operators
+bucket.listAllObjectsObservable('data/')
+  .subscribe({
+    next: (key) => processKey(key),
+    complete: () => console.log('✅ Done')
+  });
+
+// Combine with other observables
+import { merge } from 'rxjs';
+
+const logs$ = bucket.listAllObjectsObservable('logs/');
+const backups$ = bucket.listAllObjectsObservable('backups/');
+
+merge(logs$, backups$)
+  .pipe(filter(key => key.includes('2024')))
+  .subscribe(key => console.log('📅 2024 file:', key));
+```
+
+**Why use observables?**
+- ✅ Rich operator ecosystem (filter, map, debounce, etc.)
+- ✅ Composable with other RxJS streams
+- ✅ Perfect for reactive architectures
+- ✅ Great for complex transformations
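An observable listing is essentially a push-based wrapper over the same underlying iteration: instead of the consumer pulling keys with `for await`, the producer pushes them to a subscriber. A minimal RxJS-free sketch of that idea (a hypothetical helper, not the library's `listAllObjectsObservable`):

```typescript
// Sketch: adapt any async iterable of keys into a push-based subscription.
type Observer<T> = {
  next: (value: T) => void;
  error?: (err: unknown) => void;
  complete?: () => void;
};

function toObservable<T>(source: AsyncIterable<T>) {
  return {
    subscribe(observer: Observer<T>): void {
      // Drive the pull-based iterable and push each value to the observer.
      (async () => {
        try {
          for await (const value of source) observer.next(value);
          observer.complete?.();
        } catch (err) {
          observer.error?.(err);
        }
      })();
    },
  };
}

// A tiny demo source standing in for a bucket listing.
async function* demoKeys() {
  yield 'logs/a.log';
  yield 'logs/b.log';
}
```

Real RxJS adds operators, unsubscription, and scheduling on top of this shape; the point here is only that generators and observables are two interfaces over one pagination loop.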
+
+#### Cursor Pattern
+
+Explicit pagination control for UI and resumable operations:
+
+```typescript
+// Create cursor with custom page size
+const cursor = bucket.createCursor('uploads/', { pageSize: 100 });
+
+// Fetch pages manually
+while (cursor.hasMore()) {
+  const page = await cursor.next();
+  console.log(`📄 Page has ${page.keys.length} items`);
+
+  for (const key of page.keys) {
+    console.log(`  - ${key}`);
+  }
+
+  if (page.done) {
+    console.log('✅ Reached end');
+    break;
+  }
+}
+
+// Save and restore cursor state (perfect for resumable operations!)
+const token = cursor.getToken();
+// Store token in database or session...
+
+// ... later, in a different request ...
+const newCursor = bucket.createCursor('uploads/', { pageSize: 100 });
+newCursor.setToken(token); // Resume from saved position!
+const nextPage = await newCursor.next();
+
+// Reset cursor to start over
+cursor.reset();
+const firstPage = await cursor.next(); // Back to the beginning
+```
+
+**Why use cursors?**
+- ✅ Perfect for UI pagination (prev/next buttons)
+- ✅ Save/restore state for resumable operations
+- ✅ Explicit control over page fetching
+- ✅ Great for implementing "Load More" buttons
+
+#### Convenience Methods
+
+```typescript
+// Collect all keys into array (⚠️ WARNING: loads everything into memory!)
+const allKeys = await bucket.listAllObjectsArray('images/');
+console.log(`📦 Found ${allKeys.length} images`);
+
+// Only use for small result sets
+const smallList = await bucket.listAllObjectsArray('config/');
+if (smallList.length < 100) {
+  // Safe to process in memory
+  smallList.forEach(key => console.log(key));
+}
+```
+
+**Performance Comparison:**
+
+| Method | Memory Usage | Best For | Supports Early Exit |
+|--------|-------------|----------|---------------------|
+| **Async Generator** | O(1) - constant | Most use cases, large datasets | ✅ Yes |
+| **Observable** | O(1) - constant | Reactive pipelines, RxJS apps | ✅ Yes |
+| **Cursor** | O(pageSize) | UI pagination, resumable ops | ✅ Yes |
+| **Array** | O(n) - grows with results | Small datasets (<10k items) | ❌ No |
 
 ### 📂 Directory Management
 
 SmartBucket provides powerful directory-like operations for organizing your files:
@@ -194,25 +420,41 @@ await subDir.fastPut({
 const tree = await subDir.getTreeArray();
 console.log('🌳 Directory tree:', tree);
+
+// Get directory path
+console.log('📂 Base path:', subDir.getBasePath()); // "projects/2024/"
 
 // Create empty file as placeholder
 await subDir.createEmptyFile('placeholder.txt');
 ```
 
 ### 🌊 Streaming Operations
 
-Handle large files efficiently with streaming:
+Handle large files efficiently with streaming support:
 
 #### Download Streams
 
 ```typescript
-// Node.js stream
+// Node.js stream (for file I/O, HTTP responses, etc.)
 const nodeStream = await bucket.fastGetStream(
   { path: 'large-video.mp4' },
   'nodestream'
 );
+
+// Pipe to file
+import * as fs from 'node:fs';
 nodeStream.pipe(fs.createWriteStream('local-video.mp4'));
 
-// Web stream (for modern environments)
+// Pipe to HTTP response
+app.get('/download', async (req, res) => {
+  const stream = await bucket.fastGetStream(
+    { path: 'file.pdf' },
+    'nodestream'
+  );
+  res.setHeader('Content-Type', 'application/pdf');
+  stream.pipe(res);
+});
+
+// Web stream (for modern browser/Deno environments)
 const webStream = await bucket.fastGetStream(
   { path: 'large-file.zip' },
   'webstream'
@@ -222,6 +464,8 @@ const webStream = await bucket.fastGetStream(
 #### Upload Streams
 
 ```typescript
+import * as fs from 'node:fs';
+
 // Stream upload from file
 const readStream = fs.createReadStream('big-data.csv');
 await bucket.fastPutStream({
@@ -235,6 +479,7 @@ await bucket.fastPutStream({
     }
   }
 });
+console.log('✅ Large file uploaded via stream');
 ```
 
 #### Reactive Streams with RxJS
@@ -246,19 +491,24 @@ const replaySubject = await bucket.fastGetReplaySubject({
   chunkSize: 1024
 });
 
+// Multiple subscribers can consume the same data
 replaySubject.subscribe({
   next: (chunk) => processChunk(chunk),
   complete: () => console.log('✅ Stream complete')
 });
+
+replaySubject.subscribe({
+  next: (chunk) => logChunk(chunk)
+});
 ```
 
 ### 🔒 File Locking
 
-Prevent accidental modifications with file locking:
+Prevent concurrent modifications with built-in file locking:
 
 ```typescript
 const file = await bucket.getBaseDirectory()
-  .getFileStrict({ path: 'important-config.json' });
+  .getFile({ path: 'important-config.json' });
 
 // Lock file for 10 minutes
 await file.lock({ timeoutMillis: 600000 });
@@ -271,18 +521,28 @@ try {
   console.log('❌ Cannot delete locked file');
 }
 
+// Check lock status
+const isLocked = await file.isLocked();
+console.log(`Lock status: ${isLocked ? '🔒 Locked' : '🔓 Unlocked'}`);
+
 // Unlock when done
 await file.unlock();
 console.log('🔓 File unlocked');
 ```
 
+**Lock use cases:**
+- 🔄 Prevent concurrent writes during critical updates
+- 🔐 Protect configuration files during deployment
+- 🚦 Coordinate distributed workers
+- 🛡️ Ensure data consistency
+
 ### 🏷️ Metadata Management
 
-Attach and manage metadata for your files:
+Attach and manage rich metadata for your files:
 
 ```typescript
 const file = await bucket.getBaseDirectory()
-  .getFileStrict({ path: 'document.pdf' });
+  .getFile({ path: 'document.pdf' });
 
 // Get metadata handler
 const metadata = await file.getMetaData();
@@ -298,6 +558,11 @@ await metadata.setCustomMetaData({
  value: 'Engineering'
});

await metadata.setCustomMetaData({
  key: 'version',
  value: '1.0.0'
});

// Retrieve metadata
const author = await metadata.getCustomMetaData({ key: 'author' });
console.log(`📝 Author: ${author}`);
@@ -305,19 +570,35 @@ console.log(`📝 Author: ${author}`);
// Get all metadata
const allMeta = await metadata.getAllCustomMetaData();
console.log('📋 All metadata:', allMeta);
// { author: 'John Doe', department: 'Engineering', version: '1.0.0' }

// Check if metadata exists
const hasMetadata = await metadata.hasMetaData();
console.log(`Has metadata: ${hasMetadata ? '✅' : '❌'}`);
```

**Metadata use cases:**

- 👤 Track file ownership and authorship
- 🏷️ Add tags and categories for search
- 📊 Store processing status or workflow state
- 🔍 Enable rich querying and filtering
- 📝 Maintain audit trails

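The workflow-state use case maps directly onto the key/value calls shown in this section. A sketch — the `status`/`processedAt` keys and the helper functions are illustrative, and the handler type only mirrors the `setCustomMetaData`/`getCustomMetaData` calls above:

```typescript
// Minimal shape of the metadata handler, mirroring the calls used above.
type MetaHandler = {
  setCustomMetaData(opts: { key: string; value: string }): Promise<void>;
  getCustomMetaData(opts: { key: string }): Promise<string>;
};

// Record processing state on a file's metadata (illustrative keys).
async function markProcessed(metadata: MetaHandler): Promise<void> {
  await metadata.setCustomMetaData({ key: 'status', value: 'processed' });
  await metadata.setCustomMetaData({ key: 'processedAt', value: new Date().toISOString() });
}

// Query the recorded state later, e.g. to skip already-processed files.
async function isProcessed(metadata: MetaHandler): Promise<boolean> {
  return (await metadata.getCustomMetaData({ key: 'status' })) === 'processed';
}
```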
### 🗑️ Trash & Recovery

SmartBucket includes an intelligent trash system for safe file deletion and recovery:

```typescript
const file = await bucket.getBaseDirectory()
  .getFile({ path: 'important-data.xlsx' });

// Move to trash instead of permanent deletion
await file.delete({ mode: 'trash' });
console.log('🗑️ File moved to trash (can be restored!)');

// Permanent deletion (use with caution!)
await file.delete({ mode: 'permanent' });
console.log('💀 File permanently deleted (cannot be recovered)');

// Access trash
const trash = await bucket.getTrash();
@@ -327,19 +608,33 @@ console.log(`📦 ${trashedFiles.length} files in trash`);

// Restore from trash
const trashedFile = await bucket.getBaseDirectory()
  .getFile({
    path: 'important-data.xlsx',
    getFromTrash: true
  });

await trashedFile.restore({ useOriginalPath: true });
console.log('♻️ File restored to original location');

// Or restore to a different location
await trashedFile.restore({
  useOriginalPath: false,
  restorePath: 'recovered/important-data.xlsx'
});
console.log('♻️ File restored to new location');

// Empty trash permanently
await trash.emptyTrash();
console.log('🧹 Trash emptied');
```

**Trash features:**

- ♻️ Recover accidentally deleted files
- 🏷️ Preserves original path in metadata
- ⏰ Tracks deletion timestamp
- 🔍 List and inspect trashed files
- 🧹 Bulk empty trash operation

### ⚡ Advanced Features

#### File Statistics
@@ -350,29 +645,39 @@ const stats = await bucket.fastStat({ path: 'document.pdf' });
console.log(`📊 Size: ${stats.size} bytes`);
console.log(`📅 Last modified: ${stats.lastModified}`);
console.log(`🏷️ ETag: ${stats.etag}`);
console.log(`🗂️ Storage class: ${stats.storageClass}`);
```

#### Magic Bytes Detection

Detect file types by examining the first bytes (useful for validation):

```typescript
// Read first bytes for file type detection
const magicBytes = await bucket.getMagicBytes({
  path: 'mystery-file',
  length: 16
});
console.log(`🔮 Magic bytes: ${magicBytes.toString('hex')}`);

// Or from a File object
const file = await bucket.getBaseDirectory()
  .getFile({ path: 'image.jpg' });
const magic = await file.getMagicBytes({ length: 4 });

// Check file signatures
if (magic[0] === 0xFF && magic[1] === 0xD8) {
  console.log('📸 This is a JPEG image');
} else if (magic[0] === 0x89 && magic[1] === 0x50) {
  console.log('🖼️ This is a PNG image');
}
```

#### JSON Data Operations

```typescript
const file = await bucket.getBaseDirectory()
  .getFile({ path: 'config.json' });

// Read JSON data
const config = await file.getJsonData();
@@ -381,6 +686,8 @@ console.log('⚙️ Config loaded:', config);
// Update JSON data
config.version = '2.0';
config.updated = new Date().toISOString();
config.features.push('newFeature');

await file.writeJsonData(config);
console.log('💾 Config updated');
```

@@ -401,71 +708,161 @@ console.log(`Is file: ${isFile ? '📄' : '❌'}`);
#### Clean Bucket Contents

```typescript
// Remove all files and directories (⚠️ use with caution!)
await bucket.cleanAllContents();
console.log('🧹 Bucket cleaned');
```

### ☁️ Cloud Provider Support

SmartBucket works seamlessly with all major S3-compatible providers:

| Provider | Status | Notes |
|----------|--------|-------|
| **AWS S3** | ✅ Full support | Native S3 API |
| **MinIO** | ✅ Full support | Self-hosted, perfect for development |
| **DigitalOcean Spaces** | ✅ Full support | Cost-effective S3-compatible |
| **Backblaze B2** | ✅ Full support | Very affordable storage |
| **Wasabi** | ✅ Full support | High-performance hot storage |
| **Google Cloud Storage** | ✅ Full support | Via S3-compatible API |
| **Cloudflare R2** | ✅ Full support | Zero egress fees |
| **Any S3-compatible** | ✅ Full support | Works with any S3-compatible provider |

The library automatically handles provider quirks and optimizes operations for each platform while maintaining a consistent API.

**Configuration examples:**

```typescript
// AWS S3
const awsStorage = new SmartBucket({
  accessKey: process.env.AWS_ACCESS_KEY_ID,
  accessSecret: process.env.AWS_SECRET_ACCESS_KEY,
  endpoint: 's3.amazonaws.com',
  region: 'us-east-1',
  useSsl: true
});

// MinIO (local development)
const minioStorage = new SmartBucket({
  accessKey: 'minioadmin',
  accessSecret: 'minioadmin',
  endpoint: 'localhost',
  port: 9000,
  useSsl: false
});

// DigitalOcean Spaces
const doStorage = new SmartBucket({
  accessKey: process.env.DO_SPACES_KEY,
  accessSecret: process.env.DO_SPACES_SECRET,
  endpoint: 'nyc3.digitaloceanspaces.com',
  region: 'nyc3',
  useSsl: true
});

// Backblaze B2
const b2Storage = new SmartBucket({
  accessKey: process.env.B2_KEY_ID,
  accessSecret: process.env.B2_APPLICATION_KEY,
  endpoint: 's3.us-west-002.backblazeb2.com',
  region: 'us-west-002',
  useSsl: true
});
```

### 🔧 Advanced Configuration

```typescript
// Environment-based configuration with @push.rocks/qenv
import { Qenv } from '@push.rocks/qenv';

const qenv = new Qenv('./', './.nogit/');

const smartBucket = new SmartBucket({
  accessKey: await qenv.getEnvVarOnDemandStrict('S3_ACCESS_KEY'),
  accessSecret: await qenv.getEnvVarOnDemandStrict('S3_SECRET'),
  endpoint: await qenv.getEnvVarOnDemandStrict('S3_ENDPOINT'),
  port: parseInt(await qenv.getEnvVarOnDemandStrict('S3_PORT')),
  useSsl: await qenv.getEnvVarOnDemandStrict('S3_USE_SSL') === 'true',
  region: await qenv.getEnvVarOnDemandStrict('S3_REGION')
});
```

### 🧪 Testing

SmartBucket is thoroughly tested with 82 comprehensive tests covering all features:

```bash
# Run all tests
pnpm test

# Run specific test file
pnpm tstest test/test.listing.node+deno.ts --verbose

# Run tests with log file
pnpm test --logfile
```

### 🛡️ Error Handling Best Practices

SmartBucket uses a **strict-by-default** approach - methods throw errors instead of returning null:

```typescript
// ✅ Good: Check existence first
if (await bucket.fastExists({ path: 'file.txt' })) {
  const content = await bucket.fastGet({ path: 'file.txt' });
  process(content);
}

// ✅ Good: Try/catch for expected failures
try {
  const file = await bucket.fastGet({ path: 'might-not-exist.txt' });
  process(file);
} catch (error) {
  console.log('File not found, using default');
  useDefault();
}

// ✅ Good: Explicit overwrite control
try {
  await bucket.fastPut({
    path: 'existing-file.txt',
    contents: 'new data',
    overwrite: false // Explicitly fail if exists
  });
} catch (error) {
  console.log('File already exists');
}

// ❌ Bad: Assuming file exists without checking
const content = await bucket.fastGet({ path: 'file.txt' }); // May throw!
```

### 💡 Best Practices

1. **Always use strict mode** for critical operations to catch errors early
2. **Check existence first** with `fastExists()`, `bucketExists()`, etc. before operations
3. **Implement proper error handling** for network and permission issues
4. **Use streaming** for large files (>100MB) to optimize memory usage
5. **Leverage metadata** for organizing and searching files
6. **Enable trash mode** for important data to prevent accidental loss
7. **Lock files** during critical operations to prevent race conditions
8. **Use async generators** for listing large buckets to avoid memory issues
9. **Set explicit overwrite flags** to prevent accidental file overwrites
10. **Clean up resources** properly when done

### 📊 Performance Tips

- **Listing**: Use async generators or cursors for buckets with >10,000 objects
- **Uploads**: Use streams for files >100MB
- **Downloads**: Use streams for files you'll process incrementally
- **Metadata**: Cache metadata when reading frequently
- **Locking**: Keep lock durations as short as possible
- **Glob patterns**: Be specific to reduce objects scanned

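The listing tips above come down to one pattern: wrap a paginated list call in an async generator so only one page is held in memory at a time and consumers can stop early. A self-contained sketch of that pattern — `fetchPage` is a stand-in for a real paginated call such as S3's `ListObjectsV2`, not a SmartBucket API:

```typescript
type Page = { keys: string[]; nextToken?: string };

// Stand-in data and page fetcher for a paginated list API.
const allKeys = Array.from({ length: 10 }, (_, i) => `obj-${i}`);

async function fetchPage(token?: string, pageSize = 4): Promise<Page> {
  const start = token ? parseInt(token, 10) : 0;
  const keys = allKeys.slice(start, start + pageSize);
  const next = start + pageSize;
  return { keys, nextToken: next < allKeys.length ? String(next) : undefined };
}

// Async generator: yields one key at a time while holding only one page in memory.
async function* listAll(): AsyncGenerator<string> {
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    for (const key of page.keys) {
      yield key;
    }
    token = page.nextToken;
  } while (token !== undefined);
}
```

Breaking out of a `for await...of` loop over `listAll()` stops further page fetches, which is what makes early termination cheap regardless of bucket size.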
## License and Legal Information

This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the [license](license) file within this repository.

**Please note:** The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.

@@ -475,9 +872,9 @@ This project is owned and maintained by Task Venture Capital GmbH. The names and

### Company Information

Task Venture Capital GmbH
Registered at District court Bremen HRB 35230 HB, Germany

For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.

By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.

test/test.listing.node+deno.ts (new file, 298 lines)
@@ -0,0 +1,298 @@
// test.listing.node+deno.ts - Tests for memory-efficient listing methods

import { tap, expect } from '@git.zone/tstest/tapbundle';
import * as smartbucket from '../ts/index.js';

// Get test configuration
import * as qenv from '@push.rocks/qenv';
const testQenv = new qenv.Qenv('./', './.nogit/');

// Test bucket reference
let testBucket: smartbucket.Bucket;
let testSmartbucket: smartbucket.SmartBucket;

// Setup: Create test bucket and populate with test data
tap.test('should create valid smartbucket and bucket', async () => {
  testSmartbucket = new smartbucket.SmartBucket({
    accessKey: await testQenv.getEnvVarOnDemand('S3_ACCESSKEY'),
    accessSecret: await testQenv.getEnvVarOnDemand('S3_SECRETKEY'),
    endpoint: await testQenv.getEnvVarOnDemand('S3_ENDPOINT'),
    port: parseInt(await testQenv.getEnvVarOnDemand('S3_PORT')),
    useSsl: false,
  });

  testBucket = await smartbucket.Bucket.getBucketByName(
    testSmartbucket,
    await testQenv.getEnvVarOnDemand('S3_BUCKET')
  );
  expect(testBucket).toBeInstanceOf(smartbucket.Bucket);
});

tap.test('should clean bucket and create test data for listing tests', async () => {
  // Clean bucket first
  await testBucket.cleanAllContents();

  // Create test structure:
  // npm/packages/foo/index.json
  // npm/packages/foo/1.0.0.tgz
  // npm/packages/bar/index.json
  // npm/packages/bar/2.0.0.tgz
  // oci/blobs/sha256-abc.tar
  // oci/blobs/sha256-def.tar
  // oci/manifests/latest.json
  // docs/readme.md
  // docs/api.md

  const testFiles = [
    'npm/packages/foo/index.json',
    'npm/packages/foo/1.0.0.tgz',
    'npm/packages/bar/index.json',
    'npm/packages/bar/2.0.0.tgz',
    'oci/blobs/sha256-abc.tar',
    'oci/blobs/sha256-def.tar',
    'oci/manifests/latest.json',
    'docs/readme.md',
    'docs/api.md',
  ];

  for (const filePath of testFiles) {
    await testBucket.fastPut({
      path: filePath,
      contents: `test content for ${filePath}`,
    });
  }
});

// ==========================
// Async Generator Tests
// ==========================

tap.test('listAllObjects should iterate all objects with prefix', async () => {
  const keys: string[] = [];
  for await (const key of testBucket.listAllObjects('npm/')) {
    keys.push(key);
  }

  expect(keys.length).toEqual(4);
  expect(keys).toContain('npm/packages/foo/index.json');
  expect(keys).toContain('npm/packages/bar/2.0.0.tgz');
});

tap.test('listAllObjects should support early termination', async () => {
  let count = 0;
  for await (const key of testBucket.listAllObjects('')) {
    count++;
    if (count >= 3) break; // Early exit
  }

  expect(count).toEqual(3);
});

tap.test('listAllObjects without prefix should list all objects', async () => {
  const keys: string[] = [];
  for await (const key of testBucket.listAllObjects()) {
    keys.push(key);
  }

  expect(keys.length).toBeGreaterThanOrEqual(9);
});

// ==========================
// Observable Tests
// ==========================

tap.test('listAllObjectsObservable should emit all objects', async () => {
  const keys: string[] = [];

  await new Promise<void>((resolve, reject) => {
    testBucket.listAllObjectsObservable('oci/')
      .subscribe({
        next: (key) => keys.push(key),
        error: (err) => reject(err),
        complete: () => resolve(),
      });
  });

  expect(keys.length).toEqual(3);
  expect(keys).toContain('oci/blobs/sha256-abc.tar');
  expect(keys).toContain('oci/manifests/latest.json');
});

tap.test('listAllObjectsObservable should support RxJS operators', async () => {
  const jsonFiles: string[] = [];

  await new Promise<void>((resolve, reject) => {
    testBucket.listAllObjectsObservable('npm/')
      .subscribe({
        next: (key: string) => {
          if (key.endsWith('.json')) {
            jsonFiles.push(key);
          }
        },
        error: (err: any) => reject(err),
        complete: () => resolve(),
      });
  });

  expect(jsonFiles.length).toEqual(2);
  expect(jsonFiles.every((k) => k.endsWith('.json'))).toBeTrue();
});

// ==========================
// Cursor Tests
// ==========================

tap.test('createCursor should allow manual pagination', async () => {
  const cursor = testBucket.createCursor('npm/', { pageSize: 2 });

  // First page
  const page1 = await cursor.next();
  expect(page1.keys.length).toEqual(2);
  expect(page1.done).toBeFalse();

  // Second page
  const page2 = await cursor.next();
  expect(page2.keys.length).toEqual(2);
  expect(page2.done).toBeTrue();
});

tap.test('cursor.hasMore() should accurately track state', async () => {
  const cursor = testBucket.createCursor('docs/', { pageSize: 10 });

  expect(cursor.hasMore()).toBeTrue();

  await cursor.next(); // Should get all docs files

  expect(cursor.hasMore()).toBeFalse();
});

tap.test('cursor.reset() should allow re-iteration', async () => {
  const cursor = testBucket.createCursor('docs/');

  const firstRun = await cursor.next();
  expect(firstRun.keys.length).toBeGreaterThan(0);

  cursor.reset();
  expect(cursor.hasMore()).toBeTrue();

  const secondRun = await cursor.next();
  expect(secondRun.keys).toEqual(firstRun.keys);
});

tap.test('cursor should support save/restore with token', async () => {
  const cursor1 = testBucket.createCursor('npm/', { pageSize: 2 });

  await cursor1.next(); // Advance cursor
  const token = cursor1.getToken();
  expect(token).toBeDefined();

  // Create new cursor and restore state
  const cursor2 = testBucket.createCursor('npm/', { pageSize: 2 });
  cursor2.setToken(token);

  const page = await cursor2.next();
  expect(page.keys.length).toBeGreaterThan(0);
});

// ==========================
// findByGlob Tests
// ==========================

tap.test('findByGlob should match simple patterns', async () => {
  const matches: string[] = [];
  for await (const key of testBucket.findByGlob('**/*.json')) {
    matches.push(key);
  }

  expect(matches.length).toEqual(3); // foo/index.json, bar/index.json, latest.json
  expect(matches.every((k) => k.endsWith('.json'))).toBeTrue();
});

tap.test('findByGlob should match specific path patterns', async () => {
  const matches: string[] = [];
  for await (const key of testBucket.findByGlob('npm/packages/*/index.json')) {
    matches.push(key);
  }

  expect(matches.length).toEqual(2);
  expect(matches).toContain('npm/packages/foo/index.json');
  expect(matches).toContain('npm/packages/bar/index.json');
});

tap.test('findByGlob should match wildcard patterns', async () => {
  const matches: string[] = [];
  for await (const key of testBucket.findByGlob('oci/blobs/*')) {
    matches.push(key);
  }

  expect(matches.length).toEqual(2);
  expect(matches.every((k) => k.startsWith('oci/blobs/'))).toBeTrue();
});

// ==========================
// listAllObjectsArray Tests
// ==========================

tap.test('listAllObjectsArray should collect all keys into array', async () => {
  const keys = await testBucket.listAllObjectsArray('docs/');

  expect(Array.isArray(keys)).toBeTrue();
  expect(keys.length).toEqual(2);
  expect(keys).toContain('docs/readme.md');
  expect(keys).toContain('docs/api.md');
});

tap.test('listAllObjectsArray without prefix should return all objects', async () => {
  const keys = await testBucket.listAllObjectsArray();

  expect(keys.length).toBeGreaterThanOrEqual(9);
});

// ==========================
// Performance/Edge Case Tests
// ==========================

tap.test('should handle empty prefix results gracefully', async () => {
  const keys: string[] = [];
  for await (const key of testBucket.listAllObjects('nonexistent/')) {
    keys.push(key);
  }

  expect(keys.length).toEqual(0);
});

tap.test('cursor should handle empty results', async () => {
  const cursor = testBucket.createCursor('nonexistent/');
  const result = await cursor.next();

  expect(result.keys.length).toEqual(0);
  expect(result.done).toBeTrue();
  expect(cursor.hasMore()).toBeFalse();
});

tap.test('observable should complete immediately on empty results', async () => {
  let completed = false;
  let count = 0;

  await new Promise<void>((resolve, reject) => {
    testBucket.listAllObjectsObservable('nonexistent/')
      .subscribe({
        next: () => count++,
        error: (err) => reject(err),
        complete: () => {
          completed = true;
          resolve();
        },
      });
  });

  expect(count).toEqual(0);
  expect(completed).toBeTrue();
});

// Cleanup
tap.test('should clean up test data', async () => {
  await testBucket.cleanAllContents();
});

export default tap.start();

test/test.local.node+deno.ts (new file, 76 lines)
@@ -0,0 +1,76 @@
|
|||||||
|
import { expect, tap } from '@git.zone/tstest/tapbundle';
|
||||||
|
|
||||||
|
import * as plugins from '../ts/plugins.js';
|
||||||
|
import * as smartbucket from '../ts/index.js';
|
||||||
|
|
||||||
|
class FakeS3Client {
|
||||||
|
  private callIndex = 0;

  constructor(private readonly pages: Array<Partial<plugins.s3.ListObjectsV2Output>>) {}

  public async send(_command: any) {
    const page = this.pages[this.callIndex] || { Contents: [], CommonPrefixes: [], IsTruncated: false };
    this.callIndex += 1;
    return page;
  }
}

tap.test('MetaData.hasMetaData should return false when metadata file does not exist', async () => {
  const fakeFile = {
    name: 'file.txt',
    parentDirectoryRef: {
      async getFile() {
        throw new Error(`File not found at path 'file.txt.metadata'`);
      },
    },
  } as unknown as smartbucket.File;

  const hasMetaData = await smartbucket.MetaData.hasMetaData({ file: fakeFile });
  expect(hasMetaData).toBeFalse();
});

tap.test('getSubDirectoryByName should create correct parent chain for new nested directories', async () => {
  const fakeSmartbucket = { s3Client: new FakeS3Client([{ Contents: [], CommonPrefixes: [] }]) } as unknown as smartbucket.SmartBucket;
  const bucket = new smartbucket.Bucket(fakeSmartbucket, 'test-bucket');
  const baseDirectory = new smartbucket.Directory(bucket, null as any, '');

  const nestedDirectory = await baseDirectory.getSubDirectoryByName('level1/level2', { getEmptyDirectory: true });

  expect(nestedDirectory.name).toEqual('level2');
  expect(nestedDirectory.parentDirectoryRef.name).toEqual('level1');
  expect(nestedDirectory.getBasePath()).toEqual('level1/level2/');
});

tap.test('listFiles should aggregate results across paginated ListObjectsV2 responses', async () => {
  const firstPage = {
    Contents: Array.from({ length: 1000 }, (_, index) => ({ Key: `file-${index}` })),
    IsTruncated: true,
    NextContinuationToken: 'token-1',
  };
  const secondPage = {
    Contents: Array.from({ length: 200 }, (_, index) => ({ Key: `file-${1000 + index}` })),
    IsTruncated: false,
  };
  const fakeSmartbucket = { s3Client: new FakeS3Client([firstPage, secondPage]) } as unknown as smartbucket.SmartBucket;
  const bucket = new smartbucket.Bucket(fakeSmartbucket, 'test-bucket');
  const baseDirectory = new smartbucket.Directory(bucket, null as any, '');

  const files = await baseDirectory.listFiles();
  expect(files.length).toEqual(1200);
});

tap.test('listDirectories should aggregate CommonPrefixes across pagination', async () => {
  const fakeSmartbucket = {
    s3Client: new FakeS3Client([
      { CommonPrefixes: [{ Prefix: 'dirA/' }], IsTruncated: true, NextContinuationToken: 'token-1' },
      { CommonPrefixes: [{ Prefix: 'dirB/' }], IsTruncated: false },
    ]),
  } as unknown as smartbucket.SmartBucket;
  const bucket = new smartbucket.Bucket(fakeSmartbucket, 'test-bucket');
  const baseDirectory = new smartbucket.Directory(bucket, null as any, '');

  const directories = await baseDirectory.listDirectories();
  expect(directories.map((d) => d.name)).toEqual(['dirA', 'dirB']);
});

export default tap.start();
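The fake-client pattern above can be reduced to a standalone sketch. The names here (`Page`, `FakePager`, `collectKeys`) are illustrative, not part of smartbucket: a stub replays canned `ListObjectsV2`-shaped pages in order, and the consumer loops on `IsTruncated`/`NextContinuationToken` the same way the listing code under test does.

```typescript
// A stub client that replays pre-canned ListObjectsV2-style pages in order.
type Page = { Contents?: { Key?: string }[]; IsTruncated?: boolean; NextContinuationToken?: string };

class FakePager {
  private callIndex = 0;
  constructor(private readonly pages: Page[]) {}
  async send(_command: unknown): Promise<Page> {
    const page = this.pages[this.callIndex] || { Contents: [], IsTruncated: false };
    this.callIndex += 1;
    return page;
  }
}

// Aggregate keys the way a paginated listing would.
async function collectKeys(pager: FakePager): Promise<string[]> {
  const keys: string[] = [];
  let token: string | undefined;
  do {
    const page = await pager.send({ ContinuationToken: token });
    for (const obj of page.Contents || []) {
      if (obj.Key) keys.push(obj.Key);
    }
    token = page.IsTruncated ? page.NextContinuationToken : undefined;
  } while (token);
  return keys;
}

const pager = new FakePager([
  { Contents: [{ Key: 'a' }, { Key: 'b' }], IsTruncated: true, NextContinuationToken: 't1' },
  { Contents: [{ Key: 'c' }], IsTruncated: false },
]);
collectKeys(pager).then((keys) => console.log(keys.join(','))); // prints "a,b,c"
```

Because the fake owns page sequencing, a test can assert exact aggregate counts (as the 1000 + 200 = 1200 case above does) without touching real storage.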
@@ -12,13 +12,16 @@ let baseDirectory: smartbucket.Directory;

 tap.test('should create a valid smartbucket', async () => {
   testSmartbucket = new smartbucket.SmartBucket({
     accessKey: await testQenv.getEnvVarOnDemandStrict('S3_ACCESSKEY'),
-    accessSecret: await testQenv.getEnvVarOnDemandStrict('S3_ACCESSSECRET'),
+    accessSecret: await testQenv.getEnvVarOnDemandStrict('S3_SECRETKEY'),
     endpoint: await testQenv.getEnvVarOnDemandStrict('S3_ENDPOINT'),
+    port: parseInt(await testQenv.getEnvVarOnDemandStrict('S3_PORT')),
+    useSsl: false,
   });
   expect(testSmartbucket).toBeInstanceOf(smartbucket.SmartBucket);
-  myBucket = await testSmartbucket.getBucketByName(await testQenv.getEnvVarOnDemandStrict('S3_BUCKET'),);
+  const bucketName = await testQenv.getEnvVarOnDemandStrict('S3_BUCKET');
+  myBucket = await testSmartbucket.getBucketByName(bucketName);
   expect(myBucket).toBeInstanceOf(smartbucket.Bucket);
-  expect(myBucket.name).toEqual('test-pushrocks-smartbucket');
+  expect(myBucket.name).toEqual(bucketName);
 });

 tap.test('should clean all contents', async () => {
@@ -13,13 +13,15 @@ let baseDirectory: smartbucket.Directory;

 tap.test('should create a valid smartbucket', async () => {
   testSmartbucket = new smartbucket.SmartBucket({
     accessKey: await testQenv.getEnvVarOnDemandStrict('S3_ACCESSKEY'),
-    accessSecret: await testQenv.getEnvVarOnDemandStrict('S3_ACCESSSECRET'),
+    accessSecret: await testQenv.getEnvVarOnDemandStrict('S3_SECRETKEY'),
     endpoint: await testQenv.getEnvVarOnDemandStrict('S3_ENDPOINT'),
+    port: parseInt(await testQenv.getEnvVarOnDemandStrict('S3_PORT')),
+    useSsl: false,
   });
   expect(testSmartbucket).toBeInstanceOf(smartbucket.SmartBucket);
-  myBucket = await testSmartbucket.getBucketByName(await testQenv.getEnvVarOnDemandStrict('S3_BUCKET'),);
+  const bucketName = await testQenv.getEnvVarOnDemandStrict('S3_BUCKET');
+  myBucket = await testSmartbucket.getBucketByName(bucketName);
   expect(myBucket).toBeInstanceOf(smartbucket.Bucket);
-  expect(myBucket.name).toEqual('test-pushrocks-smartbucket');
 });

 tap.test('should clean all contents', async () => {
@@ -3,6 +3,6 @@
 */
 export const commitinfo = {
   name: '@push.rocks/smartbucket',
-  version: '4.0.1',
+  version: '4.3.0',
   description: 'A TypeScript library providing a cloud-agnostic interface for managing object storage with functionalities like bucket management, file and directory operations, and advanced features such as metadata handling and file locking.'
 }
@@ -7,6 +7,7 @@ import { SmartBucket } from './classes.smartbucket.js';
 import { Directory } from './classes.directory.js';
 import { File } from './classes.file.js';
 import { Trash } from './classes.trash.js';
+import { ListCursor, type IListCursorOptions } from './classes.listcursor.js';

 /**
  * The bucket class exposes the basic functionality of a bucket.
@@ -469,6 +470,145 @@ export class Bucket {
     }
   }

+  // ==========================================
+  // Memory-Efficient Listing Methods (Phase 1)
+  // ==========================================
+
+  /**
+   * List all objects with a given prefix using an async generator (memory-efficient streaming)
+   * @param prefix - Optional prefix to filter objects (default: '' for all objects)
+   * @yields Object keys one at a time
+   * @example
+   * ```ts
+   * for await (const key of bucket.listAllObjects('npm/')) {
+   *   console.log(key);
+   *   if (shouldStop) break; // Early exit supported
+   * }
+   * ```
+   */
+  public async *listAllObjects(prefix: string = ''): AsyncIterableIterator<string> {
+    let continuationToken: string | undefined;
+
+    do {
+      const command = new plugins.s3.ListObjectsV2Command({
+        Bucket: this.name,
+        Prefix: prefix,
+        ContinuationToken: continuationToken,
+      });
+
+      const response = await this.smartbucketRef.s3Client.send(command);
+
+      for (const obj of response.Contents || []) {
+        if (obj.Key) yield obj.Key;
+      }
+
+      continuationToken = response.NextContinuationToken;
+    } while (continuationToken);
+  }
+
+  /**
+   * List all objects as an RxJS Observable (for complex reactive pipelines)
+   * @param prefix - Optional prefix to filter objects (default: '' for all objects)
+   * @returns Observable that emits object keys
+   * @example
+   * ```ts
+   * bucket.listAllObjectsObservable('npm/')
+   *   .pipe(
+   *     filter(key => key.endsWith('.json')),
+   *     take(100)
+   *   )
+   *   .subscribe(key => console.log(key));
+   * ```
+   */
+  public listAllObjectsObservable(prefix: string = ''): plugins.smartrx.rxjs.Observable<string> {
+    return new plugins.smartrx.rxjs.Observable<string>((subscriber) => {
+      const fetchPage = async (token?: string) => {
+        try {
+          const command = new plugins.s3.ListObjectsV2Command({
+            Bucket: this.name,
+            Prefix: prefix,
+            ContinuationToken: token,
+          });
+
+          const response = await this.smartbucketRef.s3Client.send(command);
+
+          for (const obj of response.Contents || []) {
+            if (obj.Key) subscriber.next(obj.Key);
+          }
+
+          if (response.NextContinuationToken) {
+            await fetchPage(response.NextContinuationToken);
+          } else {
+            subscriber.complete();
+          }
+        } catch (error) {
+          subscriber.error(error);
+        }
+      };
+
+      fetchPage();
+    });
+  }
+
+  /**
+   * Create a cursor for manual pagination control
+   * @param prefix - Optional prefix to filter objects (default: '' for all objects)
+   * @param options - Cursor options (pageSize, etc.)
+   * @returns ListCursor instance
+   * @example
+   * ```ts
+   * const cursor = bucket.createCursor('npm/', { pageSize: 500 });
+   * while (cursor.hasMore()) {
+   *   const { keys, done } = await cursor.next();
+   *   console.log(`Processing ${keys.length} keys...`);
+   * }
+   * ```
+   */
+  public createCursor(prefix: string = '', options?: IListCursorOptions): ListCursor {
+    return new ListCursor(this, prefix, options);
+  }
+
+  // ==========================================
+  // High-Level Listing Helpers (Phase 2)
+  // ==========================================
+
+  /**
+   * Find objects matching a glob pattern (memory-efficient)
+   * @param pattern - Glob pattern (e.g., "**\/*.json", "npm/packages/*\/index.json")
+   * @yields Matching object keys
+   * @example
+   * ```ts
+   * for await (const key of bucket.findByGlob('npm/packages/*\/index.json')) {
+   *   console.log('Found package index:', key);
+   * }
+   * ```
+   */
+  public async *findByGlob(pattern: string): AsyncIterableIterator<string> {
+    const matcher = new plugins.Minimatch(pattern);
+    for await (const key of this.listAllObjects('')) {
+      if (matcher.match(key)) yield key;
+    }
+  }
+
+  /**
+   * List all objects and collect into an array (convenience method)
+   * WARNING: Loads the entire result set into memory. Use the listAllObjects() generator for large buckets.
+   * @param prefix - Optional prefix to filter objects (default: '' for all objects)
+   * @returns Array of all object keys
+   * @example
+   * ```ts
+   * const allKeys = await bucket.listAllObjectsArray('npm/');
+   * console.log(`Found ${allKeys.length} objects`);
+   * ```
+   */
+  public async listAllObjectsArray(prefix: string = ''): Promise<string[]> {
+    const keys: string[] = [];
+    for await (const key of this.listAllObjects(prefix)) {
+      keys.push(key);
+    }
+    return keys;
+  }
+
   public async cleanAllContents(): Promise<void> {
     try {
       // Define the command type explicitly
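The continuation-token loop behind `listAllObjects` can be sketched independently of the AWS SDK. Here `fetchPage` and `streamKeys` are hypothetical stand-ins for `ListObjectsV2Command` and the generator method, showing the two properties the changelog claims: pages are fetched lazily, and `break` in the consumer stops further fetching.

```typescript
// Standalone sketch of the continuation-token loop: fetchPage stands in
// for a ListObjectsV2 call, returning one page plus an optional token.
type ListPage = { keys: string[]; nextToken?: string };

async function* streamKeys(
  fetchPage: (token?: string) => Promise<ListPage>
): AsyncIterableIterator<string> {
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    yield* page.keys; // emit keys one at a time; nothing is buffered
    token = page.nextToken;
  } while (token);
}

// Two fake pages; the consumer stops after three keys, so the generator
// is simply abandoned mid-iteration (early exit, like `break` above).
const pages: ListPage[] = [
  { keys: ['a/1', 'a/2'], nextToken: 't1' },
  { keys: ['a/3', 'a/4'] },
];
(async () => {
  const seen: string[] = [];
  for await (const key of streamKeys(async (token) => (token ? pages[1] : pages[0]))) {
    seen.push(key);
    if (seen.length === 3) break;
  }
  console.log(seen.join(',')); // prints "a/1,a/2,a/3"
})();
```

The same loop shape underlies `listAllObjectsArray`, which just pushes every yielded key into an array instead of exposing the iterator.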
@@ -120,19 +120,44 @@ export class Directory {
     return directories.some(dir => dir.name === dirNameArg);
   }

+  /**
+   * Collects all ListObjectsV2 pages for a prefix.
+   */
+  private async listObjectsV2AllPages(prefix: string, delimiter?: string) {
+    const allContents: plugins.s3._Object[] = [];
+    const allCommonPrefixes: plugins.s3.CommonPrefix[] = [];
+    let continuationToken: string | undefined;
+
+    do {
+      const command = new plugins.s3.ListObjectsV2Command({
+        Bucket: this.bucketRef.name,
+        Prefix: prefix,
+        Delimiter: delimiter,
+        ContinuationToken: continuationToken,
+      });
+      const response = await this.bucketRef.smartbucketRef.s3Client.send(command);
+
+      if (response.Contents) {
+        allContents.push(...response.Contents);
+      }
+      if (response.CommonPrefixes) {
+        allCommonPrefixes.push(...response.CommonPrefixes);
+      }
+
+      continuationToken = response.IsTruncated ? response.NextContinuationToken : undefined;
+    } while (continuationToken);
+
+    return { contents: allContents, commonPrefixes: allCommonPrefixes };
+  }
+
   /**
    * lists all files
    */
   public async listFiles(): Promise<File[]> {
-    const command = new plugins.s3.ListObjectsV2Command({
-      Bucket: this.bucketRef.name,
-      Prefix: this.getBasePath(),
-      Delimiter: '/',
-    });
-    const response = await this.bucketRef.smartbucketRef.s3Client.send(command);
+    const { contents } = await this.listObjectsV2AllPages(this.getBasePath(), '/');
     const fileArray: File[] = [];

-    response.Contents?.forEach((item) => {
+    contents.forEach((item) => {
       if (item.Key && !item.Key.endsWith('/')) {
         const subtractedPath = item.Key.replace(this.getBasePath(), '');
         if (!subtractedPath.includes('/')) {
@@ -154,16 +179,11 @@ export class Directory {
    */
   public async listDirectories(): Promise<Directory[]> {
     try {
-      const command = new plugins.s3.ListObjectsV2Command({
-        Bucket: this.bucketRef.name,
-        Prefix: this.getBasePath(),
-        Delimiter: '/',
-      });
-      const response = await this.bucketRef.smartbucketRef.s3Client.send(command);
+      const { commonPrefixes } = await this.listObjectsV2AllPages(this.getBasePath(), '/');
       const directoryArray: Directory[] = [];

-      if (response.CommonPrefixes) {
-        response.CommonPrefixes.forEach((item) => {
+      if (commonPrefixes) {
+        commonPrefixes.forEach((item) => {
           if (item.Prefix) {
             const subtractedPath = item.Prefix.replace(this.getBasePath(), '');
             if (subtractedPath.endsWith('/')) {
@@ -235,7 +255,7 @@ export class Directory {
       return returnDirectory;
     }
     if (optionsArg.getEmptyDirectory || optionsArg.createWithInitializerFile) {
-      returnDirectory = new Directory(this.bucketRef, this, dirNameToSearch);
+      returnDirectory = new Directory(this.bucketRef, directoryArg, dirNameToSearch);
     }
     if (isFinalDirectory && optionsArg.createWithInitializerFile) {
       returnDirectory?.createEmptyFile('00init.txt');
ts/classes.listcursor.ts (new file, 89 lines)
@@ -0,0 +1,89 @@
// classes.listcursor.ts

import * as plugins from './plugins.js';
import type { Bucket } from './classes.bucket.js';

export interface IListCursorOptions {
  pageSize?: number;
}

export interface IListCursorResult {
  keys: string[];
  done: boolean;
}

/**
 * ListCursor provides explicit pagination control for listing objects in a bucket.
 * Useful for UI pagination, resumable operations, and manual batch processing.
 */
export class ListCursor {
  private continuationToken?: string;
  private exhausted = false;
  private pageSize: number;

  constructor(
    private bucket: Bucket,
    private prefix: string,
    options: IListCursorOptions = {}
  ) {
    this.pageSize = options.pageSize || 1000;
  }

  /**
   * Fetch the next page of object keys
   * @returns Object with keys array and done flag
   */
  public async next(): Promise<IListCursorResult> {
    if (this.exhausted) {
      return { keys: [], done: true };
    }

    const command = new plugins.s3.ListObjectsV2Command({
      Bucket: this.bucket.name,
      Prefix: this.prefix,
      MaxKeys: this.pageSize,
      ContinuationToken: this.continuationToken,
    });

    const response = await this.bucket.smartbucketRef.s3Client.send(command);

    const keys = (response.Contents || [])
      .map((obj) => obj.Key)
      .filter((key): key is string => !!key);

    this.continuationToken = response.NextContinuationToken;
    this.exhausted = !this.continuationToken;

    return { keys, done: this.exhausted };
  }

  /**
   * Check if there are more pages to fetch
   */
  public hasMore(): boolean {
    return !this.exhausted;
  }

  /**
   * Reset the cursor to start from the beginning
   */
  public reset(): void {
    this.continuationToken = undefined;
    this.exhausted = false;
  }

  /**
   * Get the current continuation token (for saving/restoring state)
   */
  public getToken(): string | undefined {
    return this.continuationToken;
  }

  /**
   * Set the continuation token (for resuming from a saved state)
   */
  public setToken(token: string | undefined): void {
    this.continuationToken = token;
    this.exhausted = !token;
  }
}
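To illustrate the cursor contract (`next()`, `hasMore()`, `getToken()`/`setToken()`) without S3, here is a hypothetical in-memory cursor; `InMemoryCursor` is not part of smartbucket, but the token round-trip mirrors how a saved continuation token lets a second cursor resume where the first stopped.

```typescript
// Illustrative in-memory cursor: the "continuation token" is just a
// stringified offset into a key array, paged two keys at a time.
class InMemoryCursor {
  private offset = 0;
  private exhausted = false;
  constructor(private keys: string[], private pageSize = 2) {}

  async next(): Promise<{ keys: string[]; done: boolean }> {
    if (this.exhausted) return { keys: [], done: true };
    const page = this.keys.slice(this.offset, this.offset + this.pageSize);
    this.offset += page.length;
    this.exhausted = this.offset >= this.keys.length;
    return { keys: page, done: this.exhausted };
  }

  hasMore(): boolean { return !this.exhausted; }
  getToken(): string | undefined { return this.exhausted ? undefined : String(this.offset); }
  setToken(token: string | undefined): void {
    this.offset = token ? parseInt(token, 10) : 0;
    this.exhausted = this.offset >= this.keys.length;
  }
}

(async () => {
  const cursor = new InMemoryCursor(['a', 'b', 'c', 'd', 'e']);
  await cursor.next();             // page 1: ['a', 'b']
  const saved = cursor.getToken(); // "2" — persist this anywhere

  const resumed = new InMemoryCursor(['a', 'b', 'c', 'd', 'e']);
  resumed.setToken(saved);         // resume where the first cursor left off
  const { keys } = await resumed.next();
  console.log(keys.join(','));     // prints "c,d"
})();
```

This is the design motivation for exposing `getToken()`/`setToken()` on `ListCursor`: pagination state becomes a serializable string rather than live in-process iterator state.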
@@ -4,11 +4,23 @@ import { File } from './classes.file.js';

 export class MetaData {
   public static async hasMetaData(optionsArg: { file: File }) {
-    // lets find the existing metadata file
-    const existingFile = await optionsArg.file.parentDirectoryRef.getFile({
-      path: optionsArg.file.name + '.metadata',
-    });
-    return !!existingFile;
+    // try finding the existing metadata file; return false if it doesn't exist
+    try {
+      const existingFile = await optionsArg.file.parentDirectoryRef.getFile({
+        path: optionsArg.file.name + '.metadata',
+      });
+      return !!existingFile;
+    } catch (error: any) {
+      const message = error?.message || '';
+      const isNotFound =
+        message.includes('File not found') ||
+        error?.name === 'NotFound' ||
+        error?.$metadata?.httpStatusCode === 404;
+      if (isNotFound) {
+        return false;
+      }
+      throw error;
+    }
   }

   // static
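The not-found classification in that catch block can be isolated as a small predicate. `isNotFoundError` and `MaybeS3Error` are illustrative names, not smartbucket APIs; the matched shapes (a "File not found" message, a `NotFound` error name, an HTTP 404 in `$metadata`) follow the change above, and anything else is treated as a real failure to rethrow.

```typescript
// Classify an error as "object absent" vs. a genuine failure.
interface MaybeS3Error {
  message?: string;
  name?: string;
  $metadata?: { httpStatusCode?: number };
}

function isNotFoundError(error: MaybeS3Error): boolean {
  return (
    (error.message || '').includes('File not found') ||
    error.name === 'NotFound' ||
    error.$metadata?.httpStatusCode === 404
  );
}

console.log(isNotFoundError({ message: "File not found at path 'x.metadata'" })); // prints "true"
console.log(isNotFoundError({ $metadata: { httpStatusCode: 404 } }));             // prints "true"
console.log(isNotFoundError({ message: 'Access Denied', name: 'AccessDenied' })); // prints "false"
```

Swallowing only classified not-found errors is what makes `hasMetaData` safe: auth failures and network faults still propagate instead of silently reading as "no metadata".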
@@ -2,3 +2,6 @@ export * from './classes.smartbucket.js';
 export * from './classes.bucket.js';
 export * from './classes.directory.js';
 export * from './classes.file.js';
+export * from './classes.listcursor.js';
+export * from './classes.metadata.js';
+export * from './classes.trash.js';
@@ -26,7 +26,9 @@ export {

 // third party scope
 import * as s3 from '@aws-sdk/client-s3';
+import { Minimatch } from 'minimatch';

 export {
   s3,
+  Minimatch,
 }