feat(rust-provider): Add Rust-backed provider with XFS-safe durability via IPC bridge, TypeScript provider, tests and docs
# @push.rocks/smartfs

Modern, pluggable filesystem module with fluent API, Web Streams, Rust-powered durability, and multiple storage backends.

## Issue Reporting and Security

For reporting bugs, issues, or security vulnerabilities, please visit [community

## Features
- 🎯 **Fluent API** — Action-last chainable interface for elegant, readable code
- 🔌 **Pluggable Providers** — Swap backends (Node.js fs, in-memory, Rust) without changing a line of application code
- 🦀 **Rust Provider** — XFS-safe `fsync` durability, cross-compiled binary via IPC for production-grade reliability
- 🌊 **Web Streams** — True chunked streaming with the Web Streams API (including over IPC for the Rust provider)
- 💾 **Transactions** — Atomic multi-file operations with automatic rollback on failure
- 👀 **File Watching** — Event-based filesystem monitoring with debounce, filters, and recursive watching
- 🔐 **Tree Hashing** — Deterministic SHA-256 directory hashing for cache-busting and change detection
- ⚡ **Async-Only** — Modern `async`/`await` patterns throughout — no sync footguns
- 🎨 **TypeScript-First** — Full type safety, IntelliSense, and exported interfaces

## Installation

`pnpm add @push.rocks/smartfs`

```typescript
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';

// Create a SmartFS instance with the Node.js provider
const fs = new SmartFs(new SmartFsProviderNode());

// Write a file
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .write('Hello, World!');

// Read it back
const content = await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .read();

console.log(content); // "Hello, World!"
```

## API Overview

### 📄 File Operations

The fluent API uses an **action-last pattern** — configure first, then execute:

```typescript
// Read
const content = await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .read();

// Write
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .mode(0o644)
  .write('content');

// Atomic write (write to temp file, then rename — crash-safe)
await fs.file('/path/to/file.txt')
  .atomic()
  .write('content');

// Append
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .append('more content');

// Copy with preserved timestamps
await fs.file('/source.txt')
  .preserveTimestamps()
  .copy('/destination.txt');

// Move / rename
await fs.file('/old.txt').move('/new.txt');

// Delete
await fs.file('/path/to/file.txt').delete();

// Existence check
const exists = await fs.file('/path/to/file.txt').exists();

// Stats (size, timestamps, permissions, etc.)
const stats = await fs.file('/path/to/file.txt').stat();
```

### 📂 Directory Operations

```typescript
// Create directory (recursive by default)
await fs.directory('/path/to/nested/dir').create();

// List contents
const entries = await fs.directory('/path/to/dir').list();

// List recursively with glob filter and stats
const tsFiles = await fs.directory('/src')
  .recursive()
  .filter('*.ts')
  .includeStats()
  .list();

// Filter with RegExp
const configs = await fs.directory('/project')
  .filter(/\.config\.(ts|js)$/)
  .list();

// Filter with function
const largeFiles = await fs.directory('/data')
  .includeStats()
  .filter(entry => entry.stats && entry.stats.size > 1024)
  .list();

// Delete directory recursively
await fs.directory('/path/to/dir').recursive().delete();

// Check existence
const exists = await fs.directory('/path/to/dir').exists();
```

Copy or move entire directory trees with fine-grained control:
```typescript
// Basic copy
await fs.directory('/source').copy('/destination');

// Basic move
await fs.directory('/old-location').move('/new-location');

// Copy with options
await fs.directory('/source')
  .preserveTimestamps(true) // Keep original timestamps
  .copy('/destination');

// Ignore filter for copy (copy everything regardless of list filter)
await fs.directory('/source')
  .filter('*.ts')
  .applyFilter(false)
  .copy('/destination');
```

**Configuration Options:**

| Method | Default | Description |
|--------|---------|-------------|
| `filter(pattern)` | none | Filter files by glob, regex, or function |
| `preserveTimestamps(bool)` | `false` | Preserve original file timestamps |
| `onConflict(mode)` | `'merge'` | `'merge'`, `'error'`, or `'replace'` |

### 🌊 Streaming Operations

SmartFS uses the **Web Streams API** for efficient, memory-friendly handling of large files. All providers — including the Rust provider over IPC — support true chunked streaming:

```typescript
// Read stream
const readStream = await fs.file('/large-file.bin')
  .chunkSize(64 * 1024) // 64 KB chunks
  .readStream();

const reader = readStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Process chunk (Uint8Array)
  console.log('Chunk size:', value.length);
}

// Write stream
const writeStream = await fs.file('/output.bin').writeStream();
const writer = writeStream.getWriter();
await writer.write(new Uint8Array([1, 2, 3]));
await writer.write(new Uint8Array([4, 5, 6]));
await writer.close();

// Pipe one stream to another
const input = await fs.file('/input.txt').readStream();
const output = await fs.file('/output.txt').writeStream();
await input.pipeTo(output);
```

### 💾 Transactions

Execute multiple file operations atomically with automatic rollback on failure:

```typescript
// Simple transaction — all-or-nothing
await fs.transaction()
  .file('/file1.txt').write('content 1')
  .file('/file2.txt').write('content 2')
  .commit();
```

### 👀 File Watching

Monitor filesystem changes with event-based watching:

```typescript
// Watch a single file
const watcher = await fs.watch('/path/to/file.txt')
  .onChange(event => console.log('Changed:', event.path))
  .start();

// Watch a directory recursively with filters and debounce
const dirWatcher = await fs.watch('/src')
  .recursive()
  .filter(/\.ts$/)
  .debounce(100) // ms
  .onChange(event => console.log('Changed:', event.path))
  .onAdd(event => console.log('Added:', event.path))
  .onDelete(event => console.log('Deleted:', event.path))
  .start();

// Watch with a function filter
const customWatcher = await fs.watch('/src')
  .recursive()
  .filter(path => path.endsWith('.ts') && !path.includes('test'))
  .onAll(event => console.log(`${event.type}: ${event.path}`))
  .start();

// Stop watching
await dirWatcher.stop();
```

### 🔐 Tree Hashing (Cache-Busting)

Compute a deterministic hash of all files in a directory — ideal for cache invalidation, change detection, and build triggers:

```typescript
// Hash all files in a directory recursively
const hash = await fs.directory('/assets')
  .recursive()
  .treeHash();
// → "a3f2b8c9d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1"

// Hash only specific file types
const cssHash = await fs.directory('/styles')
  .filter(/\.css$/)
  .recursive()
  .treeHash();

// Use a different algorithm
const sha512Hash = await fs.directory('/data')
  .recursive()
  .treeHash({ algorithm: 'sha512' });
```

**How it works:**

- Files are sorted by path for deterministic ordering
- Hashes relative path + file contents (streaming, memory-efficient)
- Does **not** include metadata (mtime/size) — pure content-based
- Same content always produces the same hash, regardless of timestamps

**Use cases:**

- 🚀 Cache-busting static assets
- 📦 Detecting when served files have changed
- 🔄 Incremental build triggers
- ✅ Content integrity verification
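
The scheme described above can be sketched in plain Node.js. This is a minimal illustration, not smartfs's actual implementation; the function name `treeHashSketch` and the exact bytes fed into the hash are assumptions (smartfs also streams file contents rather than buffering them, as done here for brevity):

```typescript
import { createHash } from 'node:crypto';
import { readdir, readFile } from 'node:fs/promises';
import { join, relative } from 'node:path';

async function treeHashSketch(root: string, algorithm = 'sha256'): Promise<string> {
  // Collect all regular files recursively
  const files: string[] = [];
  async function walk(dir: string): Promise<void> {
    for (const entry of await readdir(dir, { withFileTypes: true })) {
      const full = join(dir, entry.name);
      if (entry.isDirectory()) await walk(full);
      else if (entry.isFile()) files.push(full);
    }
  }
  await walk(root);

  // Sort relative paths for deterministic ordering
  const relPaths = files.map((f) => relative(root, f)).sort();

  // Hash relative path + contents; no mtime/size, so the result
  // depends only on content
  const hash = createHash(algorithm);
  for (const rel of relPaths) {
    hash.update(rel);
    hash.update(await readFile(join(root, rel)));
  }
  return hash.digest('hex');
}
```

Because metadata is excluded, touching a file without changing its bytes leaves the hash unchanged, which is exactly what you want for cache-busting.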

## Providers

SmartFS supports multiple storage backends through its provider architecture. Swap providers without changing any application code.

### 🟢 Node.js Provider

Uses Node.js `fs/promises` for local filesystem operations. The default choice for most applications:

```typescript
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';

const fs = new SmartFs(new SmartFsProviderNode());
```

| Capability | Status |
|---|---|
| File watching | ✅ |
| Atomic writes | ✅ |
| Transactions | ✅ |
| Streaming | ✅ |
| Symbolic links | ✅ |
| File permissions | ✅ |

### 🦀 Rust Provider

A high-durability provider powered by a cross-compiled Rust binary that communicates via JSON-over-IPC. The Rust provider adds **XFS-safe `fsync` guarantees** that the Node.js `fs` module cannot provide — after every metadata-changing operation (`write`, `rename`, `unlink`, `mkdir`), the parent directory is explicitly `fsync`'d to ensure durability on delayed-logging filesystems like XFS.

```typescript
import { SmartFs, SmartFsProviderRust } from '@push.rocks/smartfs';

const fs = new SmartFs(new SmartFsProviderRust());

// Use it exactly like any other provider
await fs.file('/data/important.json')
  .atomic()
  .write(JSON.stringify(data));

// Don't forget to shut down when done
const provider = fs.provider as SmartFsProviderRust;
await provider.shutdown();
```

| Capability | Status |
|---|---|
| File watching | ✅ (via `notify` crate) |
| Atomic writes | ✅ (with fsync + parent fsync) |
| Transactions | ✅ (with batch fsync) |
| Streaming | ✅ (chunked IPC) |
| Symbolic links | ✅ |
| File permissions | ✅ |

**Key advantages over the Node.js provider:**

- `fsync` on parent directories after all metadata changes (crash-safe on XFS)
- Atomic writes with `fsync` → `rename` → `fsync parent` sequence
- Batch `fsync` for transactions (collect affected directories, sync once at end)
- Cross-device move with fallback (`EXDEV` handling)
- Uses the [`notify`](https://crates.io/crates/notify) crate for reliable file watching
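
The atomic-write sequence in the second bullet can be sketched in Node.js terms. This is an illustration of the sequence only, not the actual Rust implementation; the temp-file naming is an assumption, and `fsync` on a directory via a read-only handle works on Linux but not on Windows:

```typescript
import { open, rename } from 'node:fs/promises';
import { dirname, join } from 'node:path';

async function atomicDurableWrite(path: string, data: Uint8Array): Promise<void> {
  // Hypothetical temp-file name; the real scheme is internal to the binary
  const tmp = join(dirname(path), `.smartfs-${process.pid}-${Date.now()}.tmp`);

  // 1. Write the temp file and fsync it so the bytes reach stable storage
  const fh = await open(tmp, 'w');
  await fh.writeFile(data);
  await fh.sync();
  await fh.close();

  // 2. Atomically rename over the destination
  await rename(tmp, path);

  // 3. fsync the parent directory so the rename itself is durable
  //    (the step XFS-style delayed logging makes necessary)
  const dir = await open(dirname(path), 'r');
  await dir.sync();
  await dir.close();
}
```

Without step 3, a crash after the rename can leave the directory entry unrecorded even though the file data was synced; syncing the parent closes that window.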

### 🧪 Memory Provider

In-memory virtual filesystem — perfect for testing:

```typescript
import { SmartFs, SmartFsProviderMemory } from '@push.rocks/smartfs';

const fs = new SmartFs(new SmartFsProviderMemory());

// All operations work in memory — fast, isolated, no cleanup needed
await fs.file('/virtual/file.txt').write('data');
const content = await fs.file('/virtual/file.txt').encoding('utf8').read();

// Clear all data between tests
(fs.provider as SmartFsProviderMemory).clear();
```

| Capability | Status |
|---|---|
| File watching | ✅ |
| Atomic writes | ✅ |
| Transactions | ✅ |
| Streaming | ✅ |
| Symbolic links | ❌ |
| File permissions | ✅ |

### 🔧 Custom Providers

Build your own provider by implementing the `ISmartFsProvider` interface:

```typescript
import type { ISmartFsProvider } from '@push.rocks/smartfs';

class MyS3Provider implements ISmartFsProvider {
  public readonly name = 's3';
  public readonly capabilities = {
    supportsWatch: false,
    supportsAtomic: true,
    supportsTransactions: true,
    supportsStreaming: true,
    supportsSymlinks: false,
    supportsPermissions: false,
  };

  // Implement all required methods...
  // ... etc
}

const fs = new SmartFs(new MyS3Provider());
```

## Advanced Usage

```typescript
// UTF-8 (default for text)
await fs.file('/file.txt').encoding('utf8').write('text');

// Binary (Buffer)
const buffer = Buffer.from([0x48, 0x65, 0x6c, 0x6c, 0x6f]);
await fs.file('/file.bin').write(buffer);
const data = await fs.file('/file.bin').read(); // Returns Buffer

// Base64
await fs.file('/file.txt').encoding('base64').write('SGVsbG8=');
```

### Complex Filtering
```typescript
// Multiple conditions
const recentLargeTs = await fs.directory('/src')
  .recursive()
  .includeStats()
  .filter(entry => {
    // ...
  })
  .list();
```

### Transaction Operations
```typescript
const tx = fs.transaction();

// Build up operations
tx.file('/data/file1.json').write(JSON.stringify(data1));
tx.file('/data/file2.json').write(JSON.stringify(data2));

// Copy backups
tx.file('/data/file1.json').copy('/backup/file1.json');
tx.file('/data/file2.json').copy('/backup/file2.json');
tx.file('/data/old.json').delete();

// Execute atomically — all succeed or all revert
await tx.commit();
```

## Type Definitions

SmartFS is fully typed. All interfaces and types are exported:

```typescript
import type {
  ISmartFsProvider,
  IProviderCapabilities,
  IFileStats,
  IDirectoryEntry,
  IWatchEvent,
  ITransactionOperation,
  ITreeHashOptions,
  TEncoding, // 'utf8' | 'utf-8' | 'ascii' | 'base64' | 'hex' | 'binary' | 'buffer'
  TFileMode, // number
  TWatchEventType, // 'add' | 'change' | 'delete'
} from '@push.rocks/smartfs';
```

## Error Handling

SmartFS throws descriptive errors that mirror POSIX conventions:

```typescript
try {
  await fs.transaction()
    .file('/file1.txt').write('data')
    .file('/readonly/file2.txt').write('data') // fails
    .commit();
} catch (error) {
  // file1.txt is reverted to its original state
  console.error('Transaction failed:', error);
}
```

## Performance Tips

1. **Use streaming** for large files (> 1MB) — avoids loading entire files into memory
2. **Batch operations** with transactions for durability and performance
3. **Use the memory provider** for testing — instant, isolated, no disk I/O
4. **Enable atomic writes** for critical data — prevents partial writes on crash
5. **Debounce watchers** to reduce event noise during rapid changes
6. **Use `treeHash`** instead of reading individual files for change detection
7. **Use the Rust provider** on XFS or when you need guaranteed durability

## License and Legal Information