@push.rocks/smartfs
Modern, pluggable filesystem module with fluent API, Web Streams support, and multiple storage backends.
Issue Reporting and Security
For reporting bugs, issues, or security vulnerabilities, please visit community.foss.global/. This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a code.foss.global/ account to submit Pull Requests directly.
Features
- 🎯 Fluent API - Action-last chainable interface for elegant code
- 🔌 Pluggable Providers - Support for multiple storage backends (Node.js fs, memory, S3, etc.)
- 🌊 Web Streams - Modern streaming with Web Streams API
- 💾 Transactions - Atomic multi-file operations with automatic rollback
- 👀 File Watching - Event-based file system monitoring
- 🔐 Tree Hashing - SHA-256 directory hashing for cache-busting
- ⚡ Async-Only - Modern async/await patterns throughout
- 📦 Zero Dependencies - Lean core with no runtime dependencies
- 🎨 TypeScript - Full type safety and IntelliSense support
Installation
npm install @push.rocks/smartfs
# or
pnpm add @push.rocks/smartfs
Quick Start
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';
// Create a SmartFS instance with Node.js provider
const fs = new SmartFs(new SmartFsProviderNode());
// Write and read files with fluent API
await fs.file('/path/to/file.txt')
.encoding('utf8')
.write('Hello, World!');
const content = await fs.file('/path/to/file.txt')
.encoding('utf8')
.read();
console.log(content); // "Hello, World!"
API Overview
File Operations
The fluent API uses an action-last pattern - configure first, then execute:
// Read file
const content = await fs.file('/path/to/file.txt')
.encoding('utf8')
.read();
// Write file
await fs.file('/path/to/file.txt')
.encoding('utf8')
.mode(0o644)
.write('content');
// Atomic write (write to temp, then rename)
await fs.file('/path/to/file.txt')
.atomic()
.write('content');
// Append to file
await fs.file('/path/to/file.txt')
.encoding('utf8')
.append('more content');
// Copy file
await fs.file('/source.txt')
.preserveTimestamps()
.copy('/destination.txt');
// Move file
await fs.file('/old.txt')
.move('/new.txt');
// Delete file
await fs.file('/path/to/file.txt')
.delete();
// Check existence
const exists = await fs.file('/path/to/file.txt').exists();
// Get stats
const stats = await fs.file('/path/to/file.txt').stat();
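The atomic write above works by writing to a temporary file and renaming it into place. That pattern can be sketched in isolation with plain node:fs/promises (independent of SmartFS; the temp-file naming is illustrative):

```typescript
import { writeFile, rename } from 'node:fs/promises';
import { randomBytes } from 'node:crypto';

// Write to a temp file in the same directory, then rename it over
// the target. rename() is atomic on POSIX filesystems, so readers
// never observe a half-written file.
async function atomicWrite(path: string, content: string): Promise<void> {
  const tmpPath = `${path}.${randomBytes(6).toString('hex')}.tmp`;
  await writeFile(tmpPath, content, 'utf8');
  await rename(tmpPath, path);
}
```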
Directory Operations
// Create directory
await fs.directory('/path/to/dir').create();
// Create nested directories
await fs.directory('/path/to/nested/dir')
.recursive()
.create();
// List directory
const entries = await fs.directory('/path/to/dir').list();
// List recursively with filter
const tsFiles = await fs.directory('/path/to/dir')
.recursive()
.filter('*.ts')
.includeStats()
.list();
// Filter with RegExp
const files = await fs.directory('/path/to/dir')
.filter(/\.txt$/)
.list();
// Filter with function
const largeFiles = await fs.directory('/path/to/dir')
.includeStats()
.filter(entry => entry.stats && entry.stats.size > 1024)
.list();
// Delete directory
await fs.directory('/path/to/dir')
.recursive()
.delete();
// Check existence
const exists = await fs.directory('/path/to/dir').exists();
📁 Directory Copy & Move
Copy or move entire directory trees with fine-grained control:
// Basic copy - copies all files recursively
await fs.directory('/source').copy('/destination');
// Basic move - moves directory to new location
await fs.directory('/old-location').move('/new-location');
// Copy with options
await fs.directory('/source')
.filter(/\.ts$/) // Only copy TypeScript files
.overwrite(true) // Overwrite existing files
.preserveTimestamps(true) // Keep original timestamps
.copy('/destination');
// Copy all files (ignore filter setting)
await fs.directory('/source')
.filter('*.ts')
.applyFilter(false) // Ignore filter, copy everything
.copy('/destination');
// Handle target directory conflicts
await fs.directory('/source')
.onConflict('merge') // Default: merge contents
.copy('/destination');
await fs.directory('/source')
.onConflict('error') // Throw if target exists
.copy('/destination');
await fs.directory('/source')
.onConflict('replace') // Delete target first, then copy
.copy('/destination');
Configuration Options:
| Method | Default | Description |
|---|---|---|
| filter(pattern) | none | Filter files by glob, regex, or function |
| applyFilter(bool) | true | Whether to apply filter during copy/move |
| overwrite(bool) | false | Overwrite existing files at destination |
| preserveTimestamps(bool) | false | Preserve original file timestamps |
| onConflict(mode) | 'merge' | 'merge', 'error', or 'replace' |
🔐 Tree Hashing (Cache-Busting)
Compute a deterministic hash of all files in a directory - perfect for cache invalidation:
// Hash all files in a directory recursively
const hash = await fs.directory('/assets')
.recursive()
.treeHash();
// Returns: "a3f2b8c9d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1"
// Hash only specific file types
const cssHash = await fs.directory('/styles')
.filter(/\.css$/)
.recursive()
.treeHash();
// Use different algorithm
const sha512Hash = await fs.directory('/data')
.recursive()
.treeHash({ algorithm: 'sha512' });
How it works:
- Files are sorted by path for deterministic ordering
- Hashes relative paths + file contents (streaming, memory-efficient)
- Does NOT include metadata (mtime/size) - pure content-based
- Same content always produces same hash, regardless of timestamps
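That scheme can be sketched over an in-memory path-to-content map (a simplified stand-in for a directory tree; the real implementation streams file contents from disk):

```typescript
import { createHash } from 'node:crypto';

// Deterministic content hash: sort relative paths, then feed each
// path and its content into a single running digest. No metadata
// (mtime/size) is included, so identical content always hashes
// identically.
function treeHash(files: Map<string, string>, algorithm = 'sha256'): string {
  const hash = createHash(algorithm);
  for (const path of [...files.keys()].sort()) {
    hash.update(path);
    hash.update(files.get(path)!);
  }
  return hash.digest('hex');
}
```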
Use cases:
- 🚀 Cache-busting static assets
- 📦 Detecting when served files change
- 🔄 Incremental build triggers
- ✅ Content verification
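For cache-busting, a short prefix of the hash is enough to version asset URLs (the URL scheme here is illustrative, not part of SmartFS):

```typescript
// Derive a versioned asset URL from a tree hash: the query string
// changes whenever any file in the hashed directory changes.
function bustCache(assetPath: string, treeHash: string): string {
  return `${assetPath}?v=${treeHash.slice(0, 8)}`;
}

console.log(bustCache('/styles/main.css', 'a3f2b8c9deadbeef'));
// "/styles/main.css?v=a3f2b8c9"
```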
Streaming Operations
SmartFS uses Web Streams API for efficient handling of large files:
// Read stream
const readStream = await fs.file('/large-file.bin')
.chunkSize(64 * 1024)
.readStream();
const reader = readStream.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Process chunk (Uint8Array)
console.log('Chunk size:', value.length);
}
// Write stream
const writeStream = await fs.file('/output.bin').writeStream();
const writer = writeStream.getWriter();
await writer.write(new Uint8Array([1, 2, 3]));
await writer.write(new Uint8Array([4, 5, 6]));
await writer.close();
// Pipe streams
const input = await fs.file('/input.txt').readStream();
const output = await fs.file('/output.txt').writeStream();
await input.pipeTo(output);
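Because these are standard Web Streams, they also compose with TransformStream (built into Node.js 18+). A self-contained example of the pipe-through pattern, using plain in-memory streams in place of file streams:

```typescript
// A transform that uppercases each text chunk passing through it.
const upper = new TransformStream<string, string>({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

// In-memory source and sink standing in for file streams.
const source = new ReadableStream<string>({
  start(controller) {
    controller.enqueue('hello, ');
    controller.enqueue('world');
    controller.close();
  },
});

let result = '';
const sink = new WritableStream<string>({
  write(chunk) {
    result += chunk;
  },
});

await source.pipeThrough(upper).pipeTo(sink);
console.log(result); // "HELLO, WORLD"
```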
Transactions
Execute multiple file operations atomically with automatic rollback on failure:
// Simple transaction
await fs.transaction()
.file('/file1.txt').write('content 1')
.file('/file2.txt').write('content 2')
.file('/file3.txt').delete()
.commit();
// Transaction with error handling
const tx = fs.transaction()
.file('/important.txt').write('critical data')
.file('/backup.txt').copy('/backup-old.txt')
.file('/temp.txt').delete();
try {
await tx.commit();
console.log('Transaction completed successfully');
} catch (error) {
console.error('Transaction failed and was rolled back:', error);
// All operations are automatically reverted
}
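The rollback behavior follows the common undo-log pattern: each completed operation is paired with an inverse, and the inverses are replayed in reverse order on failure. A simplified model (not SmartFS internals):

```typescript
type Op = { run: () => Promise<void>; undo: () => Promise<void> };

// Execute operations in order; if one fails, undo the completed
// ones in reverse order, then rethrow the original error.
async function commit(ops: Op[]): Promise<void> {
  const done: Op[] = [];
  try {
    for (const op of ops) {
      await op.run();
      done.push(op);
    }
  } catch (error) {
    for (const op of done.reverse()) {
      await op.undo();
    }
    throw error;
  }
}
```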
File Watching
Monitor filesystem changes with event-based watching:
// Watch a single file
const watcher = await fs.watch('/path/to/file.txt')
.onChange(event => {
console.log('File changed:', event.path);
})
.start();
// Watch directory recursively
const dirWatcher = await fs.watch('/path/to/dir')
.recursive()
.filter('*.ts')
.debounce(100)
.onChange(event => console.log('Changed:', event.path))
.onAdd(event => console.log('Added:', event.path))
.onDelete(event => console.log('Deleted:', event.path))
.start();
// Stop watching
await dirWatcher.stop();
// Watch with custom filter
const customWatcher = await fs.watch('/path/to/dir')
.recursive()
.filter(path => path.endsWith('.ts') && !path.includes('test'))
.onAll(event => {
console.log(`${event.type}: ${event.path}`);
})
.start();
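The debounce(100) option above coalesces rapid event bursts into a single callback; the technique in isolation (illustrative, not the SmartFS implementation):

```typescript
// Collapse bursts of calls: only invoke fn after `ms` milliseconds
// of quiet, resetting the timer on every new call.
function debounce(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    clearTimeout(timer);
    timer = setTimeout(fn, ms);
  };
}
```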
Providers
SmartFS supports multiple storage backends through providers:
Node.js Provider
Uses Node.js fs/promises API for local filesystem operations:
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';
const fs = new SmartFs(new SmartFsProviderNode());
Capabilities:
- ✅ File watching
- ✅ Atomic writes
- ✅ Transactions
- ✅ Streaming
- ✅ Symbolic links
- ✅ File permissions
- ✅ Tree hashing
Memory Provider
In-memory virtual filesystem, perfect for testing:
import { SmartFs, SmartFsProviderMemory } from '@push.rocks/smartfs';
const fs = new SmartFs(new SmartFsProviderMemory());
// All operations work in memory
await fs.file('/virtual/file.txt').write('data');
const content = await fs.file('/virtual/file.txt').read();
// Clear all data
fs.provider.clear();
Capabilities:
- ✅ File watching
- ✅ Atomic writes
- ✅ Transactions
- ✅ Streaming
- ❌ Symbolic links
- ✅ File permissions
- ✅ Tree hashing
Custom Providers
Create your own provider by implementing ISmartFsProvider:
import type { ISmartFsProvider } from '@push.rocks/smartfs';
class MyCustomProvider implements ISmartFsProvider {
public readonly name = 'custom';
public readonly capabilities = {
supportsWatch: true,
supportsAtomic: true,
supportsTransactions: true,
supportsStreaming: true,
supportsSymlinks: false,
supportsPermissions: true,
};
// Implement all required methods...
async readFile(path: string, options?: any) { /* ... */ }
async writeFile(path: string, content: any, options?: any) { /* ... */ }
// ... etc
}
const fs = new SmartFs(new MyCustomProvider());
Advanced Usage
Encoding Options
// UTF-8 (default for text)
await fs.file('/file.txt').encoding('utf8').write('text');
// Binary
const buffer = Buffer.from([0x48, 0x65, 0x6c, 0x6c, 0x6f]);
await fs.file('/file.bin').write(buffer);
// Base64
await fs.file('/file.txt').encoding('base64').write('SGVsbG8=');
// Hex
await fs.file('/file.txt').encoding('hex').write('48656c6c6f');
File Permissions
// Set file mode
await fs.file('/script.sh')
.mode(0o755)
.write('#!/bin/bash\necho "Hello"');
// Set directory mode
await fs.directory('/private')
.mode(0o700)
.create();
Complex Filtering
// Multiple conditions
const files = await fs.directory('/src')
.recursive()
.includeStats()
.filter(entry => {
if (!entry.stats) return false;
return entry.isFile &&
entry.name.endsWith('.ts') &&
entry.stats.size > 1024 &&
entry.stats.mtime > new Date('2024-01-01');
})
.list();
Transaction Operations
// Complex transaction
const tx = fs.transaction();
// Write multiple files
tx.file('/data/file1.json').write(JSON.stringify(data1));
tx.file('/data/file2.json').write(JSON.stringify(data2));
// Copy backups
tx.file('/data/file1.json').copy('/backup/file1.json');
tx.file('/data/file2.json').copy('/backup/file2.json');
// Delete old files
tx.file('/data/old1.json').delete();
tx.file('/data/old2.json').delete();
// Execute atomically
await tx.commit();
Type Definitions
SmartFS is fully typed with TypeScript:
import type {
IFileStats,
IDirectoryEntry,
IWatchEvent,
ITransactionOperation,
ITreeHashOptions,
TEncoding,
TFileMode,
} from '@push.rocks/smartfs';
Error Handling
SmartFS throws descriptive errors:
try {
await fs.file('/nonexistent.txt').read();
} catch (error) {
console.error(error.message);
// "ENOENT: no such file or directory, open '/nonexistent.txt'"
}
// Transactions automatically rollback on error
try {
await fs.transaction()
.file('/file1.txt').write('data')
.file('/file2.txt').write('data')
.commit();
} catch (error) {
// All operations are reverted
console.error('Transaction failed:', error);
}
Performance Tips
- Use streaming for large files (> 1MB)
- Batch operations with transactions
- Use memory provider for testing
- Enable atomic writes for critical data
- Debounce watchers to reduce event spam
- Use treeHash instead of reading files for change detection
License and Legal Information
This repository contains open-source code licensed under the MIT License. A copy of the license can be found in the LICENSE file.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH or third parties, and are not included within the scope of the MIT license granted herein.
Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines or the guidelines of the respective third-party owners, and any usage must be approved in writing. Third-party trademarks used herein are the property of their respective owners and used only in a descriptive manner, e.g. for an implementation of an API or similar.
Company Information
Task Venture Capital GmbH Registered at District Court Bremen HRB 35230 HB, Germany
For any legal inquiries or further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.