# @push.rocks/smartfs
Modern, pluggable filesystem module with fluent API, Web Streams support, and multiple storage backends.
## Features
- 🎯 Fluent API - Action-last chainable interface for elegant code
- 🔌 Pluggable Providers - Support for multiple storage backends (Node.js fs, memory, S3, etc.)
- 🌊 Web Streams - Modern streaming with Web Streams API
- 💾 Transactions - Atomic multi-file operations with automatic rollback
- 👀 File Watching - Event-based file system monitoring
- ⚡ Async-Only - Modern async/await patterns throughout
- 📦 Lightweight - Core functionality with minimal dependencies
- 🎨 TypeScript - Full type safety and IntelliSense support
## Installation

```bash
pnpm install @push.rocks/smartfs
```
## Quick Start

```typescript
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';

// Create a SmartFS instance with the Node.js provider
const fs = new SmartFs(new SmartFsProviderNode());

// Write and read files with the fluent API
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .write('Hello, World!');

const content = await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .read();

console.log(content); // "Hello, World!"
```
## API Overview

### File Operations

The fluent API uses an action-last pattern: configure first, then execute:

```typescript
// Read file
const content = await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .read();

// Write file
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .mode(0o644)
  .write('content');

// Atomic write (write to temp, then rename)
await fs.file('/path/to/file.txt')
  .atomic()
  .write('content');

// Append to file
await fs.file('/path/to/file.txt')
  .encoding('utf8')
  .append('more content');

// Copy file
await fs.file('/source.txt')
  .preserveTimestamps()
  .copy('/destination.txt');

// Move file
await fs.file('/old.txt')
  .move('/new.txt');

// Delete file
await fs.file('/path/to/file.txt')
  .delete();

// Check existence
const exists = await fs.file('/path/to/file.txt').exists();

// Get stats
const stats = await fs.file('/path/to/file.txt').stat();
```
### Directory Operations

```typescript
// Create directory
await fs.directory('/path/to/dir').create();

// Create nested directories
await fs.directory('/path/to/nested/dir')
  .recursive()
  .create();

// List directory
const entries = await fs.directory('/path/to/dir').list();

// List recursively with filter
const tsFiles = await fs.directory('/path/to/dir')
  .recursive()
  .filter('*.ts')
  .includeStats()
  .list();

// Filter with RegExp
const files = await fs.directory('/path/to/dir')
  .filter(/\.txt$/)
  .list();

// Filter with function
const largeFiles = await fs.directory('/path/to/dir')
  .includeStats()
  .filter(entry => entry.stats && entry.stats.size > 1024)
  .list();

// Delete directory
await fs.directory('/path/to/dir')
  .recursive()
  .delete();

// Check existence
const exists = await fs.directory('/path/to/dir').exists();
```
### Streaming Operations

SmartFS uses the Web Streams API for efficient handling of large files:

```typescript
// Read stream
const readStream = await fs.file('/large-file.bin')
  .chunkSize(64 * 1024)
  .readStream();

const reader = readStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Process chunk (Uint8Array)
  console.log('Chunk size:', value.length);
}

// Write stream
const writeStream = await fs.file('/output.bin').writeStream();
const writer = writeStream.getWriter();
await writer.write(new Uint8Array([1, 2, 3]));
await writer.write(new Uint8Array([4, 5, 6]));
await writer.close();

// Pipe streams
const input = await fs.file('/input.txt').readStream();
const output = await fs.file('/output.txt').writeStream();
await input.pipeTo(output);
```
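Because `readStream()` and `writeStream()` return standard Web Streams, they compose with other Web Streams APIs. A minimal sketch of compressing a file on the fly, assuming a runtime that exposes the global `CompressionStream` (Node.js 18+, Deno, or modern browsers):

```typescript
// Gzip-compress /input.txt into /input.txt.gz by piping through a standard TransformStream.
const source = await fs.file('/input.txt').readStream();
const sink = await fs.file('/input.txt.gz').writeStream();

await source
  .pipeThrough(new CompressionStream('gzip')) // compress chunks as they flow through
  .pipeTo(sink);                              // write the compressed bytes out
```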
### Transactions

Execute multiple file operations atomically with automatic rollback on failure:

```typescript
// Simple transaction
await fs.transaction()
  .file('/file1.txt').write('content 1')
  .file('/file2.txt').write('content 2')
  .file('/file3.txt').delete()
  .commit();

// Transaction with error handling
const tx = fs.transaction()
  .file('/important.txt').write('critical data')
  .file('/backup.txt').copy('/backup-old.txt')
  .file('/temp.txt').delete();

try {
  await tx.commit();
  console.log('Transaction completed successfully');
} catch (error) {
  console.error('Transaction failed and was rolled back:', error);
  // All operations are automatically reverted
}
```
### File Watching

Monitor filesystem changes with event-based watching:

```typescript
// Watch a single file
const watcher = await fs.watch('/path/to/file.txt')
  .onChange(event => {
    console.log('File changed:', event.path);
  })
  .start();

// Watch directory recursively
const dirWatcher = await fs.watch('/path/to/dir')
  .recursive()
  .filter('*.ts')
  .debounce(100)
  .onChange(event => console.log('Changed:', event.path))
  .onAdd(event => console.log('Added:', event.path))
  .onDelete(event => console.log('Deleted:', event.path))
  .start();

// Stop watching
await dirWatcher.stop();

// Watch with custom filter
const customWatcher = await fs.watch('/path/to/dir')
  .recursive()
  .filter(path => path.endsWith('.ts') && !path.includes('test'))
  .onAll(event => {
    console.log(`${event.type}: ${event.path}`);
  })
  .start();
```
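Watchers keep running until they are stopped, so long-lived processes should shut them down cleanly. A small sketch using the `stop()` method shown above together with Node.js signal handling:

```typescript
// Stop every active watcher when the process receives Ctrl+C (SIGINT).
const activeWatchers = [watcher, dirWatcher, customWatcher];

process.on('SIGINT', async () => {
  await Promise.all(activeWatchers.map(w => w.stop()));
  process.exit(0);
});
```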
## Providers

SmartFS supports multiple storage backends through providers:

### Node.js Provider

Uses the Node.js `fs/promises` API for local filesystem operations:

```typescript
import { SmartFs, SmartFsProviderNode } from '@push.rocks/smartfs';

const fs = new SmartFs(new SmartFsProviderNode());
```
Capabilities:
- ✅ File watching
- ✅ Atomic writes
- ✅ Transactions
- ✅ Streaming
- ✅ Symbolic links
- ✅ File permissions
### Memory Provider

In-memory virtual filesystem, perfect for testing:

```typescript
import { SmartFs, SmartFsProviderMemory } from '@push.rocks/smartfs';

const fs = new SmartFs(new SmartFsProviderMemory());

// All operations work in memory
await fs.file('/virtual/file.txt').write('data');
const content = await fs.file('/virtual/file.txt').read();

// Clear all data
fs.provider.clear();
```
Capabilities:
- ✅ File watching
- ✅ Atomic writes
- ✅ Transactions
- ✅ Streaming
- ❌ Symbolic links
- ✅ File permissions
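
Because every provider implements the same interface, application code does not need to know which backend it is talking to. One way to pick a provider per environment (the `NODE_ENV` check is only an illustration):

```typescript
import { SmartFs, SmartFsProviderMemory, SmartFsProviderNode } from '@push.rocks/smartfs';

// Use the in-memory provider under test, the real filesystem everywhere else.
const provider = process.env.NODE_ENV === 'test'
  ? new SmartFsProviderMemory()
  : new SmartFsProviderNode();

const fs = new SmartFs(provider);
```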
### Custom Providers

Create your own provider by implementing `ISmartFsProvider`:

```typescript
import { SmartFs } from '@push.rocks/smartfs';
import type { ISmartFsProvider } from '@push.rocks/smartfs';

class MyCustomProvider implements ISmartFsProvider {
  public readonly name = 'custom';

  public readonly capabilities = {
    supportsWatch: true,
    supportsAtomic: true,
    supportsTransactions: true,
    supportsStreaming: true,
    supportsSymlinks: false,
    supportsPermissions: true,
  };

  // Implement all required methods...
  async readFile(path: string, options?) { /* ... */ }
  async writeFile(path: string, content, options?) { /* ... */ }
  // ... etc
}

const fs = new SmartFs(new MyCustomProvider());
```
## Advanced Usage

### Encoding Options

```typescript
// UTF-8 (default for text)
await fs.file('/file.txt').encoding('utf8').write('text');

// Binary
const buffer = Buffer.from([0x48, 0x65, 0x6c, 0x6c, 0x6f]);
await fs.file('/file.bin').write(buffer);

// Base64
await fs.file('/file.txt').encoding('base64').write('SGVsbG8=');

// Hex
await fs.file('/file.txt').encoding('hex').write('48656c6c6f');
```
### File Permissions

```typescript
// Set file mode
await fs.file('/script.sh')
  .mode(0o755)
  .write('#!/bin/bash\necho "Hello"');

// Set directory mode
await fs.directory('/private')
  .mode(0o700)
  .create();
```
### Complex Filtering

```typescript
// Multiple conditions
const files = await fs.directory('/src')
  .recursive()
  .includeStats()
  .filter(entry => {
    if (!entry.stats) return false;
    return entry.isFile &&
      entry.name.endsWith('.ts') &&
      entry.stats.size > 1024 &&
      entry.stats.mtime > new Date('2024-01-01');
  })
  .list();
```
### Transaction Operations

```typescript
// Complex transaction
const tx = fs.transaction();

// Write multiple files
tx.file('/data/file1.json').write(JSON.stringify(data1));
tx.file('/data/file2.json').write(JSON.stringify(data2));

// Copy backups
tx.file('/data/file1.json').copy('/backup/file1.json');
tx.file('/data/file2.json').copy('/backup/file2.json');

// Delete old files
tx.file('/data/old1.json').delete();
tx.file('/data/old2.json').delete();

// Execute atomically
await tx.commit();
```
## Type Definitions

SmartFS is fully typed with TypeScript:

```typescript
import type {
  IFileStats,
  IDirectoryEntry,
  IWatchEvent,
  ITransactionOperation,
  TEncoding,
  TFileMode,
} from '@push.rocks/smartfs';
```
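These types can be used in your own helpers. A small sketch that totals file sizes from a directory listing, assuming `list()` resolves to `IDirectoryEntry[]` and using only the fields shown elsewhere in this README (the actual shapes may include more members):

```typescript
import type { IDirectoryEntry } from '@push.rocks/smartfs';

// Sum the sizes of all files in a listing; entries without stats are skipped.
function totalSize(entries: IDirectoryEntry[]): number {
  return entries
    .filter(entry => entry.isFile && entry.stats)
    .reduce((sum, entry) => sum + (entry.stats?.size ?? 0), 0);
}

const entries = await fs.directory('/path/to/dir').includeStats().list();
console.log('Total bytes:', totalSize(entries));
```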
## Testing

```bash
# Run all tests
pnpm test

# Run a specific test
pnpm tstest test/test.memory.provider.ts --verbose

# Run with log output
pnpm tstest test/test.node.provider.ts --logfile .nogit/testlogs/test.log
```
## Error Handling

SmartFS throws descriptive errors:

```typescript
try {
  await fs.file('/nonexistent.txt').read();
} catch (error) {
  console.error(error.message);
  // "ENOENT: no such file or directory, open '/nonexistent.txt'"
}

// Transactions automatically roll back on error
try {
  await fs.transaction()
    .file('/file1.txt').write('data')
    .file('/file2.txt').write('data')
    .commit();
} catch (error) {
  // All operations are reverted
  console.error('Transaction failed:', error);
}
```
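When the question is simply whether a file is present, the `exists()` check shown earlier avoids using exceptions for control flow:

```typescript
// Prefer an explicit existence check over catching ENOENT when that is all you need.
if (await fs.file('/maybe-missing.txt').exists()) {
  const content = await fs.file('/maybe-missing.txt').encoding('utf8').read();
  console.log(content);
} else {
  console.log('File not found, using defaults');
}
```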
## Performance Tips
- Use streaming for large files (> 1MB)
- Batch operations with transactions
- Use memory provider for testing
- Enable atomic writes for critical data
- Debounce watchers to reduce event spam
## Contributing

Contributions are welcome! Please ensure:
- All tests pass
- Code follows existing style
- TypeScript types are complete
- Documentation is updated
## License

MIT © Lossless GmbH

For more information, visit [code.foss.global](https://code.foss.global).