feat(taskbuffer): add sliding-window rate limiting and result-sharing to TaskConstraintGroup and integrate with TaskManager
@@ -13,7 +13,7 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
## 🌟 Features

- **🎯 Type-Safe Task Management** — Full TypeScript support with generics and type inference
- **🔒 Constraint-Based Concurrency** — Per-key mutual exclusion, group concurrency limits, cooldown enforcement, sliding-window rate limiting, and result sharing via `TaskConstraintGroup`
- **📊 Real-Time Progress Tracking** — Step-based progress with percentage weights
- **⚡ Smart Buffering** — Intelligent request debouncing and batching
- **⏰ Cron Scheduling** — Schedule tasks with cron expressions
@@ -311,6 +311,105 @@ const [cert1, cert2, cert3] = await Promise.all([r1, r2, r3]);
- Has closure access to external state modified by prior executions
- If multiple constraint groups have `shouldExecute`, **all** must return `true`

### Sliding Window Rate Limiting

Enforce "N completions per time window" with burst capability. Unlike `cooldownMs` (which forces even spacing between executions), `rateLimit` allows bursts up to the cap, then blocks until the window slides:

```typescript
// Let's Encrypt style: 300 new orders per 3 hours
const acmeRateLimit = new TaskConstraintGroup({
  name: 'acme-rate',
  constraintKeyForExecution: () => 'acme-account',
  rateLimit: {
    maxPerWindow: 300,
    windowMs: 3 * 60 * 60 * 1000, // 3 hours
  },
});

manager.addConstraintGroup(acmeRateLimit);

// All 300 can burst immediately. The 301st waits until the oldest
// completion falls out of the 3-hour window.
for (const domain of domains) {
  manager.triggerTaskConstrained(certTask, { domain });
}
```

Compose multiple rate limits for layered protection:

```typescript
// Per-domain weekly cap AND global order rate
const perDomainWeekly = new TaskConstraintGroup({
  name: 'per-domain-weekly',
  constraintKeyForExecution: (task, input) => input.registeredDomain,
  rateLimit: { maxPerWindow: 50, windowMs: 7 * 24 * 60 * 60 * 1000 },
});

const globalOrderRate = new TaskConstraintGroup({
  name: 'global-order-rate',
  constraintKeyForExecution: () => 'global',
  rateLimit: { maxPerWindow: 300, windowMs: 3 * 60 * 60 * 1000 },
});

manager.addConstraintGroup(perDomainWeekly);
manager.addConstraintGroup(globalOrderRate);
```

Combine with `maxConcurrent` and `cooldownMs` for fine-grained control:

```typescript
const throttled = new TaskConstraintGroup({
  name: 'acme-throttle',
  constraintKeyForExecution: () => 'acme',
  maxConcurrent: 5, // max 5 concurrent requests
  cooldownMs: 1000, // 1s gap after each completion
  rateLimit: {
    maxPerWindow: 300,
    windowMs: 3 * 60 * 60 * 1000,
  },
});
```
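Under the hood, a sliding window amounts to pruning timestamps. The sketch below is illustrative only — the `SlidingWindow` class and its method names are invented here, not taskbuffer's internals — and shows the burst-then-block behavior described above:

```typescript
// Illustration only — not taskbuffer's implementation. A sliding window keeps
// completion timestamps; entries older than windowMs fall out of the window.
class SlidingWindow {
  private timestamps: number[] = [];
  constructor(private maxPerWindow: number, private windowMs: number) {}

  private prune(now: number): void {
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
  }

  // A burst is allowed while fewer than maxPerWindow completions are in-window.
  canRun(now = Date.now()): boolean {
    this.prune(now);
    return this.timestamps.length < this.maxPerWindow;
  }

  record(now = Date.now()): void {
    this.timestamps.push(now);
  }

  // Ms until the oldest in-window completion slides out (0 if capacity exists).
  delay(now = Date.now()): number {
    this.prune(now);
    if (this.timestamps.length < this.maxPerWindow) return 0;
    return this.timestamps[0] + this.windowMs - now;
  }
}

// 3 completions per 1000ms window:
const win = new SlidingWindow(3, 1000);
[0, 10, 20].forEach((t) => win.record(t));
console.log(win.canRun(30));   // false — the window is full
console.log(win.delay(30));    // 970 — until the completion at t=0 slides out
console.log(win.canRun(1001)); // true — t=0 has left the window
```

Note that this sketch counts only completions; the library's `rateLimit` counts running plus completed tasks.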

### Result Sharing — Deduplication for Concurrent Requests

When multiple callers request the same resource concurrently, `resultSharingMode: 'share-latest'` ensures only one execution occurs. All queued waiters receive the same result:
```typescript
const certMutex = new TaskConstraintGroup({
  name: 'cert-per-tld',
  constraintKeyForExecution: (task, input) => extractTld(input.domain),
  maxConcurrent: 1,
  resultSharingMode: 'share-latest',
});

manager.addConstraintGroup(certMutex);

const certTask = new Task({
  name: 'obtain-cert',
  taskFunction: async (input) => {
    return await acmeClient.obtainWildcard(input.domain);
  },
});
manager.addTask(certTask);

// Three requests for *.example.com arrive simultaneously
const [cert1, cert2, cert3] = await Promise.all([
  manager.triggerTaskConstrained(certTask, { domain: 'api.example.com' }),
  manager.triggerTaskConstrained(certTask, { domain: 'www.example.com' }),
  manager.triggerTaskConstrained(certTask, { domain: 'mail.example.com' }),
]);

// Only ONE ACME request was made.
// cert1 === cert2 === cert3 — all callers got the same cert object.
```

**Result sharing semantics:**

- `shouldExecute` is NOT called for shared results (the task's purpose was already fulfilled)
- Error results are NOT shared — queued tasks execute independently after a failure
- `lastResults` persists until `reset()` — for time-bounded sharing, use `shouldExecute` to control staleness
- Composable with rate limiting: rate-limited waiters get shared results without waiting for the window
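Conceptually, `'share-latest'` behaves like keyed promise deduplication. Here is a minimal self-contained sketch of the idea — not taskbuffer's code (the real implementation routes through the constraint group's queue, and unlike this simplification it never hands a failure to other waiters):

```typescript
// Illustration only — keyed promise deduplication, the idea behind 'share-latest'.
const inFlight = new Map<string, Promise<unknown>>();

function shareLatest<T>(key: string, run: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>; // join the in-flight execution
  const p = run().finally(() => {
    // Drop the entry once settled, so a later caller triggers a fresh run;
    // failed results are never cached.
    inFlight.delete(key);
  });
  inFlight.set(key, p);
  return p;
}

// Three concurrent callers, one execution, one shared result:
let executions = 0;
const obtainCert = () => {
  executions++;
  return new Promise<string>((resolve) => setTimeout(() => resolve('cert'), 10));
};

Promise.all([
  shareLatest('example.com', obtainCert),
  shareLatest('example.com', obtainCert),
  shareLatest('example.com', obtainCert),
]).then((certs) => {
  console.log(executions); // 1
  console.log(certs);      // [ 'cert', 'cert', 'cert' ]
});
```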

### How It Works

When you trigger a task through `TaskManager` (via `triggerTask`, `triggerTaskByName`, `addExecuteRemoveTask`, or cron), the manager:
@@ -319,8 +418,9 @@ When you trigger a task through `TaskManager` (via `triggerTask`, `triggerTaskBy
2. If no constraints apply (all matchers return `null`) → checks `shouldExecute` → runs or skips
3. If all applicable constraints have capacity → acquires slots → checks `shouldExecute` → runs or skips
4. If any constraint blocks → enqueues the task; when a running task completes, the queue is drained
5. Cooldown/rate-limit-blocked tasks auto-retry after the shortest remaining delay expires
6. Queued tasks check for shared results first (if any group has `resultSharingMode: 'share-latest'`)
7. Queued tasks re-check `shouldExecute` when their turn comes — stale work is automatically pruned
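The branching above can be condensed into a small illustrative function. The names below (`ConstraintState`, `decide`, the outcome strings) are hypothetical, not the manager's actual API; the sketch only maps the state of the applicable constraint groups to an outcome:

```typescript
// Illustration only — a condensed view of the dispatch decision.
type TriggerOutcome = 'run' | 'skip' | 'queue' | 'use-shared-result';

interface ConstraintState {
  applicable: boolean;            // did any matcher return a key?
  hasCapacity: boolean;           // canRun() true for every applicable group
  sharedResultAvailable: boolean; // a 'share-latest' group already has a result
  shouldExecute: boolean;         // the pre-execution check
}

function decide(s: ConstraintState): TriggerOutcome {
  if (!s.applicable) return s.shouldExecute ? 'run' : 'skip';       // step 2
  if (!s.hasCapacity) {
    return s.sharedResultAvailable ? 'use-shared-result' : 'queue'; // steps 4, 6
  }
  return s.shouldExecute ? 'run' : 'skip';                          // step 3
}

console.log(decide({ applicable: true, hasCapacity: true, sharedResultAvailable: false, shouldExecute: true }));
// 'run'
console.log(decide({ applicable: true, hasCapacity: false, sharedResultAvailable: true, shouldExecute: true }));
// 'use-shared-result'
```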

## 🎯 Core Concepts
@@ -926,6 +1026,8 @@ const acmeTasks = manager.getTasksMetadataByLabel('tenantId', 'acme');

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `maxConcurrent` | `number` | `Infinity` | Max concurrent tasks per key |
| `cooldownMs` | `number` | `0` | Minimum ms between completions per key |
| `shouldExecute` | `(task, input?) => boolean \| Promise<boolean>` | — | Pre-execution check. Return `false` to skip; deferred resolves `undefined`. |
| `rateLimit` | `IRateLimitConfig` | — | Sliding window: `{ maxPerWindow, windowMs }`. Counts running + completed tasks. |
| `resultSharingMode` | `TResultSharingMode` | `'none'` | `'none'` or `'share-latest'`. Queued tasks get first task's result without executing. |
### TaskConstraintGroup Methods

@@ -933,12 +1035,17 @@ const acmeTasks = manager.getTasksMetadataByLabel('tenantId', 'acme');
| Method | Returns | Description |
| --- | --- | --- |
| `getConstraintKey(task, input?)` | `string \| null` | Get the constraint key for a task + input |
| `checkShouldExecute(task, input?)` | `Promise<boolean>` | Run the `shouldExecute` callback (defaults to `true`) |
| `canRun(key)` | `boolean` | Check if a slot is available (considers concurrency, cooldown, and rate limit) |
| `acquireSlot(key)` | `void` | Claim a running slot |
| `releaseSlot(key)` | `void` | Release a slot and record completion time + rate-limit timestamp |
| `getCooldownRemaining(key)` | `number` | Milliseconds until cooldown expires |
| `getRateLimitDelay(key)` | `number` | Milliseconds until a rate-limit slot opens |
| `getNextAvailableDelay(key)` | `number` | Max of cooldown and rate-limit delay — a unified "when can I run" |
| `getRunningCount(key)` | `number` | Current running count for key |
| `recordResult(key, result)` | `void` | Store result for sharing (no-op if mode is `'none'`) |
| `getLastResult(key)` | `{result, timestamp} \| undefined` | Get last shared result for key |
| `hasResultSharing()` | `boolean` | Whether result sharing is enabled |
| `reset()` | `void` | Clear all state (running counts, cooldowns, rate-limit timestamps, shared results) |

### TaskManager Methods

@@ -986,6 +1093,8 @@ import type {

```typescript
import type {
  ITaskStep,
  ITaskFunction,
  ITaskConstraintGroupOptions,
  IRateLimitConfig,
  TResultSharingMode,
  StepNames,
} from '@push.rocks/taskbuffer';
```