Compare commits

6 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 9bbc3da484 | |
| | e9af3f8328 | |
| | 351680159b | |
| | 0cabf284ed | |
| | dbc8566aad | |
| | bd64a7b140 | |

changelog.md — 33 lines changed
@@ -1,5 +1,38 @@
# Changelog

## 2025-12-03 - 2.7.0 - feat(upstream)
Add dynamic per-request upstream provider and integrate into registries

- Introduce IUpstreamProvider and IUpstreamResolutionContext to resolve upstream configs per request.
- Add StaticUpstreamProvider implementation for simple static upstream configurations.
- Propagate dynamic upstream provider through SmartRegistry and wire into protocol handlers (npm, oci, maven, cargo, composer, pypi, rubygems).
- Replace persistent per-protocol upstream instances with per-request resolution: registries now call provider.resolveUpstreamConfig(...) and instantiate a protocol-specific Upstream when needed.
- Add IRequestActor to core interfaces and pass actor context (userId, ip, userAgent, etc.) to upstream resolution and storage/auth hooks.
- Update many protocol registries to accept an upstreamProvider instead of IProtocolUpstreamConfig and to attempt upstream fetches only when the provider returns an enabled config.
- Add utilities and tests: test helpers to create registries with an upstream provider, a tracking upstream provider helper, StaticUpstreamProvider tests, and extensive upstream/provider integration tests.
- Improve upstream interfaces and cache/fetch contexts (IUpstreamFetchContext includes actor) and add the StaticUpstreamProvider class to the upstream module.

## 2025-11-27 - 2.6.0 - feat(core)
Add core registry infrastructure: storage, auth, upstream cache, and protocol handlers

- Introduce RegistryStorage: unified storage abstraction with hook support (before/after put/delete/get) and helpers for OCI, NPM, Maven, Cargo, Composer, PyPI, and RubyGems paths and operations
- Add DefaultAuthProvider and AuthManager: in-memory token store, UUID tokens for package protocols, OCI JWT creation/validation, token lifecycle (create/validate/revoke), and authorization checking
- Add SmartRegistry orchestrator to initialize and route requests to protocol handlers (OCI, NPM, Maven, Cargo, Composer, PyPI, RubyGems)
- Implement upstream subsystem: UpstreamCache (in-memory + optional S3 persistence), BaseUpstream with multi-upstream routing, scope rules, retries, TTLs, stale-while-revalidate, and negative caching
- Add circuit breaker implementation for upstream resilience with exponential backoff and per-upstream breakers
- Add protocol implementations and helpers: NpmRegistry/NpmUpstream (packument/tarball handling and tarball URL rewriting), PypiRegistry (PEP 503/691 support, uploads, metadata), MavenRegistry (artifact/metadata handling and checksum generation), CargoRegistry (sparse index, publish/download/yank)
- Utility exports and helpers: buffer helpers, plugins aggregator, path helpers, and various protocol-specific helper modules

## 2025-11-27 - 2.5.0 - feat(pypi,rubygems)
Add PyPI and RubyGems protocol implementations, upstream caching, and auth/storage improvements

- Implemented full PyPI support (PEP 503 Simple API HTML, PEP 691 JSON API, legacy upload handling, name normalization, hash verification, content negotiation, package/file storage, and metadata management).
- Implemented RubyGems support (compact index, /versions, /info, /names endpoints, gem upload, yank/unyank, platform handling, and file storage).
- Expanded RegistryStorage with protocol-specific helpers for OCI, NPM, Maven, Cargo, Composer, PyPI, and RubyGems (get/put/delete/list helpers, metadata handling, context-aware hooks).
- Added AuthManager and DefaultAuthProvider improvements: unified token creation/validation for multiple protocols (npm, oci, maven, composer, cargo, pypi, rubygems) and OCI JWT support.
- Added upstream infrastructure: BaseUpstream, UpstreamCache (optionally S3-backed, stale-while-revalidate, negative caching), circuit breaker with retries/backoff and resilience defaults.
- Added various protocol registries (NPM, Maven, Cargo, OCI, PyPI) with request routing, permission checks, and optional upstream proxying/caching.

## 2025-11-27 - 2.4.0 - feat(core)
Add pluggable auth providers, storage hooks, multi-upstream cache awareness, and PyPI/RubyGems protocol implementations
package.json

@@ -1,6 +1,6 @@
{
  "name": "@push.rocks/smartregistry",
-  "version": "2.4.0",
+  "version": "2.7.0",
  "private": false,
  "description": "A composable TypeScript library implementing OCI, NPM, Maven, Cargo, Composer, PyPI, and RubyGems registries for building unified container and package registries",
  "main": "dist_ts/index.js",

readme.md — 226 lines changed
@@ -4,7 +4,7 @@

## Issue Reporting and Security

-For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who want to sign a contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.
+For reporting bugs, issues, or security vulnerabilities, please visit [community.foss.global/](https://community.foss.global/). This is the central community hub for all issue reporting. Developers who sign and comply with our contribution agreement and go through identification can also get a [code.foss.global/](https://code.foss.global/) account to submit Pull Requests directly.

## ✨ Features

@@ -82,6 +82,19 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
- ✅ Dependency resolution
- ✅ Legacy API compatibility

### 🌐 Upstream Proxy & Caching
- **Multi-Upstream Support**: Configure multiple upstream registries per protocol with priority ordering
- **Scope-Based Routing**: Route specific packages/scopes to different upstreams (e.g., `@company/*` → private registry)
- **S3-Backed Cache**: Persistent caching using existing S3 storage with URL-based cache paths
- **Circuit Breaker**: Automatic failover with configurable thresholds
- **Stale-While-Revalidate**: Serve cached content while refreshing in the background
- **Content-Aware TTLs**: Different TTLs for immutable (tarballs) vs mutable (metadata) content

### 🔌 Enterprise Extensibility
- **Pluggable Auth Provider** (`IAuthProvider`): Integrate LDAP, OAuth, SSO, or custom auth systems
- **Storage Event Hooks** (`IStorageHooks`): Quota tracking, audit logging, virus scanning, cache invalidation
- **Request Actor Context**: Pass user/org info through requests for audit trails and rate limiting

## 📥 Installation

```bash
@@ -648,6 +661,217 @@ const canWrite = await authManager.authorize(
);
```

### 🌐 Upstream Proxy Configuration

```typescript
import { SmartRegistry, IRegistryConfig } from '@push.rocks/smartregistry';

const config: IRegistryConfig = {
  storage: { /* S3 config */ },
  auth: { /* Auth config */ },
  npm: {
    enabled: true,
    basePath: '/npm',
    upstream: {
      enabled: true,
      upstreams: [
        {
          id: 'company-private',
          name: 'Company Private NPM',
          url: 'https://npm.internal.company.com',
          priority: 1, // Lower = higher priority
          enabled: true,
          scopeRules: [
            { pattern: '@company/*', action: 'include' },
            { pattern: '@internal/*', action: 'include' },
          ],
          auth: { type: 'bearer', token: process.env.NPM_PRIVATE_TOKEN },
        },
        {
          id: 'npmjs',
          name: 'NPM Public Registry',
          url: 'https://registry.npmjs.org',
          priority: 10,
          enabled: true,
          scopeRules: [
            { pattern: '@company/*', action: 'exclude' },
            { pattern: '@internal/*', action: 'exclude' },
          ],
          auth: { type: 'none' },
          cache: { defaultTtlSeconds: 300 },
          resilience: { timeoutMs: 30000, maxRetries: 3 },
        },
      ],
      cache: { enabled: true, staleWhileRevalidate: true },
    },
  },
  oci: {
    enabled: true,
    basePath: '/oci',
    upstream: {
      enabled: true,
      upstreams: [
        {
          id: 'dockerhub',
          name: 'Docker Hub',
          url: 'https://registry-1.docker.io',
          priority: 1,
          enabled: true,
          auth: { type: 'none' },
        },
        {
          id: 'ghcr',
          name: 'GitHub Container Registry',
          url: 'https://ghcr.io',
          priority: 2,
          enabled: true,
          scopeRules: [{ pattern: 'ghcr.io/*', action: 'include' }],
          auth: { type: 'bearer', token: process.env.GHCR_TOKEN },
        },
      ],
    },
  },
};

const registry = new SmartRegistry(config);
await registry.init();

// Requests for @company/* packages go to the private registry
// Other packages proxy through to npmjs.org with caching
```
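The configuration above is static. Version 2.7.0 (see the changelog at the top of this diff) also introduces a per-request `IUpstreamProvider` that registries consult via `resolveUpstreamConfig(...)` instead of holding persistent upstream instances. Below is a minimal editorial sketch of such a dynamic provider, modeled on the tracking provider in the test helpers further down this diff; importing these interfaces from the package root, and reading an `orgId` off the resolution context's actor, are assumptions made for illustration.

```typescript
import type {
  IUpstreamProvider,
  IUpstreamResolutionContext,
  IProtocolUpstreamConfig,
  TRegistryProtocol,
} from '@push.rocks/smartregistry'; // assumed re-export of the upstream interfaces

// Illustrative lookup: per-tenant upstream configs keyed by orgId, plus a static fallback.
const orgUpstreams = new Map<string, Partial<Record<TRegistryProtocol, IProtocolUpstreamConfig>>>();
const fallbackUpstreams: Partial<Record<TRegistryProtocol, IProtocolUpstreamConfig>> = {
  npm: {
    enabled: true,
    upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
  },
};

const dynamicProvider: IUpstreamProvider = {
  async resolveUpstreamConfig(context: IUpstreamResolutionContext) {
    // Per the 2.7.0 changelog the actor (userId, ip, userAgent, ...) reaches upstream
    // resolution; reading orgId from it is an assumption in this sketch.
    const orgId = (context as { actor?: { orgId?: string } }).actor?.orgId;
    const perOrg = orgId ? orgUpstreams.get(orgId)?.[context.protocol] : undefined;
    // Returning null tells the registry not to attempt an upstream fetch for this request.
    return perOrg ?? fallbackUpstreams[context.protocol] ?? null;
  },
};

// Wired in the same way the test helpers in this diff do:
// const registry = new SmartRegistry({ ...config, upstreamProvider: dynamicProvider });
```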
### 🔌 Custom Auth Provider

```typescript
import { SmartRegistry, IAuthProvider, IAuthToken, ICredentials, ITokenOptions, TRegistryProtocol } from '@push.rocks/smartregistry';

// Implement custom auth (e.g., LDAP, OAuth)
class LdapAuthProvider implements IAuthProvider {
  constructor(
    private ldapClient: LdapClient,     // your LDAP client
    private sessionStore: SessionStore, // your token/session storage (app-specific)
  ) {}

  async authenticate(credentials: ICredentials): Promise<string | null> {
    const result = await this.ldapClient.bind(credentials.username, credentials.password);
    return result.success ? credentials.username : null;
  }

  async validateToken(token: string, protocol?: TRegistryProtocol): Promise<IAuthToken | null> {
    const session = await this.sessionStore.get(token);
    if (!session) return null;
    return {
      userId: session.userId,
      scopes: session.scopes,
      readonly: session.readonly,
      created: session.created,
    };
  }

  async createToken(userId: string, protocol: TRegistryProtocol, options?: ITokenOptions): Promise<string> {
    const token = crypto.randomUUID();
    await this.sessionStore.set(token, { userId, protocol, ...options });
    return token;
  }

  async revokeToken(token: string): Promise<void> {
    await this.sessionStore.delete(token);
  }

  async authorize(token: IAuthToken | null, resource: string, action: string): Promise<boolean> {
    if (!token) return action === 'read'; // Anonymous read-only
    // Check LDAP groups, roles, etc. (app-specific)
    return this.checkPermissions(token.userId, resource, action);
  }
}

// Use custom provider
const registry = new SmartRegistry({
  ...config,
  authProvider: new LdapAuthProvider(ldapClient, sessionStore),
});
```
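The tests added in this diff also show that `AuthManager` accepts a custom `IAuthProvider` directly, so the same provider can drive token workflows outside of `SmartRegistry`. A brief sketch based on those tests follows; the package-root import of `AuthManager` and the meaning of `createNpmToken`'s second argument (a readonly flag) are assumptions here.

```typescript
import { AuthManager } from '@push.rocks/smartregistry'; // assumed re-export

// authConfig is the same auth block used in IRegistryConfig above.
const manager = new AuthManager(authConfig, new LdapAuthProvider(ldapClient, sessionStore));
await manager.init();

const userId = await manager.authenticate({ username: 'jane', password: 'secret' });
if (userId) {
  const npmToken = await manager.createNpmToken(userId, false); // second arg assumed: readonly flag
}
```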
### 📊 Storage Hooks (Quota & Audit)

```typescript
import { SmartRegistry, IStorageHooks, IStorageHookContext } from '@push.rocks/smartregistry';

const storageHooks: IStorageHooks = {
  // Block uploads that exceed quota
  async beforePut(ctx: IStorageHookContext) {
    if (ctx.actor?.orgId) {
      const usage = await getStorageUsage(ctx.actor.orgId);
      const quota = await getQuota(ctx.actor.orgId);

      if (usage + (ctx.metadata?.size || 0) > quota) {
        return { allowed: false, reason: 'Storage quota exceeded' };
      }
    }
    return { allowed: true };
  },

  // Update usage tracking after successful upload
  async afterPut(ctx: IStorageHookContext) {
    if (ctx.actor?.orgId && ctx.metadata?.size) {
      await incrementUsage(ctx.actor.orgId, ctx.metadata.size);
    }

    // Audit log
    await auditLog.write({
      action: 'storage.put',
      key: ctx.key,
      protocol: ctx.protocol,
      actor: ctx.actor,
      timestamp: ctx.timestamp,
    });
  },

  // Prevent deletion of protected packages
  async beforeDelete(ctx: IStorageHookContext) {
    if (await isProtectedPackage(ctx.key)) {
      return { allowed: false, reason: 'Cannot delete protected package' };
    }
    return { allowed: true };
  },

  // Log all access for compliance
  async afterGet(ctx: IStorageHookContext) {
    await accessLog.write({
      action: 'storage.get',
      key: ctx.key,
      actor: ctx.actor,
      timestamp: ctx.timestamp,
    });
  },
};

const registry = new SmartRegistry({
  ...config,
  storageHooks,
});
```
### 👤 Request Actor Context

```typescript
// Pass actor information through requests for audit/quota tracking
const response = await registry.handleRequest({
  method: 'PUT',
  path: '/npm/my-package',
  headers: { 'Authorization': 'Bearer <token>' },
  query: {},
  body: packageData,
  actor: {
    userId: 'user123',
    tokenId: 'token-abc',
    ip: req.ip,
    userAgent: req.headers['user-agent'],
    orgId: 'org-456',
    sessionId: 'session-xyz',
  },
});

// Actor info is available in storage hooks for quota/audit
```

## ⚙️ Configuration

### Storage Configuration
test/helpers/registry.ts

@@ -3,7 +3,11 @@ import * as crypto from 'crypto';
import * as smartarchive from '@push.rocks/smartarchive';
import * as smartbucket from '@push.rocks/smartbucket';
import { SmartRegistry } from '../../ts/classes.smartregistry.js';
-import type { IRegistryConfig } from '../../ts/core/interfaces.core.js';
+import type { IRegistryConfig, IAuthToken, TRegistryProtocol } from '../../ts/core/interfaces.core.js';
+import type { IAuthProvider, ITokenOptions } from '../../ts/core/interfaces.auth.js';
+import type { IStorageHooks, IStorageHookContext, IBeforePutResult, IBeforeDeleteResult } from '../../ts/core/interfaces.storage.js';
+import { StaticUpstreamProvider } from '../../ts/upstream/interfaces.upstream.js';
+import type { IUpstreamProvider, IUpstreamResolutionContext, IProtocolUpstreamConfig } from '../../ts/upstream/interfaces.upstream.js';

const testQenv = new qenv.Qenv('./', './.nogit');
@@ -132,6 +136,89 @@ export async function createTestRegistry(): Promise<SmartRegistry> {
|
||||
return registry;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a test SmartRegistry instance with upstream provider configured
|
||||
*/
|
||||
export async function createTestRegistryWithUpstream(
|
||||
upstreamProvider?: IUpstreamProvider
|
||||
): Promise<SmartRegistry> {
|
||||
// Read S3 config from env.json
|
||||
const s3AccessKey = await testQenv.getEnvVarOnDemand('S3_ACCESSKEY');
|
||||
const s3SecretKey = await testQenv.getEnvVarOnDemand('S3_SECRETKEY');
|
||||
const s3Endpoint = await testQenv.getEnvVarOnDemand('S3_ENDPOINT');
|
||||
const s3Port = await testQenv.getEnvVarOnDemand('S3_PORT');
|
||||
|
||||
// Default to StaticUpstreamProvider with npm.js configured
|
||||
const defaultProvider = new StaticUpstreamProvider({
|
||||
npm: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
},
|
||||
oci: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'dockerhub', url: 'https://registry-1.docker.io', priority: 1, enabled: true }],
|
||||
},
|
||||
});
|
||||
|
||||
const config: IRegistryConfig = {
|
||||
storage: {
|
||||
accessKey: s3AccessKey || 'minioadmin',
|
||||
accessSecret: s3SecretKey || 'minioadmin',
|
||||
endpoint: s3Endpoint || 'localhost',
|
||||
port: parseInt(s3Port || '9000', 10),
|
||||
useSsl: false,
|
||||
region: 'us-east-1',
|
||||
bucketName: 'test-registry',
|
||||
},
|
||||
auth: {
|
||||
jwtSecret: 'test-secret-key',
|
||||
tokenStore: 'memory',
|
||||
npmTokens: { enabled: true },
|
||||
ociTokens: {
|
||||
enabled: true,
|
||||
realm: 'https://auth.example.com/token',
|
||||
service: 'test-registry',
|
||||
},
|
||||
pypiTokens: { enabled: true },
|
||||
rubygemsTokens: { enabled: true },
|
||||
},
|
||||
upstreamProvider: upstreamProvider || defaultProvider,
|
||||
oci: { enabled: true, basePath: '/oci' },
|
||||
npm: { enabled: true, basePath: '/npm' },
|
||||
maven: { enabled: true, basePath: '/maven' },
|
||||
composer: { enabled: true, basePath: '/composer' },
|
||||
cargo: { enabled: true, basePath: '/cargo' },
|
||||
pypi: { enabled: true, basePath: '/pypi' },
|
||||
rubygems: { enabled: true, basePath: '/rubygems' },
|
||||
};
|
||||
|
||||
const registry = new SmartRegistry(config);
|
||||
await registry.init();
|
||||
|
||||
return registry;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a mock upstream provider that tracks all calls for testing
|
||||
*/
|
||||
export function createTrackingUpstreamProvider(
|
||||
baseConfig?: Partial<Record<TRegistryProtocol, IProtocolUpstreamConfig>>
|
||||
): {
|
||||
provider: IUpstreamProvider;
|
||||
calls: IUpstreamResolutionContext[];
|
||||
} {
|
||||
const calls: IUpstreamResolutionContext[] = [];
|
||||
|
||||
const provider: IUpstreamProvider = {
|
||||
async resolveUpstreamConfig(context: IUpstreamResolutionContext) {
|
||||
calls.push({ ...context });
|
||||
return baseConfig?.[context.protocol] ?? null;
|
||||
},
|
||||
};
|
||||
|
||||
return { provider, calls };
|
||||
}
|
||||
|
||||
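// --- Illustrative usage of the helpers above (editorial sketch, not part of the diff) ---
// A test can combine createTrackingUpstreamProvider with createTestRegistryWithUpstream to
// assert that upstream configuration is resolved per request. The request shape mirrors the
// readme's handleRequest example; the exact npm path used here is an assumption.
//
//   const { provider, calls } = createTrackingUpstreamProvider({
//     npm: {
//       enabled: true,
//       upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
//     },
//   });
//   const registry = await createTestRegistryWithUpstream(provider);
//   await registry.handleRequest({ method: 'GET', path: '/npm/lodash', headers: {}, query: {}, body: null });
//   expect(calls.length).toBeGreaterThan(0);  // provider consulted for this request
//   expect(calls[0].protocol).toEqual('npm'); // resolution context carries the protocol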
/**
|
||||
* Helper to create test authentication tokens
|
||||
*/
|
||||
@@ -608,3 +695,228 @@ export function calculateRubyGemsChecksums(data: Buffer) {
|
||||
sha256: crypto.createHash('sha256').update(data).digest('hex'),
|
||||
};
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Enterprise Extensibility Test Helpers
|
||||
// ============================================================================
|
||||
|
||||
/**
|
||||
* Create a mock auth provider for testing pluggable authentication.
|
||||
* Allows customizing behavior for different test scenarios.
|
||||
*/
|
||||
export function createMockAuthProvider(overrides?: Partial<IAuthProvider>): IAuthProvider {
|
||||
const tokens = new Map<string, IAuthToken>();
|
||||
|
||||
return {
|
||||
init: async () => {},
|
||||
authenticate: async (credentials) => {
|
||||
// Default: always authenticate successfully
|
||||
return credentials.username;
|
||||
},
|
||||
validateToken: async (token, protocol) => {
|
||||
const stored = tokens.get(token);
|
||||
if (stored && (!protocol || stored.type === protocol)) {
|
||||
return stored;
|
||||
}
|
||||
// Mock token for tests
|
||||
if (token === 'valid-mock-token') {
|
||||
return {
|
||||
type: 'npm' as TRegistryProtocol,
|
||||
userId: 'mock-user',
|
||||
scopes: ['npm:*:*:*'],
|
||||
};
|
||||
}
|
||||
return null;
|
||||
},
|
||||
createToken: async (userId, protocol, options) => {
|
||||
const tokenId = `mock-${protocol}-${Date.now()}`;
|
||||
const authToken: IAuthToken = {
|
||||
type: protocol,
|
||||
userId,
|
||||
scopes: options?.scopes || [`${protocol}:*:*:*`],
|
||||
readonly: options?.readonly,
|
||||
expiresAt: options?.expiresIn ? new Date(Date.now() + options.expiresIn * 1000) : undefined,
|
||||
};
|
||||
tokens.set(tokenId, authToken);
|
||||
return tokenId;
|
||||
},
|
||||
revokeToken: async (token) => {
|
||||
tokens.delete(token);
|
||||
},
|
||||
authorize: async (token, resource, action) => {
|
||||
if (!token) return false;
|
||||
if (token.readonly && ['write', 'push', 'delete'].includes(action)) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
},
|
||||
listUserTokens: async (userId) => {
|
||||
const result: Array<{ key: string; readonly: boolean; created: string; protocol?: TRegistryProtocol }> = [];
|
||||
for (const [key, token] of tokens.entries()) {
|
||||
if (token.userId === userId) {
|
||||
result.push({
|
||||
key: `hash-${key.substring(0, 8)}`,
|
||||
readonly: token.readonly || false,
|
||||
created: new Date().toISOString(),
|
||||
protocol: token.type,
|
||||
});
|
||||
}
|
||||
}
|
||||
return result;
|
||||
},
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create test storage hooks that track all calls.
|
||||
* Useful for verifying hook invocation order and parameters.
|
||||
*/
|
||||
export function createTrackingHooks(options?: {
|
||||
beforePutAllowed?: boolean;
|
||||
beforeDeleteAllowed?: boolean;
|
||||
throwOnAfterPut?: boolean;
|
||||
throwOnAfterGet?: boolean;
|
||||
}): {
|
||||
hooks: IStorageHooks;
|
||||
calls: Array<{ method: string; context: IStorageHookContext; timestamp: number }>;
|
||||
} {
|
||||
const calls: Array<{ method: string; context: IStorageHookContext; timestamp: number }> = [];
|
||||
|
||||
return {
|
||||
calls,
|
||||
hooks: {
|
||||
beforePut: async (ctx) => {
|
||||
calls.push({ method: 'beforePut', context: ctx, timestamp: Date.now() });
|
||||
return {
|
||||
allowed: options?.beforePutAllowed !== false,
|
||||
reason: options?.beforePutAllowed === false ? 'Blocked by test' : undefined,
|
||||
};
|
||||
},
|
||||
afterPut: async (ctx) => {
|
||||
calls.push({ method: 'afterPut', context: ctx, timestamp: Date.now() });
|
||||
if (options?.throwOnAfterPut) {
|
||||
throw new Error('Test error in afterPut');
|
||||
}
|
||||
},
|
||||
beforeDelete: async (ctx) => {
|
||||
calls.push({ method: 'beforeDelete', context: ctx, timestamp: Date.now() });
|
||||
return {
|
||||
allowed: options?.beforeDeleteAllowed !== false,
|
||||
reason: options?.beforeDeleteAllowed === false ? 'Blocked by test' : undefined,
|
||||
};
|
||||
},
|
||||
afterDelete: async (ctx) => {
|
||||
calls.push({ method: 'afterDelete', context: ctx, timestamp: Date.now() });
|
||||
},
|
||||
afterGet: async (ctx) => {
|
||||
calls.push({ method: 'afterGet', context: ctx, timestamp: Date.now() });
|
||||
if (options?.throwOnAfterGet) {
|
||||
throw new Error('Test error in afterGet');
|
||||
}
|
||||
},
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a blocking storage hooks implementation for quota testing.
|
||||
*/
|
||||
export function createQuotaHooks(maxSizeBytes: number): {
|
||||
hooks: IStorageHooks;
|
||||
currentUsage: { bytes: number };
|
||||
} {
|
||||
const currentUsage = { bytes: 0 };
|
||||
|
||||
return {
|
||||
currentUsage,
|
||||
hooks: {
|
||||
beforePut: async (ctx) => {
|
||||
const size = ctx.metadata?.size || 0;
|
||||
if (currentUsage.bytes + size > maxSizeBytes) {
|
||||
return { allowed: false, reason: `Quota exceeded: ${currentUsage.bytes + size} > ${maxSizeBytes}` };
|
||||
}
|
||||
return { allowed: true };
|
||||
},
|
||||
afterPut: async (ctx) => {
|
||||
currentUsage.bytes += ctx.metadata?.size || 0;
|
||||
},
|
||||
afterDelete: async (ctx) => {
|
||||
currentUsage.bytes -= ctx.metadata?.size || 0;
|
||||
if (currentUsage.bytes < 0) currentUsage.bytes = 0;
|
||||
},
|
||||
},
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a SmartBucket storage backend for upstream cache testing.
|
||||
*/
|
||||
export async function createTestStorageBackend(): Promise<{
|
||||
storage: {
|
||||
getObject: (key: string) => Promise<Buffer | null>;
|
||||
putObject: (key: string, data: Buffer) => Promise<void>;
|
||||
deleteObject: (key: string) => Promise<void>;
|
||||
listObjects: (prefix: string) => Promise<string[]>;
|
||||
};
|
||||
bucket: smartbucket.Bucket;
|
||||
cleanup: () => Promise<void>;
|
||||
}> {
|
||||
const s3AccessKey = await testQenv.getEnvVarOnDemand('S3_ACCESSKEY');
|
||||
const s3SecretKey = await testQenv.getEnvVarOnDemand('S3_SECRETKEY');
|
||||
const s3Endpoint = await testQenv.getEnvVarOnDemand('S3_ENDPOINT');
|
||||
const s3Port = await testQenv.getEnvVarOnDemand('S3_PORT');
|
||||
|
||||
const s3 = new smartbucket.SmartBucket({
|
||||
accessKey: s3AccessKey || 'minioadmin',
|
||||
accessSecret: s3SecretKey || 'minioadmin',
|
||||
endpoint: s3Endpoint || 'localhost',
|
||||
port: parseInt(s3Port || '9000', 10),
|
||||
useSsl: false,
|
||||
});
|
||||
|
||||
const testRunId = generateTestRunId();
|
||||
const bucketName = 'test-cache-' + testRunId.substring(0, 8);
|
||||
const bucket = await s3.createBucket(bucketName);
|
||||
|
||||
const storage = {
|
||||
getObject: async (key: string): Promise<Buffer | null> => {
|
||||
try {
|
||||
const file = await bucket.fastGet({ path: key });
|
||||
if (!file) return null;
|
||||
const stream = await file.createReadStream();
|
||||
const chunks: Buffer[] = [];
|
||||
for await (const chunk of stream) {
|
||||
chunks.push(Buffer.from(chunk));
|
||||
}
|
||||
return Buffer.concat(chunks);
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
},
|
||||
putObject: async (key: string, data: Buffer): Promise<void> => {
|
||||
await bucket.fastPut({ path: key, contents: data, overwrite: true });
|
||||
},
|
||||
deleteObject: async (key: string): Promise<void> => {
|
||||
await bucket.fastRemove({ path: key });
|
||||
},
|
||||
listObjects: async (prefix: string): Promise<string[]> => {
|
||||
const files = await bucket.fastList({ prefix });
|
||||
return files.map(f => f.name);
|
||||
},
|
||||
};
|
||||
|
||||
const cleanup = async () => {
|
||||
try {
|
||||
const files = await bucket.fastList({});
|
||||
for (const file of files) {
|
||||
await bucket.fastRemove({ path: file.name });
|
||||
}
|
||||
await s3.removeBucket(bucketName);
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
};
|
||||
|
||||
return { storage, bucket, cleanup };
|
||||
}
|
||||
|
||||
test/test.auth.provider.ts (new file, 412 lines)

@@ -0,0 +1,412 @@
|
||||
import { expect, tap } from '@git.zone/tstest/tapbundle';
|
||||
import { DefaultAuthProvider } from '../ts/core/classes.defaultauthprovider.js';
|
||||
import { AuthManager } from '../ts/core/classes.authmanager.js';
|
||||
import type { IAuthProvider } from '../ts/core/interfaces.auth.js';
|
||||
import type { IAuthConfig, IAuthToken } from '../ts/core/interfaces.core.js';
|
||||
import { createMockAuthProvider } from './helpers/registry.js';
|
||||
|
||||
// ============================================================================
|
||||
// Test State
|
||||
// ============================================================================
|
||||
|
||||
let provider: DefaultAuthProvider;
|
||||
let authConfig: IAuthConfig;
|
||||
|
||||
// ============================================================================
|
||||
// Setup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('setup: should create DefaultAuthProvider', async () => {
|
||||
authConfig = {
|
||||
jwtSecret: 'test-secret-key-for-jwt-signing',
|
||||
tokenStore: 'memory',
|
||||
npmTokens: { enabled: true },
|
||||
ociTokens: {
|
||||
enabled: true,
|
||||
realm: 'https://auth.example.com/token',
|
||||
service: 'test-registry',
|
||||
},
|
||||
mavenTokens: { enabled: true },
|
||||
cargoTokens: { enabled: true },
|
||||
composerTokens: { enabled: true },
|
||||
pypiTokens: { enabled: true },
|
||||
rubygemsTokens: { enabled: true },
|
||||
};
|
||||
|
||||
provider = new DefaultAuthProvider(authConfig);
|
||||
await provider.init();
|
||||
expect(provider).toBeInstanceOf(DefaultAuthProvider);
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Authentication Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('authenticate: should authenticate new user (auto-registration)', async () => {
|
||||
const userId = await provider.authenticate({
|
||||
username: 'newuser',
|
||||
password: 'newpassword',
|
||||
});
|
||||
|
||||
expect(userId).toEqual('newuser');
|
||||
});
|
||||
|
||||
tap.test('authenticate: should authenticate existing user with correct password', async () => {
|
||||
// First registration
|
||||
await provider.authenticate({
|
||||
username: 'existinguser',
|
||||
password: 'correctpass',
|
||||
});
|
||||
|
||||
// Second authentication with same credentials
|
||||
const userId = await provider.authenticate({
|
||||
username: 'existinguser',
|
||||
password: 'correctpass',
|
||||
});
|
||||
|
||||
expect(userId).toEqual('existinguser');
|
||||
});
|
||||
|
||||
tap.test('authenticate: should reject authentication with wrong password', async () => {
|
||||
// First registration
|
||||
await provider.authenticate({
|
||||
username: 'passworduser',
|
||||
password: 'originalpass',
|
||||
});
|
||||
|
||||
// Attempt with wrong password
|
||||
const userId = await provider.authenticate({
|
||||
username: 'passworduser',
|
||||
password: 'wrongpass',
|
||||
});
|
||||
|
||||
expect(userId).toBeNull();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Token Creation Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('createToken: should create NPM token with correct scopes', async () => {
|
||||
const token = await provider.createToken('testuser', 'npm', {
|
||||
scopes: ['npm:package:*:*'],
|
||||
});
|
||||
|
||||
expect(token).toBeTruthy();
|
||||
expect(typeof token).toEqual('string');
|
||||
|
||||
// Validate the token
|
||||
const validated = await provider.validateToken(token, 'npm');
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.type).toEqual('npm');
|
||||
expect(validated!.userId).toEqual('testuser');
|
||||
expect(validated!.scopes).toContain('npm:package:*:*');
|
||||
});
|
||||
|
||||
tap.test('createToken: should create Maven token', async () => {
|
||||
const token = await provider.createToken('mavenuser', 'maven', {
|
||||
readonly: true,
|
||||
});
|
||||
|
||||
expect(token).toBeTruthy();
|
||||
|
||||
const validated = await provider.validateToken(token, 'maven');
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.type).toEqual('maven');
|
||||
expect(validated!.readonly).toBeTrue();
|
||||
});
|
||||
|
||||
tap.test('createToken: should create OCI JWT token with correct claims', async () => {
|
||||
const token = await provider.createToken('ociuser', 'oci', {
|
||||
scopes: ['oci:repository:myrepo:push', 'oci:repository:myrepo:pull'],
|
||||
expiresIn: 3600,
|
||||
});
|
||||
|
||||
expect(token).toBeTruthy();
|
||||
// OCI tokens are JWTs (contain dots)
|
||||
expect(token.split('.').length).toEqual(3);
|
||||
|
||||
const validated = await provider.validateToken(token, 'oci');
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.type).toEqual('oci');
|
||||
expect(validated!.userId).toEqual('ociuser');
|
||||
expect(validated!.scopes.length).toBeGreaterThan(0);
|
||||
});
|
||||
|
||||
tap.test('createToken: should create token with expiration', async () => {
|
||||
const token = await provider.createToken('expiryuser', 'npm', {
|
||||
expiresIn: 60, // 60 seconds
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(token, 'npm');
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.expiresAt).toBeTruthy();
|
||||
expect(validated!.expiresAt!.getTime()).toBeGreaterThan(Date.now());
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Token Validation Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('validateToken: should validate UUID token (NPM, Maven, etc.)', async () => {
|
||||
const npmToken = await provider.createToken('validateuser', 'npm');
|
||||
const validated = await provider.validateToken(npmToken);
|
||||
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.type).toEqual('npm');
|
||||
expect(validated!.userId).toEqual('validateuser');
|
||||
});
|
||||
|
||||
tap.test('validateToken: should validate OCI JWT token', async () => {
|
||||
const ociToken = await provider.createToken('ocivalidate', 'oci', {
|
||||
scopes: ['oci:repository:*:*'],
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(ociToken, 'oci');
|
||||
|
||||
expect(validated).toBeTruthy();
|
||||
expect(validated!.type).toEqual('oci');
|
||||
expect(validated!.userId).toEqual('ocivalidate');
|
||||
});
|
||||
|
||||
tap.test('validateToken: should reject expired tokens', async () => {
|
||||
const token = await provider.createToken('expireduser', 'npm', {
|
||||
expiresIn: -1, // Already expired (in the past)
|
||||
});
|
||||
|
||||
// The token should be created but will fail validation due to expiry
|
||||
const validated = await provider.validateToken(token, 'npm');
|
||||
|
||||
// Token should be rejected because it's expired
|
||||
expect(validated).toBeNull();
|
||||
});
|
||||
|
||||
tap.test('validateToken: should reject invalid token', async () => {
|
||||
const validated = await provider.validateToken('invalid-random-token');
|
||||
expect(validated).toBeNull();
|
||||
});
|
||||
|
||||
tap.test('validateToken: should reject token with wrong protocol', async () => {
|
||||
const npmToken = await provider.createToken('protocoluser', 'npm');
|
||||
|
||||
// Try to validate as Maven token
|
||||
const validated = await provider.validateToken(npmToken, 'maven');
|
||||
expect(validated).toBeNull();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Token Revocation Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('revokeToken: should revoke tokens', async () => {
|
||||
const token = await provider.createToken('revokeuser', 'npm');
|
||||
|
||||
// Verify token works before revocation
|
||||
let validated = await provider.validateToken(token);
|
||||
expect(validated).toBeTruthy();
|
||||
|
||||
// Revoke the token
|
||||
await provider.revokeToken(token);
|
||||
|
||||
// Token should no longer be valid
|
||||
validated = await provider.validateToken(token);
|
||||
expect(validated).toBeNull();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Authorization Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('authorize: should authorize read actions for readonly tokens', async () => {
|
||||
const token = await provider.createToken('readonlyuser', 'npm', {
|
||||
readonly: true,
|
||||
scopes: ['npm:package:*:read'],
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(token);
|
||||
|
||||
const canRead = await provider.authorize(validated, 'npm:package:lodash', 'read');
|
||||
expect(canRead).toBeTrue();
|
||||
|
||||
const canPull = await provider.authorize(validated, 'npm:package:lodash', 'pull');
|
||||
expect(canPull).toBeTrue();
|
||||
});
|
||||
|
||||
tap.test('authorize: should deny write actions for readonly tokens', async () => {
|
||||
const token = await provider.createToken('readonlyuser2', 'npm', {
|
||||
readonly: true,
|
||||
scopes: ['npm:package:*:*'],
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(token);
|
||||
|
||||
const canWrite = await provider.authorize(validated, 'npm:package:lodash', 'write');
|
||||
expect(canWrite).toBeFalse();
|
||||
|
||||
const canPush = await provider.authorize(validated, 'npm:package:lodash', 'push');
|
||||
expect(canPush).toBeFalse();
|
||||
|
||||
const canDelete = await provider.authorize(validated, 'npm:package:lodash', 'delete');
|
||||
expect(canDelete).toBeFalse();
|
||||
});
|
||||
|
||||
tap.test('authorize: should match scopes with wildcards', async () => {
|
||||
// The scope system uses literal * as wildcard, not glob patterns
|
||||
// npm:*:*:* means "all types, all names, all actions under npm"
|
||||
const token = await provider.createToken('wildcarduser', 'npm', {
|
||||
scopes: ['npm:*:*:*'],
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(token);
|
||||
|
||||
// Should match any npm resource with full wildcard scope
|
||||
const canAccessAnyPackage = await provider.authorize(validated, 'npm:package:lodash', 'read');
|
||||
expect(canAccessAnyPackage).toBeTrue();
|
||||
|
||||
const canAccessScopedPackage = await provider.authorize(validated, 'npm:package:@myorg/foo', 'write');
|
||||
expect(canAccessScopedPackage).toBeTrue();
|
||||
});
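// --- Editorial note on the scope format used above (sketch, not the library's implementation) ---
// Scopes follow a colon-separated "protocol:type:name:action" pattern where "*" matches any
// segment. A minimal matcher consistent with the assertions in this file could look like:
//
//   const scopeAllows = (scope: string, resource: string, action: string): boolean => {
//     const scopeParts = scope.split(':');                   // e.g. ['npm', '*', '*', '*']
//     const requestParts = [...resource.split(':'), action]; // e.g. ['npm', 'package', 'lodash', 'read']
//     return scopeParts.length === requestParts.length &&
//       scopeParts.every((part, i) => part === '*' || part === requestParts[i]);
//   };
//
// Under this reading, 'npm:*:*:*' authorizes both ('npm:package:lodash', 'read') and
// ('npm:package:@myorg/foo', 'write'), matching the wildcard test above; readonly tokens are
// additionally denied write/push/delete regardless of scope.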
|
||||
|
||||
tap.test('authorize: should deny access with null token', async () => {
|
||||
const canAccess = await provider.authorize(null, 'npm:package:lodash', 'read');
|
||||
expect(canAccess).toBeFalse();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// List Tokens Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('listUserTokens: should list user tokens', async () => {
|
||||
// Create multiple tokens for the same user
|
||||
const userId = 'listtokenuser';
|
||||
await provider.createToken(userId, 'npm');
|
||||
await provider.createToken(userId, 'maven', { readonly: true });
|
||||
await provider.createToken(userId, 'cargo');
|
||||
|
||||
const tokens = await provider.listUserTokens!(userId);
|
||||
|
||||
expect(tokens.length).toBeGreaterThanOrEqual(3);
|
||||
|
||||
// Check that tokens have expected properties
|
||||
for (const token of tokens) {
|
||||
expect(token.key).toBeTruthy();
|
||||
expect(typeof token.readonly).toEqual('boolean');
|
||||
expect(token.created).toBeTruthy();
|
||||
}
|
||||
|
||||
// Verify we have different protocols
|
||||
const protocols = tokens.map(t => t.protocol);
|
||||
expect(protocols).toContain('npm');
|
||||
expect(protocols).toContain('maven');
|
||||
expect(protocols).toContain('cargo');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// AuthManager Integration Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('AuthManager: should accept custom IAuthProvider', async () => {
|
||||
const mockProvider = createMockAuthProvider({
|
||||
authenticate: async (credentials) => {
|
||||
if (credentials.username === 'custom' && credentials.password === 'pass') {
|
||||
return 'custom-user-id';
|
||||
}
|
||||
return null;
|
||||
},
|
||||
});
|
||||
|
||||
const manager = new AuthManager(authConfig, mockProvider);
|
||||
await manager.init();
|
||||
|
||||
// Use the custom provider
|
||||
const userId = await manager.authenticate({
|
||||
username: 'custom',
|
||||
password: 'pass',
|
||||
});
|
||||
|
||||
expect(userId).toEqual('custom-user-id');
|
||||
|
||||
// Wrong credentials should fail
|
||||
const failed = await manager.authenticate({
|
||||
username: 'custom',
|
||||
password: 'wrong',
|
||||
});
|
||||
|
||||
expect(failed).toBeNull();
|
||||
});
|
||||
|
||||
tap.test('AuthManager: should use default provider when none specified', async () => {
|
||||
const manager = new AuthManager(authConfig);
|
||||
await manager.init();
|
||||
|
||||
// Should use DefaultAuthProvider internally
|
||||
const userId = await manager.authenticate({
|
||||
username: 'defaultuser',
|
||||
password: 'defaultpass',
|
||||
});
|
||||
|
||||
expect(userId).toEqual('defaultuser');
|
||||
});
|
||||
|
||||
tap.test('AuthManager: should delegate token creation to provider', async () => {
|
||||
let tokenCreated = false;
|
||||
const mockProvider = createMockAuthProvider({
|
||||
createToken: async (userId, protocol, options) => {
|
||||
tokenCreated = true;
|
||||
return `mock-token-${protocol}-${userId}`;
|
||||
},
|
||||
});
|
||||
|
||||
const manager = new AuthManager(authConfig, mockProvider);
|
||||
await manager.init();
|
||||
|
||||
const token = await manager.createNpmToken('delegateuser', false);
|
||||
|
||||
expect(tokenCreated).toBeTrue();
|
||||
expect(token).toContain('mock-token-npm');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Edge Cases
|
||||
// ============================================================================
|
||||
|
||||
tap.test('edge: should handle concurrent token operations', async () => {
|
||||
const promises: Promise<string>[] = [];
|
||||
|
||||
// Create 10 tokens concurrently
|
||||
for (let i = 0; i < 10; i++) {
|
||||
promises.push(provider.createToken(`concurrent-user-${i}`, 'npm'));
|
||||
}
|
||||
|
||||
const tokens = await Promise.all(promises);
|
||||
|
||||
// All tokens should be unique
|
||||
const uniqueTokens = new Set(tokens);
|
||||
expect(uniqueTokens.size).toEqual(10);
|
||||
|
||||
// All tokens should be valid
|
||||
for (const token of tokens) {
|
||||
const validated = await provider.validateToken(token);
|
||||
expect(validated).toBeTruthy();
|
||||
}
|
||||
});
|
||||
|
||||
tap.test('edge: should handle empty scopes', async () => {
|
||||
const token = await provider.createToken('emptyuser', 'npm', {
|
||||
scopes: [],
|
||||
});
|
||||
|
||||
const validated = await provider.validateToken(token);
|
||||
expect(validated).toBeTruthy();
|
||||
// Even with empty scopes, token should be valid
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Cleanup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cleanup', async () => {
|
||||
// No cleanup needed for in-memory provider
|
||||
});
|
||||
|
||||
export default tap.start();
|
||||
test/test.storage.hooks.ts (new file, 506 lines)

@@ -0,0 +1,506 @@
|
||||
import { expect, tap } from '@git.zone/tstest/tapbundle';
|
||||
import * as qenv from '@push.rocks/qenv';
|
||||
import { RegistryStorage } from '../ts/core/classes.registrystorage.js';
|
||||
import type { IStorageConfig } from '../ts/core/interfaces.core.js';
|
||||
import type { IStorageHooks, IStorageHookContext } from '../ts/core/interfaces.storage.js';
|
||||
import { createTrackingHooks, createQuotaHooks, generateTestRunId } from './helpers/registry.js';
|
||||
|
||||
const testQenv = new qenv.Qenv('./', './.nogit');
|
||||
|
||||
// ============================================================================
|
||||
// Test State
|
||||
// ============================================================================
|
||||
|
||||
let storage: RegistryStorage;
|
||||
let storageConfig: IStorageConfig;
|
||||
let testRunId: string;
|
||||
|
||||
// ============================================================================
|
||||
// Setup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('setup: should create storage config', async () => {
|
||||
testRunId = generateTestRunId();
|
||||
|
||||
const s3AccessKey = await testQenv.getEnvVarOnDemand('S3_ACCESSKEY');
|
||||
const s3SecretKey = await testQenv.getEnvVarOnDemand('S3_SECRETKEY');
|
||||
const s3Endpoint = await testQenv.getEnvVarOnDemand('S3_ENDPOINT');
|
||||
const s3Port = await testQenv.getEnvVarOnDemand('S3_PORT');
|
||||
|
||||
storageConfig = {
|
||||
accessKey: s3AccessKey || 'minioadmin',
|
||||
accessSecret: s3SecretKey || 'minioadmin',
|
||||
endpoint: s3Endpoint || 'localhost',
|
||||
port: parseInt(s3Port || '9000', 10),
|
||||
useSsl: false,
|
||||
region: 'us-east-1',
|
||||
bucketName: `test-hooks-${testRunId}`,
|
||||
};
|
||||
|
||||
expect(storageConfig.bucketName).toBeTruthy();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// beforePut Hook Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('beforePut: should be called before storage', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
storage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await storage.init();
|
||||
|
||||
// Set context and put object
|
||||
storage.setContext({
|
||||
protocol: 'npm',
|
||||
actor: { userId: 'testuser' },
|
||||
metadata: { packageName: 'test-package' },
|
||||
});
|
||||
|
||||
await storage.putObject('test/beforeput-called.txt', Buffer.from('test data'));
|
||||
storage.clearContext();
|
||||
|
||||
// Verify beforePut was called
|
||||
const beforePutCalls = tracker.calls.filter(c => c.method === 'beforePut');
|
||||
expect(beforePutCalls.length).toEqual(1);
|
||||
expect(beforePutCalls[0].context.operation).toEqual('put');
|
||||
expect(beforePutCalls[0].context.key).toEqual('test/beforeput-called.txt');
|
||||
expect(beforePutCalls[0].context.protocol).toEqual('npm');
|
||||
});
|
||||
|
||||
tap.test('beforePut: returning {allowed: false} should block storage', async () => {
|
||||
const tracker = createTrackingHooks({ beforePutAllowed: false });
|
||||
|
||||
const blockingStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await blockingStorage.init();
|
||||
|
||||
blockingStorage.setContext({
|
||||
protocol: 'npm',
|
||||
actor: { userId: 'testuser' },
|
||||
});
|
||||
|
||||
let errorThrown = false;
|
||||
try {
|
||||
await blockingStorage.putObject('test/should-not-exist.txt', Buffer.from('blocked data'));
|
||||
} catch (error) {
|
||||
errorThrown = true;
|
||||
expect((error as Error).message).toContain('Blocked by test');
|
||||
}
|
||||
|
||||
blockingStorage.clearContext();
|
||||
|
||||
expect(errorThrown).toBeTrue();
|
||||
|
||||
// Verify object was NOT stored
|
||||
const result = await blockingStorage.getObject('test/should-not-exist.txt');
|
||||
expect(result).toBeNull();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// afterPut Hook Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('afterPut: should be called after successful storage', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const trackedStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await trackedStorage.init();
|
||||
|
||||
trackedStorage.setContext({
|
||||
protocol: 'maven',
|
||||
actor: { userId: 'maven-user' },
|
||||
});
|
||||
|
||||
await trackedStorage.putObject('test/afterput-test.txt', Buffer.from('after put test'));
|
||||
trackedStorage.clearContext();
|
||||
|
||||
// Give async hook time to complete
|
||||
await new Promise(resolve => setTimeout(resolve, 100));
|
||||
|
||||
const afterPutCalls = tracker.calls.filter(c => c.method === 'afterPut');
|
||||
expect(afterPutCalls.length).toEqual(1);
|
||||
expect(afterPutCalls[0].context.operation).toEqual('put');
|
||||
});
|
||||
|
||||
tap.test('afterPut: should receive correct metadata (size, key, protocol)', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const metadataStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await metadataStorage.init();
|
||||
|
||||
const testData = Buffer.from('metadata test data - some content here');
|
||||
|
||||
metadataStorage.setContext({
|
||||
protocol: 'cargo',
|
||||
actor: { userId: 'cargo-user', ip: '192.168.1.100' },
|
||||
metadata: { packageName: 'my-crate', version: '1.0.0' },
|
||||
});
|
||||
|
||||
await metadataStorage.putObject('test/metadata-test.txt', testData);
|
||||
metadataStorage.clearContext();
|
||||
|
||||
await new Promise(resolve => setTimeout(resolve, 100));
|
||||
|
||||
const afterPutCalls = tracker.calls.filter(c => c.method === 'afterPut');
|
||||
expect(afterPutCalls.length).toBeGreaterThanOrEqual(1);
|
||||
|
||||
const call = afterPutCalls[afterPutCalls.length - 1];
|
||||
expect(call.context.metadata?.size).toEqual(testData.length);
|
||||
expect(call.context.key).toEqual('test/metadata-test.txt');
|
||||
expect(call.context.protocol).toEqual('cargo');
|
||||
expect(call.context.actor?.userId).toEqual('cargo-user');
|
||||
expect(call.context.actor?.ip).toEqual('192.168.1.100');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// beforeDelete Hook Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('beforeDelete: should be called before deletion', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const deleteStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await deleteStorage.init();
|
||||
|
||||
// First, store an object
|
||||
deleteStorage.setContext({ protocol: 'npm' });
|
||||
await deleteStorage.putObject('test/to-delete.txt', Buffer.from('delete me'));
|
||||
|
||||
// Now delete it
|
||||
await deleteStorage.deleteObject('test/to-delete.txt');
|
||||
deleteStorage.clearContext();
|
||||
|
||||
const beforeDeleteCalls = tracker.calls.filter(c => c.method === 'beforeDelete');
|
||||
expect(beforeDeleteCalls.length).toEqual(1);
|
||||
expect(beforeDeleteCalls[0].context.operation).toEqual('delete');
|
||||
expect(beforeDeleteCalls[0].context.key).toEqual('test/to-delete.txt');
|
||||
});
|
||||
|
||||
tap.test('beforeDelete: returning {allowed: false} should block deletion', async () => {
|
||||
const tracker = createTrackingHooks({ beforeDeleteAllowed: false });
|
||||
|
||||
const protectedStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await protectedStorage.init();
|
||||
|
||||
// First store an object
|
||||
protectedStorage.setContext({ protocol: 'npm' });
|
||||
await protectedStorage.putObject('test/protected.txt', Buffer.from('protected data'));
|
||||
|
||||
// Try to delete - should be blocked
|
||||
let errorThrown = false;
|
||||
try {
|
||||
await protectedStorage.deleteObject('test/protected.txt');
|
||||
} catch (error) {
|
||||
errorThrown = true;
|
||||
expect((error as Error).message).toContain('Blocked by test');
|
||||
}
|
||||
|
||||
protectedStorage.clearContext();
|
||||
|
||||
expect(errorThrown).toBeTrue();
|
||||
|
||||
// Verify object still exists
|
||||
const result = await protectedStorage.getObject('test/protected.txt');
|
||||
expect(result).toBeTruthy();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// afterDelete Hook Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('afterDelete: should be called after successful deletion', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const afterDeleteStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await afterDeleteStorage.init();
|
||||
|
||||
afterDeleteStorage.setContext({ protocol: 'pypi' });
|
||||
await afterDeleteStorage.putObject('test/delete-tracked.txt', Buffer.from('to be deleted'));
|
||||
await afterDeleteStorage.deleteObject('test/delete-tracked.txt');
|
||||
afterDeleteStorage.clearContext();
|
||||
|
||||
await new Promise(resolve => setTimeout(resolve, 100));
|
||||
|
||||
const afterDeleteCalls = tracker.calls.filter(c => c.method === 'afterDelete');
|
||||
expect(afterDeleteCalls.length).toEqual(1);
|
||||
expect(afterDeleteCalls[0].context.operation).toEqual('delete');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// afterGet Hook Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('afterGet: should be called after reading object', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const getStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await getStorage.init();
|
||||
|
||||
// Store an object first
|
||||
getStorage.setContext({ protocol: 'rubygems' });
|
||||
await getStorage.putObject('test/read-test.txt', Buffer.from('read me'));
|
||||
|
||||
// Clear calls to focus on the get
|
||||
tracker.calls.length = 0;
|
||||
|
||||
// Read the object
|
||||
const data = await getStorage.getObject('test/read-test.txt');
|
||||
getStorage.clearContext();
|
||||
|
||||
await new Promise(resolve => setTimeout(resolve, 100));
|
||||
|
||||
expect(data).toBeTruthy();
|
||||
expect(data!.toString()).toEqual('read me');
|
||||
|
||||
const afterGetCalls = tracker.calls.filter(c => c.method === 'afterGet');
|
||||
expect(afterGetCalls.length).toEqual(1);
|
||||
expect(afterGetCalls[0].context.operation).toEqual('get');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Context Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('context: hooks should receive actor information', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const actorStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await actorStorage.init();
|
||||
|
||||
actorStorage.setContext({
|
||||
protocol: 'composer',
|
||||
actor: {
|
||||
userId: 'user-123',
|
||||
tokenId: 'token-abc',
|
||||
ip: '10.0.0.1',
|
||||
userAgent: 'composer/2.0',
|
||||
orgId: 'org-456',
|
||||
sessionId: 'session-xyz',
|
||||
},
|
||||
});
|
||||
|
||||
await actorStorage.putObject('test/actor-test.txt', Buffer.from('actor test'));
|
||||
actorStorage.clearContext();
|
||||
|
||||
const beforePutCall = tracker.calls.find(c => c.method === 'beforePut');
|
||||
expect(beforePutCall).toBeTruthy();
|
||||
expect(beforePutCall!.context.actor?.userId).toEqual('user-123');
|
||||
expect(beforePutCall!.context.actor?.tokenId).toEqual('token-abc');
|
||||
expect(beforePutCall!.context.actor?.ip).toEqual('10.0.0.1');
|
||||
expect(beforePutCall!.context.actor?.userAgent).toEqual('composer/2.0');
|
||||
expect(beforePutCall!.context.actor?.orgId).toEqual('org-456');
|
||||
expect(beforePutCall!.context.actor?.sessionId).toEqual('session-xyz');
|
||||
});
|
||||
|
||||
tap.test('withContext: should set and clear context correctly', async () => {
|
||||
const tracker = createTrackingHooks();
|
||||
|
||||
const contextStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await contextStorage.init();
|
||||
|
||||
// Use withContext to ensure automatic cleanup
|
||||
await contextStorage.withContext(
|
||||
{
|
||||
protocol: 'oci',
|
||||
actor: { userId: 'oci-user' },
|
||||
},
|
||||
async () => {
|
||||
await contextStorage.putObject('test/with-context.txt', Buffer.from('context managed'));
|
||||
}
|
||||
);
|
||||
|
||||
const call = tracker.calls.find(c => c.method === 'beforePut');
|
||||
expect(call).toBeTruthy();
|
||||
expect(call!.context.protocol).toEqual('oci');
|
||||
expect(call!.context.actor?.userId).toEqual('oci-user');
|
||||
});
|
||||
|
||||
tap.test('withContext: should clear context even on error', async () => {
|
||||
const tracker = createTrackingHooks({ beforePutAllowed: false });
|
||||
|
||||
const errorStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await errorStorage.init();
|
||||
|
||||
let errorThrown = false;
|
||||
try {
|
||||
await errorStorage.withContext(
|
||||
{
|
||||
protocol: 'npm',
|
||||
actor: { userId: 'error-user' },
|
||||
},
|
||||
async () => {
|
||||
await errorStorage.putObject('test/error-context.txt', Buffer.from('will fail'));
|
||||
}
|
||||
);
|
||||
} catch {
|
||||
errorThrown = true;
|
||||
}
|
||||
|
||||
expect(errorThrown).toBeTrue();
|
||||
|
||||
// Verify context was cleared - next operation without context should work
|
||||
// (hooks won't be called without context)
|
||||
tracker.hooks.beforePut = async () => ({ allowed: true });
|
||||
await errorStorage.putObject('test/after-error.txt', Buffer.from('ok'));
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Graceful Degradation Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('graceful: hooks should not fail the operation if afterPut throws', async () => {
|
||||
const tracker = createTrackingHooks({ throwOnAfterPut: true });
|
||||
|
||||
const gracefulStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await gracefulStorage.init();
|
||||
|
||||
gracefulStorage.setContext({ protocol: 'npm' });
|
||||
|
||||
// This should NOT throw even though afterPut throws
|
||||
let errorThrown = false;
|
||||
try {
|
||||
await gracefulStorage.putObject('test/graceful-afterput.txt', Buffer.from('should succeed'));
|
||||
} catch {
|
||||
errorThrown = true;
|
||||
}
|
||||
|
||||
gracefulStorage.clearContext();
|
||||
|
||||
expect(errorThrown).toBeFalse();
|
||||
|
||||
// Verify object was stored
|
||||
const data = await gracefulStorage.getObject('test/graceful-afterput.txt');
|
||||
expect(data).toBeTruthy();
|
||||
});
|
||||
|
||||
tap.test('graceful: hooks should not fail the operation if afterGet throws', async () => {
|
||||
const tracker = createTrackingHooks({ throwOnAfterGet: true });
|
||||
|
||||
const gracefulGetStorage = new RegistryStorage(storageConfig, tracker.hooks);
|
||||
await gracefulGetStorage.init();
|
||||
|
||||
// Store first
|
||||
gracefulGetStorage.setContext({ protocol: 'maven' });
|
||||
await gracefulGetStorage.putObject('test/graceful-afterget.txt', Buffer.from('read me gracefully'));
|
||||
|
||||
// Read should succeed even though afterGet throws
|
||||
let errorThrown = false;
|
||||
try {
|
||||
const data = await gracefulGetStorage.getObject('test/graceful-afterget.txt');
|
||||
expect(data).toBeTruthy();
|
||||
} catch {
|
||||
errorThrown = true;
|
||||
}
|
||||
|
||||
gracefulGetStorage.clearContext();
|
||||
|
||||
expect(errorThrown).toBeFalse();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Quota Hooks Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('quota: should block storage when quota exceeded', async () => {
|
||||
const maxSize = 100; // 100 bytes max
|
||||
const quotaTracker = createQuotaHooks(maxSize);
|
||||
|
||||
const quotaStorage = new RegistryStorage(storageConfig, quotaTracker.hooks);
|
||||
await quotaStorage.init();
|
||||
|
||||
quotaStorage.setContext({
|
||||
protocol: 'npm',
|
||||
actor: { userId: 'quota-user' },
|
||||
});
|
||||
|
||||
// Store 50 bytes - should succeed
|
||||
await quotaStorage.putObject('test/quota-1.txt', Buffer.from('x'.repeat(50)));
|
||||
expect(quotaTracker.currentUsage.bytes).toEqual(50);
|
||||
|
||||
// Try to store 60 more bytes - should fail (total would be 110)
|
||||
let errorThrown = false;
|
||||
try {
|
||||
await quotaStorage.putObject('test/quota-2.txt', Buffer.from('x'.repeat(60)));
|
||||
} catch (error) {
|
||||
errorThrown = true;
|
||||
expect((error as Error).message).toContain('Quota exceeded');
|
||||
}
|
||||
|
||||
quotaStorage.clearContext();
|
||||
|
||||
expect(errorThrown).toBeTrue();
|
||||
expect(quotaTracker.currentUsage.bytes).toEqual(50); // Still 50, not 110
|
||||
});
|
||||
|
||||
tap.test('quota: should update usage after delete', async () => {
|
||||
const maxSize = 200;
|
||||
const quotaTracker = createQuotaHooks(maxSize);
|
||||
|
||||
const quotaDelStorage = new RegistryStorage(storageConfig, quotaTracker.hooks);
|
||||
await quotaDelStorage.init();
|
||||
|
||||
quotaDelStorage.setContext({
|
||||
protocol: 'npm',
|
||||
metadata: { size: 75 },
|
||||
});
|
||||
|
||||
// Store and track
|
||||
await quotaDelStorage.putObject('test/quota-del.txt', Buffer.from('x'.repeat(75)));
|
||||
expect(quotaTracker.currentUsage.bytes).toEqual(75);
|
||||
|
||||
// Delete and verify usage decreases
|
||||
await quotaDelStorage.deleteObject('test/quota-del.txt');
|
||||
|
||||
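// Give the quota hooks a moment to process the delete before checking usage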
await new Promise(resolve => setTimeout(resolve, 100));
|
||||
|
||||
quotaDelStorage.clearContext();
|
||||
|
||||
// Usage should be reduced (though exact value depends on metadata)
|
||||
expect(quotaTracker.currentUsage.bytes).toBeLessThanOrEqual(75);
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// setHooks Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('setHooks: should allow setting hooks after construction', async () => {
|
||||
const lateStorage = new RegistryStorage(storageConfig);
|
||||
await lateStorage.init();
|
||||
|
||||
// Initially no hooks
|
||||
await lateStorage.putObject('test/no-hooks.txt', Buffer.from('no hooks yet'));
|
||||
|
||||
// Add hooks later
|
||||
const tracker = createTrackingHooks();
|
||||
lateStorage.setHooks(tracker.hooks);
|
||||
|
||||
lateStorage.setContext({ protocol: 'npm' });
|
||||
await lateStorage.putObject('test/with-late-hooks.txt', Buffer.from('now with hooks'));
|
||||
lateStorage.clearContext();
|
||||
|
||||
const beforePutCalls = tracker.calls.filter(c => c.method === 'beforePut');
|
||||
expect(beforePutCalls.length).toEqual(1);
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Cleanup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cleanup: should clean up test bucket', async () => {
|
||||
if (storage) {
|
||||
// Clean up test objects
|
||||
const prefixes = ['test/'];
|
||||
for (const prefix of prefixes) {
|
||||
try {
|
||||
const objects = await storage.listObjects(prefix);
|
||||
for (const obj of objects) {
|
||||
await storage.deleteObject(obj);
|
||||
}
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
export default tap.start();
|
||||
598
test/test.upstream.cache.s3.ts
Normal file
@@ -0,0 +1,598 @@
|
||||
import { expect, tap } from '@git.zone/tstest/tapbundle';
|
||||
import * as qenv from '@push.rocks/qenv';
|
||||
import * as smartbucket from '@push.rocks/smartbucket';
|
||||
import { UpstreamCache } from '../ts/upstream/classes.upstreamcache.js';
|
||||
import type { IUpstreamFetchContext, IUpstreamCacheConfig } from '../ts/upstream/interfaces.upstream.js';
|
||||
import type { IStorageBackend } from '../ts/core/interfaces.core.js';
|
||||
import { generateTestRunId } from './helpers/registry.js';
|
||||
|
||||
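// Resolve S3 connection settings from local qenv config (secrets in ./.nogit)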
const testQenv = new qenv.Qenv('./', './.nogit');
|
||||
|
||||
// ============================================================================
|
||||
// Test State
|
||||
// ============================================================================
|
||||
|
||||
let cache: UpstreamCache;
|
||||
let storageBackend: IStorageBackend;
|
||||
let s3Bucket: smartbucket.Bucket;
|
||||
let smartBucket: smartbucket.SmartBucket;
|
||||
let testRunId: string;
|
||||
let bucketName: string;
|
||||
|
||||
// ============================================================================
|
||||
// Helper Functions
|
||||
// ============================================================================
|
||||
|
||||
function createFetchContext(overrides?: Partial<IUpstreamFetchContext>): IUpstreamFetchContext {
|
||||
// Use resource name as path to ensure unique cache keys
|
||||
const resource = overrides?.resource || 'lodash';
|
||||
return {
|
||||
protocol: 'npm',
|
||||
resource,
|
||||
resourceType: 'packument',
|
||||
path: `/${resource}`,
|
||||
method: 'GET',
|
||||
headers: {},
|
||||
query: {},
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
// Setup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('setup: should create S3 storage backend', async () => {
|
||||
testRunId = generateTestRunId();
|
||||
bucketName = `test-ucache-${testRunId.substring(0, 8)}`;
|
||||
|
||||
const s3AccessKey = await testQenv.getEnvVarOnDemand('S3_ACCESSKEY');
|
||||
const s3SecretKey = await testQenv.getEnvVarOnDemand('S3_SECRETKEY');
|
||||
const s3Endpoint = await testQenv.getEnvVarOnDemand('S3_ENDPOINT');
|
||||
const s3Port = await testQenv.getEnvVarOnDemand('S3_PORT');
|
||||
|
||||
smartBucket = new smartbucket.SmartBucket({
|
||||
accessKey: s3AccessKey || 'minioadmin',
|
||||
accessSecret: s3SecretKey || 'minioadmin',
|
||||
endpoint: s3Endpoint || 'localhost',
|
||||
port: parseInt(s3Port || '9000', 10),
|
||||
useSsl: false,
|
||||
});
|
||||
|
||||
s3Bucket = await smartBucket.createBucket(bucketName);
|
||||
|
||||
// Create storage backend adapter
|
||||
storageBackend = {
|
||||
getObject: async (key: string): Promise<Buffer | null> => {
|
||||
try {
|
||||
// fastGet returns the object as a Buffer; a missing object may come back undefined or throw (handled below)
|
||||
const data = await s3Bucket.fastGet({ path: key });
|
||||
if (!data) {
|
||||
return null;
|
||||
}
|
||||
return data;
|
||||
} catch (error) {
|
||||
// fastGet throws if object doesn't exist
|
||||
return null;
|
||||
}
|
||||
},
|
||||
putObject: async (key: string, data: Buffer): Promise<void> => {
|
||||
await s3Bucket.fastPut({ path: key, contents: data, overwrite: true });
|
||||
},
|
||||
deleteObject: async (key: string): Promise<void> => {
|
||||
await s3Bucket.fastRemove({ path: key });
|
||||
},
|
||||
listObjects: async (prefix: string): Promise<string[]> => {
|
||||
const paths: string[] = [];
|
||||
for await (const path of s3Bucket.listAllObjects(prefix)) {
|
||||
paths.push(path);
|
||||
}
|
||||
return paths;
|
||||
},
|
||||
};
|
||||
|
||||
expect(storageBackend).toBeTruthy();
|
||||
});
|
||||
|
||||
tap.test('setup: should create UpstreamCache with S3 storage', async () => {
|
||||
cache = new UpstreamCache(
|
||||
{ enabled: true, defaultTtlSeconds: 300 },
|
||||
10000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
expect(cache.isEnabled()).toBeTrue();
|
||||
expect(cache.hasStorage()).toBeTrue();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Basic Cache Operations
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should store cache entry in S3', async () => {
|
||||
const context = createFetchContext({ resource: 'store-test' });
|
||||
const testData = Buffer.from(JSON.stringify({ name: 'store-test', version: '1.0.0' }));
|
||||
const upstreamUrl = 'https://registry.npmjs.org';
|
||||
|
||||
await cache.set(
|
||||
context,
|
||||
testData,
|
||||
'application/json',
|
||||
{ 'etag': '"abc123"' },
|
||||
'npmjs',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
// Verify in S3
|
||||
const stats = cache.getStats();
|
||||
expect(stats.totalEntries).toBeGreaterThanOrEqual(1);
|
||||
});
|
||||
|
||||
tap.test('cache: should retrieve cache entry from S3', async () => {
|
||||
const context = createFetchContext({ resource: 'retrieve-test' });
|
||||
const testData = Buffer.from('retrieve test data');
|
||||
const upstreamUrl = 'https://registry.npmjs.org';
|
||||
|
||||
await cache.set(
|
||||
context,
|
||||
testData,
|
||||
'application/octet-stream',
|
||||
{},
|
||||
'npmjs',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
const entry = await cache.get(context, upstreamUrl);
|
||||
|
||||
expect(entry).toBeTruthy();
|
||||
expect(entry!.data.toString()).toEqual('retrieve test data');
|
||||
expect(entry!.contentType).toEqual('application/octet-stream');
|
||||
expect(entry!.upstreamId).toEqual('npmjs');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Multi-Upstream URL Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should include upstream URL in cache path', async () => {
|
||||
const context = createFetchContext({ resource: 'url-path-test' });
|
||||
const testData = Buffer.from('url path test');
|
||||
|
||||
await cache.set(
|
||||
context,
|
||||
testData,
|
||||
'text/plain',
|
||||
{},
|
||||
'npmjs',
|
||||
'https://registry.npmjs.org'
|
||||
);
|
||||
|
||||
// The cache key should include the escaped URL
|
||||
const entry = await cache.get(context, 'https://registry.npmjs.org');
|
||||
expect(entry).toBeTruthy();
|
||||
expect(entry!.data.toString()).toEqual('url path test');
|
||||
});
|
||||
|
||||
tap.test('cache: should handle multiple upstreams with different URLs', async () => {
|
||||
const context = createFetchContext({ resource: '@company/private-pkg' });
|
||||
|
||||
// Store from private upstream
|
||||
const privateData = Buffer.from('private package data');
|
||||
await cache.set(
|
||||
context,
|
||||
privateData,
|
||||
'application/json',
|
||||
{},
|
||||
'private-npm',
|
||||
'https://npm.company.com'
|
||||
);
|
||||
|
||||
// Store from public upstream (same resource name, different upstream)
|
||||
const publicData = Buffer.from('public package data');
|
||||
await cache.set(
|
||||
context,
|
||||
publicData,
|
||||
'application/json',
|
||||
{},
|
||||
'public-npm',
|
||||
'https://registry.npmjs.org'
|
||||
);
|
||||
|
||||
// Retrieve from private - should get private data
|
||||
const privateEntry = await cache.get(context, 'https://npm.company.com');
|
||||
expect(privateEntry).toBeTruthy();
|
||||
expect(privateEntry!.data.toString()).toEqual('private package data');
|
||||
expect(privateEntry!.upstreamId).toEqual('private-npm');
|
||||
|
||||
// Retrieve from public - should get public data
|
||||
const publicEntry = await cache.get(context, 'https://registry.npmjs.org');
|
||||
expect(publicEntry).toBeTruthy();
|
||||
expect(publicEntry!.data.toString()).toEqual('public package data');
|
||||
expect(publicEntry!.upstreamId).toEqual('public-npm');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// TTL and Expiration Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should respect TTL expiration', async () => {
|
||||
// Create cache with very short TTL
|
||||
const shortTtlCache = new UpstreamCache(
|
||||
{
|
||||
enabled: true,
|
||||
defaultTtlSeconds: 1, // 1 second TTL
|
||||
staleWhileRevalidate: false,
|
||||
staleMaxAgeSeconds: 0,
|
||||
immutableTtlSeconds: 1,
|
||||
negativeCacheTtlSeconds: 1,
|
||||
},
|
||||
1000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
const context = createFetchContext({ resource: 'ttl-test' });
|
||||
const testData = Buffer.from('expires soon');
|
||||
|
||||
await shortTtlCache.set(
|
||||
context,
|
||||
testData,
|
||||
'text/plain',
|
||||
{},
|
||||
'test-upstream',
|
||||
'https://test.example.com'
|
||||
);
|
||||
|
||||
// Should exist immediately
|
||||
let entry = await shortTtlCache.get(context, 'https://test.example.com');
|
||||
expect(entry).toBeTruthy();
|
||||
|
||||
// Wait for expiration
|
||||
await new Promise(resolve => setTimeout(resolve, 1500));
|
||||
|
||||
// Should be expired now
|
||||
entry = await shortTtlCache.get(context, 'https://test.example.com');
|
||||
expect(entry).toBeNull();
|
||||
|
||||
shortTtlCache.stop();
|
||||
});
|
||||
|
||||
tap.test('cache: should serve stale content during stale-while-revalidate window', async () => {
|
||||
const staleCache = new UpstreamCache(
|
||||
{
|
||||
enabled: true,
|
||||
defaultTtlSeconds: 1, // 1 second fresh
|
||||
staleWhileRevalidate: true,
|
||||
staleMaxAgeSeconds: 60, // 60 seconds stale window
|
||||
immutableTtlSeconds: 1,
|
||||
negativeCacheTtlSeconds: 1,
|
||||
},
|
||||
1000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
const context = createFetchContext({ resource: 'stale-test' });
|
||||
const testData = Buffer.from('stale but usable');
|
||||
|
||||
await staleCache.set(
|
||||
context,
|
||||
testData,
|
||||
'text/plain',
|
||||
{},
|
||||
'stale-upstream',
|
||||
'https://stale.example.com'
|
||||
);
|
||||
|
||||
// Wait for fresh period to expire
|
||||
await new Promise(resolve => setTimeout(resolve, 1500));
|
||||
|
||||
// Should still be available but marked as stale
|
||||
const entry = await staleCache.get(context, 'https://stale.example.com');
|
||||
expect(entry).toBeTruthy();
|
||||
expect(entry!.stale).toBeTrue();
|
||||
expect(entry!.data.toString()).toEqual('stale but usable');
|
||||
|
||||
staleCache.stop();
|
||||
});
|
||||
|
||||
tap.test('cache: should reject content past stale deadline', async () => {
|
||||
const veryShortCache = new UpstreamCache(
|
||||
{
|
||||
enabled: true,
|
||||
defaultTtlSeconds: 1,
|
||||
staleWhileRevalidate: true,
|
||||
staleMaxAgeSeconds: 1, // Only 1 second stale window
|
||||
immutableTtlSeconds: 1,
|
||||
negativeCacheTtlSeconds: 1,
|
||||
},
|
||||
1000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
const context = createFetchContext({ resource: 'very-stale-test' });
|
||||
await veryShortCache.set(
|
||||
context,
|
||||
Buffer.from('will expire completely'),
|
||||
'text/plain',
|
||||
{},
|
||||
'short-upstream',
|
||||
'https://short.example.com'
|
||||
);
|
||||
|
||||
// Wait for both fresh AND stale periods to expire
|
||||
await new Promise(resolve => setTimeout(resolve, 2500));
|
||||
|
||||
const entry = await veryShortCache.get(context, 'https://short.example.com');
|
||||
expect(entry).toBeNull();
|
||||
|
||||
veryShortCache.stop();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Negative Cache Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should store negative cache entries (404)', async () => {
|
||||
const context = createFetchContext({ resource: 'not-found-pkg' });
|
||||
const upstreamUrl = 'https://registry.npmjs.org';
|
||||
|
||||
await cache.setNegative(context, 'npmjs', upstreamUrl);
|
||||
|
||||
const hasNegative = await cache.hasNegative(context, upstreamUrl);
|
||||
expect(hasNegative).toBeTrue();
|
||||
});
|
||||
|
||||
tap.test('cache: should retrieve negative cache entries', async () => {
|
||||
const context = createFetchContext({ resource: 'negative-retrieve-test' });
|
||||
const upstreamUrl = 'https://registry.npmjs.org';
|
||||
|
||||
await cache.setNegative(context, 'npmjs', upstreamUrl);
|
||||
|
||||
const entry = await cache.get(context, upstreamUrl);
|
||||
expect(entry).toBeTruthy();
|
||||
expect(entry!.data.length).toEqual(0); // Empty buffer indicates 404
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Eviction Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should evict oldest entries when memory limit reached', async () => {
|
||||
// Create cache with very small limit
|
||||
const smallCache = new UpstreamCache(
|
||||
{ enabled: true, defaultTtlSeconds: 300 },
|
||||
5, // Only 5 entries
|
||||
storageBackend
|
||||
);
|
||||
|
||||
// Add 10 entries
|
||||
for (let i = 0; i < 10; i++) {
|
||||
const context = createFetchContext({ resource: `evict-test-${i}` });
|
||||
await smallCache.set(
|
||||
context,
|
||||
Buffer.from(`data ${i}`),
|
||||
'text/plain',
|
||||
{},
|
||||
'evict-upstream',
|
||||
'https://evict.example.com'
|
||||
);
|
||||
}
|
||||
|
||||
const stats = smallCache.getStats();
|
||||
// Should have evicted some entries
|
||||
expect(stats.totalEntries).toBeLessThanOrEqual(5);
|
||||
|
||||
smallCache.stop();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Query Parameter Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: cache key should include query parameters', async () => {
|
||||
const context1 = createFetchContext({
|
||||
resource: 'query-test',
|
||||
query: { version: '1.0.0' },
|
||||
});
|
||||
|
||||
const context2 = createFetchContext({
|
||||
resource: 'query-test',
|
||||
query: { version: '2.0.0' },
|
||||
});
|
||||
|
||||
const upstreamUrl = 'https://registry.npmjs.org';
|
||||
|
||||
// Store with v1 query
|
||||
await cache.set(
|
||||
context1,
|
||||
Buffer.from('version 1 data'),
|
||||
'text/plain',
|
||||
{},
|
||||
'query-upstream',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
// Store with v2 query
|
||||
await cache.set(
|
||||
context2,
|
||||
Buffer.from('version 2 data'),
|
||||
'text/plain',
|
||||
{},
|
||||
'query-upstream',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
// Retrieve v1 - should get v1 data
|
||||
const entry1 = await cache.get(context1, upstreamUrl);
|
||||
expect(entry1).toBeTruthy();
|
||||
expect(entry1!.data.toString()).toEqual('version 1 data');
|
||||
|
||||
// Retrieve v2 - should get v2 data
|
||||
const entry2 = await cache.get(context2, upstreamUrl);
|
||||
expect(entry2).toBeTruthy();
|
||||
expect(entry2!.data.toString()).toEqual('version 2 data');
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// S3 Persistence Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should load from S3 on memory cache miss', async () => {
|
||||
// Use a unique resource name for this test
|
||||
const uniqueResource = `persist-test-${Date.now()}`;
|
||||
const persistContext = createFetchContext({ resource: uniqueResource });
|
||||
const upstreamUrl = 'https://persist.example.com';
|
||||
|
||||
// Store in first cache instance
|
||||
await cache.set(
|
||||
persistContext,
|
||||
Buffer.from('persisted data'),
|
||||
'text/plain',
|
||||
{},
|
||||
'persist-upstream',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
// Wait for S3 write to complete
|
||||
await new Promise(resolve => setTimeout(resolve, 200));
|
||||
|
||||
// Verify the entry is in the original cache's memory
|
||||
const originalEntry = await cache.get(persistContext, upstreamUrl);
|
||||
expect(originalEntry).toBeTruthy();
|
||||
|
||||
// Create a new cache instance (simulates restart) with SAME storage backend
|
||||
const freshCache = new UpstreamCache(
|
||||
{ enabled: true, defaultTtlSeconds: 300 },
|
||||
10000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
// Fresh cache has empty memory, should load from S3
|
||||
const entry = await freshCache.get(persistContext, upstreamUrl);
|
||||
|
||||
expect(entry).toBeTruthy();
|
||||
expect(entry!.data.toString()).toEqual('persisted data');
|
||||
|
||||
freshCache.stop();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Cache Stats Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should return accurate stats', async () => {
|
||||
const statsCache = new UpstreamCache(
|
||||
{ enabled: true, defaultTtlSeconds: 300 },
|
||||
1000,
|
||||
storageBackend
|
||||
);
|
||||
|
||||
// Add some entries
|
||||
for (let i = 0; i < 3; i++) {
|
||||
const context = createFetchContext({ resource: `stats-test-${i}` });
|
||||
await statsCache.set(
|
||||
context,
|
||||
Buffer.from(`stats data ${i}`),
|
||||
'text/plain',
|
||||
{},
|
||||
'stats-upstream',
|
||||
'https://stats.example.com'
|
||||
);
|
||||
}
|
||||
|
||||
// Add a negative entry
|
||||
const negContext = createFetchContext({ resource: 'stats-negative' });
|
||||
await statsCache.setNegative(negContext, 'stats-upstream', 'https://stats.example.com');
|
||||
|
||||
const stats = statsCache.getStats();
|
||||
|
||||
expect(stats.totalEntries).toBeGreaterThanOrEqual(4);
|
||||
expect(stats.enabled).toBeTrue();
|
||||
expect(stats.hasStorage).toBeTrue();
|
||||
expect(stats.maxEntries).toEqual(1000);
|
||||
|
||||
statsCache.stop();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Invalidation Tests
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cache: should invalidate specific cache entry', async () => {
|
||||
const invalidateContext = createFetchContext({ resource: 'invalidate-test' });
|
||||
const upstreamUrl = 'https://invalidate.example.com';
|
||||
|
||||
await cache.set(
|
||||
invalidateContext,
|
||||
Buffer.from('to be invalidated'),
|
||||
'text/plain',
|
||||
{},
|
||||
'inv-upstream',
|
||||
upstreamUrl
|
||||
);
|
||||
|
||||
// Verify it exists
|
||||
let entry = await cache.get(invalidateContext, upstreamUrl);
|
||||
expect(entry).toBeTruthy();
|
||||
|
||||
// Invalidate
|
||||
const deleted = await cache.invalidate(invalidateContext, upstreamUrl);
|
||||
expect(deleted).toBeTrue();
|
||||
|
||||
// Should be gone
|
||||
entry = await cache.get(invalidateContext, upstreamUrl);
|
||||
expect(entry).toBeNull();
|
||||
});
|
||||
|
||||
tap.test('cache: should invalidate entries matching pattern', async () => {
|
||||
const upstreamUrl = 'https://pattern.example.com';
|
||||
|
||||
// Add multiple entries
|
||||
for (const name of ['pattern-a', 'pattern-b', 'other-c']) {
|
||||
const context = createFetchContext({ resource: name });
|
||||
await cache.set(
|
||||
context,
|
||||
Buffer.from(`data for ${name}`),
|
||||
'text/plain',
|
||||
{},
|
||||
'pattern-upstream',
|
||||
upstreamUrl
|
||||
);
|
||||
}
|
||||
|
||||
// Invalidate entries matching 'pattern-*'
|
||||
const count = await cache.invalidatePattern(/pattern-/);
|
||||
expect(count).toBeGreaterThanOrEqual(2);
|
||||
|
||||
// pattern-a should be gone
|
||||
const entryA = await cache.get(createFetchContext({ resource: 'pattern-a' }), upstreamUrl);
|
||||
expect(entryA).toBeNull();
|
||||
|
||||
// other-c should still exist
|
||||
const entryC = await cache.get(createFetchContext({ resource: 'other-c' }), upstreamUrl);
|
||||
expect(entryC).toBeTruthy();
|
||||
});
|
||||
|
||||
// ============================================================================
|
||||
// Cleanup
|
||||
// ============================================================================
|
||||
|
||||
tap.test('cleanup: should stop cache and clean up bucket', async () => {
|
||||
if (cache) {
|
||||
cache.stop();
|
||||
}
|
||||
|
||||
// Clean up test bucket
|
||||
if (s3Bucket) {
|
||||
try {
|
||||
const files = await s3Bucket.fastList({});
|
||||
for (const file of files) {
|
||||
await s3Bucket.fastRemove({ path: file.name });
|
||||
}
|
||||
await smartBucket.removeBucket(bucketName);
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
export default tap.start();
|
||||
343
test/test.upstream.provider.ts
Normal file
@@ -0,0 +1,343 @@
|
||||
import { expect, tap } from '@git.zone/tstest/tapbundle';
|
||||
import { SmartRegistry } from '../ts/index.js';
|
||||
import {
|
||||
createTestRegistryWithUpstream,
|
||||
createTrackingUpstreamProvider,
|
||||
} from './helpers/registry.js';
|
||||
import { StaticUpstreamProvider } from '../ts/upstream/interfaces.upstream.js';
|
||||
import type {
|
||||
IUpstreamProvider,
|
||||
IUpstreamResolutionContext,
|
||||
IProtocolUpstreamConfig,
|
||||
} from '../ts/upstream/interfaces.upstream.js';
|
||||
import type { TRegistryProtocol } from '../ts/core/interfaces.core.js';
|
||||
|
||||
// =============================================================================
|
||||
// StaticUpstreamProvider Tests
|
||||
// =============================================================================
|
||||
|
||||
tap.test('StaticUpstreamProvider: should return config for configured protocol', async () => {
|
||||
const npmConfig: IProtocolUpstreamConfig = {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
};
|
||||
|
||||
const provider = new StaticUpstreamProvider({
|
||||
npm: npmConfig,
|
||||
});
|
||||
|
||||
const result = await provider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: 'lodash',
|
||||
scope: null,
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
|
||||
expect(result).toBeDefined();
|
||||
expect(result?.enabled).toEqual(true);
|
||||
expect(result?.upstreams[0].id).toEqual('npmjs');
|
||||
});
|
||||
|
||||
tap.test('StaticUpstreamProvider: should return null for unconfigured protocol', async () => {
|
||||
const provider = new StaticUpstreamProvider({
|
||||
npm: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
},
|
||||
});
|
||||
|
||||
const result = await provider.resolveUpstreamConfig({
|
||||
protocol: 'maven',
|
||||
resource: 'com.example:lib',
|
||||
scope: 'com.example',
|
||||
method: 'GET',
|
||||
resourceType: 'pom',
|
||||
});
|
||||
|
||||
expect(result).toBeNull();
|
||||
});
|
||||
|
||||
tap.test('StaticUpstreamProvider: should support multiple protocols', async () => {
|
||||
const provider = new StaticUpstreamProvider({
|
||||
npm: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
},
|
||||
oci: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'dockerhub', url: 'https://registry-1.docker.io', priority: 1, enabled: true }],
|
||||
},
|
||||
maven: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'central', url: 'https://repo1.maven.org/maven2', priority: 1, enabled: true }],
|
||||
},
|
||||
});
|
||||
|
||||
const npmResult = await provider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: 'lodash',
|
||||
scope: null,
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(npmResult?.upstreams[0].id).toEqual('npmjs');
|
||||
|
||||
const ociResult = await provider.resolveUpstreamConfig({
|
||||
protocol: 'oci',
|
||||
resource: 'library/nginx',
|
||||
scope: 'library',
|
||||
method: 'GET',
|
||||
resourceType: 'manifest',
|
||||
});
|
||||
expect(ociResult?.upstreams[0].id).toEqual('dockerhub');
|
||||
|
||||
const mavenResult = await provider.resolveUpstreamConfig({
|
||||
protocol: 'maven',
|
||||
resource: 'com.example:lib',
|
||||
scope: 'com.example',
|
||||
method: 'GET',
|
||||
resourceType: 'pom',
|
||||
});
|
||||
expect(mavenResult?.upstreams[0].id).toEqual('central');
|
||||
});
|
||||
|
||||
// =============================================================================
|
||||
// Registry with Provider Integration Tests
|
||||
// =============================================================================
|
||||
|
||||
let registry: SmartRegistry;
|
||||
let trackingProvider: ReturnType<typeof createTrackingUpstreamProvider>;
|
||||
|
||||
tap.test('Provider Integration: should create registry with upstream provider', async () => {
|
||||
trackingProvider = createTrackingUpstreamProvider({
|
||||
npm: {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'test-npm', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
},
|
||||
});
|
||||
|
||||
registry = await createTestRegistryWithUpstream(trackingProvider.provider);
|
||||
|
||||
expect(registry).toBeInstanceOf(SmartRegistry);
|
||||
expect(registry.isInitialized()).toEqual(true);
|
||||
});
|
||||
|
||||
tap.test('Provider Integration: should call provider when fetching unknown npm package', async () => {
|
||||
// Clear previous calls
|
||||
trackingProvider.calls.length = 0;
|
||||
|
||||
// Request a package that doesn't exist locally - should trigger upstream lookup
|
||||
const response = await registry.handleRequest({
|
||||
method: 'GET',
|
||||
path: '/npm/@test-scope/nonexistent-package',
|
||||
headers: {},
|
||||
query: {},
|
||||
});
|
||||
|
||||
// Provider should have been called for the packument lookup
|
||||
const npmCalls = trackingProvider.calls.filter(c => c.protocol === 'npm');
|
||||
|
||||
// The package doesn't exist locally, so upstream should be consulted
|
||||
// Note: actual upstream fetch may fail since the package doesn't exist
|
||||
expect(response.status).toBeOneOf([404, 200, 502]); // 404 if not found, 200 if upstream served it, 502 on upstream error
|
||||
});
|
||||
|
||||
tap.test('Provider Integration: provider receives correct context for scoped npm package', async () => {
|
||||
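// Reset recorded provider calls for this test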
trackingProvider.calls.length = 0;
|
||||
|
||||
// Use URL-encoded path for scoped packages as npm client does
|
||||
await registry.handleRequest({
|
||||
method: 'GET',
|
||||
path: '/npm/@myorg%2fmy-package',
|
||||
headers: {},
|
||||
query: {},
|
||||
});
|
||||
|
||||
// Find any npm call - the exact resource type depends on routing
|
||||
const npmCalls = trackingProvider.calls.filter(c => c.protocol === 'npm');
|
||||
|
||||
// Provider should be called for upstream lookup
|
||||
if (npmCalls.length > 0) {
|
||||
const call = npmCalls[0];
|
||||
expect(call.protocol).toEqual('npm');
|
||||
// The resource should include the scoped name
|
||||
expect(call.resource).toInclude('myorg');
|
||||
expect(call.method).toEqual('GET');
|
||||
}
|
||||
});
|
||||
|
||||
tap.test('Provider Integration: provider receives correct context for unscoped npm package', async () => {
|
||||
trackingProvider.calls.length = 0;
|
||||
|
||||
await registry.handleRequest({
|
||||
method: 'GET',
|
||||
path: '/npm/lodash',
|
||||
headers: {},
|
||||
query: {},
|
||||
});
|
||||
|
||||
const packumentCall = trackingProvider.calls.find(
|
||||
c => c.protocol === 'npm' && c.resourceType === 'packument'
|
||||
);
|
||||
|
||||
if (packumentCall) {
|
||||
expect(packumentCall.protocol).toEqual('npm');
|
||||
expect(packumentCall.resource).toEqual('lodash');
|
||||
expect(packumentCall.scope).toBeNull(); // No scope for unscoped package
|
||||
}
|
||||
});
|
||||
|
||||
// =============================================================================
|
||||
// Custom Provider Implementation Tests
|
||||
// =============================================================================
|
||||
|
||||
tap.test('Custom Provider: should support dynamic resolution based on context', async () => {
|
||||
// Create a provider that returns different configs based on scope
|
||||
const dynamicProvider: IUpstreamProvider = {
|
||||
async resolveUpstreamConfig(context: IUpstreamResolutionContext) {
|
||||
if (context.scope === 'internal') {
|
||||
// Internal packages go to private registry
|
||||
return {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'private', url: 'https://private.registry.com', priority: 1, enabled: true }],
|
||||
};
|
||||
}
|
||||
// Everything else goes to public registry
|
||||
return {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'public', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
const internalResult = await dynamicProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: '@internal/utils',
|
||||
scope: 'internal',
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(internalResult?.upstreams[0].id).toEqual('private');
|
||||
|
||||
const publicResult = await dynamicProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: '@public/utils',
|
||||
scope: 'public',
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(publicResult?.upstreams[0].id).toEqual('public');
|
||||
});
|
||||
|
||||
tap.test('Custom Provider: should support actor-based resolution', async () => {
|
||||
const actorAwareProvider: IUpstreamProvider = {
|
||||
async resolveUpstreamConfig(context: IUpstreamResolutionContext) {
|
||||
// Different upstreams based on user's organization
|
||||
if (context.actor?.orgId === 'enterprise-org') {
|
||||
return {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'enterprise', url: 'https://enterprise.registry.com', priority: 1, enabled: true }],
|
||||
};
|
||||
}
|
||||
return {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'default', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
const enterpriseResult = await actorAwareProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: 'lodash',
|
||||
scope: null,
|
||||
actor: { orgId: 'enterprise-org', userId: 'user1' },
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(enterpriseResult?.upstreams[0].id).toEqual('enterprise');
|
||||
|
||||
const defaultResult = await actorAwareProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: 'lodash',
|
||||
scope: null,
|
||||
actor: { orgId: 'free-org', userId: 'user2' },
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(defaultResult?.upstreams[0].id).toEqual('default');
|
||||
});
|
||||
|
||||
tap.test('Custom Provider: should support disabling upstream for specific resources', async () => {
|
||||
const selectiveProvider: IUpstreamProvider = {
|
||||
async resolveUpstreamConfig(context: IUpstreamResolutionContext) {
|
||||
// Block upstream for internal packages
|
||||
if (context.scope === 'internal') {
|
||||
return null; // No upstream for internal packages
|
||||
}
|
||||
return {
|
||||
enabled: true,
|
||||
upstreams: [{ id: 'public', url: 'https://registry.npmjs.org', priority: 1, enabled: true }],
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
const internalResult = await selectiveProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: '@internal/secret',
|
||||
scope: 'internal',
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(internalResult).toBeNull();
|
||||
|
||||
const publicResult = await selectiveProvider.resolveUpstreamConfig({
|
||||
protocol: 'npm',
|
||||
resource: 'lodash',
|
||||
scope: null,
|
||||
method: 'GET',
|
||||
resourceType: 'packument',
|
||||
});
|
||||
expect(publicResult).not.toBeNull();
|
||||
});
|
||||
|
||||
// =============================================================================
|
||||
// Registry without Provider Tests
|
||||
// =============================================================================
|
||||
|
||||
tap.test('No Provider: registry should work without upstream provider', async () => {
|
||||
const registryWithoutUpstream = await createTestRegistryWithUpstream(
|
||||
// Pass a provider that always returns null
|
||||
{
|
||||
async resolveUpstreamConfig() {
|
||||
return null;
|
||||
},
|
||||
}
|
||||
);
|
||||
|
||||
expect(registryWithoutUpstream).toBeInstanceOf(SmartRegistry);
|
||||
|
||||
// Should return 404 for non-existent package (no upstream to check)
|
||||
const response = await registryWithoutUpstream.handleRequest({
|
||||
method: 'GET',
|
||||
path: '/npm/nonexistent-package-xyz',
|
||||
headers: {},
|
||||
query: {},
|
||||
});
|
||||
|
||||
expect(response.status).toEqual(404);
|
||||
|
||||
registryWithoutUpstream.destroy();
|
||||
});
|
||||
|
||||
// =============================================================================
|
||||
// Cleanup
|
||||
// =============================================================================
|
||||
|
||||
tap.postTask('cleanup registry', async () => {
|
||||
if (registry) {
|
||||
registry.destroy();
|
||||
}
|
||||
});
|
||||
|
||||
export default tap.start();
|
||||
@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smartregistry',
version: '2.4.0',
version: '2.7.0',
description: 'A composable TypeScript library implementing OCI, NPM, Maven, Cargo, Composer, PyPI, and RubyGems registries for building unified container and package registries'
}

@@ -2,8 +2,8 @@ import { Smartlog } from '@push.rocks/smartlog';
|
||||
import { BaseRegistry } from '../core/classes.baseregistry.js';
|
||||
import { RegistryStorage } from '../core/classes.registrystorage.js';
|
||||
import { AuthManager } from '../core/classes.authmanager.js';
|
||||
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
|
||||
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
|
||||
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
|
||||
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
|
||||
import type {
|
||||
ICargoIndexEntry,
|
||||
ICargoPublishMetadata,
|
||||
@@ -27,20 +27,21 @@ export class CargoRegistry extends BaseRegistry {
|
||||
private basePath: string = '/cargo';
|
||||
private registryUrl: string;
|
||||
private logger: Smartlog;
|
||||
private upstream: CargoUpstream | null = null;
|
||||
private upstreamProvider: IUpstreamProvider | null = null;
|
||||
|
||||
constructor(
|
||||
storage: RegistryStorage,
|
||||
authManager: AuthManager,
|
||||
basePath: string = '/cargo',
|
||||
registryUrl: string = 'http://localhost:5000/cargo',
|
||||
upstreamConfig?: IProtocolUpstreamConfig
|
||||
upstreamProvider?: IUpstreamProvider
|
||||
) {
|
||||
super();
|
||||
this.storage = storage;
|
||||
this.authManager = authManager;
|
||||
this.basePath = basePath;
|
||||
this.registryUrl = registryUrl;
|
||||
this.upstreamProvider = upstreamProvider || null;
|
||||
|
||||
// Initialize logger
|
||||
this.logger = new Smartlog({
|
||||
@@ -54,20 +55,38 @@ export class CargoRegistry extends BaseRegistry {
|
||||
}
|
||||
});
|
||||
this.logger.enableConsole();
|
||||
|
||||
// Initialize upstream if configured
|
||||
if (upstreamConfig?.enabled) {
|
||||
this.upstream = new CargoUpstream(upstreamConfig, undefined, this.logger);
|
||||
}
|
||||
|
||||
/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<CargoUpstream | null> {
if (!this.upstreamProvider) return null;

const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'cargo',
resource,
scope: resource, // For Cargo, crate name is the scope
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new CargoUpstream(config, undefined, this.logger);
}

/**
|
||||
* Clean up resources (timers, connections, etc.)
|
||||
*/
|
||||
public destroy(): void {
|
||||
if (this.upstream) {
|
||||
this.upstream.stop();
|
||||
}
|
||||
// No persistent upstream to clean up with dynamic provider
|
||||
}
|
||||
|
||||
public async init(): Promise<void> {
|
||||
@@ -94,6 +113,14 @@ export class CargoRegistry extends BaseRegistry {
|
||||
const authHeader = context.headers['authorization'] || context.headers['Authorization'];
|
||||
const token = authHeader ? await this.authManager.validateToken(authHeader, 'cargo') : null;
|
||||
|
||||
// Build actor from context and validated token
|
||||
const actor: IRequestActor = {
|
||||
...context.actor,
|
||||
userId: token?.userId,
|
||||
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
|
||||
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
|
||||
};
|
||||
|
||||
this.logger.log('debug', `handleRequest: ${context.method} ${path}`, {
|
||||
method: context.method,
|
||||
path,
|
||||
@@ -107,11 +134,11 @@ export class CargoRegistry extends BaseRegistry {
|
||||
|
||||
// API endpoints
|
||||
if (path.startsWith('/api/v1/')) {
|
||||
return this.handleApiRequest(path, context, token);
|
||||
return this.handleApiRequest(path, context, token, actor);
|
||||
}
|
||||
|
||||
// Index files (sparse protocol)
|
||||
return this.handleIndexRequest(path);
|
||||
return this.handleIndexRequest(path, actor);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -132,7 +159,8 @@ export class CargoRegistry extends BaseRegistry {
|
||||
private async handleApiRequest(
|
||||
path: string,
|
||||
context: IRequestContext,
|
||||
token: IAuthToken | null
|
||||
token: IAuthToken | null,
|
||||
actor?: IRequestActor
|
||||
): Promise<IResponse> {
|
||||
// Publish: PUT /api/v1/crates/new
|
||||
if (path === '/api/v1/crates/new' && context.method === 'PUT') {
|
||||
@@ -142,7 +170,7 @@ export class CargoRegistry extends BaseRegistry {
|
||||
// Download: GET /api/v1/crates/{crate}/{version}/download
|
||||
const downloadMatch = path.match(/^\/api\/v1\/crates\/([^\/]+)\/([^\/]+)\/download$/);
|
||||
if (downloadMatch && context.method === 'GET') {
|
||||
return this.handleDownload(downloadMatch[1], downloadMatch[2]);
|
||||
return this.handleDownload(downloadMatch[1], downloadMatch[2], actor);
|
||||
}
|
||||
|
||||
// Yank: DELETE /api/v1/crates/{crate}/{version}/yank
|
||||
@@ -175,7 +203,7 @@ export class CargoRegistry extends BaseRegistry {
|
||||
* Handle index file requests
|
||||
* Paths: /1/{name}, /2/{name}, /3/{c}/{name}, /{p1}/{p2}/{name}
|
||||
*/
|
||||
private async handleIndexRequest(path: string): Promise<IResponse> {
|
||||
private async handleIndexRequest(path: string, actor?: IRequestActor): Promise<IResponse> {
|
||||
// Parse index paths to extract crate name
|
||||
const pathParts = path.split('/').filter(p => p);
|
||||
let crateName: string | null = null;
|
||||
@@ -202,7 +230,7 @@ export class CargoRegistry extends BaseRegistry {
|
||||
};
|
||||
}
|
||||
|
||||
return this.handleIndexFile(crateName);
|
||||
return this.handleIndexFile(crateName, actor);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -224,12 +252,14 @@ export class CargoRegistry extends BaseRegistry {
|
||||
/**
|
||||
* Serve index file for a crate
|
||||
*/
|
||||
private async handleIndexFile(crateName: string): Promise<IResponse> {
|
||||
private async handleIndexFile(crateName: string, actor?: IRequestActor): Promise<IResponse> {
|
||||
let index = await this.storage.getCargoIndex(crateName);
|
||||
|
||||
// Try upstream if not found locally
|
||||
if ((!index || index.length === 0) && this.upstream) {
|
||||
const upstreamIndex = await this.upstream.fetchCrateIndex(crateName);
|
||||
if (!index || index.length === 0) {
|
||||
const upstream = await this.getUpstreamForRequest(crateName, 'index', 'GET', actor);
|
||||
if (upstream) {
|
||||
const upstreamIndex = await upstream.fetchCrateIndex(crateName);
|
||||
if (upstreamIndex) {
|
||||
// Parse the newline-delimited JSON
|
||||
const parsedIndex: ICargoIndexEntry[] = upstreamIndex
|
||||
@@ -244,6 +274,7 @@ export class CargoRegistry extends BaseRegistry {
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!index || index.length === 0) {
|
||||
return {
|
||||
@@ -431,20 +462,24 @@ export class CargoRegistry extends BaseRegistry {
|
||||
*/
|
||||
private async handleDownload(
|
||||
crateName: string,
|
||||
version: string
|
||||
version: string,
|
||||
actor?: IRequestActor
|
||||
): Promise<IResponse> {
|
||||
this.logger.log('debug', 'handleDownload', { crate: crateName, version });
|
||||
|
||||
let crateFile = await this.storage.getCargoCrate(crateName, version);
|
||||
|
||||
// Try upstream if not found locally
|
||||
if (!crateFile && this.upstream) {
|
||||
crateFile = await this.upstream.fetchCrate(crateName, version);
|
||||
if (!crateFile) {
|
||||
const upstream = await this.getUpstreamForRequest(crateName, 'crate', 'GET', actor);
|
||||
if (upstream) {
|
||||
crateFile = await upstream.fetchCrate(crateName, version);
|
||||
if (crateFile) {
|
||||
// Cache locally
|
||||
await this.storage.putCargoCrate(crateName, version, crateFile);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!crateFile) {
|
||||
return {
|
||||
|
||||
@@ -86,7 +86,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
ociBasePath,
|
||||
ociTokens,
|
||||
this.config.oci.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await ociRegistry.init();
|
||||
this.registries.set('oci', ociRegistry);
|
||||
@@ -101,7 +101,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
npmBasePath,
|
||||
registryUrl,
|
||||
this.config.npm.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await npmRegistry.init();
|
||||
this.registries.set('npm', npmRegistry);
|
||||
@@ -116,7 +116,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
mavenBasePath,
|
||||
registryUrl,
|
||||
this.config.maven.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await mavenRegistry.init();
|
||||
this.registries.set('maven', mavenRegistry);
|
||||
@@ -131,7 +131,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
cargoBasePath,
|
||||
registryUrl,
|
||||
this.config.cargo.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await cargoRegistry.init();
|
||||
this.registries.set('cargo', cargoRegistry);
|
||||
@@ -146,7 +146,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
composerBasePath,
|
||||
registryUrl,
|
||||
this.config.composer.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await composerRegistry.init();
|
||||
this.registries.set('composer', composerRegistry);
|
||||
@@ -161,7 +161,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
pypiBasePath,
|
||||
registryUrl,
|
||||
this.config.pypi.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await pypiRegistry.init();
|
||||
this.registries.set('pypi', pypiRegistry);
|
||||
@@ -176,7 +176,7 @@ export class SmartRegistry {
|
||||
this.authManager,
|
||||
rubygemsBasePath,
|
||||
registryUrl,
|
||||
this.config.rubygems.upstream
|
||||
this.config.upstreamProvider
|
||||
);
|
||||
await rubygemsRegistry.init();
|
||||
this.registries.set('rubygems', rubygemsRegistry);
|
||||
|
||||
@@ -6,8 +6,8 @@
|
||||
import { BaseRegistry } from '../core/classes.baseregistry.js';
|
||||
import type { RegistryStorage } from '../core/classes.registrystorage.js';
|
||||
import type { AuthManager } from '../core/classes.authmanager.js';
|
||||
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
|
||||
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
|
||||
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
|
||||
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
|
||||
import { isBinaryData, toBuffer } from '../core/helpers.buffer.js';
|
||||
import type {
|
||||
IComposerPackage,
|
||||
@@ -30,34 +30,66 @@ export class ComposerRegistry extends BaseRegistry {
|
||||
private authManager: AuthManager;
|
||||
private basePath: string = '/composer';
|
||||
private registryUrl: string;
|
||||
private upstream: ComposerUpstream | null = null;
|
||||
private upstreamProvider: IUpstreamProvider | null = null;
|
||||
|
||||
constructor(
|
||||
storage: RegistryStorage,
|
||||
authManager: AuthManager,
|
||||
basePath: string = '/composer',
|
||||
registryUrl: string = 'http://localhost:5000/composer',
|
||||
upstreamConfig?: IProtocolUpstreamConfig
|
||||
upstreamProvider?: IUpstreamProvider
|
||||
) {
|
||||
super();
|
||||
this.storage = storage;
|
||||
this.authManager = authManager;
|
||||
this.basePath = basePath;
|
||||
this.registryUrl = registryUrl;
|
||||
|
||||
// Initialize upstream if configured
|
||||
if (upstreamConfig?.enabled) {
|
||||
this.upstream = new ComposerUpstream(upstreamConfig);
|
||||
this.upstreamProvider = upstreamProvider || null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract scope from Composer package name.
|
||||
* For Composer, vendor is the scope.
|
||||
* @example "symfony" from "symfony/console"
|
||||
*/
|
||||
private extractScope(vendorPackage: string): string | null {
|
||||
const slashIndex = vendorPackage.indexOf('/');
|
||||
if (slashIndex > 0) {
|
||||
return vendorPackage.substring(0, slashIndex);
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get upstream for a specific request.
|
||||
* Calls the provider to resolve upstream config dynamically.
|
||||
*/
|
||||
private async getUpstreamForRequest(
|
||||
resource: string,
|
||||
resourceType: string,
|
||||
method: string,
|
||||
actor?: IRequestActor
|
||||
): Promise<ComposerUpstream | null> {
|
||||
if (!this.upstreamProvider) return null;
|
||||
|
||||
const config = await this.upstreamProvider.resolveUpstreamConfig({
|
||||
protocol: 'composer',
|
||||
resource,
|
||||
scope: this.extractScope(resource),
|
||||
actor,
|
||||
method,
|
||||
resourceType,
|
||||
});
|
||||
|
||||
if (!config?.enabled) return null;
|
||||
return new ComposerUpstream(config);
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up resources (timers, connections, etc.)
|
||||
*/
|
||||
public destroy(): void {
|
||||
if (this.upstream) {
|
||||
this.upstream.stop();
|
||||
}
|
||||
// No persistent upstream to clean up with dynamic provider
|
||||
}
|
||||
|
||||
public async init(): Promise<void> {
|
||||
@@ -96,6 +128,14 @@ export class ComposerRegistry extends BaseRegistry {
|
||||
}
|
||||
}
|
||||
|
||||
// Build actor from context and validated token
|
||||
const actor: IRequestActor = {
|
||||
...context.actor,
|
||||
userId: token?.userId,
|
||||
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
|
||||
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
|
||||
};
|
||||
|
||||
// Root packages.json
|
||||
if (path === '/packages.json' || path === '' || path === '/') {
|
||||
return this.handlePackagesJson();
|
||||
@@ -106,7 +146,7 @@ export class ComposerRegistry extends BaseRegistry {
|
||||
if (metadataMatch) {
|
||||
const [, vendorPackage, devSuffix] = metadataMatch;
|
||||
const includeDev = !!devSuffix;
|
||||
return this.handlePackageMetadata(vendorPackage, includeDev, token);
|
||||
return this.handlePackageMetadata(vendorPackage, includeDev, token, actor);
|
||||
}
|
||||
|
||||
// Package list: /packages/list.json?filter=vendor/*
|
||||
@@ -176,18 +216,21 @@ export class ComposerRegistry extends BaseRegistry {
|
||||
private async handlePackageMetadata(
|
||||
vendorPackage: string,
|
||||
includeDev: boolean,
|
||||
token: IAuthToken | null
|
||||
token: IAuthToken | null,
|
||||
actor?: IRequestActor
|
||||
): Promise<IResponse> {
|
||||
// Read operations are public, no authentication required
|
||||
let metadata = await this.storage.getComposerPackageMetadata(vendorPackage);
|
||||
|
||||
// Try upstream if not found locally
|
||||
if (!metadata && this.upstream) {
|
||||
if (!metadata) {
|
||||
const upstream = await this.getUpstreamForRequest(vendorPackage, 'metadata', 'GET', actor);
|
||||
if (upstream) {
|
||||
const [vendor, packageName] = vendorPackage.split('/');
|
||||
if (vendor && packageName) {
|
||||
const upstreamMetadata = includeDev
|
||||
? await this.upstream.fetchPackageDevMetadata(vendor, packageName)
|
||||
: await this.upstream.fetchPackageMetadata(vendor, packageName);
|
||||
? await upstream.fetchPackageDevMetadata(vendor, packageName)
|
||||
: await upstream.fetchPackageMetadata(vendor, packageName);
|
||||
|
||||
if (upstreamMetadata && upstreamMetadata.packages) {
|
||||
// Store upstream metadata locally
|
||||
@@ -199,6 +242,7 @@ export class ComposerRegistry extends BaseRegistry {
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!metadata) {
|
||||
return {
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
*/
|
||||
|
||||
import type * as plugins from '../plugins.js';
|
||||
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
|
||||
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
|
||||
import type { IAuthProvider } from './interfaces.auth.js';
|
||||
import type { IStorageHooks } from './interfaces.storage.js';
|
||||
|
||||
@@ -89,8 +89,6 @@ export interface IProtocolConfig {
|
||||
enabled: boolean;
|
||||
basePath: string;
|
||||
features?: Record<string, boolean>;
|
||||
/** Upstream registry configuration for proxying/caching */
|
||||
upstream?: IProtocolUpstreamConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -113,6 +111,13 @@ export interface IRegistryConfig {
*/
storageHooks?: IStorageHooks;

/**
* Dynamic upstream configuration provider.
* Called per-request to resolve which upstream registries to use.
* Use StaticUpstreamProvider for simple static configurations.
*/
upstreamProvider?: IUpstreamProvider;

oci?: IProtocolConfig;
npm?: IProtocolConfig;
maven?: IProtocolConfig;

@@ -6,8 +6,8 @@
import { BaseRegistry } from '../core/classes.baseregistry.js';
import type { RegistryStorage } from '../core/classes.registrystorage.js';
import type { AuthManager } from '../core/classes.authmanager.js';
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
import { toBuffer } from '../core/helpers.buffer.js';
import type { IMavenCoordinate, IMavenMetadata, IChecksums } from './interfaces.maven.js';
import {
@@ -33,34 +33,64 @@ export class MavenRegistry extends BaseRegistry {
private authManager: AuthManager;
private basePath: string = '/maven';
private registryUrl: string;
private upstream: MavenUpstream | null = null;
private upstreamProvider: IUpstreamProvider | null = null;

constructor(
storage: RegistryStorage,
authManager: AuthManager,
basePath: string,
registryUrl: string,
upstreamConfig?: IProtocolUpstreamConfig
upstreamProvider?: IUpstreamProvider
) {
super();
this.storage = storage;
this.authManager = authManager;
this.basePath = basePath;
this.registryUrl = registryUrl;

// Initialize upstream if configured
if (upstreamConfig?.enabled) {
this.upstream = new MavenUpstream(upstreamConfig);
this.upstreamProvider = upstreamProvider || null;
}

/**
* Extract scope from Maven coordinates.
* For Maven, the groupId is the scope.
* @example "com.example" from "com.example:my-lib"
*/
private extractScope(groupId: string): string | null {
return groupId || null;
}

/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<MavenUpstream | null> {
if (!this.upstreamProvider) return null;

// For Maven, resource is "groupId:artifactId"
const [groupId] = resource.split(':');
const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'maven',
resource,
scope: this.extractScope(groupId),
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new MavenUpstream(config);
}

/**
* Clean up resources (timers, connections, etc.)
*/
public destroy(): void {
if (this.upstream) {
this.upstream.stop();
}
// No persistent upstream to clean up with dynamic provider
}

public async init(): Promise<void> {
@@ -85,13 +115,21 @@ export class MavenRegistry extends BaseRegistry {
token = await this.authManager.validateToken(tokenString, 'maven');
}

// Build actor from context and validated token
const actor: IRequestActor = {
...context.actor,
userId: token?.userId,
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
};

// Parse path to determine request type
const coordinate = pathToGAV(path);

if (!coordinate) {
// Not a valid artifact path, could be metadata or root
if (path.endsWith('/maven-metadata.xml')) {
return this.handleMetadataRequest(context.method, path, token);
return this.handleMetadataRequest(context.method, path, token, actor);
}

return {
@@ -108,7 +146,7 @@ export class MavenRegistry extends BaseRegistry {
}

// Handle artifact requests (JAR, POM, WAR, etc.)
return this.handleArtifactRequest(context.method, coordinate, token, context.body);
return this.handleArtifactRequest(context.method, coordinate, token, context.body, actor);
}

protected async checkPermission(
@@ -128,7 +166,8 @@ export class MavenRegistry extends BaseRegistry {
method: string,
coordinate: IMavenCoordinate,
token: IAuthToken | null,
body?: Buffer | any
body?: Buffer | any,
actor?: IRequestActor
): Promise<IResponse> {
const { groupId, artifactId, version } = coordinate;
const filename = buildFilename(coordinate);
@@ -139,7 +178,7 @@ export class MavenRegistry extends BaseRegistry {
case 'HEAD':
// Maven repositories typically allow anonymous reads
return method === 'GET'
? this.getArtifact(groupId, artifactId, version, filename)
? this.getArtifact(groupId, artifactId, version, filename, actor)
: this.headArtifact(groupId, artifactId, version, filename);

case 'PUT':
@@ -211,7 +250,8 @@ export class MavenRegistry extends BaseRegistry {
private async handleMetadataRequest(
method: string,
path: string,
token: IAuthToken | null
token: IAuthToken | null,
actor?: IRequestActor
): Promise<IResponse> {
// Parse path to extract groupId and artifactId
// Path format: /com/example/my-lib/maven-metadata.xml
@@ -232,7 +272,7 @@ export class MavenRegistry extends BaseRegistry {
if (method === 'GET') {
// Metadata is usually public (read permission optional)
// Some registries allow anonymous metadata access
return this.getMetadata(groupId, artifactId);
return this.getMetadata(groupId, artifactId, actor);
}

return {
@@ -250,16 +290,20 @@ export class MavenRegistry extends BaseRegistry {
groupId: string,
artifactId: string,
version: string,
filename: string
filename: string,
actor?: IRequestActor
): Promise<IResponse> {
let data = await this.storage.getMavenArtifact(groupId, artifactId, version, filename);

// Try upstream if not found locally
if (!data && this.upstream) {
if (!data) {
const resource = `${groupId}:${artifactId}`;
const upstream = await this.getUpstreamForRequest(resource, 'artifact', 'GET', actor);
if (upstream) {
// Parse the filename to extract extension and classifier
const { extension, classifier } = this.parseFilename(filename, artifactId, version);
if (extension) {
data = await this.upstream.fetchArtifact(groupId, artifactId, version, extension, classifier);
data = await upstream.fetchArtifact(groupId, artifactId, version, extension, classifier);
if (data) {
// Cache the artifact locally
await this.storage.putMavenArtifact(groupId, artifactId, version, filename, data);
@@ -269,6 +313,7 @@ export class MavenRegistry extends BaseRegistry {
}
}
}
}

if (!data) {
return {
@@ -495,18 +540,22 @@ export class MavenRegistry extends BaseRegistry {
// METADATA OPERATIONS
// ========================================================================

private async getMetadata(groupId: string, artifactId: string): Promise<IResponse> {
private async getMetadata(groupId: string, artifactId: string, actor?: IRequestActor): Promise<IResponse> {
let metadataBuffer = await this.storage.getMavenMetadata(groupId, artifactId);

// Try upstream if not found locally
if (!metadataBuffer && this.upstream) {
const upstreamMetadata = await this.upstream.fetchMetadata(groupId, artifactId);
if (!metadataBuffer) {
const resource = `${groupId}:${artifactId}`;
const upstream = await this.getUpstreamForRequest(resource, 'metadata', 'GET', actor);
if (upstream) {
const upstreamMetadata = await upstream.fetchMetadata(groupId, artifactId);
if (upstreamMetadata) {
metadataBuffer = Buffer.from(upstreamMetadata, 'utf-8');
// Cache the metadata locally
await this.storage.putMavenMetadata(groupId, artifactId, metadataBuffer);
}
}
}

if (!metadataBuffer) {
// Generate empty metadata if none exists

@@ -2,8 +2,8 @@ import { Smartlog } from '@push.rocks/smartlog';
import { BaseRegistry } from '../core/classes.baseregistry.js';
import { RegistryStorage } from '../core/classes.registrystorage.js';
import { AuthManager } from '../core/classes.authmanager.js';
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
import { NpmUpstream } from './classes.npmupstream.js';
import type {
IPackument,
@@ -27,20 +27,21 @@ export class NpmRegistry extends BaseRegistry {
private basePath: string = '/npm';
private registryUrl: string;
private logger: Smartlog;
private upstream: NpmUpstream | null = null;
private upstreamProvider: IUpstreamProvider | null = null;

constructor(
storage: RegistryStorage,
authManager: AuthManager,
basePath: string = '/npm',
registryUrl: string = 'http://localhost:5000/npm',
upstreamConfig?: IProtocolUpstreamConfig
upstreamProvider?: IUpstreamProvider
) {
super();
this.storage = storage;
this.authManager = authManager;
this.basePath = basePath;
this.registryUrl = registryUrl;
this.upstreamProvider = upstreamProvider || null;

// Initialize logger
this.logger = new Smartlog({
@@ -55,15 +56,51 @@ export class NpmRegistry extends BaseRegistry {
});
this.logger.enableConsole();

// Initialize upstream if configured
if (upstreamConfig?.enabled) {
this.upstream = new NpmUpstream(upstreamConfig, registryUrl, this.logger);
this.logger.log('info', 'NPM upstream initialized', {
upstreams: upstreamConfig.upstreams.map(u => u.name),
});
if (upstreamProvider) {
this.logger.log('info', 'NPM upstream provider configured');
}
}

/**
* Extract scope from npm package name.
* @example "@company/utils" -> "company"
* @example "lodash" -> null
*/
private extractScope(packageName: string): string | null {
if (packageName.startsWith('@')) {
const slashIndex = packageName.indexOf('/');
if (slashIndex > 1) {
return packageName.substring(1, slashIndex);
}
}
return null;
}

/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<NpmUpstream | null> {
if (!this.upstreamProvider) return null;

const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'npm',
resource,
scope: this.extractScope(resource),
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new NpmUpstream(config, this.registryUrl, this.logger);
}

public async init(): Promise<void> {
// NPM registry initialization
}
@@ -80,6 +117,14 @@ export class NpmRegistry extends BaseRegistry {
const tokenString = authHeader?.replace(/^Bearer\s+/i, '');
const token = tokenString ? await this.authManager.validateToken(tokenString, 'npm') : null;

// Build actor context for upstream resolution
const actor: IRequestActor = {
userId: token?.userId,
ip: context.headers['x-forwarded-for'] || context.headers['x-real-ip'],
userAgent: context.headers['user-agent'],
...context.actor, // Include any pre-populated actor info
};

this.logger.log('debug', `handleRequest: ${context.method} ${path}`, {
method: context.method,
path,
@@ -118,7 +163,7 @@ export class NpmRegistry extends BaseRegistry {
const tarballMatch = path.match(/^\/(@?[^\/]+(?:\/[^\/]+)?)\/-\/(.+\.tgz)$/);
if (tarballMatch) {
const [, packageName, filename] = tarballMatch;
return this.handleTarballDownload(packageName, filename, token);
return this.handleTarballDownload(packageName, filename, token, actor);
}

// Unpublish specific version: DELETE /{package}/-/{version}
@@ -142,7 +187,7 @@ export class NpmRegistry extends BaseRegistry {
if (versionMatch) {
const [, packageName, version] = versionMatch;
this.logger.log('debug', 'versionMatch', { packageName, version });
return this.handlePackageVersion(packageName, version, token);
return this.handlePackageVersion(packageName, version, token, actor);
}

// Package operations: /{package}
@@ -150,7 +195,7 @@ export class NpmRegistry extends BaseRegistry {
if (packageMatch) {
const packageName = packageMatch[1];
this.logger.log('debug', 'packageMatch', { packageName });
return this.handlePackage(context.method, packageName, context.body, context.query, token);
return this.handlePackage(context.method, packageName, context.body, context.query, token, actor);
}

return {
@@ -198,11 +243,12 @@ export class NpmRegistry extends BaseRegistry {
packageName: string,
body: any,
query: Record<string, string>,
token: IAuthToken | null
token: IAuthToken | null,
actor?: IRequestActor
): Promise<IResponse> {
switch (method) {
case 'GET':
return this.getPackument(packageName, token, query);
return this.getPackument(packageName, token, query, actor);
case 'PUT':
return this.publishPackage(packageName, body, token);
case 'DELETE':
@@ -219,7 +265,8 @@ export class NpmRegistry extends BaseRegistry {
private async getPackument(
packageName: string,
token: IAuthToken | null,
query: Record<string, string>
query: Record<string, string>,
actor?: IRequestActor
): Promise<IResponse> {
let packument = await this.storage.getNpmPackument(packageName);
this.logger.log('debug', `getPackument: ${packageName}`, {
@@ -229,9 +276,11 @@ export class NpmRegistry extends BaseRegistry {
});

// If not found locally, try upstream
if (!packument && this.upstream) {
if (!packument) {
const upstream = await this.getUpstreamForRequest(packageName, 'packument', 'GET', actor);
if (upstream) {
this.logger.log('debug', `getPackument: fetching from upstream`, { packageName });
const upstreamPackument = await this.upstream.fetchPackument(packageName);
const upstreamPackument = await upstream.fetchPackument(packageName);
if (upstreamPackument) {
this.logger.log('debug', `getPackument: found in upstream`, {
packageName,
@@ -242,6 +291,7 @@ export class NpmRegistry extends BaseRegistry {
// We don't store tarballs here - they'll be fetched on demand
}
}
}

if (!packument) {
return {
@@ -279,7 +329,8 @@ export class NpmRegistry extends BaseRegistry {
private async handlePackageVersion(
packageName: string,
version: string,
token: IAuthToken | null
token: IAuthToken | null,
actor?: IRequestActor
): Promise<IResponse> {
this.logger.log('debug', 'handlePackageVersion', { packageName, version });
let packument = await this.storage.getNpmPackument(packageName);
@@ -289,13 +340,16 @@ export class NpmRegistry extends BaseRegistry {
}

// If not found locally, try upstream
if (!packument && this.upstream) {
if (!packument) {
const upstream = await this.getUpstreamForRequest(packageName, 'packument', 'GET', actor);
if (upstream) {
this.logger.log('debug', 'handlePackageVersion: fetching from upstream', { packageName });
const upstreamPackument = await this.upstream.fetchPackument(packageName);
const upstreamPackument = await upstream.fetchPackument(packageName);
if (upstreamPackument) {
packument = upstreamPackument;
}
}
}

if (!packument) {
return {
@@ -563,7 +617,8 @@ export class NpmRegistry extends BaseRegistry {
private async handleTarballDownload(
packageName: string,
filename: string,
token: IAuthToken | null
token: IAuthToken | null,
actor?: IRequestActor
): Promise<IResponse> {
// Extract version from filename: package-name-1.0.0.tgz
const versionMatch = filename.match(/-([\d.]+(?:-[a-z0-9.]+)?)\.tgz$/i);
@@ -579,12 +634,14 @@ export class NpmRegistry extends BaseRegistry {
let tarball = await this.storage.getNpmTarball(packageName, version);

// If not found locally, try upstream
if (!tarball && this.upstream) {
if (!tarball) {
const upstream = await this.getUpstreamForRequest(packageName, 'tarball', 'GET', actor);
if (upstream) {
this.logger.log('debug', 'handleTarballDownload: fetching from upstream', {
packageName,
version,
});
const upstreamTarball = await this.upstream.fetchTarball(packageName, version);
const upstreamTarball = await upstream.fetchTarball(packageName, version);
if (upstreamTarball) {
tarball = upstreamTarball;
// Cache the tarball locally for future requests
@@ -596,6 +653,7 @@ export class NpmRegistry extends BaseRegistry {
});
}
}
}

if (!tarball) {
return {

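The registries above all follow the same per-request pattern: look locally first, ask the provider for a config, and only instantiate a protocol upstream (NpmUpstream, MavenUpstream, ...) when the resolved config is enabled. A dynamic provider can therefore veto upstream traffic per scope or per actor simply by returning `null`. The sketch below is illustrative and not part of this change; the `internal` scope name is invented, the import path is assumed, and the returned config shape is borrowed from the StaticUpstreamProvider example further down.

```typescript
import type {
  IProtocolUpstreamConfig,
  IUpstreamProvider,
  IUpstreamResolutionContext,
} from './upstream/interfaces.upstream.js'; // path assumed for illustration

// Hypothetical provider: keep the private "@internal/*" npm scope local-only,
// proxy everything else to registry.npmjs.org.
class ScopeAwareUpstreamProvider implements IUpstreamProvider {
  async resolveUpstreamConfig(ctx: IUpstreamResolutionContext): Promise<IProtocolUpstreamConfig | null> {
    if (ctx.protocol !== 'npm') return null;   // other protocols: no upstream lookup
    if (ctx.scope === 'internal') return null; // private scope: never leave the registry
    return {
      enabled: true,
      upstreams: [
        { id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true, auth: { type: 'none' } },
      ],
    };
  }
}
```

Because `getUpstreamForRequest()` builds a fresh upstream instance from the resolved config, a `null` result simply makes the registry answer from local storage.
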
@@ -2,8 +2,8 @@ import { Smartlog } from '@push.rocks/smartlog';
import { BaseRegistry } from '../core/classes.baseregistry.js';
import { RegistryStorage } from '../core/classes.registrystorage.js';
import { AuthManager } from '../core/classes.authmanager.js';
import type { IRequestContext, IResponse, IAuthToken, IRegistryError } from '../core/interfaces.core.js';
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
import type { IRequestContext, IResponse, IAuthToken, IRegistryError, IRequestActor } from '../core/interfaces.core.js';
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
import { OciUpstream } from './classes.ociupstream.js';
import type {
IUploadSession,
@@ -24,7 +24,7 @@ export class OciRegistry extends BaseRegistry {
private basePath: string = '/oci';
private cleanupInterval?: NodeJS.Timeout;
private ociTokens?: { realm: string; service: string };
private upstream: OciUpstream | null = null;
private upstreamProvider: IUpstreamProvider | null = null;
private logger: Smartlog;

constructor(
@@ -32,13 +32,14 @@ export class OciRegistry extends BaseRegistry {
authManager: AuthManager,
basePath: string = '/oci',
ociTokens?: { realm: string; service: string },
upstreamConfig?: IProtocolUpstreamConfig
upstreamProvider?: IUpstreamProvider
) {
super();
this.storage = storage;
this.authManager = authManager;
this.basePath = basePath;
this.ociTokens = ociTokens;
this.upstreamProvider = upstreamProvider || null;

// Initialize logger
this.logger = new Smartlog({
@@ -53,15 +54,50 @@ export class OciRegistry extends BaseRegistry {
});
this.logger.enableConsole();

// Initialize upstream if configured
if (upstreamConfig?.enabled) {
this.upstream = new OciUpstream(upstreamConfig, basePath, this.logger);
this.logger.log('info', 'OCI upstream initialized', {
upstreams: upstreamConfig.upstreams.map(u => u.name),
});
if (upstreamProvider) {
this.logger.log('info', 'OCI upstream provider configured');
}
}

/**
* Extract scope from OCI repository name.
* @example "myorg/myimage" -> "myorg"
* @example "library/nginx" -> "library"
* @example "nginx" -> null
*/
private extractScope(repository: string): string | null {
const slashIndex = repository.indexOf('/');
if (slashIndex > 0) {
return repository.substring(0, slashIndex);
}
return null;
}

/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<OciUpstream | null> {
if (!this.upstreamProvider) return null;

const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'oci',
resource,
scope: this.extractScope(resource),
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new OciUpstream(config, this.basePath, this.logger);
}

public async init(): Promise<void> {
// Start cleanup of stale upload sessions
this.startUploadSessionCleanup();
@@ -80,6 +116,14 @@ export class OciRegistry extends BaseRegistry {
const tokenString = authHeader?.replace(/^Bearer\s+/i, '');
const token = tokenString ? await this.authManager.validateToken(tokenString, 'oci') : null;

// Build actor from context and validated token
const actor: IRequestActor = {
...context.actor,
userId: token?.userId,
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
};

// Route to appropriate handler
if (path === '/v2/' || path === '/v2') {
return this.handleVersionCheck();
@@ -91,14 +135,14 @@ export class OciRegistry extends BaseRegistry {
const [, name, reference] = manifestMatch;
// Prefer rawBody for content-addressable operations to preserve exact bytes
const bodyData = context.rawBody || context.body;
return this.handleManifestRequest(context.method, name, reference, token, bodyData, context.headers);
return this.handleManifestRequest(context.method, name, reference, token, bodyData, context.headers, actor);
}

// Blob operations: /v2/{name}/blobs/{digest}
const blobMatch = path.match(/^\/v2\/([^\/]+(?:\/[^\/]+)*)\/blobs\/(sha256:[a-f0-9]{64})$/);
if (blobMatch) {
const [, name, digest] = blobMatch;
return this.handleBlobRequest(context.method, name, digest, token, context.headers);
return this.handleBlobRequest(context.method, name, digest, token, context.headers, actor);
}

// Blob upload operations: /v2/{name}/blobs/uploads/
@@ -168,11 +212,12 @@ export class OciRegistry extends BaseRegistry {
reference: string,
token: IAuthToken | null,
body?: Buffer | any,
headers?: Record<string, string>
headers?: Record<string, string>,
actor?: IRequestActor
): Promise<IResponse> {
switch (method) {
case 'GET':
return this.getManifest(repository, reference, token, headers);
return this.getManifest(repository, reference, token, headers, actor);
case 'HEAD':
return this.headManifest(repository, reference, token);
case 'PUT':
@@ -193,11 +238,12 @@ export class OciRegistry extends BaseRegistry {
repository: string,
digest: string,
token: IAuthToken | null,
headers: Record<string, string>
headers: Record<string, string>,
actor?: IRequestActor
): Promise<IResponse> {
switch (method) {
case 'GET':
return this.getBlob(repository, digest, token, headers['range'] || headers['Range']);
return this.getBlob(repository, digest, token, headers['range'] || headers['Range'], actor);
case 'HEAD':
return this.headBlob(repository, digest, token);
case 'DELETE':
@@ -318,7 +364,8 @@ export class OciRegistry extends BaseRegistry {
repository: string,
reference: string,
token: IAuthToken | null,
headers?: Record<string, string>
headers?: Record<string, string>,
actor?: IRequestActor
): Promise<IResponse> {
if (!await this.checkPermission(token, repository, 'pull')) {
return this.createUnauthorizedResponse(repository, 'pull');
@@ -346,9 +393,11 @@ export class OciRegistry extends BaseRegistry {
}

// If not found locally, try upstream
if (!manifestData && this.upstream) {
if (!manifestData) {
const upstream = await this.getUpstreamForRequest(repository, 'manifest', 'GET', actor);
if (upstream) {
this.logger.log('debug', 'getManifest: fetching from upstream', { repository, reference });
const upstreamResult = await this.upstream.fetchManifest(repository, reference);
const upstreamResult = await upstream.fetchManifest(repository, reference);
if (upstreamResult) {
manifestData = Buffer.from(JSON.stringify(upstreamResult.manifest), 'utf8');
contentType = upstreamResult.contentType;
@@ -372,6 +421,7 @@ export class OciRegistry extends BaseRegistry {
});
}
}
}

if (!manifestData) {
return {
@@ -514,7 +564,8 @@ export class OciRegistry extends BaseRegistry {
repository: string,
digest: string,
token: IAuthToken | null,
range?: string
range?: string,
actor?: IRequestActor
): Promise<IResponse> {
if (!await this.checkPermission(token, repository, 'pull')) {
return this.createUnauthorizedResponse(repository, 'pull');
@@ -524,9 +575,11 @@ export class OciRegistry extends BaseRegistry {
let data = await this.storage.getOciBlob(digest);

// If not found locally, try upstream
if (!data && this.upstream) {
if (!data) {
const upstream = await this.getUpstreamForRequest(repository, 'blob', 'GET', actor);
if (upstream) {
this.logger.log('debug', 'getBlob: fetching from upstream', { repository, digest });
const upstreamBlob = await this.upstream.fetchBlob(repository, digest);
const upstreamBlob = await upstream.fetchBlob(repository, digest);
if (upstreamBlob) {
data = upstreamBlob;
// Cache the blob locally (blobs are content-addressable and immutable)
@@ -538,6 +591,7 @@ export class OciRegistry extends BaseRegistry {
});
}
}
}

if (!data) {
return {

@@ -2,8 +2,8 @@ import { Smartlog } from '@push.rocks/smartlog';
import { BaseRegistry } from '../core/classes.baseregistry.js';
import { RegistryStorage } from '../core/classes.registrystorage.js';
import { AuthManager } from '../core/classes.authmanager.js';
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
import { isBinaryData, toBuffer } from '../core/helpers.buffer.js';
import type {
IPypiPackageMetadata,
@@ -24,20 +24,21 @@ export class PypiRegistry extends BaseRegistry {
private basePath: string = '/pypi';
private registryUrl: string;
private logger: Smartlog;
private upstream: PypiUpstream | null = null;
private upstreamProvider: IUpstreamProvider | null = null;

constructor(
storage: RegistryStorage,
authManager: AuthManager,
basePath: string = '/pypi',
registryUrl: string = 'http://localhost:5000',
upstreamConfig?: IProtocolUpstreamConfig
upstreamProvider?: IUpstreamProvider
) {
super();
this.storage = storage;
this.authManager = authManager;
this.basePath = basePath;
this.registryUrl = registryUrl;
this.upstreamProvider = upstreamProvider || null;

// Initialize logger
this.logger = new Smartlog({
@@ -51,20 +52,38 @@ export class PypiRegistry extends BaseRegistry {
}
});
this.logger.enableConsole();

// Initialize upstream if configured
if (upstreamConfig?.enabled) {
this.upstream = new PypiUpstream(upstreamConfig, registryUrl, this.logger);
}

/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<PypiUpstream | null> {
if (!this.upstreamProvider) return null;

const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'pypi',
resource,
scope: resource, // For PyPI, package name is the scope
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new PypiUpstream(config, this.registryUrl, this.logger);
}

/**
* Clean up resources (timers, connections, etc.)
*/
public destroy(): void {
if (this.upstream) {
this.upstream.stop();
}
// No persistent upstream to clean up with dynamic provider
}

public async init(): Promise<void> {
@@ -84,15 +103,23 @@ export class PypiRegistry extends BaseRegistry {
public async handleRequest(context: IRequestContext): Promise<IResponse> {
let path = context.path.replace(this.basePath, '');

// Extract token (Basic Auth or Bearer)
const token = await this.extractToken(context);

// Build actor from context and validated token
const actor: IRequestActor = {
...context.actor,
userId: token?.userId,
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
};

// Also handle /simple path prefix
if (path.startsWith('/simple')) {
path = path.replace('/simple', '');
return this.handleSimpleRequest(path, context);
return this.handleSimpleRequest(path, context, actor);
}

// Extract token (Basic Auth or Bearer)
const token = await this.extractToken(context);

this.logger.log('debug', `handleRequest: ${context.method} ${path}`, {
method: context.method,
path,
@@ -119,7 +146,7 @@ export class PypiRegistry extends BaseRegistry {
// Package file download: GET /packages/{package}/{filename}
const downloadMatch = path.match(/^\/packages\/([^\/]+)\/(.+)$/);
if (downloadMatch && context.method === 'GET') {
return this.handleDownload(downloadMatch[1], downloadMatch[2]);
return this.handleDownload(downloadMatch[1], downloadMatch[2], actor);
}

// Delete package: DELETE /packages/{package}
@@ -156,7 +183,7 @@ export class PypiRegistry extends BaseRegistry {
/**
* Handle Simple API requests (PEP 503 HTML or PEP 691 JSON)
*/
private async handleSimpleRequest(path: string, context: IRequestContext): Promise<IResponse> {
private async handleSimpleRequest(path: string, context: IRequestContext, actor?: IRequestActor): Promise<IResponse> {
// Ensure path ends with / (PEP 503 requirement)
if (!path.endsWith('/') && !path.includes('.')) {
return {
@@ -174,7 +201,7 @@ export class PypiRegistry extends BaseRegistry {
// Package index: /simple/{package}/
const packageMatch = path.match(/^\/([^\/]+)\/$/);
if (packageMatch) {
return this.handleSimplePackage(packageMatch[1], context);
return this.handleSimplePackage(packageMatch[1], context, actor);
}

return {
@@ -228,15 +255,17 @@ export class PypiRegistry extends BaseRegistry {
* Handle Simple API package index
* Returns HTML (PEP 503) or JSON (PEP 691) based on Accept header
*/
private async handleSimplePackage(packageName: string, context: IRequestContext): Promise<IResponse> {
private async handleSimplePackage(packageName: string, context: IRequestContext, actor?: IRequestActor): Promise<IResponse> {
const normalized = helpers.normalizePypiPackageName(packageName);

// Get package metadata
let metadata = await this.storage.getPypiPackageMetadata(normalized);

// Try upstream if not found locally
if (!metadata && this.upstream) {
const upstreamHtml = await this.upstream.fetchSimplePackage(normalized);
if (!metadata) {
const upstream = await this.getUpstreamForRequest(normalized, 'simple', 'GET', actor);
if (upstream) {
const upstreamHtml = await upstream.fetchSimplePackage(normalized);
if (upstreamHtml) {
// Parse the HTML to extract file information and cache it
// For now, just return the upstream HTML directly (caching can be improved later)
@@ -246,7 +275,7 @@ export class PypiRegistry extends BaseRegistry {

if (preferJson) {
// Try to get JSON format from upstream
const upstreamJson = await this.upstream.fetchPackageJson(normalized);
const upstreamJson = await upstream.fetchPackageJson(normalized);
if (upstreamJson) {
return {
status: 200,
@@ -270,6 +299,7 @@ export class PypiRegistry extends BaseRegistry {
};
}
}
}

if (!metadata) {
return this.errorResponse(404, 'Package not found');
@@ -503,18 +533,21 @@ export class PypiRegistry extends BaseRegistry {
/**
* Handle package download
*/
private async handleDownload(packageName: string, filename: string): Promise<IResponse> {
private async handleDownload(packageName: string, filename: string, actor?: IRequestActor): Promise<IResponse> {
const normalized = helpers.normalizePypiPackageName(packageName);
let fileData = await this.storage.getPypiPackageFile(normalized, filename);

// Try upstream if not found locally
if (!fileData && this.upstream) {
fileData = await this.upstream.fetchPackageFile(normalized, filename);
if (!fileData) {
const upstream = await this.getUpstreamForRequest(normalized, 'file', 'GET', actor);
if (upstream) {
fileData = await upstream.fetchPackageFile(normalized, filename);
if (fileData) {
// Cache locally
await this.storage.putPypiPackageFile(normalized, filename, fileData);
}
}
}

if (!fileData) {
return {

@@ -2,8 +2,8 @@ import { Smartlog } from '@push.rocks/smartlog';
import { BaseRegistry } from '../core/classes.baseregistry.js';
import { RegistryStorage } from '../core/classes.registrystorage.js';
import { AuthManager } from '../core/classes.authmanager.js';
import type { IRequestContext, IResponse, IAuthToken } from '../core/interfaces.core.js';
import type { IProtocolUpstreamConfig } from '../upstream/interfaces.upstream.js';
import type { IRequestContext, IResponse, IAuthToken, IRequestActor } from '../core/interfaces.core.js';
import type { IUpstreamProvider } from '../upstream/interfaces.upstream.js';
import type {
IRubyGemsMetadata,
IRubyGemsVersionMetadata,
@@ -25,20 +25,21 @@ export class RubyGemsRegistry extends BaseRegistry {
private basePath: string = '/rubygems';
private registryUrl: string;
private logger: Smartlog;
private upstream: RubygemsUpstream | null = null;
private upstreamProvider: IUpstreamProvider | null = null;

constructor(
storage: RegistryStorage,
authManager: AuthManager,
basePath: string = '/rubygems',
registryUrl: string = 'http://localhost:5000/rubygems',
upstreamConfig?: IProtocolUpstreamConfig
upstreamProvider?: IUpstreamProvider
) {
super();
this.storage = storage;
this.authManager = authManager;
this.basePath = basePath;
this.registryUrl = registryUrl;
this.upstreamProvider = upstreamProvider || null;

// Initialize logger
this.logger = new Smartlog({
@@ -52,20 +53,38 @@ export class RubyGemsRegistry extends BaseRegistry {
}
});
this.logger.enableConsole();

// Initialize upstream if configured
if (upstreamConfig?.enabled) {
this.upstream = new RubygemsUpstream(upstreamConfig, this.logger);
}

/**
* Get upstream for a specific request.
* Calls the provider to resolve upstream config dynamically.
*/
private async getUpstreamForRequest(
resource: string,
resourceType: string,
method: string,
actor?: IRequestActor
): Promise<RubygemsUpstream | null> {
if (!this.upstreamProvider) return null;

const config = await this.upstreamProvider.resolveUpstreamConfig({
protocol: 'rubygems',
resource,
scope: resource, // gem name is the scope
actor,
method,
resourceType,
});

if (!config?.enabled) return null;
return new RubygemsUpstream(config, this.logger);
}

/**
* Clean up resources (timers, connections, etc.)
*/
public destroy(): void {
if (this.upstream) {
this.upstream.stop();
}
// No persistent upstream to clean up with dynamic provider
}

public async init(): Promise<void> {
@@ -95,6 +114,14 @@ export class RubyGemsRegistry extends BaseRegistry {
// Extract token (Authorization header)
const token = await this.extractToken(context);

// Build actor from context and validated token
const actor: IRequestActor = {
...context.actor,
userId: token?.userId,
ip: context.headers['x-forwarded-for'] || context.headers['X-Forwarded-For'],
userAgent: context.headers['user-agent'] || context.headers['User-Agent'],
};

this.logger.log('debug', `handleRequest: ${context.method} ${path}`, {
method: context.method,
path,
@@ -113,13 +140,13 @@ export class RubyGemsRegistry extends BaseRegistry {
// Info file: GET /info/{gem}
const infoMatch = path.match(/^\/info\/([^\/]+)$/);
if (infoMatch && context.method === 'GET') {
return this.handleInfoFile(infoMatch[1]);
return this.handleInfoFile(infoMatch[1], actor);
}

// Gem download: GET /gems/{gem}-{version}[-{platform}].gem
const downloadMatch = path.match(/^\/gems\/(.+\.gem)$/);
if (downloadMatch && context.method === 'GET') {
return this.handleDownload(downloadMatch[1]);
return this.handleDownload(downloadMatch[1], actor);
}

// Legacy specs endpoints (Marshal format)
@@ -232,18 +259,21 @@ export class RubyGemsRegistry extends BaseRegistry {
/**
* Handle /info/{gem} endpoint (Compact Index)
*/
private async handleInfoFile(gemName: string): Promise<IResponse> {
private async handleInfoFile(gemName: string, actor?: IRequestActor): Promise<IResponse> {
let content = await this.storage.getRubyGemsInfo(gemName);

// Try upstream if not found locally
if (!content && this.upstream) {
const upstreamInfo = await this.upstream.fetchInfo(gemName);
if (!content) {
const upstream = await this.getUpstreamForRequest(gemName, 'info', 'GET', actor);
if (upstream) {
const upstreamInfo = await upstream.fetchInfo(gemName);
if (upstreamInfo) {
// Cache locally
await this.storage.putRubyGemsInfo(gemName, upstreamInfo);
content = upstreamInfo;
}
}
}

if (!content) {
return {
@@ -267,7 +297,7 @@ export class RubyGemsRegistry extends BaseRegistry {
/**
* Handle gem file download
*/
private async handleDownload(filename: string): Promise<IResponse> {
private async handleDownload(filename: string, actor?: IRequestActor): Promise<IResponse> {
const parsed = helpers.parseGemFilename(filename);
if (!parsed) {
return this.errorResponse(400, 'Invalid gem filename');
@@ -280,13 +310,16 @@ export class RubyGemsRegistry extends BaseRegistry {
);

// Try upstream if not found locally
if (!gemData && this.upstream) {
gemData = await this.upstream.fetchGem(parsed.name, parsed.version);
if (!gemData) {
const upstream = await this.getUpstreamForRequest(parsed.name, 'gem', 'GET', actor);
if (upstream) {
gemData = await upstream.fetchGem(parsed.name, parsed.version);
if (gemData) {
// Cache locally
await this.storage.putRubyGemsGem(parsed.name, parsed.version, gemData, parsed.platform);
}
}
}

if (!gemData) {
return this.errorResponse(404, 'Gem not found');

@@ -1,4 +1,4 @@
import type { TRegistryProtocol } from '../core/interfaces.core.js';
import type { TRegistryProtocol, IRequestActor } from '../core/interfaces.core.js';

/**
* Scope rule for routing requests to specific upstreams.
@@ -146,6 +146,8 @@ export interface IUpstreamFetchContext {
headers: Record<string, string>;
/** Query parameters */
query: Record<string, string>;
/** Actor performing the request (for cache key isolation) */
actor?: IRequestActor;
}

/**
@@ -193,3 +195,80 @@ export const DEFAULT_RESILIENCE_CONFIG: IUpstreamResilienceConfig = {
circuitBreakerThreshold: 5,
circuitBreakerResetMs: 30000,
};

// ============================================================================
// Upstream Provider Interfaces
// ============================================================================

/**
* Context for resolving upstream configuration.
* Passed to IUpstreamProvider per-request to enable dynamic upstream routing.
*/
export interface IUpstreamResolutionContext {
/** Protocol being accessed */
protocol: TRegistryProtocol;
/** Resource identifier (package name, repository, coordinates, etc.) */
resource: string;
/** Extracted scope (e.g., "company" from "@company/pkg", "myorg" from "myorg/image") */
scope: string | null;
/** Actor performing the request */
actor?: IRequestActor;
/** HTTP method */
method: string;
/** Resource type (packument, tarball, manifest, blob, etc.) */
resourceType: string;
}

/**
* Dynamic upstream configuration provider.
* Implement this interface to provide per-request upstream routing
* based on actor context (user, organization, etc.)
*
* @example
* ```typescript
* class OrgUpstreamProvider implements IUpstreamProvider {
*   constructor(private db: Database) {}
*
*   async resolveUpstreamConfig(ctx: IUpstreamResolutionContext) {
*     if (ctx.actor?.orgId) {
*       const orgConfig = await this.db.getOrgUpstream(ctx.actor.orgId, ctx.protocol);
*       if (orgConfig) return orgConfig;
*     }
*     return this.db.getDefaultUpstream(ctx.protocol);
*   }
* }
* ```
*/
export interface IUpstreamProvider {
/** Optional initialization */
init?(): Promise<void>;

/**
* Resolve upstream configuration for a request.
* @param context - Information about the current request
* @returns Upstream config to use, or null to skip upstream lookup
*/
resolveUpstreamConfig(context: IUpstreamResolutionContext): Promise<IProtocolUpstreamConfig | null>;
}

/**
* Static upstream provider for simple configurations.
* Use this when you have fixed upstream registries that don't change per-request.
*
* @example
* ```typescript
* const provider = new StaticUpstreamProvider({
*   npm: {
*     enabled: true,
*     upstreams: [{ id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true, auth: { type: 'none' } }],
*   },
* });
* ```
*/
export class StaticUpstreamProvider implements IUpstreamProvider {
constructor(private configs: Partial<Record<TRegistryProtocol, IProtocolUpstreamConfig>>) {}

async resolveUpstreamConfig(ctx: IUpstreamResolutionContext): Promise<IProtocolUpstreamConfig | null> {
return this.configs[ctx.protocol] ?? null;
}
}

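To make the resolution contract concrete, here is roughly the context a registry builds for a scoped npm packument read and how a provider answers it. The field names come from `IUpstreamResolutionContext` and the actor fields used in the handlers above; the concrete values (user id, IP, user agent) are illustrative only.

```typescript
const provider: IUpstreamProvider = new StaticUpstreamProvider({
  npm: {
    enabled: true,
    upstreams: [
      { id: 'npmjs', url: 'https://registry.npmjs.org', priority: 1, enabled: true, auth: { type: 'none' } },
    ],
  },
});

// Roughly what NpmRegistry.getUpstreamForRequest() passes for GET /@company/utils.
const ctx: IUpstreamResolutionContext = {
  protocol: 'npm',
  resource: '@company/utils',
  scope: 'company',
  actor: { userId: 'user-123', ip: '203.0.113.7', userAgent: 'npm/10.9.0' },
  method: 'GET',
  resourceType: 'packument',
};

// Returns the static npm config here; a dynamic provider could instead
// return null, which makes the registry serve from local storage only.
const config = await provider.resolveUpstreamConfig(ctx);
```
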