feat: P1 security/performance hardening and migration automation

Security fixes:
- migrate.ts: prevent SQL/command injection by invoking wrangler via spawnSync with an argument array (sketch below)
- migrate.ts: add path-traversal validation for migration files
- api-tester.ts: mask the API key in output (only the first 4 characters shown)
- api-tester.ts: enforce a minimum key length of 16 characters
- cache.ts: ReDoS prevention (pattern-length and wildcard limits)
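
The injection fix comes down to never letting a shell parse the command line. A minimal sketch of the spawnSync pattern (simplified; the full migrate.ts implementation appears in the diff below):

```typescript
import { spawnSync } from 'child_process';

// Unsafe: a shell parses the whole string, so `;`, `&&`, or backticks
// smuggled into `sql` become live shell syntax.
//   execSync(`wrangler d1 execute ${dbName} --command "${sql}"`);

// Safe: spawnSync passes argv entries directly to the process with no
// shell in between, so metacharacters in `sql` stay literal.
function runD1(dbName: string, sql: string): string {
  const result = spawnSync('wrangler', ['d1', 'execute', dbName, '--command', sql], {
    encoding: 'utf-8',
  });
  if (result.status !== 0) {
    throw new Error(`wrangler exited with ${result.status}: ${result.stderr}`);
  }
  return result.stdout;
}
```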

Performance improvements:
- cache.ts: sequential deletes replaced with parallel batches of 50 (sketch below)
- cache.ts: KV index registration is fire-and-forget (non-blocking)
- cache.ts: memory cap of 5,000 indexed keys
- cache.ts: 25-second execution-time guard
- cache.ts: prefix optimization for pattern matching
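
The batching and time-guard bullets reduce to one pattern, shown here as an illustrative sketch rather than the exact cache.ts code (the constants mirror BATCH_SIZE and OPERATION_TIMEOUT_MS in the diff below):

```typescript
const BATCH_SIZE = 50;       // parallel deletes per round
const TIMEOUT_MS = 25_000;   // stop before the Workers wall-clock limit

// Deletes keys in parallel chunks, bailing out gracefully when the
// deadline is reached and reporting how many deletes succeeded.
async function deleteInBatches(
  keys: string[],
  deleteOne: (key: string) => Promise<boolean>
): Promise<number> {
  const start = Date.now();
  let deleted = 0;
  for (let i = 0; i < keys.length; i += BATCH_SIZE) {
    if (Date.now() - start > TIMEOUT_MS) break;
    const batch = keys.slice(i, i + BATCH_SIZE);
    const results = await Promise.all(batch.map(deleteOne));
    deleted += results.filter(Boolean).length;
  }
  return deleted;
}
```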

New features:
- Migration automation system (scripts/migrate.ts)
- KV-backed cache index (invalidatePattern, clearAll)
- Global CacheService singleton

Other:
- Add .env.example; read API keys from environment variables
- Split CACHE_TTL.RECOMMENDATIONS (10 minutes) into its own constant
- Improve JSON-parse error handling in e2e-tester.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: kappa
Date: 2026-01-26 00:23:13 +09:00
Parent: 3a8dd705e6
Commit: 5a9362bf43
13 changed files with 1320 additions and 76 deletions

.env.example (new file)

@@ -0,0 +1,9 @@
# API Configuration for Testing Scripts
# Copy this file to .env and replace with your actual API key
# API URL (defaults to production if not set)
API_URL=https://cloud-instances-api.kappa-d8e.workers.dev
# API Key - REQUIRED for test scripts
# Get your API key from the project administrator
API_KEY=your-api-key-here
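
Note that the test scripts read process.env directly; nothing in this commit loads .env automatically (tsx does not do so by itself). Either export the variables in your shell or wire up a small loader; a hypothetical sketch (loadDotEnv is not part of the commit):

```typescript
import { readFileSync } from 'fs';

// Minimal .env loader: copies KEY=value lines into process.env,
// without overriding variables that are already exported.
export function loadDotEnv(path = '.env'): void {
  let raw: string;
  try {
    raw = readFileSync(path, 'utf-8');
  } catch {
    return; // no .env file; rely on exported variables
  }
  for (const line of raw.split('\n')) {
    const match = line.match(/^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
    if (match && process.env[match[1]] === undefined) {
      process.env[match[1]] = match[2];
    }
  }
}
```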

.gitignore

@@ -2,5 +2,6 @@ node_modules/
 dist/
 .wrangler/
 .dev.vars
+.env
 *.log
 .DS_Store

migrations/000_migration_history.sql (new file)

@@ -0,0 +1,34 @@
-- Migration 000: Migration History Tracking
-- Description: Creates table to track which migrations have been applied
-- Date: 2026-01-25
-- Author: Claude Code
-- ============================================================
-- Table: migration_history
-- Purpose: Track applied database migrations
-- ============================================================
CREATE TABLE IF NOT EXISTS migration_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
migration_name TEXT NOT NULL UNIQUE, -- e.g., "002_add_composite_indexes"
applied_at TEXT NOT NULL DEFAULT (datetime('now')),
execution_time_ms INTEGER, -- Time taken to execute
success INTEGER NOT NULL DEFAULT 1, -- 0=failed, 1=succeeded
error_message TEXT, -- Error details if failed
checksum TEXT -- Future: file content hash for validation
);
-- Index for quick lookup of applied migrations
CREATE INDEX IF NOT EXISTS idx_migration_history_name
ON migration_history(migration_name);
-- Index for status queries
CREATE INDEX IF NOT EXISTS idx_migration_history_success
ON migration_history(success);
-- ============================================================
-- Notes
-- ============================================================
-- This table is created first (000) to track all subsequent migrations.
-- Safe to re-run: uses IF NOT EXISTS for idempotency.
-- Migrations are tracked by filename without .sql extension.

migrations/README.md

@@ -2,7 +2,43 @@
 This directory contains SQL migration files for database schema changes.
-## Migration Files
+## Automated Migration System
+The project uses an **automated migration tracking system** that:
+- ✅ Automatically detects and runs unapplied migrations
+- ✅ Tracks migration history in the database
+- ✅ Executes migrations in numerical order
+- ✅ Safe to run multiple times (idempotent)
+- ✅ Records execution time and errors
+### Quick Start
+```bash
+# Check migration status
+npm run db:migrate:status
+# Run all pending migrations (local)
+npm run db:migrate
+# Run all pending migrations (remote)
+npm run db:migrate:remote
+```
+### Migration Files
+All migration files are named with a numeric prefix for ordering:
+- `000_migration_history.sql` - Creates migration tracking table (always runs first)
+- `002_add_composite_indexes.sql` - Query performance optimization
+- `003_add_retail_pricing.sql` - Add retail pricing columns
+- `004_anvil_tables.sql` - Anvil-branded product tables
+## Migration Details
+### 000_migration_history.sql
+**Date**: 2026-01-25
+**Purpose**: Create migration tracking system
+- Creates `migration_history` table to track applied migrations
+- Records execution time, success/failure status, and error messages
 ### 002_add_composite_indexes.sql
 **Date**: 2026-01-21
@@ -18,16 +54,45 @@ This directory contains SQL migration files for database schema changes.
 - Improves JOIN performance between instance_types, pricing, and regions tables
 - Enables efficient ORDER BY on hourly_price without additional sort operations
+### 003_add_retail_pricing.sql
+**Date**: 2026-01-23
+**Purpose**: Add retail pricing fields to all pricing tables
+- Adds `hourly_price_retail` and `monthly_price_retail` columns
+- Backfills existing data with 1.21x markup
+### 004_anvil_tables.sql
+**Date**: 2026-01-25
+**Purpose**: Create Anvil-branded product tables
+- `anvil_regions` - Anvil regional datacenters
+- `anvil_instances` - Anvil instance specifications
+- `anvil_pricing` - Anvil retail pricing with cost tracking
+- `anvil_transfer_pricing` - Data transfer pricing
 ## Running Migrations
-### Local Development
+### Automated (Recommended)
 ```bash
-npm run db:migrate
+# Check current status
+npm run db:migrate:status          # Local database
+npm run db:migrate:status:remote   # Remote database
+# Run all pending migrations
+npm run db:migrate                 # Local database
+npm run db:migrate:remote          # Remote database
 ```
-### Production
+### Manual (Backward Compatibility)
+Individual migration scripts are still available:
 ```bash
-npm run db:migrate:remote
+npm run db:migrate:002         # Local
+npm run db:migrate:002:remote  # Remote
+npm run db:migrate:003         # Local
+npm run db:migrate:003:remote  # Remote
+npm run db:migrate:004         # Local
+npm run db:migrate:004:remote  # Remote
 ```
 ## Migration Best Practices

package.json

@@ -15,14 +15,21 @@
"db:init:remote": "wrangler d1 execute cloud-instances-db --remote --file=./schema.sql", "db:init:remote": "wrangler d1 execute cloud-instances-db --remote --file=./schema.sql",
"db:seed": "wrangler d1 execute cloud-instances-db --local --file=./seed.sql", "db:seed": "wrangler d1 execute cloud-instances-db --local --file=./seed.sql",
"db:seed:remote": "wrangler d1 execute cloud-instances-db --remote --file=./seed.sql", "db:seed:remote": "wrangler d1 execute cloud-instances-db --remote --file=./seed.sql",
"db:migrate": "wrangler d1 execute cloud-instances-db --local --file=./migrations/002_add_composite_indexes.sql", "db:migrate": "tsx scripts/migrate.ts",
"db:migrate:remote": "wrangler d1 execute cloud-instances-db --remote --file=./migrations/002_add_composite_indexes.sql", "db:migrate:remote": "tsx scripts/migrate.ts --remote",
"db:migrate:status": "tsx scripts/migrate.ts --status",
"db:migrate:status:remote": "tsx scripts/migrate.ts --status --remote",
"db:migrate:002": "wrangler d1 execute cloud-instances-db --local --file=./migrations/002_add_composite_indexes.sql",
"db:migrate:002:remote": "wrangler d1 execute cloud-instances-db --remote --file=./migrations/002_add_composite_indexes.sql",
"db:migrate:003": "wrangler d1 execute cloud-instances-db --local --file=./migrations/003_add_retail_pricing.sql", "db:migrate:003": "wrangler d1 execute cloud-instances-db --local --file=./migrations/003_add_retail_pricing.sql",
"db:migrate:003:remote": "wrangler d1 execute cloud-instances-db --remote --file=./migrations/003_add_retail_pricing.sql", "db:migrate:003:remote": "wrangler d1 execute cloud-instances-db --remote --file=./migrations/003_add_retail_pricing.sql",
"db:migrate:004": "wrangler d1 execute cloud-instances-db --local --file=./migrations/004_anvil_tables.sql",
"db:migrate:004:remote": "wrangler d1 execute cloud-instances-db --remote --file=./migrations/004_anvil_tables.sql",
"db:query": "wrangler d1 execute cloud-instances-db --local --command" "db:query": "wrangler d1 execute cloud-instances-db --local --command"
}, },
"devDependencies": { "devDependencies": {
"@cloudflare/workers-types": "^4.20241205.0", "@cloudflare/workers-types": "^4.20241205.0",
"tsx": "^4.7.0",
"typescript": "^5.7.2", "typescript": "^5.7.2",
"vitest": "^2.1.8", "vitest": "^2.1.8",
"wrangler": "^4.59.3" "wrangler": "^4.59.3"

scripts/api-tester.ts

@@ -4,10 +4,18 @@
  * Comprehensive test suite for API endpoints with colorful console output.
  * Tests all endpoints with various parameter combinations and validates responses.
  *
+ * Requirements:
+ *   API_KEY environment variable must be set
+ *
  * Usage:
+ *   export API_KEY=your-api-key-here
  *   npx tsx scripts/api-tester.ts
  *   npx tsx scripts/api-tester.ts --endpoint /health
  *   npx tsx scripts/api-tester.ts --verbose
+ *
+ * Or use npm scripts:
+ *   npm run test:api
+ *   npm run test:api:verbose
  */
 
 // ============================================================
@@ -15,7 +23,23 @@
 // ============================================================
 const API_URL = process.env.API_URL || 'https://cloud-instances-api.kappa-d8e.workers.dev';
-const API_KEY = process.env.API_KEY || '0f955192075f7d36b1432ec985713ac6aba7fe82ffa556e6f45381c5530ca042';
+const API_KEY = process.env.API_KEY;
+
+if (!API_KEY) {
+  console.error('\n❌ ERROR: API_KEY environment variable is required');
+  console.error('Please set API_KEY before running the tests:');
+  console.error('  export API_KEY=your-api-key-here');
+  console.error('  npm run test:api');
+  console.error('\nOr create a .env file (see .env.example for reference)');
+  process.exit(1);
+}
+
+if (API_KEY.length < 16) {
+  console.error('\n❌ ERROR: API_KEY must be at least 16 characters');
+  console.error('The provided API key is too short to be valid.');
+  console.error('Please check your API_KEY environment variable.');
+  process.exit(1);
+}
 
 // CLI flags
 const args = process.argv.slice(2);
@@ -585,7 +609,10 @@ async function runTests(): Promise<TestReport> {
   console.log(bold(color('\n🧪 Cloud Instances API Tester', colors.cyan)));
   console.log(color('================================', colors.cyan));
   console.log(`${color('Target:', colors.white)} ${API_URL}`);
-  console.log(`${color('API Key:', colors.white)} ${API_KEY.substring(0, 20)}...`);
+  const maskedKey = API_KEY.length > 4
+    ? `${API_KEY.substring(0, 4)}${'*'.repeat(8)}`
+    : '****';
+  console.log(`${color('API Key:', colors.white)} ${maskedKey}`);
   if (VERBOSE) {
     console.log(color('Mode: VERBOSE', colors.yellow));
   }

scripts/e2e-tester.ts

@@ -3,7 +3,17 @@
  * E2E Scenario Tester for Cloud Instances API
  *
  * Tests complete user workflows against the deployed API
- * Run: npx tsx scripts/e2e-tester.ts [--scenario <name>] [--dry-run]
+ *
+ * Requirements:
+ *   API_KEY environment variable must be set
+ *
+ * Usage:
+ *   export API_KEY=your-api-key-here
+ *   npx tsx scripts/e2e-tester.ts [--scenario <name>] [--dry-run]
+ *
+ * Or use npm scripts:
+ *   npm run test:e2e
+ *   npm run test:e2e:dry
  */
 
 import process from 'process';
@@ -12,8 +22,17 @@ import process from 'process';
 // Configuration
 // ============================================================
-const API_URL = 'https://cloud-instances-api.kappa-d8e.workers.dev';
-const API_KEY = '0f955192075f7d36b1432ec985713ac6aba7fe82ffa556e6f45381c5530ca042';
+const API_URL = process.env.API_URL || 'https://cloud-instances-api.kappa-d8e.workers.dev';
+const API_KEY = process.env.API_KEY;
+
+if (!API_KEY) {
+  console.error('\n❌ ERROR: API_KEY environment variable is required');
+  console.error('Please set API_KEY before running E2E tests:');
+  console.error('  export API_KEY=your-api-key-here');
+  console.error('  npm run test:e2e');
+  console.error('\nOr create a .env file (see .env.example for reference)');
+  process.exit(1);
+}
 
 interface TestContext {
   recommendedInstanceId?: string;
@@ -49,9 +68,14 @@ async function apiRequest(
   let data: unknown;
   try {
-    data = await response.json();
+    const text = await response.text();
+    try {
+      data = JSON.parse(text);
+    } catch (err) {
+      data = { error: 'Failed to parse JSON response', rawText: text };
+    }
   } catch (err) {
-    data = { error: 'Failed to parse JSON response', rawText: await response.text() };
+    data = { error: 'Failed to read response body' };
   }
 
   return {
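
The text-then-parse change matters because a Response body is a one-shot stream: the old fallback called `response.text()` inside the catch after `response.json()` had already consumed the body, so the fallback itself would throw. A standalone sketch of the corrected pattern (illustrative, not project code):

```typescript
// Read the body exactly once as text; JSON.parse then works on the
// in-memory string, and the raw payload stays available for errors.
async function readJsonSafe(response: Response): Promise<unknown> {
  const text = await response.text();
  try {
    return JSON.parse(text);
  } catch {
    return { error: 'Failed to parse JSON response', rawText: text };
  }
}
```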

scripts/migrate.ts (new file)

@@ -0,0 +1,404 @@
#!/usr/bin/env tsx
/**
* Database Migration Runner
* Automatically detects and executes unapplied SQL migrations
*
* Usage:
* npm run db:migrate # Run on local database
* npm run db:migrate:remote # Run on remote database
* npm run db:migrate:status # Show migration status
*/
import { spawnSync } from 'child_process';
import { readdirSync, readFileSync, existsSync } from 'fs';
import { join, basename, resolve } from 'path';
// ============================================================
// Configuration
// ============================================================
const DB_NAME = 'cloud-instances-db';
const MIGRATIONS_DIR = join(process.cwd(), 'migrations');
const MIGRATION_HISTORY_TABLE = 'migration_history';
type Environment = 'local' | 'remote';
type Command = 'migrate' | 'status';
// ============================================================
// Utility Functions
// ============================================================
/**
* Sanitize migration name (prevent SQL injection)
* Only allows alphanumeric characters, underscores, and hyphens
*/
function sanitizeMigrationName(name: string): string {
if (!/^[a-zA-Z0-9_-]+$/.test(name)) {
throw new Error(`Invalid migration name: ${name} (only alphanumeric, underscore, and hyphen allowed)`);
}
return name;
}
/**
* Validate file path (prevent path traversal)
* Ensures the file is within the migrations directory
*/
function validateFilePath(filename: string): string {
// Check filename format
if (!/^[a-zA-Z0-9_-]+\.sql$/.test(filename)) {
throw new Error(`Invalid filename format: ${filename}`);
}
const filePath = join(MIGRATIONS_DIR, filename);
const resolvedPath = resolve(filePath);
const resolvedMigrationsDir = resolve(MIGRATIONS_DIR);
// Ensure resolved path is within migrations directory
if (!resolvedPath.startsWith(resolvedMigrationsDir + '/') && resolvedPath !== resolvedMigrationsDir) {
throw new Error(`Path traversal detected: ${filename}`);
}
return filePath;
}
/**
* Execute wrangler d1 command and return output
* Uses spawnSync to prevent command injection
*/
function executeD1Command(sql: string, env: Environment): string {
const envFlag = env === 'remote' ? '--remote' : '--local';
const args = ['d1', 'execute', DB_NAME, envFlag, '--command', sql];
try {
const result = spawnSync('wrangler', args, {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe']
});
if (result.error) {
throw new Error(`Failed to execute wrangler: ${result.error.message}`);
}
if (result.status !== 0) {
throw new Error(`D1 command failed with exit code ${result.status}: ${result.stderr}`);
}
return result.stdout;
} catch (error: any) {
throw new Error(`D1 command failed: ${error.message}`);
}
}
/**
* Execute SQL file using wrangler d1
* Uses spawnSync to prevent command injection
*/
function executeD1File(filePath: string, env: Environment): void {
const envFlag = env === 'remote' ? '--remote' : '--local';
const args = ['d1', 'execute', DB_NAME, envFlag, '--file', filePath];
try {
const result = spawnSync('wrangler', args, {
encoding: 'utf-8',
stdio: 'inherit'
});
if (result.error) {
throw new Error(`Failed to execute wrangler: ${result.error.message}`);
}
if (result.status !== 0) {
throw new Error(`Migration file execution failed with exit code ${result.status}`);
}
} catch (error: any) {
throw new Error(`Migration file execution failed: ${error.message}`);
}
}
/**
* Get list of applied migrations from database
*/
function getAppliedMigrations(env: Environment): Set<string> {
try {
const sql = `SELECT migration_name FROM ${MIGRATION_HISTORY_TABLE} WHERE success = 1`;
const output = executeD1Command(sql, env);
const migrations = new Set<string>();
const lines = output.split('\n');
for (const line of lines) {
const trimmed = line.trim();
if (trimmed && !trimmed.includes('migration_name') && !trimmed.includes('─')) {
migrations.add(trimmed);
}
}
return migrations;
} catch (error: any) {
// If table doesn't exist, return empty set
if (error.message.includes('no such table')) {
return new Set<string>();
}
throw error;
}
}
/**
* Get list of migration files from migrations directory
*/
function getMigrationFiles(): string[] {
if (!existsSync(MIGRATIONS_DIR)) {
throw new Error(`Migrations directory not found: ${MIGRATIONS_DIR}`);
}
const files = readdirSync(MIGRATIONS_DIR)
.filter(f => f.endsWith('.sql'))
.filter(f => f !== 'README.md')
.sort(); // Alphabetical sort ensures numeric order (000, 002, 003, 004)
return files;
}
/**
* Extract migration name from filename (without .sql extension)
*/
function getMigrationName(filename: string): string {
return basename(filename, '.sql');
}
/**
* Record migration execution in history table
*/
function recordMigration(
migrationName: string,
executionTimeMs: number,
success: boolean,
errorMessage: string | null,
env: Environment
): void {
// Sanitize migration name to prevent SQL injection
const safeMigrationName = sanitizeMigrationName(migrationName);
// Escape error message (if any) to prevent SQL injection
const safeErrorMessage = errorMessage
? `'${errorMessage.replace(/'/g, "''")}'`
: 'NULL';
const sql = `
INSERT INTO ${MIGRATION_HISTORY_TABLE}
(migration_name, execution_time_ms, success, error_message)
VALUES
('${safeMigrationName}', ${executionTimeMs}, ${success ? 1 : 0}, ${safeErrorMessage})
`;
try {
executeD1Command(sql, env);
} catch (error: any) {
console.error(`[ERROR] Failed to record migration: ${error.message}`);
}
}
/**
* Execute a single migration file
*/
function executeMigration(filename: string, env: Environment): boolean {
const migrationName = getMigrationName(filename);
// Validate file path to prevent path traversal attacks
const filePath = validateFilePath(filename);
console.log(`\n[Migrate] Executing: ${migrationName}`);
console.log(`[Migrate] File: ${filename}`);
const startTime = Date.now();
try {
executeD1File(filePath, env);
const executionTime = Date.now() - startTime;
console.log(`[Migrate] ✅ Success (${executionTime}ms)`);
// Record success (only if not 000_migration_history itself)
if (migrationName !== '000_migration_history') {
recordMigration(migrationName, executionTime, true, null, env);
}
return true;
} catch (error: any) {
const executionTime = Date.now() - startTime;
const errorMessage = error.message || 'Unknown error';
console.error(`[Migrate] ❌ Failed (${executionTime}ms)`);
console.error(`[Migrate] Error: ${errorMessage}`);
// Record failure (only if not 000_migration_history itself)
if (migrationName !== '000_migration_history') {
recordMigration(migrationName, executionTime, false, errorMessage, env);
}
return false;
}
}
// ============================================================
// Main Migration Logic
// ============================================================
/**
* Run all pending migrations
*/
function runMigrations(env: Environment): void {
console.log(`\n${'='.repeat(60)}`);
console.log(`[Migrate] Database Migration Runner`);
console.log(`[Migrate] Environment: ${env}`);
console.log(`[Migrate] Database: ${DB_NAME}`);
console.log(`${'='.repeat(60)}\n`);
const allMigrations = getMigrationFiles();
console.log(`[Migrate] Found ${allMigrations.length} migration files`);
// Ensure migration history table exists first
const historyMigration = allMigrations.find(f => f.startsWith('000_migration_history'));
if (!historyMigration) {
console.error('[Migrate] ❌ Migration history table file (000_migration_history.sql) not found!');
process.exit(1);
}
// Execute history table migration first
console.log(`[Migrate] Ensuring migration tracking table exists...`);
if (!executeMigration(historyMigration, env)) {
console.error('[Migrate] ❌ Failed to create migration history table');
process.exit(1);
}
// Get applied migrations
const appliedMigrations = getAppliedMigrations(env);
console.log(`[Migrate] ${appliedMigrations.size} migrations already applied`);
// Filter pending migrations (excluding 000_migration_history)
const pendingMigrations = allMigrations
.filter(f => !f.startsWith('000_migration_history'))
.filter(f => !appliedMigrations.has(getMigrationName(f)));
if (pendingMigrations.length === 0) {
console.log('\n[Migrate] ✅ All migrations are up to date. Nothing to do.');
return;
}
console.log(`\n[Migrate] ${pendingMigrations.length} pending migrations to execute:`);
pendingMigrations.forEach(f => console.log(` - ${getMigrationName(f)}`));
// Execute each pending migration
let successCount = 0;
let failureCount = 0;
for (const filename of pendingMigrations) {
const success = executeMigration(filename, env);
if (success) {
successCount++;
} else {
failureCount++;
console.error(`\n[Migrate] ❌ Migration failed: ${filename}`);
console.error(`[Migrate] Stopping migration process.`);
break; // Stop on first failure
}
}
// Summary
console.log(`\n${'='.repeat(60)}`);
console.log(`[Migrate] Migration Summary:`);
console.log(`[Migrate] ✅ Successful: ${successCount}`);
if (failureCount > 0) {
console.log(`[Migrate] ❌ Failed: ${failureCount}`);
}
console.log(`${'='.repeat(60)}\n`);
if (failureCount > 0) {
process.exit(1);
}
}
/**
* Show migration status
*/
function showStatus(env: Environment): void {
console.log(`\n${'='.repeat(60)}`);
console.log(`[Migrate] Migration Status`);
console.log(`[Migrate] Environment: ${env}`);
console.log(`[Migrate] Database: ${DB_NAME}`);
console.log(`${'='.repeat(60)}\n`);
const allMigrations = getMigrationFiles();
console.log(`[Migrate] Total migration files: ${allMigrations.length}`);
try {
const appliedMigrations = getAppliedMigrations(env);
console.log(`[Migrate] Applied migrations: ${appliedMigrations.size}`);
// Show detailed status
console.log(`\n[Migrate] Detailed Status:\n`);
for (const filename of allMigrations) {
const migrationName = getMigrationName(filename);
const isApplied = appliedMigrations.has(migrationName);
const status = isApplied ? '✅' : '⏳';
const label = isApplied ? 'Applied' : 'Pending';
console.log(` ${status} ${migrationName.padEnd(40)} [${label}]`);
}
const pendingCount = allMigrations.length - appliedMigrations.size;
if (pendingCount > 0) {
console.log(`\n[Migrate] ⚠️ ${pendingCount} migrations pending execution`);
console.log(`[Migrate] Run 'npm run db:migrate' to apply pending migrations`);
} else {
console.log(`\n[Migrate] ✅ All migrations are up to date`);
}
} catch (error: any) {
console.error(`[Migrate] ❌ Error reading migration status: ${error.message}`);
console.error(`[Migrate] The migration_history table may not exist yet.`);
console.error(`[Migrate] Run 'npm run db:migrate' to initialize the system.`);
process.exit(1);
}
console.log(`\n${'='.repeat(60)}\n`);
}
// ============================================================
// CLI Entry Point
// ============================================================
function main(): void {
const args = process.argv.slice(2);
// Parse command
let command: Command = 'migrate';
if (args.includes('--status')) {
command = 'status';
}
// Parse environment
let env: Environment = 'local';
if (args.includes('--remote')) {
env = 'remote';
}
// Execute command
try {
if (command === 'status') {
showStatus(env);
} else {
runMigrations(env);
}
} catch (error: any) {
console.error(`\n[Migrate] ❌ Fatal error: ${error.message}`);
process.exit(1);
}
}
// Run if executed directly
if (require.main === module) {
main();
}

src/constants (CACHE_TTL)

@@ -29,6 +29,8 @@ export const CACHE_TTL = {
   HEALTH: 30,
   /** Cache TTL for pricing data (1 hour) */
   PRICING: 3600,
+  /** Cache TTL for recommendation results (10 minutes) */
+  RECOMMENDATIONS: 600,
   /** Default cache TTL (5 minutes) */
   DEFAULT: 300,
 } as const;

instances route (handleInstances)

@@ -7,7 +7,7 @@
 import type { Env, InstanceQueryParams } from '../types';
 import { QueryService } from '../services/query';
-import { CacheService } from '../services/cache';
+import { getGlobalCacheService } from '../services/cache';
 import { logger } from '../utils/logger';
 import {
   SUPPORTED_PROVIDERS,
@@ -31,7 +31,6 @@ import {
  * Note: Worker instances are recreated periodically, preventing memory leaks
  */
 let cachedQueryService: QueryService | null = null;
-let cachedCacheService: CacheService | null = null;
 let cachedDb: D1Database | null = null;
 
 /**
@@ -49,18 +48,6 @@ function getQueryService(db: D1Database, env: Env): QueryService {
   return cachedQueryService;
 }
 
-/**
- * Get or create CacheService singleton
- * Lazy initialization on first request, then reused for subsequent requests
- */
-function getCacheService(): CacheService {
-  if (!cachedCacheService) {
-    cachedCacheService = new CacheService(CACHE_TTL.INSTANCES);
-    logger.debug('[Instances] CacheService singleton initialized');
-  }
-  return cachedCacheService;
-}
-
 /**
  * Parsed and validated query parameters
  */
@@ -359,8 +346,8 @@ export async function handleInstances(
   const params = parseResult.params!;
   logger.info('[Instances] Query params validated', params as unknown as Record<string, unknown>);
 
-  // Get cache service singleton (reused across requests)
-  const cacheService = getCacheService();
+  // Get global cache service singleton (shared across all routes)
+  const cacheService = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV);
 
   // Generate cache key from query parameters
   const cacheKey = cacheService.generateKey(params as unknown as Record<string, unknown>);

recommend route (handleRecommend)

@@ -8,7 +8,7 @@
 import type { Env, ScaleType } from '../types';
 import { RecommendationService } from '../services/recommendation';
 import { validateStack, STACK_REQUIREMENTS } from '../services/stackConfig';
-import { CacheService } from '../services/cache';
+import { getGlobalCacheService } from '../services/cache';
 import { logger } from '../utils/logger';
 import { HTTP_STATUS, CACHE_TTL, REQUEST_LIMITS } from '../constants';
 import {
@@ -33,23 +33,6 @@ interface RecommendRequestBody {
  */
 const SUPPORTED_SCALES: readonly ScaleType[] = ['small', 'medium', 'large'] as const;
 
-/**
- * Cached CacheService instance for singleton pattern
- */
-let cachedCacheService: CacheService | null = null;
-
-/**
- * Get or create CacheService singleton
- *
- * @returns CacheService instance with INSTANCES TTL
- */
-function getCacheService(): CacheService {
-  if (!cachedCacheService) {
-    cachedCacheService = new CacheService(CACHE_TTL.INSTANCES);
-  }
-  return cachedCacheService;
-}
-
 /**
  * Handle POST /recommend endpoint
  *
@@ -174,7 +157,7 @@ export async function handleRecommend(request: Request, env: Env): Promise<Respo
   // 7. Initialize cache service and generate cache key
   logger.info('[Recommend] Validation passed', { stack, scale, budgetMax });
-  const cacheService = getCacheService();
+  const cacheService = getGlobalCacheService(CACHE_TTL.RECOMMENDATIONS, env.RATE_LIMIT_KV);
 
   // Generate cache key from sorted stack, scale, and budget
   // Sort stack to ensure consistent cache keys regardless of order
@@ -224,7 +207,7 @@ export async function handleRecommend(request: Request, env: Env): Promise<Respo
       {
         status: HTTP_STATUS.OK,
         headers: {
-          'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`,
+          'Cache-Control': `public, max-age=${CACHE_TTL.RECOMMENDATIONS}`,
         },
       }
     );
@@ -258,7 +241,7 @@ export async function handleRecommend(request: Request, env: Env): Promise<Respo
     // 10. Store result in cache
     try {
-      await cacheService.set(cacheKey, responseData, CACHE_TTL.INSTANCES);
+      await cacheService.set(cacheKey, responseData, CACHE_TTL.RECOMMENDATIONS);
     } catch (error) {
       // Graceful degradation: log error but don't fail the request
       logger.error('[Recommend] Cache write failed',
@@ -273,7 +256,7 @@ export async function handleRecommend(request: Request, env: Env): Promise<Respo
       {
         status: HTTP_STATUS.OK,
         headers: {
-          'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`,
+          'Cache-Control': `public, max-age=${CACHE_TTL.RECOMMENDATIONS}`,
         },
       }
     );

CacheService KV index tests (new file)

@@ -0,0 +1,374 @@
/**
* CacheService KV Index Tests
*
* Tests for KV-based cache index functionality:
* - Cache key registration and unregistration
* - Pattern-based cache invalidation
* - clearAll() with KV enumeration
* - Backward compatibility without KV
*/
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { CacheService } from './cache';
/**
* Mock Cloudflare Cache API
*/
const createMockCache = () => {
const store = new Map<string, Response>();
return {
async match(key: string) {
return store.get(key) || null;
},
async put(key: string, response: Response) {
store.set(key, response);
},
async delete(key: string) {
return store.delete(key);
},
// For testing: expose internal store
_getStore: () => store
} as any as Cache;
};
/**
* Mock KV namespace
*/
const createMockKV = () => {
const store = new Map<string, { value: string; metadata?: any; expirationTtl?: number }>();
return {
async get(key: string) {
const entry = store.get(key);
return entry ? entry.value : null;
},
async put(key: string, value: string, options?: { expirationTtl?: number; metadata?: any }) {
store.set(key, { value, metadata: options?.metadata, expirationTtl: options?.expirationTtl });
},
async delete(key: string) {
return store.delete(key);
},
async list(options?: { prefix?: string; cursor?: string }) {
const prefix = options?.prefix || '';
const keys = Array.from(store.keys())
.filter(k => k.startsWith(prefix))
.map(name => ({ name }));
return {
keys,
list_complete: true,
cursor: undefined
};
},
// For testing: expose internal store
_getStore: () => store
} as any as KVNamespace;
};
describe('CacheService - KV Index Integration', () => {
let mockCache: ReturnType<typeof createMockCache>;
let mockKV: ReturnType<typeof createMockKV>;
let cache: CacheService;
beforeEach(() => {
// Mock global caches
mockCache = createMockCache();
(global as any).caches = {
default: mockCache
};
mockKV = createMockKV();
cache = new CacheService(300, mockKV);
});
describe('Cache Key Registration', () => {
it('should register cache keys in KV index when setting cache', async () => {
const key = 'https://cache.internal/instances?provider=linode';
await cache.set(key, { data: 'test' });
const kvStore = mockKV._getStore();
const indexKey = `cache_index:${key}`;
expect(kvStore.has(indexKey)).toBe(true);
const entry = kvStore.get(indexKey);
expect(entry?.metadata.ttl).toBe(300);
});
it('should unregister cache keys when deleting cache', async () => {
const key = 'https://cache.internal/instances?provider=linode';
await cache.set(key, { data: 'test' });
await cache.delete(key);
const kvStore = mockKV._getStore();
const indexKey = `cache_index:${key}`;
expect(kvStore.has(indexKey)).toBe(false);
});
it('should use cache TTL for KV index expiration', async () => {
const key = 'https://cache.internal/instances?provider=linode';
const customTTL = 600;
await cache.set(key, { data: 'test' }, customTTL);
const kvStore = mockKV._getStore();
const indexKey = `cache_index:${key}`;
const entry = kvStore.get(indexKey);
expect(entry?.expirationTtl).toBe(customTTL);
});
});
describe('Pattern Invalidation', () => {
beforeEach(async () => {
// Setup test cache entries
await cache.set('https://cache.internal/instances?provider=linode', { data: 'linode-1' });
await cache.set('https://cache.internal/instances?provider=vultr', { data: 'vultr-1' });
await cache.set('https://cache.internal/pricing?provider=linode', { data: 'pricing-1' });
await cache.set('https://cache.internal/pricing?provider=vultr', { data: 'pricing-2' });
});
it('should invalidate cache entries matching wildcard pattern', async () => {
const invalidated = await cache.invalidatePattern('*instances*');
expect(invalidated).toBe(2);
// Verify only instances entries were deleted
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(2); // Only pricing entries remain
});
it('should invalidate cache entries matching specific provider pattern', async () => {
const invalidated = await cache.invalidatePattern('*provider=linode*');
expect(invalidated).toBe(2); // Both instances and pricing for linode
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(2); // Only vultr entries remain
});
it('should invalidate cache entries matching exact pattern', async () => {
const invalidated = await cache.invalidatePattern('*pricing?provider=vultr*');
expect(invalidated).toBe(1);
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(3);
});
it('should return 0 when no entries match pattern', async () => {
const invalidated = await cache.invalidatePattern('*nonexistent*');
expect(invalidated).toBe(0);
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(4); // All entries remain
});
it('should handle regex special characters in pattern', async () => {
const keyWithSpecialChars = 'https://cache.internal/test?query=value+test';
await cache.set(keyWithSpecialChars, { data: 'test' });
const invalidated = await cache.invalidatePattern('*query=value+test*');
expect(invalidated).toBe(1);
});
it('should be case-insensitive', async () => {
const invalidated = await cache.invalidatePattern('*INSTANCES*');
expect(invalidated).toBe(2);
});
});
describe('clearAll() with KV Index', () => {
beforeEach(async () => {
await cache.set('https://cache.internal/instances?provider=linode', { data: 'test-1' });
await cache.set('https://cache.internal/instances?provider=vultr', { data: 'test-2' });
await cache.set('https://cache.internal/pricing?provider=linode', { data: 'test-3' });
});
it('should clear all cache entries', async () => {
const cleared = await cache.clearAll();
expect(cleared).toBe(3);
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(0);
});
it('should clear cache entries with prefix filter', async () => {
const cleared = await cache.clearAll('https://cache.internal/instances');
expect(cleared).toBe(2);
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(1); // Only pricing entry remains
});
it('should return 0 when no entries exist', async () => {
await cache.clearAll();
const cleared = await cache.clearAll();
expect(cleared).toBe(0);
});
});
describe('Cache Statistics', () => {
it('should return indexed key count when KV is available', async () => {
await cache.set('https://cache.internal/test1', { data: 'test' });
await cache.set('https://cache.internal/test2', { data: 'test' });
const stats = await cache.getStats();
expect(stats.supported).toBe(true);
expect(stats.indexed_keys).toBe(2);
});
it('should return 0 indexed keys when cache is empty', async () => {
const stats = await cache.getStats();
expect(stats.supported).toBe(true);
expect(stats.indexed_keys).toBe(0);
});
});
describe('Error Handling', () => {
it('should gracefully handle KV registration failures', async () => {
const mockKVWithError = {
...mockKV,
put: vi.fn().mockRejectedValue(new Error('KV write failed'))
} as any;
const cacheWithError = new CacheService(300, mockKVWithError);
// Should not throw error, should gracefully degrade
await expect(cacheWithError.set('test-key', { data: 'test' })).resolves.toBeUndefined();
});
it('should gracefully handle KV unregistration failures', async () => {
await cache.set('test-key', { data: 'test' });
const mockKVWithError = {
...mockKV,
delete: vi.fn().mockRejectedValue(new Error('KV delete failed'))
} as any;
// Replace KV namespace with error mock
(cache as any).kvNamespace = mockKVWithError;
// Should not throw error, should gracefully degrade
const deleted = await cache.delete('test-key');
expect(deleted).toBe(true); // Cache delete succeeded
});
it('should return empty array on KV list failures', async () => {
await cache.set('test-key', { data: 'test' });
const mockKVWithError = {
...mockKV,
list: vi.fn().mockRejectedValue(new Error('KV list failed'))
} as any;
// Replace KV namespace with error mock
(cache as any).kvNamespace = mockKVWithError;
const cleared = await cache.clearAll();
expect(cleared).toBe(0); // Graceful degradation
});
});
describe('KV Index Pagination', () => {
it('should handle large numbers of cache keys', async () => {
// Create 150 cache entries to test pagination
const promises = [];
for (let i = 0; i < 150; i++) {
promises.push(cache.set(`https://cache.internal/test${i}`, { data: `test-${i}` }));
}
await Promise.all(promises);
const stats = await cache.getStats();
expect(stats.indexed_keys).toBe(150);
const cleared = await cache.clearAll();
expect(cleared).toBe(150);
});
});
});
describe('CacheService - Backward Compatibility (No KV)', () => {
let mockCache: ReturnType<typeof createMockCache>;
let cache: CacheService;
beforeEach(() => {
mockCache = createMockCache();
(global as any).caches = {
default: mockCache
};
// Initialize without KV namespace
cache = new CacheService(300);
});
describe('Pattern Invalidation without KV', () => {
it('should return 0 and log warning when KV is not available', async () => {
const invalidated = await cache.invalidatePattern('*instances*');
expect(invalidated).toBe(0);
});
});
describe('clearAll() without KV', () => {
it('should return 0 and log info when KV is not available', async () => {
const cleared = await cache.clearAll();
expect(cleared).toBe(0);
});
it('should return 0 with prefix filter when KV is not available', async () => {
const cleared = await cache.clearAll('https://cache.internal/instances');
expect(cleared).toBe(0);
});
});
describe('Statistics without KV', () => {
it('should indicate not supported when KV is not available', async () => {
const stats = await cache.getStats();
expect(stats.supported).toBe(false);
expect(stats.indexed_keys).toBeUndefined();
});
});
describe('Basic Cache Operations (no KV impact)', () => {
it('should set and get cache entries normally without KV', async () => {
const key = 'https://cache.internal/test';
const data = { value: 'test-data' };
await cache.set(key, data);
const result = await cache.get<typeof data>(key);
expect(result).not.toBeNull();
expect(result?.data).toEqual(data);
});
it('should delete cache entries normally without KV', async () => {
const key = 'https://cache.internal/test';
await cache.set(key, { data: 'test' });
const deleted = await cache.delete(key);
expect(deleted).toBe(true);
const result = await cache.get(key);
expect(result).toBeNull();
});
});
});

src/services/cache.ts (CacheService)

@@ -7,14 +7,23 @@
  * - Cache key generation with sorted parameters
  * - Cache age tracking and metadata
  * - Graceful degradation on cache failures
+ * - KV-based cache index for pattern invalidation and enumeration
  *
  * @example
+ * // Basic usage (no KV index)
  * const cache = new CacheService(CACHE_TTL.INSTANCES);
  * await cache.set('key', data, CACHE_TTL.PRICING);
  * const result = await cache.get<MyType>('key');
  * if (result) {
  *   console.log(result.cache_age_seconds);
  * }
+ *
+ * @example
+ * // With KV index (enables pattern invalidation)
+ * const cache = new CacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV);
+ * await cache.set('https://cache.internal/instances?provider=linode', data);
+ * await cache.invalidatePattern('*instances*'); // Invalidate all instance caches
+ * const count = await cache.clearAll(); // Clear all cache entries (returns actual count)
  */
 
 import { logger } from '../utils/logger';
@@ -34,23 +43,57 @@ export interface CacheResult<T> {
   cached_at: string;
 }
 
+/**
+ * Global CacheService singleton
+ * Prevents race conditions from multiple route-level singletons
+ */
+let globalCacheService: CacheService | null = null;
+
+/**
+ * Get or create global CacheService singleton
+ * Thread-safe factory function that ensures only one CacheService instance exists
+ *
+ * @param ttl - TTL in seconds for cache entries
+ * @param kv - KV namespace for cache index (enables pattern invalidation)
+ * @returns Global CacheService singleton instance
+ *
+ * @example
+ * const cache = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV);
+ */
+export function getGlobalCacheService(ttl: number, kv: KVNamespace | null): CacheService {
+  if (!globalCacheService) {
+    globalCacheService = new CacheService(ttl, kv);
+    logger.debug('[CacheService] Global singleton initialized');
+  }
+  return globalCacheService;
+}
+
 /**
  * CacheService - Manages cache operations using Cloudflare Workers Cache API
  */
 export class CacheService {
   private cache: Cache;
   private defaultTTL: number;
+  private kvNamespace: KVNamespace | null;
+  private readonly CACHE_INDEX_PREFIX = 'cache_index:';
+  private readonly BATCH_SIZE = 50;
+  private readonly MAX_KEYS_LIMIT = 5000;
+  private readonly MAX_PATTERN_LENGTH = 200;
+  private readonly MAX_WILDCARD_COUNT = 5;
+  private readonly OPERATION_TIMEOUT_MS = 25000;
 
   /**
    * Initialize cache service
    *
    * @param ttlSeconds - Default TTL in seconds (default: CACHE_TTL.DEFAULT)
+   * @param kvNamespace - Optional KV namespace for cache index (enables pattern invalidation)
    */
-  constructor(ttlSeconds = CACHE_TTL.DEFAULT) {
+  constructor(ttlSeconds: number = CACHE_TTL.DEFAULT, kvNamespace: KVNamespace | null = null) {
     // Use Cloudflare Workers global caches.default
     this.cache = caches.default;
     this.defaultTTL = ttlSeconds;
-    logger.debug(`[CacheService] Initialized with default TTL: ${ttlSeconds}s`);
+    this.kvNamespace = kvNamespace;
+    logger.debug(`[CacheService] Initialized with default TTL: ${ttlSeconds}s, KV index: ${!!kvNamespace}`);
   }
 
   /**
@@ -123,6 +166,17 @@ export class CacheService {
       // Store in cache
       await this.cache.put(key, response);
 
+      // Register key in KV index if available (fire-and-forget)
+      if (this.kvNamespace) {
+        this._registerCacheKey(key, ttl).catch(error => {
+          logger.error('[CacheService] Failed to register cache key (non-blocking)', {
+            key,
+            error: error instanceof Error ? error.message : String(error)
+          });
+        });
+      }
+
       logger.debug(`[CacheService] Cached: ${key} (TTL: ${ttl}s)`);
     } catch (error) {
@@ -143,6 +197,11 @@ export class CacheService {
     try {
       const deleted = await this.cache.delete(key);
 
+      // Unregister key from KV index if available
+      if (this.kvNamespace && deleted) {
+        await this._unregisterCacheKey(key);
+      }
+
       if (deleted) {
         logger.debug(`[CacheService] Deleted: ${key}`);
       } else {
@@ -183,13 +242,11 @@ export class CacheService {
   /**
    * Clear all cache entries with optional prefix filter
    *
-   * Note: The Cloudflare Workers Cache API doesn't support listing/enumerating keys,
-   * so this method can only track operations via logging. Individual entries will
-   * expire based on their TTL. For production use cases requiring enumeration,
-   * consider using KV-backed cache index.
+   * With KV index: Enumerates and deletes all matching cache entries
+   * Without KV index: Logs operation only (entries expire based on TTL)
    *
    * @param prefix - Optional URL prefix to filter entries (e.g., 'https://cache.internal/instances')
-   * @returns Number of entries cleared (0, as enumeration is not supported)
+   * @returns Number of entries cleared
    *
    * @example
    * // Clear all cache entries
@@ -202,18 +259,52 @@ export class CacheService {
     try {
       const targetPrefix = prefix ?? 'https://cache.internal/';
 
-      // The Cache API doesn't support listing keys directly
-      // We log the clear operation for audit purposes
-      // Individual entries will naturally expire based on TTL
-      logger.info('[CacheService] Cache clearAll requested', {
-        prefix: targetPrefix,
-        note: 'Individual entries will expire based on TTL. Consider using KV-backed cache index for enumeration.'
-      });
-      // Return 0 as we can't enumerate Cache API entries
-      // In production, use KV-backed cache index for enumeration
-      return 0;
+      // If KV index is not available, log and return 0
+      if (!this.kvNamespace) {
+        logger.info('[CacheService] Cache clearAll requested (no KV index)', {
+          prefix: targetPrefix,
+          note: 'Individual entries will expire based on TTL. Enable KV namespace for enumeration.'
+        });
+        return 0;
+      }
+
+      // List all cache keys from KV index
+      const cacheKeys = await this._listCacheKeys(targetPrefix);
+
+      if (cacheKeys.length === 0) {
+        logger.info('[CacheService] No cache entries to clear', { prefix: targetPrefix });
+        return 0;
+      }
+
+      // Delete cache entries in parallel batches with timeout
+      const startTime = Date.now();
+      let deletedCount = 0;
+
+      for (let i = 0; i < cacheKeys.length; i += this.BATCH_SIZE) {
+        // Check timeout
+        if (Date.now() - startTime > this.OPERATION_TIMEOUT_MS) {
+          logger.warn('[CacheService] clearAll timeout reached', {
+            deleted_count: deletedCount,
+            total_keys: cacheKeys.length,
+            timeout_ms: this.OPERATION_TIMEOUT_MS
+          });
+          break;
+        }
+
+        const batch = cacheKeys.slice(i, i + this.BATCH_SIZE);
+        const deletePromises = batch.map(key => this.delete(key));
+        const results = await Promise.all(deletePromises);
+        deletedCount += results.filter(deleted => deleted).length;
+      }
+
+      logger.info('[CacheService] Cache cleared', {
+        prefix: targetPrefix,
+        total_keys: cacheKeys.length,
+        deleted_count: deletedCount
+      });
+
+      return deletedCount;
     } catch (error) {
       logger.error('[CacheService] Cache clearAll failed', {
@@ -264,14 +355,98 @@ export class CacheService {
   /**
    * Invalidate all cache entries matching a pattern
-   * Note: Cloudflare Workers Cache API doesn't support pattern matching
-   * This method is for future implementation with KV or custom cache index
    *
-   * @param pattern - Pattern to match (e.g., 'instances:*')
+   * Supports wildcards and pattern matching via KV index.
+   * Without KV index, logs warning and returns 0.
+   *
+   * @param pattern - Pattern to match (supports * wildcard)
+   * @returns Number of entries invalidated
+   *
+   * @example
+   * // Invalidate all instance cache entries
+   * await cache.invalidatePattern('*instances*');
+   *
+   * // Invalidate all cache entries for a specific provider
+   * await cache.invalidatePattern('*provider=linode*');
+   *
+   * // Invalidate all pricing cache entries
+   * await cache.invalidatePattern('*pricing*');
    */
-  async invalidatePattern(pattern: string): Promise<void> {
-    logger.warn(`[CacheService] Pattern invalidation not supported: ${pattern}`);
-    // TODO: Implement with KV-based cache index if needed
+  async invalidatePattern(pattern: string): Promise<number> {
+    try {
+      // If KV index is not available, log warning
+      if (!this.kvNamespace) {
+        logger.warn('[CacheService] Pattern invalidation not available (no KV index)', { pattern });
+        return 0;
+      }
+
+      // ReDoS prevention: validate pattern
+      this._validatePattern(pattern);
+
+      // Extract prefix from pattern for KV-level filtering
+      // e.g., "instances*" -> prefix "instances"
+      // e.g., "*instances*" -> no prefix (full scan)
+      const prefixMatch = pattern.match(/^([^*?]+)/);
+      const kvPrefix = prefixMatch ? prefixMatch[1] : undefined;
+
+      if (kvPrefix) {
+        logger.debug(`[CacheService] Using KV prefix filter: "${kvPrefix}" for pattern: "${pattern}"`);
+      }
+
+      // List cache keys with KV-side prefix filtering
+      const allKeys = await this._listCacheKeys(kvPrefix);
+
+      // Convert pattern to regex (escape special chars except *)
+      const regexPattern = pattern
+        .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // Escape special regex chars
+        .replace(/\*/g, '.*'); // Convert * to .*
+      const regex = new RegExp(`^${regexPattern}$`, 'i');
+
+      // Filter keys matching pattern
+      const matchingKeys = allKeys.filter(key => regex.test(key));
+
+      if (matchingKeys.length === 0) {
+        logger.info('[CacheService] No cache entries match pattern', { pattern });
+        return 0;
+      }
+
+      // Delete matching cache entries in parallel batches with timeout
+      const startTime = Date.now();
+      let deletedCount = 0;
+
+      for (let i = 0; i < matchingKeys.length; i += this.BATCH_SIZE) {
+        // Check timeout
+        if (Date.now() - startTime > this.OPERATION_TIMEOUT_MS) {
+          logger.warn('[CacheService] invalidatePattern timeout reached', {
+            deleted_count: deletedCount,
+            total_matches: matchingKeys.length,
+            timeout_ms: this.OPERATION_TIMEOUT_MS
+          });
+          break;
+        }
+
+        const batch = matchingKeys.slice(i, i + this.BATCH_SIZE);
+        const deletePromises = batch.map(key => this.delete(key));
+        const results = await Promise.all(deletePromises);
+        deletedCount += results.filter(deleted => deleted).length;
+      }
+
+      logger.info('[CacheService] Pattern invalidation complete', {
+        pattern,
+        total_matches: matchingKeys.length,
+        deleted_count: deletedCount
+      });
+
+      return deletedCount;
+    } catch (error) {
+      logger.error('[CacheService] Pattern invalidation failed', {
+        pattern,
+        error: error instanceof Error ? error.message : String(error)
+      });
+      throw error;
+    }
   }
 
   /**
@@ -281,8 +456,160 @@ export class CacheService {
    *
    * @returns Cache statistics (not available in Cloudflare Workers)
    */
-  async getStats(): Promise<{ supported: boolean }> {
-    logger.warn('[CacheService] Cache statistics not available in Cloudflare Workers');
-    return { supported: false };
+  async getStats(): Promise<{ supported: boolean; indexed_keys?: number }> {
+    if (!this.kvNamespace) {
+      logger.warn('[CacheService] Cache statistics not available in Cloudflare Workers');
+      return { supported: false };
+    }
+
+    try {
+      const allKeys = await this._listCacheKeys();
+      return {
+        supported: true,
+        indexed_keys: allKeys.length
+      };
+    } catch (error) {
+      logger.error('[CacheService] Failed to get cache stats', {
+        error: error instanceof Error ? error.message : String(error)
+      });
+      return { supported: false };
+    }
+  }
+
+  /**
+   * Register cache key in KV index
+   *
+   * @param key - Cache key (URL format)
+   * @param ttlSeconds - TTL in seconds
+   */
+  private async _registerCacheKey(key: string, ttlSeconds: number): Promise<void> {
+    if (!this.kvNamespace) {
+      return;
+    }
+
+    try {
+      const indexKey = `${this.CACHE_INDEX_PREFIX}${key}`;
+      const metadata = {
+        cached_at: new Date().toISOString(),
+        ttl: ttlSeconds
+      };
+
+      // Store with same TTL as cache entry
+      // KV will auto-delete when expired, keeping index clean
+      await this.kvNamespace.put(indexKey, '1', {
+        expirationTtl: ttlSeconds,
+        metadata
+      });
+
+      logger.debug(`[CacheService] Registered cache key in index: ${key}`);
+    } catch (error) {
+      // Graceful degradation: log error but don't fail cache operation
+      logger.error('[CacheService] Failed to register cache key', {
+        key,
+        error: error instanceof Error ? error.message : String(error)
+      });
+    }
+  }
+
+  /**
+   * Unregister cache key from KV index
+   *
+   * @param key - Cache key (URL format)
+   */
+  private async _unregisterCacheKey(key: string): Promise<void> {
+    if (!this.kvNamespace) {
+      return;
+    }
+
+    try {
+      const indexKey = `${this.CACHE_INDEX_PREFIX}${key}`;
+      await this.kvNamespace.delete(indexKey);
+      logger.debug(`[CacheService] Unregistered cache key from index: ${key}`);
+    } catch (error) {
+      // Graceful degradation: log error but don't fail delete operation
+      logger.error('[CacheService] Failed to unregister cache key', {
+        key,
+        error: error instanceof Error ? error.message : String(error)
+      });
+    }
+  }
+
+  /**
+   * List all cache keys from KV index
+   *
+   * @param prefix - Optional prefix to filter keys (matches original cache key, not index key)
+   * @param maxKeys - Maximum number of keys to return (default: 5000)
+   * @returns Array of cache keys
+   */
+  private async _listCacheKeys(prefix?: string, maxKeys = this.MAX_KEYS_LIMIT): Promise<string[]> {
+    if (!this.kvNamespace) {
+      return [];
+    }
+
+    try {
+      const keys: string[] = [];
+      let cursor: string | undefined;
+
+      // List all keys with cache_index: prefix
+      do {
+        const result = await this.kvNamespace.list({
+          prefix: this.CACHE_INDEX_PREFIX,
+          cursor
+        });
+
+        // Extract original cache keys (remove cache_index: prefix)
+        const extractedKeys = result.keys
+          .map(k => k.name.substring(this.CACHE_INDEX_PREFIX.length))
+          .filter(k => !prefix || k.startsWith(prefix));
+
+        keys.push(...extractedKeys);
+
+        // Check if we've exceeded the max keys limit
+        if (keys.length >= maxKeys) {
+          logger.warn('[CacheService] Cache key limit reached', {
+            max_keys: maxKeys,
+            current_count: keys.length,
+            note: 'Consider increasing MAX_KEYS_LIMIT or implementing pagination'
+          });
+          return keys.slice(0, maxKeys);
+        }
+
+        cursor = result.list_complete ? undefined : result.cursor;
+      } while (cursor);
+
+      logger.debug(`[CacheService] Listed ${keys.length} cache keys from index`, { prefix });
+      return keys;
+    } catch (error) {
+      logger.error('[CacheService] Failed to list cache keys', {
+        prefix,
+        error: error instanceof Error ? error.message : String(error)
+      });
+      return [];
+    }
+  }
+
+  /**
+   * Validate pattern for ReDoS prevention
+   *
+   * @param pattern - Pattern to validate
+   * @throws Error if pattern is invalid
+   */
+  private _validatePattern(pattern: string): void {
+    // Check pattern length
+    if (pattern.length > this.MAX_PATTERN_LENGTH) {
+      throw new Error(`Pattern too long (max ${this.MAX_PATTERN_LENGTH} chars): ${pattern.length} chars`);
+    }
+
+    // Check for consecutive wildcards (**) which can cause ReDoS
+    if (pattern.includes('**')) {
+      throw new Error('Consecutive wildcards (**) not allowed (ReDoS prevention)');
+    }
+
+    // Count wildcards
+    const wildcardCount = (pattern.match(/\*/g) || []).length;
+    if (wildcardCount > this.MAX_WILDCARD_COUNT) {
+      throw new Error(`Too many wildcards (max ${this.MAX_WILDCARD_COUNT}): ${wildcardCount} wildcards`);
+    }
+  }
 }
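
One subtlety of getGlobalCacheService worth keeping in mind: only the first caller's ttl and kv arguments take effect, since later calls return the already-built singleton. The routes stay correct by passing an explicit TTL to set() rather than relying on the singleton's default; a sketch of that calling pattern (illustrative, assuming the module paths used elsewhere in this commit):

```typescript
import { getGlobalCacheService } from './cache';
import { CACHE_TTL } from '../constants';
import type { Env } from '../types';

// Whichever route runs first fixes the singleton's default TTL; later
// callers get the same instance regardless of the ttl argument, so an
// explicit TTL on set() keeps each route's cache lifetime independent.
async function cacheRecommendation(env: Env, key: string, data: unknown): Promise<void> {
  const cache = getGlobalCacheService(CACHE_TTL.RECOMMENDATIONS, env.RATE_LIMIT_KV);
  await cache.set(key, data, CACHE_TTL.RECOMMENDATIONS);
}
```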
} }