Compare commits

..

11 Commits

Author SHA1 Message Date
kaffa
4bfb040b3e Add Gitea Actions CI workflow
All checks were successful
TypeScript CI / build (push) Successful in 29s
2026-02-03 11:39:17 +09:00
kaffa
67ad7fdfe9 Add README 2026-02-03 11:37:55 +09:00
kaffa
4b2acc831a Add MIT LICENSE 2026-02-03 11:20:24 +09:00
kappa
5842027f5e refactor: second round of code-review quality improvements
- Extract cron handler retry logic into an executeWithRetry helper (DRY)
- Add singleton configuration-change detection (CacheService, QueryService)
- Use the SYNC_BATCH_SIZE environment variable consistently (100)
- Parallelize GPU/G8/VPU instance fetches with Promise.all
- Handle empty strings in parsePositiveNumber
- Add a 5-second timeout to the health check DB query

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:33:50 +09:00
kappa
de790988b4 refactor: code-review-based quality improvements
- Centralize the HonoVariables type (extracted to types.ts, removing duplication across 5 files)
- Add a 6-hour pricing update cron handler (syncPricingOnly method)
- Standardize on c.json() instead of Response.json() (Hono convention)
- Centralize SORT_FIELD_MAP in constants.ts (12 supported fields)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 10:58:27 +09:00
kappa
d9c6f78f38 fix: increase pricing fetch timeout to 180s for AWS
AWS pricing generation creates ~25K records (870 instances × 29 regions)
which requires more time for D1 upsert operations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 10:22:17 +09:00
kappa
d999ca7573 feat: migrate to Hono framework
- Add Hono as HTTP framework for Cloudflare Workers
- Create app.ts with declarative routing and middleware
- Add hono-adapters.ts for auth, rate limit, request ID middleware
- Refactor handlers to use Hono Context signature
- Maintain all existing business logic unchanged

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 10:09:23 +09:00
kappa
4b793eaeef fix: refresh updated_at on anvil_transfer_pricing upsert
Add updated_at = CURRENT_TIMESTAMP to the ON CONFLICT clause

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 01:39:40 +09:00
kappa
548f77f6df feat: add anvil_transfer_pricing sync logic
- Implement the syncAnvilTransferPricing method
- Wholesale cost per provider: Linode $0.005/GB, Vultr $0.01/GB, AWS $0.09/GB
- Retail price calculation: cost × 1.21 (21% margin)
- Runs automatically in Stage 8.5

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 01:26:31 +09:00
kappa
3cefba9411 fix: refresh updated_at when updating anvil_pricing
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 00:46:26 +09:00
kappa
2dd739e4cc fix: change the sync default to all providers
- ['linode'] → ['linode', 'vultr', 'aws']

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 00:42:24 +09:00
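
The anvil_transfer_pricing commits above (548f77f6df, 4b793eaeef) describe a simple pricing rule. As a rough illustration of that rule only, with a hypothetical helper name and error handling that are not code from this repository:

// Wholesale egress cost per GB by provider, as listed in commit 548f77f6df.
const WHOLESALE_COST_PER_GB: Record<string, number> = {
  linode: 0.005,
  vultr: 0.01,
  aws: 0.09,
};

// Retail price = wholesale cost × 1.21 (21% margin). Illustrative sketch;
// the repository computes this inside syncAnvilTransferPricing.
function retailTransferPrice(provider: string): number {
  const cost = WHOLESALE_COST_PER_GB[provider.toLowerCase()];
  if (cost === undefined) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  return cost * 1.21; // e.g. AWS: 0.09 × 1.21 = 0.1089 USD/GB
}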
17 changed files with 1583 additions and 370 deletions

30
.gitea/workflows/ci.yml Normal file
View File

@@ -0,0 +1,30 @@
name: TypeScript CI
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install dependencies
run: npm ci || npm install
- name: Type check
run: npx tsc --noEmit || true
- name: Lint
run: npm run lint || true
- name: Build
run: npm run build || true

21
LICENSE Normal file
View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025-2026 kaffa
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

18
README.md Normal file
View File

@@ -0,0 +1,18 @@
# cloud-server
Multi-cloud VM instance database API
## Overview
Provides unified API for managing cloud server instance data across multiple providers.
## Features
- Multi-cloud support (Linode, Vultr, etc.)
- Instance metadata management
- Price comparison
- Region filtering
## License
MIT

540
package-lock.json generated
View File

@@ -7,8 +7,12 @@
"": { "": {
"name": "cloud-instances-api", "name": "cloud-instances-api",
"version": "1.0.0", "version": "1.0.0",
"dependencies": {
"hono": "^4.11.7"
},
"devDependencies": { "devDependencies": {
"@cloudflare/workers-types": "^4.20241205.0", "@cloudflare/workers-types": "^4.20241205.0",
"tsx": "^4.7.0",
"typescript": "^5.7.2", "typescript": "^5.7.2",
"vitest": "^2.1.8", "vitest": "^2.1.8",
"wrangler": "^4.59.3" "wrangler": "^4.59.3"
@@ -1832,6 +1836,28 @@
"node": "^8.16.0 || ^10.6.0 || >=11.0.0" "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
} }
}, },
"node_modules/get-tsconfig": {
"version": "4.13.0",
"resolved": "https://registry.npmjs.org/get-tsconfig/-/get-tsconfig-4.13.0.tgz",
"integrity": "sha512-1VKTZJCwBrvbd+Wn3AOgQP/2Av+TfTCOlE4AcRJE72W1ksZXbAx8PPBR9RzgTeSPzlPMHrbANMH3LbltH73wxQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"resolve-pkg-maps": "^1.0.0"
},
"funding": {
"url": "https://github.com/privatenumber/get-tsconfig?sponsor=1"
}
},
"node_modules/hono": {
"version": "4.11.7",
"resolved": "https://registry.npmjs.org/hono/-/hono-4.11.7.tgz",
"integrity": "sha512-l7qMiNee7t82bH3SeyUCt9UF15EVmaBvsppY2zQtrbIhl/yzBTny+YUxsVjSjQ6gaqaeVtZmGocom8TzBlA4Yw==",
"license": "MIT",
"engines": {
"node": ">=16.9.0"
}
},
"node_modules/kleur": { "node_modules/kleur": {
"version": "4.1.5", "version": "4.1.5",
"resolved": "https://registry.npmjs.org/kleur/-/kleur-4.1.5.tgz", "resolved": "https://registry.npmjs.org/kleur/-/kleur-4.1.5.tgz",
@@ -1967,6 +1993,16 @@
"node": "^10 || ^12 || >=14" "node": "^10 || ^12 || >=14"
} }
}, },
"node_modules/resolve-pkg-maps": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/resolve-pkg-maps/-/resolve-pkg-maps-1.0.0.tgz",
"integrity": "sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw==",
"dev": true,
"license": "MIT",
"funding": {
"url": "https://github.com/privatenumber/resolve-pkg-maps?sponsor=1"
}
},
"node_modules/rollup": { "node_modules/rollup": {
"version": "4.55.3", "version": "4.55.3",
"resolved": "https://registry.npmjs.org/rollup/-/rollup-4.55.3.tgz", "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.55.3.tgz",
@@ -2166,6 +2202,510 @@
"license": "0BSD", "license": "0BSD",
"optional": true "optional": true
}, },
"node_modules/tsx": {
"version": "4.21.0",
"resolved": "https://registry.npmjs.org/tsx/-/tsx-4.21.0.tgz",
"integrity": "sha512-5C1sg4USs1lfG0GFb2RLXsdpXqBSEhAaA/0kPL01wxzpMqLILNxIxIOKiILz+cdg/pLnOUxFYOR5yhHU666wbw==",
"dev": true,
"license": "MIT",
"dependencies": {
"esbuild": "~0.27.0",
"get-tsconfig": "^4.7.5"
},
"bin": {
"tsx": "dist/cli.mjs"
},
"engines": {
"node": ">=18.0.0"
},
"optionalDependencies": {
"fsevents": "~2.3.3"
}
},
"node_modules/tsx/node_modules/@esbuild/aix-ppc64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.27.2.tgz",
"integrity": "sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw==",
"cpu": [
"ppc64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"aix"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/android-arm": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.27.2.tgz",
"integrity": "sha512-DVNI8jlPa7Ujbr1yjU2PfUSRtAUZPG9I1RwW4F4xFB1Imiu2on0ADiI/c3td+KmDtVKNbi+nffGDQMfcIMkwIA==",
"cpu": [
"arm"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"android"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/android-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.27.2.tgz",
"integrity": "sha512-pvz8ZZ7ot/RBphf8fv60ljmaoydPU12VuXHImtAs0XhLLw+EXBi2BLe3OYSBslR4rryHvweW5gmkKFwTiFy6KA==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"android"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/android-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.27.2.tgz",
"integrity": "sha512-z8Ank4Byh4TJJOh4wpz8g2vDy75zFL0TlZlkUkEwYXuPSgX8yzep596n6mT7905kA9uHZsf/o2OJZubl2l3M7A==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"android"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/darwin-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.27.2.tgz",
"integrity": "sha512-davCD2Zc80nzDVRwXTcQP/28fiJbcOwvdolL0sOiOsbwBa72kegmVU0Wrh1MYrbuCL98Omp5dVhQFWRKR2ZAlg==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/darwin-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.27.2.tgz",
"integrity": "sha512-ZxtijOmlQCBWGwbVmwOF/UCzuGIbUkqB1faQRf5akQmxRJ1ujusWsb3CVfk/9iZKr2L5SMU5wPBi1UWbvL+VQA==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"darwin"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/freebsd-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.27.2.tgz",
"integrity": "sha512-lS/9CN+rgqQ9czogxlMcBMGd+l8Q3Nj1MFQwBZJyoEKI50XGxwuzznYdwcav6lpOGv5BqaZXqvBSiB/kJ5op+g==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"freebsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/freebsd-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.27.2.tgz",
"integrity": "sha512-tAfqtNYb4YgPnJlEFu4c212HYjQWSO/w/h/lQaBK7RbwGIkBOuNKQI9tqWzx7Wtp7bTPaGC6MJvWI608P3wXYA==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"freebsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-arm": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.27.2.tgz",
"integrity": "sha512-vWfq4GaIMP9AIe4yj1ZUW18RDhx6EPQKjwe7n8BbIecFtCQG4CfHGaHuh7fdfq+y3LIA2vGS/o9ZBGVxIDi9hw==",
"cpu": [
"arm"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.27.2.tgz",
"integrity": "sha512-hYxN8pr66NsCCiRFkHUAsxylNOcAQaxSSkHMMjcpx0si13t1LHFphxJZUiGwojB1a/Hd5OiPIqDdXONia6bhTw==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-ia32": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.27.2.tgz",
"integrity": "sha512-MJt5BRRSScPDwG2hLelYhAAKh9imjHK5+NE/tvnRLbIqUWa+0E9N4WNMjmp/kXXPHZGqPLxggwVhz7QP8CTR8w==",
"cpu": [
"ia32"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-loong64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.27.2.tgz",
"integrity": "sha512-lugyF1atnAT463aO6KPshVCJK5NgRnU4yb3FUumyVz+cGvZbontBgzeGFO1nF+dPueHD367a2ZXe1NtUkAjOtg==",
"cpu": [
"loong64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-mips64el": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.27.2.tgz",
"integrity": "sha512-nlP2I6ArEBewvJ2gjrrkESEZkB5mIoaTswuqNFRv/WYd+ATtUpe9Y09RnJvgvdag7he0OWgEZWhviS1OTOKixw==",
"cpu": [
"mips64el"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-ppc64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.27.2.tgz",
"integrity": "sha512-C92gnpey7tUQONqg1n6dKVbx3vphKtTHJaNG2Ok9lGwbZil6DrfyecMsp9CrmXGQJmZ7iiVXvvZH6Ml5hL6XdQ==",
"cpu": [
"ppc64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-riscv64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.27.2.tgz",
"integrity": "sha512-B5BOmojNtUyN8AXlK0QJyvjEZkWwy/FKvakkTDCziX95AowLZKR6aCDhG7LeF7uMCXEJqwa8Bejz5LTPYm8AvA==",
"cpu": [
"riscv64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-s390x": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.27.2.tgz",
"integrity": "sha512-p4bm9+wsPwup5Z8f4EpfN63qNagQ47Ua2znaqGH6bqLlmJ4bx97Y9JdqxgGZ6Y8xVTixUnEkoKSHcpRlDnNr5w==",
"cpu": [
"s390x"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/linux-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.27.2.tgz",
"integrity": "sha512-uwp2Tip5aPmH+NRUwTcfLb+W32WXjpFejTIOWZFw/v7/KnpCDKG66u4DLcurQpiYTiYwQ9B7KOeMJvLCu/OvbA==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/netbsd-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.27.2.tgz",
"integrity": "sha512-Kj6DiBlwXrPsCRDeRvGAUb/LNrBASrfqAIok+xB0LxK8CHqxZ037viF13ugfsIpePH93mX7xfJp97cyDuTZ3cw==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"netbsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/netbsd-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.27.2.tgz",
"integrity": "sha512-HwGDZ0VLVBY3Y+Nw0JexZy9o/nUAWq9MlV7cahpaXKW6TOzfVno3y3/M8Ga8u8Yr7GldLOov27xiCnqRZf0tCA==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"netbsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/openbsd-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.27.2.tgz",
"integrity": "sha512-DNIHH2BPQ5551A7oSHD0CKbwIA/Ox7+78/AWkbS5QoRzaqlev2uFayfSxq68EkonB+IKjiuxBFoV8ESJy8bOHA==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"openbsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/openbsd-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.27.2.tgz",
"integrity": "sha512-/it7w9Nb7+0KFIzjalNJVR5bOzA9Vay+yIPLVHfIQYG/j+j9VTH84aNB8ExGKPU4AzfaEvN9/V4HV+F+vo8OEg==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"openbsd"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/openharmony-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/openharmony-arm64/-/openharmony-arm64-0.27.2.tgz",
"integrity": "sha512-LRBbCmiU51IXfeXk59csuX/aSaToeG7w48nMwA6049Y4J4+VbWALAuXcs+qcD04rHDuSCSRKdmY63sruDS5qag==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"openharmony"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/sunos-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.27.2.tgz",
"integrity": "sha512-kMtx1yqJHTmqaqHPAzKCAkDaKsffmXkPHThSfRwZGyuqyIeBvf08KSsYXl+abf5HDAPMJIPnbBfXvP2ZC2TfHg==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"sunos"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/win32-arm64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.27.2.tgz",
"integrity": "sha512-Yaf78O/B3Kkh+nKABUF++bvJv5Ijoy9AN1ww904rOXZFLWVc5OLOfL56W+C8F9xn5JQZa3UX6m+IktJnIb1Jjg==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/win32-ia32": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.27.2.tgz",
"integrity": "sha512-Iuws0kxo4yusk7sw70Xa2E2imZU5HoixzxfGCdxwBdhiDgt9vX9VUCBhqcwY7/uh//78A1hMkkROMJq9l27oLQ==",
"cpu": [
"ia32"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/@esbuild/win32-x64": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.27.2.tgz",
"integrity": "sha512-sRdU18mcKf7F+YgheI/zGf5alZatMUTKj/jNS6l744f9u3WFu4v7twcUI9vu4mknF4Y9aDlblIie0IM+5xxaqQ==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": ">=18"
}
},
"node_modules/tsx/node_modules/esbuild": {
"version": "0.27.2",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.27.2.tgz",
"integrity": "sha512-HyNQImnsOC7X9PMNaCIeAm4ISCQXs5a5YasTXVliKv4uuBo1dKrG0A+uQS8M5eXjVMnLg3WgXaKvprHlFJQffw==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"bin": {
"esbuild": "bin/esbuild"
},
"engines": {
"node": ">=18"
},
"optionalDependencies": {
"@esbuild/aix-ppc64": "0.27.2",
"@esbuild/android-arm": "0.27.2",
"@esbuild/android-arm64": "0.27.2",
"@esbuild/android-x64": "0.27.2",
"@esbuild/darwin-arm64": "0.27.2",
"@esbuild/darwin-x64": "0.27.2",
"@esbuild/freebsd-arm64": "0.27.2",
"@esbuild/freebsd-x64": "0.27.2",
"@esbuild/linux-arm": "0.27.2",
"@esbuild/linux-arm64": "0.27.2",
"@esbuild/linux-ia32": "0.27.2",
"@esbuild/linux-loong64": "0.27.2",
"@esbuild/linux-mips64el": "0.27.2",
"@esbuild/linux-ppc64": "0.27.2",
"@esbuild/linux-riscv64": "0.27.2",
"@esbuild/linux-s390x": "0.27.2",
"@esbuild/linux-x64": "0.27.2",
"@esbuild/netbsd-arm64": "0.27.2",
"@esbuild/netbsd-x64": "0.27.2",
"@esbuild/openbsd-arm64": "0.27.2",
"@esbuild/openbsd-x64": "0.27.2",
"@esbuild/openharmony-arm64": "0.27.2",
"@esbuild/sunos-x64": "0.27.2",
"@esbuild/win32-arm64": "0.27.2",
"@esbuild/win32-ia32": "0.27.2",
"@esbuild/win32-x64": "0.27.2"
}
},
"node_modules/typescript": { "node_modules/typescript": {
"version": "5.9.3", "version": "5.9.3",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",

View File

@@ -33,5 +33,8 @@
     "typescript": "^5.7.2",
     "vitest": "^2.1.8",
     "wrangler": "^4.59.3"
+  },
+  "dependencies": {
+    "hono": "^4.11.7"
   }
 }

185
src/app.ts Normal file
View File

@@ -0,0 +1,185 @@
/**
* Hono Application Setup
*
* Configures Hono app with CORS, security headers, and routes.
*/
import { Hono } from 'hono';
import type { Context } from 'hono';
import type { Env, HonoVariables } from './types';
import { CORS, HTTP_STATUS } from './constants';
import { createLogger } from './utils/logger';
import {
requestIdMiddleware,
authMiddleware,
rateLimitMiddleware,
optionalAuthMiddleware,
} from './middleware/hono-adapters';
import { handleHealth } from './routes/health';
import { handleInstances } from './routes/instances';
import { handleSync } from './routes/sync';
const logger = createLogger('[App]');
// Create Hono app with type-safe bindings
const app = new Hono<{ Bindings: Env; Variables: HonoVariables }>();
/**
* Get CORS origin for request
* Reused from original index.ts logic
*/
function getCorsOrigin(c: Context<{ Bindings: Env; Variables: HonoVariables }>): string {
const origin = c.req.header('Origin');
const env = c.env;
// Environment variable has explicit origin configured (highest priority)
if (env.CORS_ORIGIN && env.CORS_ORIGIN !== '*') {
return env.CORS_ORIGIN;
}
// Build allowed origins list based on environment
const isDevelopment = env.ENVIRONMENT === 'development';
const allowedOrigins = isDevelopment
? [...CORS.ALLOWED_ORIGINS, ...CORS.DEVELOPMENT_ORIGINS]
: CORS.ALLOWED_ORIGINS;
// Request origin is in allowed list
if (origin && allowedOrigins.includes(origin)) {
return origin;
}
// Log unmatched origins for security monitoring
if (origin && !allowedOrigins.includes(origin)) {
const sanitizedOrigin = origin.replace(/[\r\n\t]/g, '').substring(0, 256);
logger.warn('Unmatched origin - using default', {
requested_origin: sanitizedOrigin,
environment: env.ENVIRONMENT || 'production',
default_origin: CORS.DEFAULT_ORIGIN,
});
}
// Return explicit default (no wildcard)
return CORS.DEFAULT_ORIGIN;
}
/**
* CORS middleware
* Configured dynamically based on request origin
*/
app.use('*', async (c, next) => {
// Handle OPTIONS preflight - must come before await next()
if (c.req.method === 'OPTIONS') {
const origin = getCorsOrigin(c);
c.res.headers.set('Access-Control-Allow-Origin', origin);
c.res.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
c.res.headers.set('Access-Control-Allow-Headers', 'Content-Type, X-API-Key');
c.res.headers.set('Access-Control-Max-Age', CORS.MAX_AGE);
return c.body(null, 204);
}
await next();
// Set CORS headers after processing
const origin = getCorsOrigin(c);
c.res.headers.set('Access-Control-Allow-Origin', origin);
c.res.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
c.res.headers.set('Access-Control-Allow-Headers', 'Content-Type, X-API-Key');
c.res.headers.set('Access-Control-Max-Age', CORS.MAX_AGE);
c.res.headers.set(
'Access-Control-Expose-Headers',
'X-RateLimit-Retry-After, Retry-After, X-Request-ID'
);
});
/**
* Request ID middleware
* Adds unique request ID for tracing
*/
app.use('*', requestIdMiddleware);
/**
* Security headers middleware
* Applied to all responses
*/
app.use('*', async (c, next) => {
await next();
// Add security headers to response
c.res.headers.set('X-Content-Type-Options', 'nosniff');
c.res.headers.set('X-Frame-Options', 'DENY');
c.res.headers.set('Strict-Transport-Security', 'max-age=31536000');
c.res.headers.set('Content-Security-Policy', "default-src 'none'");
c.res.headers.set('X-XSS-Protection', '1; mode=block');
c.res.headers.set('Referrer-Policy', 'no-referrer');
});
/**
* Environment validation middleware
* Checks required environment variables before processing
*/
app.use('*', async (c, next) => {
const required = ['API_KEY'];
const missing = required.filter((key) => !c.env[key as keyof Env]);
if (missing.length > 0) {
logger.error('Missing required environment variables', {
missing,
request_id: c.get('requestId'),
});
return c.json(
{
error: 'Service Unavailable',
message: 'Service configuration error',
},
503
);
}
return next();
});
/**
* Routes
*/
// Health check (public endpoint with optional authentication)
app.get('/health', optionalAuthMiddleware, handleHealth);
// Query instances (authenticated, rate limited)
app.get('/instances', authMiddleware, rateLimitMiddleware, handleInstances);
// Sync trigger (authenticated, rate limited)
app.post('/sync', authMiddleware, rateLimitMiddleware, handleSync);
/**
* 404 handler
*/
app.notFound((c) => {
return c.json(
{
error: 'Not Found',
path: c.req.path,
},
HTTP_STATUS.NOT_FOUND
);
});
/**
* Global error handler
*/
app.onError((err, c) => {
logger.error('Request error', {
error: err,
request_id: c.get('requestId'),
});
return c.json(
{
error: 'Internal Server Error',
},
HTTP_STATUS.INTERNAL_ERROR
);
});
export default app;

View File

@@ -129,22 +129,32 @@ export const TABLES = {
 // ============================================================
 /**
- * Valid sort fields for instance queries
+ * Mapping of user-facing sort field names to database column names
+ *
+ * This is the single source of truth for sort field validation and mapping.
+ * Query aliases: it=instance_types, pr=pricing, p=providers, r=regions
  */
-export const VALID_SORT_FIELDS = [
-  'price',
-  'hourly_price',
-  'monthly_price',
-  'vcpu',
-  'memory_mb',
-  'memory_gb',
-  'storage_gb',
-  'instance_name',
-  'provider',
-  'region',
-] as const;
-
-export type ValidSortField = typeof VALID_SORT_FIELDS[number];
+export const SORT_FIELD_MAP: Record<string, string> = {
+  price: 'pr.hourly_price',
+  hourly_price: 'pr.hourly_price',
+  monthly_price: 'pr.monthly_price',
+  vcpu: 'it.vcpu',
+  memory: 'it.memory_mb',
+  memory_mb: 'it.memory_mb',
+  memory_gb: 'it.memory_mb', // Note: memory_gb is converted to memory_mb at query level
+  storage_gb: 'it.storage_gb',
+  name: 'it.instance_name',
+  instance_name: 'it.instance_name',
+  provider: 'p.name',
+  region: 'r.region_code',
+} as const;
+
+/**
+ * Valid sort fields for instance queries (derived from SORT_FIELD_MAP)
+ */
+export const VALID_SORT_FIELDS = Object.keys(SORT_FIELD_MAP) as ReadonlyArray<string>;
+
+export type ValidSortField = keyof typeof SORT_FIELD_MAP;
 
 /**
  * Valid sort orders

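A minimal sketch of how the centralized SORT_FIELD_MAP can be consumed. The validation helper and import path below are hypothetical; QueryService performs the same lookup in the services/query.ts change further down.

import { SORT_FIELD_MAP, VALID_SORT_FIELDS } from './constants';

// Hypothetical helper: reject unknown sort fields, then resolve the
// user-facing name to its database column (aliases: it, pr, p, r).
function resolveSortColumn(sortBy: string): string {
  if (!VALID_SORT_FIELDS.includes(sortBy)) {
    throw new Error(`Invalid sort_by '${sortBy}'. Allowed: ${VALID_SORT_FIELDS.join(', ')}`);
  }
  return SORT_FIELD_MAP[sortBy]; // e.g. 'price' -> 'pr.hourly_price'
}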
View File

@@ -5,206 +5,77 @@
*/ */
import { Env } from './types'; import { Env } from './types';
import { handleSync, handleInstances, handleHealth } from './routes'; import app from './app';
import {
authenticateRequest,
verifyApiKey,
createUnauthorizedResponse,
checkRateLimit,
createRateLimitResponse,
} from './middleware';
import { CORS, HTTP_STATUS } from './constants';
import { createLogger } from './utils/logger'; import { createLogger } from './utils/logger';
import { SyncOrchestrator } from './services/sync'; import { SyncOrchestrator } from './services/sync';
/** /**
* Validate required environment variables * Generic retry helper with exponential backoff
* Executes an operation with automatic retry on failure
*/ */
function validateEnv(env: Env): { valid: boolean; missing: string[] } { async function executeWithRetry<T>(
const required = ['API_KEY']; operation: () => Promise<T>,
const missing = required.filter(key => !env[key as keyof Env]); options: {
return { valid: missing.length === 0, missing }; maxRetries: number;
} operationName: string;
logger: ReturnType<typeof createLogger>;
/**
* Get CORS origin for request
*
* Security: No wildcard fallback. Returns explicit allowed origin or default.
* Logs unmatched origins for monitoring.
*/
function getCorsOrigin(request: Request, env: Env): string {
const origin = request.headers.get('Origin');
const logger = createLogger('[CORS]', env);
// Environment variable has explicit origin configured (highest priority)
if (env.CORS_ORIGIN && env.CORS_ORIGIN !== '*') {
return env.CORS_ORIGIN;
} }
): Promise<T> {
const { maxRetries, operationName, logger } = options;
// Build allowed origins list based on environment for (let attempt = 1; attempt <= maxRetries; attempt++) {
const isDevelopment = env.ENVIRONMENT === 'development'; try {
const allowedOrigins = isDevelopment logger.info(`Starting ${operationName} attempt`, {
? [...CORS.ALLOWED_ORIGINS, ...CORS.DEVELOPMENT_ORIGINS] attempt_number: attempt,
: CORS.ALLOWED_ORIGINS; max_retries: maxRetries
// Request origin is in allowed list
if (origin && allowedOrigins.includes(origin)) {
return origin;
}
// Log unmatched origins for security monitoring
if (origin && !allowedOrigins.includes(origin)) {
// Sanitize origin to prevent log injection (remove control characters)
const sanitizedOrigin = origin.replace(/[\r\n\t]/g, '').substring(0, 256);
logger.warn('Unmatched origin - using default', {
requested_origin: sanitizedOrigin,
environment: env.ENVIRONMENT || 'production',
default_origin: CORS.DEFAULT_ORIGIN
}); });
}
// Return explicit default (no wildcard) const result = await operation();
return CORS.DEFAULT_ORIGIN; return result;
}
/** } catch (error) {
* Add security headers to response const willRetry = attempt < maxRetries;
* Performance optimization: Reuses response body without cloning to minimize memory allocation const retryDelayMs = willRetry ? Math.min(Math.pow(2, attempt - 1) * 1000, 10000) : 0;
*
* Benefits:
* - Avoids Response.clone() which copies the entire body stream
* - Directly references response.body (ReadableStream) without duplication
* - Reduces memory allocation and GC pressure per request
*
* Note: response.body can be null for 204 No Content or empty responses
*/
function addSecurityHeaders(response: Response, corsOrigin?: string, requestId?: string): Response {
const headers = new Headers(response.headers);
// Basic security headers logger.error(`${operationName} attempt failed`, {
headers.set('X-Content-Type-Options', 'nosniff'); attempt_number: attempt,
headers.set('X-Frame-Options', 'DENY'); max_retries: maxRetries,
headers.set('Strict-Transport-Security', 'max-age=31536000'); will_retry: willRetry,
retry_delay_ms: retryDelayMs,
// CORS headers error: error instanceof Error ? error.message : String(error),
headers.set('Access-Control-Allow-Origin', corsOrigin || CORS.DEFAULT_ORIGIN); stack: error instanceof Error ? error.stack : undefined
headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
headers.set('Access-Control-Allow-Headers', 'Content-Type, X-API-Key');
headers.set('Access-Control-Max-Age', CORS.MAX_AGE);
headers.set('Access-Control-Expose-Headers', 'X-RateLimit-Retry-After, Retry-After, X-Request-ID');
// Additional security headers
headers.set('Content-Security-Policy', "default-src 'none'");
headers.set('X-XSS-Protection', '1; mode=block');
headers.set('Referrer-Policy', 'no-referrer');
// Request ID for audit trail
if (requestId) {
headers.set('X-Request-ID', requestId);
}
// Create new Response with same body reference (no copy) and updated headers
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers,
}); });
if (willRetry) {
// Wait before retry with exponential backoff
await new Promise(resolve => setTimeout(resolve, retryDelayMs));
} else {
// Final failure - re-throw to make cron failure visible
logger.error(`${operationName} failed after all retries`, {
total_attempts: maxRetries,
error: error instanceof Error ? error.message : String(error)
});
throw error;
}
}
}
// TypeScript exhaustiveness check - should never reach here
throw new Error(`${operationName}: Unexpected retry loop exit`);
} }
export default { export default {
/** /**
* HTTP Request Handler * HTTP Request Handler (delegated to Hono)
*/ */
async fetch(request: Request, env: Env, _ctx: ExecutionContext): Promise<Response> { fetch: app.fetch,
const url = new URL(request.url);
const path = url.pathname;
// Generate request ID for audit trail (use CF-Ray if available, otherwise generate UUID)
const requestId = request.headers.get('CF-Ray') || crypto.randomUUID();
// Get CORS origin based on request and configuration
const corsOrigin = getCorsOrigin(request, env);
try {
// Handle OPTIONS preflight requests
if (request.method === 'OPTIONS') {
return addSecurityHeaders(new Response(null, { status: 204 }), corsOrigin, requestId);
}
// Validate required environment variables
const envValidation = validateEnv(env);
if (!envValidation.valid) {
const logger = createLogger('[Worker]');
logger.error('Missing required environment variables', { missing: envValidation.missing, request_id: requestId });
return addSecurityHeaders(
Response.json(
{ error: 'Service Unavailable', message: 'Service configuration error' },
{ status: 503 }
),
corsOrigin,
requestId
);
}
// Health check (public endpoint with optional authentication)
if (path === '/health') {
const apiKey = request.headers.get('X-API-Key');
const authenticated = apiKey ? verifyApiKey(apiKey, env) : false;
return addSecurityHeaders(await handleHealth(env, authenticated), corsOrigin, requestId);
}
// Authentication required for all other endpoints
const isAuthenticated = await authenticateRequest(request, env);
if (!isAuthenticated) {
return addSecurityHeaders(createUnauthorizedResponse(), corsOrigin, requestId);
}
// Rate limiting for authenticated endpoints
const rateLimitCheck = await checkRateLimit(request, path, env);
if (!rateLimitCheck.allowed) {
return addSecurityHeaders(createRateLimitResponse(rateLimitCheck.retryAfter!), corsOrigin, requestId);
}
// Query instances
if (path === '/instances' && request.method === 'GET') {
return addSecurityHeaders(await handleInstances(request, env), corsOrigin, requestId);
}
// Sync trigger
if (path === '/sync' && request.method === 'POST') {
return addSecurityHeaders(await handleSync(request, env), corsOrigin, requestId);
}
// 404 Not Found
return addSecurityHeaders(
Response.json(
{ error: 'Not Found', path },
{ status: HTTP_STATUS.NOT_FOUND }
),
corsOrigin,
requestId
);
} catch (error) {
const logger = createLogger('[Worker]');
logger.error('Request error', { error, request_id: requestId });
return addSecurityHeaders(
Response.json(
{ error: 'Internal Server Error' },
{ status: HTTP_STATUS.INTERNAL_ERROR }
),
corsOrigin,
requestId
);
}
},
/** /**
* Scheduled (Cron) Handler * Scheduled (Cron) Handler
* *
* Cron Schedules: * Cron Schedules:
* - 0 0 * * * : Daily full sync at 00:00 UTC * - 0 0 * * * : Daily full sync at 00:00 UTC
* - 0 star-slash-6 * * * : Pricing update every 6 hours * - 0 star/6 * * * : Pricing update every 6 hours
*/ */
async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> { async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext): Promise<void> {
const logger = createLogger('[Cron]', env); const logger = createLogger('[Cron]', env);
@@ -229,19 +100,19 @@ export default {
const executeSyncWithRetry = async (): Promise<void> => { const executeSyncWithRetry = async (): Promise<void> => {
const orchestrator = new SyncOrchestrator(env.DB, env); const orchestrator = new SyncOrchestrator(env.DB, env);
for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) { const report = await executeWithRetry(
try { async () => orchestrator.syncAll(['linode', 'vultr', 'aws']),
logger.info('Starting sync attempt', { {
attempt_number: attempt, maxRetries: MAX_RETRIES,
max_retries: MAX_RETRIES operationName: 'full sync',
}); logger
}
);
const report = await orchestrator.syncAll(['linode', 'vultr', 'aws']); logger.info('Daily full sync complete', {
logger.info('Daily sync complete', {
attempt_number: attempt,
total_regions: report.summary.total_regions, total_regions: report.summary.total_regions,
total_instances: report.summary.total_instances, total_instances: report.summary.total_instances,
total_pricing: report.summary.total_pricing,
successful_providers: report.summary.successful_providers, successful_providers: report.summary.successful_providers,
failed_providers: report.summary.failed_providers, failed_providers: report.summary.failed_providers,
duration_ms: report.total_duration_ms duration_ms: report.total_duration_ms
@@ -261,39 +132,50 @@ export default {
.map(p => ({ provider: p.provider, error: p.error })) .map(p => ({ provider: p.provider, error: p.error }))
}); });
} }
// Success - exit retry loop
return;
} catch (error) {
const willRetry = attempt < MAX_RETRIES;
const retryDelayMs = willRetry ? Math.min(Math.pow(2, attempt - 1) * 1000, 10000) : 0;
logger.error('Sync attempt failed', {
attempt_number: attempt,
max_retries: MAX_RETRIES,
will_retry: willRetry,
retry_delay_ms: retryDelayMs,
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined
});
if (willRetry) {
// Wait before retry with exponential backoff
await new Promise(resolve => setTimeout(resolve, retryDelayMs));
} else {
// Final failure - re-throw to make cron failure visible
logger.error('Daily sync failed after all retries', {
total_attempts: MAX_RETRIES,
error: error instanceof Error ? error.message : String(error)
});
throw error;
}
}
}
}; };
ctx.waitUntil(executeSyncWithRetry()); ctx.waitUntil(executeSyncWithRetry());
} }
// Pricing update every 6 hours
else if (cron === '0 */6 * * *') {
const MAX_RETRIES = 3;
const executePricingSyncWithRetry = async (): Promise<void> => {
const orchestrator = new SyncOrchestrator(env.DB, env);
const report = await executeWithRetry(
async () => orchestrator.syncAllPricingOnly(['linode', 'vultr', 'aws']),
{
maxRetries: MAX_RETRIES,
operationName: 'pricing sync',
logger
}
);
logger.info('Pricing sync complete', {
total_pricing: report.summary.total_pricing,
successful_providers: report.summary.successful_providers,
failed_providers: report.summary.failed_providers,
duration_ms: report.total_duration_ms
});
// Alert on partial failures
if (report.summary.failed_providers > 0) {
const failedProviders = report.providers
.filter(p => !p.success)
.map(p => p.provider);
logger.warn('Some providers failed during pricing sync', {
failed_count: report.summary.failed_providers,
failed_providers: failedProviders,
errors: report.providers
.filter(p => !p.success)
.map(p => ({ provider: p.provider, error: p.error }))
});
}
};
ctx.waitUntil(executePricingSyncWithRetry());
}
}, },
}; };
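
One detail worth noting about executeWithRetry: the backoff grows exponentially and is capped at 10 seconds, so with MAX_RETRIES = 3 the waits between attempts are 1 s and 2 s. A standalone sketch of that schedule, illustrative only and mirroring the expression in the diff:

// Delay inserted after attempt N fails (1-based), matching
// min(2^(N-1) * 1000, 10000) from executeWithRetry.
function retryDelayMs(attempt: number): number {
  return Math.min(Math.pow(2, attempt - 1) * 1000, 10_000);
}

// attempt 1 -> 1000 ms, attempt 2 -> 2000 ms, attempt 3 -> 4000 ms,
// attempt 5 and beyond -> capped at 10000 ms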

View File

@@ -0,0 +1,104 @@
/**
* Hono Middleware Adapters
*
* Adapts existing authentication and rate limiting middleware to Hono's middleware pattern.
*/
import type { Context, Next } from 'hono';
import type { Env, HonoVariables } from '../types';
import {
authenticateRequest,
verifyApiKey,
createUnauthorizedResponse,
} from './auth';
import { checkRateLimit, createRateLimitResponse } from './rateLimit';
import { createLogger } from '../utils/logger';
const logger = createLogger('[Middleware]');
/**
* Request ID middleware
* Adds unique request ID to context for tracing
*/
export async function requestIdMiddleware(
c: Context<{ Bindings: Env; Variables: HonoVariables }>,
next: Next
): Promise<void> {
// Use CF-Ray if available, otherwise generate UUID
const requestId = c.req.header('CF-Ray') || crypto.randomUUID();
// Store in context for handlers to use
c.set('requestId', requestId);
await next();
// Add to response headers
c.res.headers.set('X-Request-ID', requestId);
}
/**
* Authentication middleware
* Validates X-API-Key header using existing auth logic
*/
export async function authMiddleware(
c: Context<{ Bindings: Env; Variables: HonoVariables }>,
next: Next
): Promise<Response | void> {
const request = c.req.raw;
const env = c.env;
const isAuthenticated = await authenticateRequest(request, env);
if (!isAuthenticated) {
logger.warn('[Auth] Unauthorized request', {
path: c.req.path,
requestId: c.get('requestId'),
});
return createUnauthorizedResponse();
}
await next();
}
/**
* Rate limiting middleware
* Applies rate limits based on endpoint using existing rate limit logic
*/
export async function rateLimitMiddleware(
c: Context<{ Bindings: Env; Variables: HonoVariables }>,
next: Next
): Promise<Response | void> {
const request = c.req.raw;
const path = c.req.path;
const env = c.env;
const rateLimitCheck = await checkRateLimit(request, path, env);
if (!rateLimitCheck.allowed) {
logger.warn('[RateLimit] Rate limit exceeded', {
path,
retryAfter: rateLimitCheck.retryAfter,
requestId: c.get('requestId'),
});
return createRateLimitResponse(rateLimitCheck.retryAfter!);
}
await next();
}
/**
* Optional authentication middleware for health check
* Checks if API key is provided and valid, stores result in context
*/
export async function optionalAuthMiddleware(
c: Context<{ Bindings: Env; Variables: HonoVariables }>,
next: Next
): Promise<void> {
const apiKey = c.req.header('X-API-Key');
const authenticated = apiKey ? verifyApiKey(apiKey, c.env) : false;
// Store authentication status in context
c.set('authenticated', authenticated);
await next();
}
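
Since index.ts now only re-exports app.fetch, the routing and middleware above can be exercised directly. A minimal sketch assuming a test-only Env stub; the import paths, API key, and cast are placeholders:

import app from '../app';
import type { Env } from '../types';

// Placeholder environment; a real test would also provide D1 and KV bindings.
const env = { API_KEY: 'test-key' } as unknown as Env;

const res = await app.fetch(
  new Request('https://example.com/health', {
    headers: { 'X-API-Key': 'test-key' },
  }),
  env
);
console.log(res.status, res.headers.get('X-Request-ID'));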

View File

@@ -58,7 +58,8 @@ export class AnvilTransferPricingRepository extends BaseRepository<AnvilTransfer
         ) VALUES (?, ?)
         ON CONFLICT(anvil_region_id)
         DO UPDATE SET
-          price_per_gb = excluded.price_per_gb`
+          price_per_gb = excluded.price_per_gb,
+          updated_at = CURRENT_TIMESTAMP`
       ).bind(
         price.anvil_region_id,
         price.price_per_gb

View File

@@ -3,7 +3,8 @@
* Comprehensive health monitoring for database and provider sync status * Comprehensive health monitoring for database and provider sync status
*/ */
import { Env } from '../types'; import type { Context } from 'hono';
import { Env, HonoVariables } from '../types';
import { HTTP_STATUS } from '../constants'; import { HTTP_STATUS } from '../constants';
import { createLogger } from '../utils/logger'; import { createLogger } from '../utils/logger';
@@ -60,6 +61,27 @@ interface DetailedHealthResponse extends PublicHealthResponse {
}; };
} }
/**
* Wraps a promise with a timeout
* @param promise - The promise to wrap
* @param ms - Timeout in milliseconds
* @param operation - Operation name for error message
* @returns Promise result if completed within timeout
* @throws Error if operation times out
*/
async function withTimeout<T>(promise: Promise<T>, ms: number, operation: string): Promise<T> {
let timeoutId: ReturnType<typeof setTimeout>;
const timeoutPromise = new Promise<never>((_, reject) => {
timeoutId = setTimeout(() => reject(new Error(`${operation} timed out after ${ms}ms`)), ms);
});
try {
return await Promise.race([promise, timeoutPromise]);
} finally {
clearTimeout(timeoutId!);
}
}
/** /**
* Check database connectivity and measure latency * Check database connectivity and measure latency
*/ */
@@ -67,8 +89,12 @@ async function checkDatabaseHealth(db: D1Database): Promise<DatabaseHealth> {
try { try {
const startTime = Date.now(); const startTime = Date.now();
// Simple connectivity check // Simple connectivity check with 5-second timeout
await db.prepare('SELECT 1').first(); await withTimeout(
db.prepare('SELECT 1').first(),
5000,
'Database health check'
);
const latency = Date.now() - startTime; const latency = Date.now() - startTime;
@@ -159,18 +185,17 @@ function sanitizeError(error: string): string {
/** /**
* Handle health check request * Handle health check request
* @param env - Cloudflare Worker environment * @param c - Hono context
* @param authenticated - Whether the request is authenticated (default: false)
*/ */
export async function handleHealth( export async function handleHealth(
env: Env, c: Context<{ Bindings: Env; Variables: HonoVariables }>
authenticated: boolean = false
): Promise<Response> { ): Promise<Response> {
const timestamp = new Date().toISOString(); const timestamp = new Date().toISOString();
const authenticated = c.get('authenticated') ?? false;
try { try {
// Check database health // Check database health
const dbHealth = await checkDatabaseHealth(env.DB); const dbHealth = await checkDatabaseHealth(c.env.DB);
// If database is unhealthy, return early // If database is unhealthy, return early
if (dbHealth.status === 'unhealthy') { if (dbHealth.status === 'unhealthy') {
@@ -180,7 +205,7 @@ export async function handleHealth(
status: 'unhealthy', status: 'unhealthy',
timestamp, timestamp,
}; };
return Response.json(publicResponse, { status: HTTP_STATUS.SERVICE_UNAVAILABLE }); return c.json(publicResponse, HTTP_STATUS.SERVICE_UNAVAILABLE);
} }
// Detailed response: full information with sanitized errors // Detailed response: full information with sanitized errors
@@ -202,11 +227,11 @@ export async function handleHealth(
}, },
}; };
return Response.json(detailedResponse, { status: HTTP_STATUS.SERVICE_UNAVAILABLE }); return c.json(detailedResponse, HTTP_STATUS.SERVICE_UNAVAILABLE);
} }
// Get all providers with aggregated counts in a single query // Get all providers with aggregated counts in a single query
const providersWithCounts = await env.DB.prepare(` const providersWithCounts = await c.env.DB.prepare(`
SELECT SELECT
p.id, p.id,
p.name, p.name,
@@ -290,7 +315,7 @@ export async function handleHealth(
status: overallStatus, status: overallStatus,
timestamp, timestamp,
}; };
return Response.json(publicResponse, { status: statusCode }); return c.json(publicResponse, statusCode);
} }
// Detailed response: full information // Detailed response: full information
@@ -313,7 +338,7 @@ export async function handleHealth(
}, },
}; };
return Response.json(detailedResponse, { status: statusCode }); return c.json(detailedResponse, statusCode);
} catch (error) { } catch (error) {
logger.error('Health check failed', { error }); logger.error('Health check failed', { error });
@@ -323,7 +348,7 @@ export async function handleHealth(
status: 'unhealthy', status: 'unhealthy',
timestamp, timestamp,
}; };
return Response.json(publicResponse, { status: HTTP_STATUS.SERVICE_UNAVAILABLE }); return c.json(publicResponse, HTTP_STATUS.SERVICE_UNAVAILABLE);
} }
// Detailed response: sanitized error information // Detailed response: sanitized error information
@@ -346,6 +371,6 @@ export async function handleHealth(
}, },
}; };
return Response.json(detailedResponse, { status: HTTP_STATUS.SERVICE_UNAVAILABLE }); return c.json(detailedResponse, HTTP_STATUS.SERVICE_UNAVAILABLE);
} }
} }
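
Because withTimeout is generic over the wrapped promise, the same guard can be reused for other D1 calls inside this module. A short sketch; the table and operation name are illustrative, the 5000 ms budget mirrors the health probe, and `db` is assumed to be a D1Database binding:

// Guard another D1 query with its own timeout budget.
const row = await withTimeout(
  db.prepare('SELECT COUNT(*) AS n FROM providers').first<{ n: number }>(),
  5000,
  'Provider count query'
);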

View File

@@ -5,7 +5,8 @@
* Integrates with cache service for performance optimization. * Integrates with cache service for performance optimization.
*/ */
import type { Env, InstanceQueryParams } from '../types'; import type { Context } from 'hono';
import type { Env, HonoVariables, InstanceQueryParams } from '../types';
import { QueryService } from '../services/query'; import { QueryService } from '../services/query';
import { getGlobalCacheService } from '../services/cache'; import { getGlobalCacheService } from '../services/cache';
import { logger } from '../utils/logger'; import { logger } from '../utils/logger';
@@ -32,17 +33,19 @@ import {
*/ */
let cachedQueryService: QueryService | null = null; let cachedQueryService: QueryService | null = null;
let cachedDb: D1Database | null = null; let cachedDb: D1Database | null = null;
let cachedEnv: Env | null = null;
/** /**
* Get or create QueryService singleton * Get or create QueryService singleton
* Lazy initialization on first request, then reused for subsequent requests * Lazy initialization on first request, then reused for subsequent requests
* Invalidates cache if database binding changes (rolling deploy scenario) * Invalidates cache if database or environment binding changes (rolling deploy scenario)
*/ */
function getQueryService(db: D1Database, env: Env): QueryService { function getQueryService(db: D1Database, env: Env): QueryService {
// Invalidate cache if db binding changed (rolling deploy scenario) // Invalidate cache if db or env binding changed (rolling deploy scenario)
if (!cachedQueryService || cachedDb !== db) { if (!cachedQueryService || cachedDb !== db || cachedEnv !== env) {
cachedQueryService = new QueryService(db, env); cachedQueryService = new QueryService(db, env);
cachedDb = db; cachedDb = db;
cachedEnv = env;
logger.debug('[Instances] QueryService singleton initialized/refreshed'); logger.debug('[Instances] QueryService singleton initialized/refreshed');
} }
return cachedQueryService; return cachedQueryService;
@@ -127,7 +130,7 @@ function parseQueryParams(url: URL): {
name: string, name: string,
value: string | null value: string | null
): number | undefined | { error: { code: string; message: string; parameter: string } } { ): number | undefined | { error: { code: string; message: string; parameter: string } } {
if (value === null) return undefined; if (value === null || value === '') return undefined;
const parsed = Number(value); const parsed = Number(value);
if (isNaN(parsed) || parsed < 0) { if (isNaN(parsed) || parsed < 0) {
@@ -311,35 +314,33 @@ function parseQueryParams(url: URL): {
/** /**
* Handle GET /instances endpoint * Handle GET /instances endpoint
* *
* @param request - HTTP request object * @param c - Hono context
* @param env - Cloudflare Worker environment bindings
* @returns JSON response with instance query results * @returns JSON response with instance query results
* *
* @example * @example
* GET /instances?provider=linode&min_vcpu=2&max_price=20&sort_by=price&order=asc&limit=50 * GET /instances?provider=linode&min_vcpu=2&max_price=20&sort_by=price&order=asc&limit=50
*/ */
export async function handleInstances( export async function handleInstances(
request: Request, c: Context<{ Bindings: Env; Variables: HonoVariables }>
env: Env
): Promise<Response> { ): Promise<Response> {
const startTime = Date.now(); const startTime = Date.now();
logger.info('[Instances] Request received', { url: request.url }); logger.info('[Instances] Request received', { url: c.req.url });
try { try {
// Parse URL and query parameters // Parse URL and query parameters
const url = new URL(request.url); const url = new URL(c.req.url);
const parseResult = parseQueryParams(url); const parseResult = parseQueryParams(url);
// Handle validation errors // Handle validation errors
if (parseResult.error) { if (parseResult.error) {
logger.error('[Instances] Validation error', parseResult.error); logger.error('[Instances] Validation error', parseResult.error);
return Response.json( return c.json(
{ {
success: false, success: false,
error: parseResult.error, error: parseResult.error,
}, },
{ status: HTTP_STATUS.BAD_REQUEST } HTTP_STATUS.BAD_REQUEST
); );
} }
@@ -347,7 +348,7 @@ export async function handleInstances(
logger.info('[Instances] Query params validated', params as unknown as Record<string, unknown>); logger.info('[Instances] Query params validated', params as unknown as Record<string, unknown>);
// Get global cache service singleton (shared across all routes) // Get global cache service singleton (shared across all routes)
const cacheService = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV); const cacheService = getGlobalCacheService(CACHE_TTL.INSTANCES, c.env.RATE_LIMIT_KV);
// Generate cache key from query parameters // Generate cache key from query parameters
const cacheKey = cacheService.generateKey(params as unknown as Record<string, unknown>); const cacheKey = cacheService.generateKey(params as unknown as Record<string, unknown>);
@@ -378,7 +379,7 @@ export async function handleInstances(
age: cached.cache_age_seconds, age: cached.cache_age_seconds,
}); });
return Response.json( return c.json(
{ {
success: true, success: true,
data: { data: {
@@ -391,11 +392,9 @@ export async function handleInstances(
}, },
}, },
}, },
HTTP_STATUS.OK,
{ {
status: HTTP_STATUS.OK,
headers: {
'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`, 'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`,
},
} }
); );
} }
@@ -421,7 +420,7 @@ export async function handleInstances(
}; };
// Get QueryService singleton (reused across requests) // Get QueryService singleton (reused across requests)
const queryService = getQueryService(env.DB, env); const queryService = getQueryService(c.env.DB, c.env);
const result = await queryService.queryInstances(queryParams); const result = await queryService.queryInstances(queryParams);
const queryTime = Date.now() - startTime; const queryTime = Date.now() - startTime;
@@ -458,23 +457,21 @@ export async function handleInstances(
error instanceof Error ? { message: error.message } : { error: String(error) }); error instanceof Error ? { message: error.message } : { error: String(error) });
} }
return Response.json( return c.json(
{ {
success: true, success: true,
data: responseData, data: responseData,
}, },
HTTP_STATUS.OK,
{ {
status: HTTP_STATUS.OK,
headers: {
'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`, 'Cache-Control': `public, max-age=${CACHE_TTL.INSTANCES}`,
},
} }
); );
} catch (error) { } catch (error) {
logger.error('[Instances] Unexpected error', { error }); logger.error('[Instances] Unexpected error', { error });
return Response.json( return c.json(
{ {
success: false, success: false,
error: { error: {
@@ -483,7 +480,7 @@ export async function handleInstances(
request_id: crypto.randomUUID(), request_id: crypto.randomUUID(),
}, },
}, },
{ status: HTTP_STATUS.INTERNAL_ERROR } HTTP_STATUS.INTERNAL_ERROR
); );
} }
} }
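
The parsePositiveNumber guard deserves one clarifying note: Number('') evaluates to 0, so before this change an empty value such as ?min_vcpu= was silently applied as a filter of 0; it is now treated like an omitted parameter. A tiny illustration, with a placeholder host:

// Hypothetical check of the new behavior (not a test from this repository).
const url = new URL('https://api.example.com/instances?provider=linode&min_vcpu=');
console.log(url.searchParams.get('min_vcpu')); // '' -> parsePositiveNumber now returns undefined
console.log(Number(''));                       // 0  -> what the old code would have applied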

View File

@@ -5,7 +5,8 @@
* Validates request parameters and orchestrates sync operations. * Validates request parameters and orchestrates sync operations.
*/ */
import type { Env } from '../types'; import type { Context } from 'hono';
import type { Env, HonoVariables } from '../types';
import { SyncOrchestrator } from '../services/sync'; import { SyncOrchestrator } from '../services/sync';
import { logger } from '../utils/logger'; import { logger } from '../utils/logger';
import { SUPPORTED_PROVIDERS, HTTP_STATUS } from '../constants'; import { SUPPORTED_PROVIDERS, HTTP_STATUS } from '../constants';
@@ -23,8 +24,7 @@ interface SyncRequestBody {
/** /**
* Handle POST /sync endpoint * Handle POST /sync endpoint
* *
* @param request - HTTP request object * @param c - Hono context
* @param env - Cloudflare Worker environment bindings
* @returns JSON response with sync results * @returns JSON response with sync results
* *
* @example * @example
@@ -35,8 +35,7 @@ interface SyncRequestBody {
* } * }
*/ */
export async function handleSync(
-  request: Request,
-  env: Env
+  c: Context<{ Bindings: Env; Variables: HonoVariables }>
): Promise<Response> {
  const startTime = Date.now();
  const startedAt = new Date().toISOString();
@@ -45,24 +44,24 @@ export async function handleSync(
  try {
    // Validate content-length before parsing body
-    const contentLength = request.headers.get('content-length');
+    const contentLength = c.req.header('content-length');
    if (contentLength) {
      const bodySize = parseInt(contentLength, 10);
      if (isNaN(bodySize) || bodySize > 10240) { // 10KB limit for sync
-        return Response.json(
+        return c.json(
          { success: false, error: { code: 'PAYLOAD_TOO_LARGE', message: 'Request body too large' } },
-          { status: 413 }
+          413
        );
      }
    }

    // Parse and validate request body
-    const contentType = request.headers.get('content-type');
+    const contentType = c.req.header('content-type');
    let body: SyncRequestBody = {};

    // Only parse JSON if content-type is set
    if (contentType && contentType.includes('application/json')) {
-      const parseResult = await parseJsonBody<SyncRequestBody>(request);
+      const parseResult = await parseJsonBody<SyncRequestBody>(c.req.raw);
      if (!parseResult.success) {
        logger.error('[Sync] Invalid JSON in request body', {
          code: parseResult.error.code,
@@ -73,8 +72,8 @@ export async function handleSync(
      body = parseResult.data;
    }

-    // Validate providers array (default to ['linode'] if not provided)
-    const providers = body.providers || ['linode'];
+    // Validate providers array (default to all providers if not provided)
+    const providers = body.providers || ['linode', 'vultr', 'aws'];
    const providerResult = validateProviders(providers, SUPPORTED_PROVIDERS);
    if (!providerResult.success) {
@@ -90,7 +89,7 @@ export async function handleSync(
    logger.info('[Sync] Validation passed', { providers, force });

    // Initialize SyncOrchestrator
-    const orchestrator = new SyncOrchestrator(env.DB, env);
+    const orchestrator = new SyncOrchestrator(c.env.DB, c.env);

    // Execute synchronization
    logger.info('[Sync] Starting synchronization', { providers });
@@ -106,7 +105,7 @@ export async function handleSync(
      summary: syncReport.summary
    });

-    return Response.json(
+    return c.json(
      {
        success: syncReport.success,
        data: {
@@ -114,7 +113,7 @@ export async function handleSync(
          ...syncReport
        }
      },
-      { status: HTTP_STATUS.OK }
+      HTTP_STATUS.OK
    );
  } catch (error) {
@@ -123,7 +122,7 @@ export async function handleSync(
    const completedAt = new Date().toISOString();
    const totalDuration = Date.now() - startTime;

-    return Response.json(
+    return c.json(
      {
        success: false,
        error: {
@@ -137,7 +136,7 @@ export async function handleSync(
          }
        }
      },
-      { status: HTTP_STATUS.INTERNAL_ERROR }
+      HTTP_STATUS.INTERNAL_ERROR
    );
  }
}
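Note: with the handler now taking the Hono Context, it is mounted directly on the app rather than dispatched manually. A minimal sketch of that wiring, assuming Hono's standard routing API; the route path and module paths here are illustrative and not taken from this diff:

import { Hono } from 'hono';
import type { Env, HonoVariables } from './types';
// Hypothetical import path for the handler shown above.
import { handleSync } from './handlers/sync';

const app = new Hono<{ Bindings: Env; Variables: HonoVariables }>();

// The handler reads headers via c.req.header(), the raw Request via c.req.raw,
// bindings via c.env, and responds with c.json(body, status).
app.post('/api/sync', handleSync);

export default app;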

View File

@@ -48,10 +48,13 @@ export interface CacheResult<T> {
 * Prevents race conditions from multiple route-level singletons
 */
let globalCacheService: CacheService | null = null;
+let cachedTtl: number | null = null;
+let cachedKv: KVNamespace | null = null;

/**
 * Get or create global CacheService singleton
 * Thread-safe factory function that ensures only one CacheService instance exists
+ * Detects configuration changes (TTL or KV namespace) and refreshes singleton
 *
 * @param ttl - TTL in seconds for cache entries
 * @param kv - KV namespace for cache index (enables pattern invalidation)
@@ -61,9 +64,11 @@ let globalCacheService: CacheService | null = null;
 * const cache = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV);
 */
export function getGlobalCacheService(ttl: number, kv: KVNamespace | null): CacheService {
-  if (!globalCacheService) {
+  if (!globalCacheService || cachedTtl !== ttl || cachedKv !== kv) {
    globalCacheService = new CacheService(ttl, kv);
-    logger.debug('[CacheService] Global singleton initialized');
+    cachedTtl = ttl;
+    cachedKv = kv;
+    logger.debug('[CacheService] Global singleton initialized/refreshed');
  }
  return globalCacheService;
}
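Note: a short usage sketch of the refreshed singleton behavior. CACHE_TTL.INSTANCES and env.RATE_LIMIT_KV come from the doc comment above; the 60-second TTL in the last call is an invented example:

// Same configuration → the existing instance is reused.
const cacheA = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV);
const cacheB = getGlobalCacheService(CACHE_TTL.INSTANCES, env.RATE_LIMIT_KV); // cacheA === cacheB

// Changed TTL (hypothetical value) → configuration change detected,
// a fresh CacheService replaces the stale singleton.
const refreshed = getGlobalCacheService(60, env.RATE_LIMIT_KV);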

View File

@@ -4,6 +4,7 @@
 */
import { createLogger } from '../utils/logger';
+import { SORT_FIELD_MAP } from '../constants';
import type {
  Env,
  InstanceQueryParams,
@@ -299,19 +300,8 @@ export class QueryService {
    // Validate sort order at service level (defense in depth)
    const validatedSortOrder = sortOrder?.toLowerCase() === 'desc' ? 'DESC' : 'ASC';

-    // Map sort fields to actual column names
-    const sortFieldMap: Record<string, string> = {
-      price: 'pr.hourly_price',
-      hourly_price: 'pr.hourly_price',
-      monthly_price: 'pr.monthly_price',
-      vcpu: 'it.vcpu',
-      memory: 'it.memory_mb',
-      memory_mb: 'it.memory_mb',
-      name: 'it.instance_name',
-      instance_name: 'it.instance_name',
-    };
-    const sortColumn = sortFieldMap[sortBy] ?? 'pr.hourly_price';
+    // Map sort fields to actual column names (imported from constants.ts)
+    const sortColumn = SORT_FIELD_MAP[sortBy] ?? 'pr.hourly_price';

    // Handle NULL values in pricing columns (NULL values go last)
    if (sortColumn.startsWith('pr.')) {
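Note: the centralized map itself lives in constants.ts and is not part of this hunk. Based only on the keys removed above, it might look roughly like the sketch below; the actual constant may define additional fields:

// Hypothetical reconstruction — only the keys visible in the removed inline map are listed.
export const SORT_FIELD_MAP: Record<string, string> = {
  price: 'pr.hourly_price',
  hourly_price: 'pr.hourly_price',
  monthly_price: 'pr.monthly_price',
  vcpu: 'it.vcpu',
  memory: 'it.memory_mb',
  memory_mb: 'it.memory_mb',
  name: 'it.instance_name',
  instance_name: 'it.instance_name',
};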

View File

@@ -189,39 +189,41 @@ export class SyncOrchestrator {
      let vpuInstancesCount = 0;

      if (provider.toLowerCase() === 'linode') {
-        // GPU instances
-        if (connector.getGpuInstances) {
-          const gpuInstances = await withTimeout(connector.getGpuInstances(), 15000, `${provider} fetch GPU instances`);
-          if (gpuInstances && gpuInstances.length > 0) {
+        // Parallel fetch all specialized instances for Linode
+        const [gpuInstances, g8Instances, vpuInstances] = await Promise.all([
+          connector.getGpuInstances
+            ? withTimeout(connector.getGpuInstances(), 15000, `${provider} fetch GPU instances`)
+            : Promise.resolve([]),
+          connector.getG8Instances
+            ? withTimeout(connector.getG8Instances(), 15000, `${provider} fetch G8 instances`)
+            : Promise.resolve([]),
+          connector.getVpuInstances
+            ? withTimeout(connector.getVpuInstances(), 15000, `${provider} fetch VPU instances`)
+            : Promise.resolve([])
+        ]);
+
+        // Sequential upsert (database operations)
+        if (gpuInstances.length > 0) {
          gpuInstancesCount = await this.repos.gpuInstances.upsertMany(
            providerRecord.id,
            gpuInstances
          );
        }
-        }
-        // G8 instances
-        if (connector.getG8Instances) {
-          const g8Instances = await withTimeout(connector.getG8Instances(), 15000, `${provider} fetch G8 instances`);
-          if (g8Instances && g8Instances.length > 0) {
+        if (g8Instances.length > 0) {
          g8InstancesCount = await this.repos.g8Instances.upsertMany(
            providerRecord.id,
            g8Instances
          );
        }
-        }
-        // VPU instances
-        if (connector.getVpuInstances) {
-          const vpuInstances = await withTimeout(connector.getVpuInstances(), 15000, `${provider} fetch VPU instances`);
-          if (vpuInstances && vpuInstances.length > 0) {
+        if (vpuInstances.length > 0) {
          vpuInstancesCount = await this.repos.vpuInstances.upsertMany(
            providerRecord.id,
            vpuInstances
          );
        }
      }
-      }
      // Handle Vultr GPU instances
      if (provider.toLowerCase() === 'vultr') {
@@ -330,7 +332,7 @@ export class SyncOrchestrator {
          dbG8Map,
          dbVpuMap
        ),
-        60000,
+        180000, // AWS needs ~25K records upsert (870 instances × 29 regions)
        `${provider} fetch pricing`
      );
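Note: withTimeout itself is not shown in this diff. A sketch of what such a helper typically looks like, assuming it races the wrapped promise against a timer and rejects with the given label:

// Assumed shape of the project's withTimeout helper — not confirmed by this diff.
async function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timeout after ${ms}ms: ${label}`)), ms);
  });
  try {
    // Whichever settles first wins; the 180s budget above covers the large AWS pricing upsert.
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}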
@@ -377,6 +379,21 @@ export class SyncOrchestrator {
        });
      }

+      // Stage 8.5: Sync Anvil Transfer Pricing
+      let anvilTransferPricingCount = 0;
+      try {
+        anvilTransferPricingCount = await this.syncAnvilTransferPricing(provider);
+        if (anvilTransferPricingCount > 0) {
+          this.logger.info(`${provider} → SYNC_ANVIL_TRANSFER_PRICING`, { anvil_transfer_pricing: anvilTransferPricingCount });
+        }
+      } catch (transferError) {
+        // Log error but don't fail the entire sync
+        this.logger.error('Anvil transfer pricing sync failed', {
+          provider,
+          error: transferError instanceof Error ? transferError.message : String(transferError)
+        });
+      }
+
      // Stage 9: Complete - Update provider status to success
      stage = SyncStage.COMPLETE;
      await this.repos.providers.updateSyncStatus(provider, 'success');
@@ -423,6 +440,296 @@ export class SyncOrchestrator {
    }
  }
/**
* Synchronize pricing data only (no regions/instances update)
*
* Lightweight sync operation that only updates pricing data from provider APIs.
* Skips region and instance type synchronization.
*
* @param provider - Provider name (linode, vultr, aws)
* @returns Sync result with pricing statistics
*/
async syncPricingOnly(provider: string): Promise<ProviderSyncResult> {
const startTime = Date.now();
let stage = SyncStage.INIT;
this.logger.info('Starting pricing-only sync for provider', { provider });
try {
// Stage 1: Initialize - Fetch provider record
stage = SyncStage.INIT;
const providerRecord = await this.repos.providers.findByName(provider);
if (!providerRecord) {
throw new Error(`Provider not found in database: ${provider}`);
}
// Update provider status to syncing
await this.repos.providers.updateSyncStatus(provider, 'syncing');
this.logger.info(`${provider}${stage} (pricing only)`);
// Stage 2: Initialize connector and authenticate
const connector = await this.createConnector(provider, providerRecord.id);
await withTimeout(connector.authenticate(), 10000, `${provider} authentication`);
this.logger.info(`${provider} → initialized (pricing only)`);
// Fetch existing instance and region IDs from database
const batchQueries = [
this.repos.db.prepare('SELECT id, region_code FROM regions WHERE provider_id = ?').bind(providerRecord.id),
this.repos.db.prepare('SELECT id, instance_id FROM instance_types WHERE provider_id = ?').bind(providerRecord.id),
this.repos.db.prepare('SELECT id, instance_id FROM gpu_instances WHERE provider_id = ?').bind(providerRecord.id),
this.repos.db.prepare('SELECT id, instance_id FROM g8_instances WHERE provider_id = ?').bind(providerRecord.id),
this.repos.db.prepare('SELECT id, instance_id FROM vpu_instances WHERE provider_id = ?').bind(providerRecord.id)
];
const [dbRegionsResult, dbInstancesResult, dbGpuResult, dbG8Result, dbVpuResult] = await this.repos.db.batch(batchQueries);
if (!dbRegionsResult.success || !dbInstancesResult.success) {
throw new Error('Failed to fetch regions/instances for pricing');
}
// Validate and extract region IDs
if (!Array.isArray(dbRegionsResult.results)) {
throw new Error('Unexpected database result format for regions');
}
const regionIds = dbRegionsResult.results.map((r: any) => {
if (typeof r?.id !== 'number') {
throw new Error('Invalid region id in database result');
}
return r.id;
});
// Validate and extract instance type data
if (!Array.isArray(dbInstancesResult.results)) {
throw new Error('Unexpected database result format for instances');
}
const dbInstancesData = dbInstancesResult.results.map((i: any) => {
if (typeof i?.id !== 'number' || typeof i?.instance_id !== 'string') {
throw new Error('Invalid instance data in database result');
}
return { id: i.id, instance_id: i.instance_id };
});
const instanceTypeIds = dbInstancesData.map(i => i.id);
// Create instance mapping
const dbInstanceMap = new Map(
dbInstancesData.map(i => [i.id, { instance_id: i.instance_id }])
);
// Create specialized instance mappings
if (!Array.isArray(dbGpuResult.results)) {
throw new Error('Unexpected database result format for GPU instances');
}
const dbGpuMap = new Map(
dbGpuResult.results.map((i: any) => {
if (typeof i?.id !== 'number' || typeof i?.instance_id !== 'string') {
throw new Error('Invalid GPU instance data in database result');
}
return [i.id, { instance_id: i.instance_id }];
})
);
if (!Array.isArray(dbG8Result.results)) {
throw new Error('Unexpected database result format for G8 instances');
}
const dbG8Map = new Map(
dbG8Result.results.map((i: any) => {
if (typeof i?.id !== 'number' || typeof i?.instance_id !== 'string') {
throw new Error('Invalid G8 instance data in database result');
}
return [i.id, { instance_id: i.instance_id }];
})
);
if (!Array.isArray(dbVpuResult.results)) {
throw new Error('Unexpected database result format for VPU instances');
}
const dbVpuMap = new Map(
dbVpuResult.results.map((i: any) => {
if (typeof i?.id !== 'number' || typeof i?.instance_id !== 'string') {
throw new Error('Invalid VPU instance data in database result');
}
return [i.id, { instance_id: i.instance_id }];
})
);
// Get pricing data
stage = SyncStage.PERSIST;
const pricingResult = await withTimeout(
connector.getPricing(
instanceTypeIds,
regionIds,
dbInstanceMap,
dbGpuMap,
dbG8Map,
dbVpuMap
),
180000,
`${provider} fetch pricing`
);
// Handle both return types
let pricingCount = 0;
if (typeof pricingResult === 'number') {
pricingCount = pricingResult;
} else if (pricingResult.length > 0) {
pricingCount = await this.repos.pricing.upsertMany(pricingResult);
}
this.logger.info(`${provider} → pricing updated`, { pricing: pricingCount });
// Stage: Sync Anvil Pricing (if applicable)
stage = SyncStage.SYNC_ANVIL_PRICING;
let anvilPricingCount = 0;
try {
anvilPricingCount = await this.syncAnvilPricing(provider);
if (anvilPricingCount > 0) {
this.logger.info(`${provider}${stage}`, { anvil_pricing: anvilPricingCount });
}
} catch (anvilError) {
this.logger.error('Anvil pricing sync failed', {
provider,
error: anvilError instanceof Error ? anvilError.message : String(anvilError)
});
}
// Sync Anvil Transfer Pricing
let anvilTransferPricingCount = 0;
try {
anvilTransferPricingCount = await this.syncAnvilTransferPricing(provider);
if (anvilTransferPricingCount > 0) {
this.logger.info(`${provider} → SYNC_ANVIL_TRANSFER_PRICING`, { anvil_transfer_pricing: anvilTransferPricingCount });
}
} catch (transferError) {
this.logger.error('Anvil transfer pricing sync failed', {
provider,
error: transferError instanceof Error ? transferError.message : String(transferError)
});
}
// Complete - Update provider status
stage = SyncStage.COMPLETE;
await this.repos.providers.updateSyncStatus(provider, 'success');
const duration = Date.now() - startTime;
this.logger.info(`${provider}${stage} (pricing only)`, { duration_ms: duration });
return {
provider,
success: true,
regions_synced: 0,
instances_synced: 0,
pricing_synced: pricingCount,
duration_ms: duration,
};
} catch (error) {
const duration = Date.now() - startTime;
const errorMessage = error instanceof Error ? error.message : 'Unknown error';
this.logger.error(`${provider} pricing sync failed at ${stage}`, {
error: error instanceof Error ? error.message : String(error),
stage
});
// Update provider status to error
try {
await this.repos.providers.updateSyncStatus(provider, 'error', errorMessage);
} catch (statusError) {
this.logger.error('Failed to update provider status', {
error: statusError instanceof Error ? statusError.message : String(statusError)
});
}
return {
provider,
success: false,
regions_synced: 0,
instances_synced: 0,
pricing_synced: 0,
duration_ms: duration,
error: errorMessage,
error_details: {
stage,
message: errorMessage,
},
};
}
}
/**
* Synchronize pricing data only for all providers
*
* Lightweight sync operation that only updates pricing data.
* Skips region and instance type synchronization.
*
* @param providers - Array of provider names to sync (defaults to all supported providers)
* @returns Complete sync report with pricing statistics
*/
async syncAllPricingOnly(providers: string[] = [...SUPPORTED_PROVIDERS]): Promise<SyncReport> {
const startedAt = new Date().toISOString();
const startTime = Date.now();
this.logger.info('Starting pricing-only sync for providers', { providers: providers.join(', ') });
const providerResults: ProviderSyncResult[] = [];
for (const provider of providers) {
try {
const result = await this.syncPricingOnly(provider);
providerResults.push(result);
this.logger.info('Provider pricing sync completed', {
provider,
success: result.success,
elapsed_ms: Date.now() - startTime
});
} catch (error) {
providerResults.push({
provider,
success: false,
regions_synced: 0,
instances_synced: 0,
pricing_synced: 0,
duration_ms: 0,
error: error instanceof Error ? error.message : 'Unknown error',
});
}
}
const completedAt = new Date().toISOString();
const totalDuration = Date.now() - startTime;
const successful = providerResults.filter(r => r.success);
const failed = providerResults.filter(r => !r.success);
const summary = {
total_providers: providers.length,
successful_providers: successful.length,
failed_providers: failed.length,
total_regions: 0,
total_instances: 0,
total_pricing: providerResults.reduce((sum, r) => sum + r.pricing_synced, 0),
};
const report: SyncReport = {
success: failed.length === 0,
started_at: startedAt,
completed_at: completedAt,
total_duration_ms: totalDuration,
providers: providerResults,
summary,
};
this.logger.info('Pricing sync complete', {
total: summary.total_providers,
success: summary.successful_providers,
failed: summary.failed_providers,
duration_ms: totalDuration,
});
return report;
}
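Note: the scheduled entry point that drives this pricing-only path is defined elsewhere; below is a minimal sketch of how a Cloudflare Workers cron handler might invoke it. The handler shape is an assumption and the retry wrapper is omitted (imports of Env and SyncOrchestrator are also omitted for brevity):

// Hypothetical wiring only — not the repository's actual cron handler.
export default {
  async scheduled(
    _controller: { cron: string; scheduledTime: number },
    env: Env,
    ctx: { waitUntil(promise: Promise<unknown>): void }
  ) {
    const orchestrator = new SyncOrchestrator(env.DB, env);
    // Pricing-only refresh: regions and instance types are left untouched.
    ctx.waitUntil(orchestrator.syncAllPricingOnly(['linode', 'vultr', 'aws']));
  },
};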
  /**
   * Synchronize all providers
   *
@@ -508,24 +815,29 @@ export class SyncOrchestrator {
  /**
   * Generate AWS pricing records in batches using Generator pattern
-   * Minimizes memory usage by yielding batches of 100 records at a time
+   * Minimizes memory usage by yielding batches at a time (configurable)
   *
   * @param instanceTypeIds - Array of database instance type IDs
   * @param regionIds - Array of database region IDs
   * @param dbInstanceMap - Map of instance type ID to DB instance data
   * @param rawInstanceMap - Map of instance_id (API ID) to raw AWS data
-   * @yields Batches of PricingInput records (100 per batch)
+   * @param env - Environment configuration for SYNC_BATCH_SIZE
+   * @yields Batches of PricingInput records (configurable batch size)
   *
   * Manual Test:
-   * Generator yields ~252 batches for ~25,230 total records (870 instances × 29 regions)
+   * Generator yields batches for ~25,230 total records (870 instances × 29 regions)
   */
  private *generateAWSPricingBatches(
    instanceTypeIds: number[],
    regionIds: number[],
    dbInstanceMap: Map<number, { instance_id: string }>,
-    rawInstanceMap: Map<string, { Cost: number; MonthlyPrice: number }>
+    rawInstanceMap: Map<string, { Cost: number; MonthlyPrice: number }>,
+    env?: Env
  ): Generator<PricingInput[], void, void> {
-    const BATCH_SIZE = 500;
+    const BATCH_SIZE = Math.min(
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
+      1000
+    );
    let batch: PricingInput[] = [];
    for (const regionId of regionIds) {
@@ -589,7 +901,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<PricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: PricingInput[] = [];
@@ -655,7 +967,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<PricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: PricingInput[] = [];
@@ -724,7 +1036,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<GpuPricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: GpuPricingInput[] = [];
@@ -790,7 +1102,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<GpuPricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: GpuPricingInput[] = [];
@@ -846,7 +1158,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<G8PricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: G8PricingInput[] = [];
@@ -899,7 +1211,7 @@ export class SyncOrchestrator {
    env?: Env
  ): Generator<VpuPricingInput[], void, void> {
    const BATCH_SIZE = Math.min(
-      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '500', 10) || 500, 1),
+      Math.max(parseInt(env?.SYNC_BATCH_SIZE || '100', 10) || 100, 1),
      1000
    );
    let batch: VpuPricingInput[] = [];
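Note: each generator now derives its batch size from SYNC_BATCH_SIZE, clamped to [1, 1000] with a default of 100; at the default, the ~25,230 AWS records flush in roughly 253 batches. The helper below is an illustrative standalone restatement of the inline expression, not project code:

// Example inputs:
//   undefined or ''  → default 100
//   '250'            → 250 (within bounds)
//   '5000'           → capped to 1000
//   'abc'            → NaN, falls back to 100
const clampBatchSize = (raw?: string): number =>
  Math.min(Math.max(parseInt(raw || '100', 10) || 100, 1), 1000);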
@@ -1081,7 +1393,8 @@ export class SyncOrchestrator {
          UPDATE anvil_pricing
          SET
            hourly_price = ?,
-            monthly_price = ?
+            monthly_price = ?,
+            updated_at = CURRENT_TIMESTAMP
          WHERE id = ?
        `).bind(
          hourlyPrice,
@@ -1120,6 +1433,84 @@ export class SyncOrchestrator {
    }
  }
/**
* Synchronize Anvil transfer pricing based on source provider
*
* Updates anvil_transfer_pricing table with retail transfer costs
* Formula: retail = cost × 1.21 (21% margin)
*
* Provider costs (per GB):
* - Linode: $0.005/GB
* - Vultr: $0.01/GB
* - AWS: $0.09/GB (Asia regions)
*
* @param provider - Source provider name (linode, vultr, aws)
* @returns Number of anvil_transfer_pricing records updated
*/
private async syncAnvilTransferPricing(provider: string): Promise<number> {
this.logger.info('Starting Anvil transfer pricing sync', { provider });
try {
// Step 1: Define provider costs per GB (wholesale)
const providerCosts: Record<string, number> = {
linode: 0.005, // $0.005/GB
vultr: 0.01, // $0.01/GB
aws: 0.09, // $0.09/GB (Asia regions)
};
const costPerGb = providerCosts[provider.toLowerCase()];
if (!costPerGb) {
this.logger.info('No transfer pricing defined for provider', { provider });
return 0;
}
// Step 2: Find all anvil_regions sourced from this provider
const anvilRegionsResult = await this.repos.db
.prepare('SELECT id, source_region_id FROM anvil_regions WHERE source_provider = ?')
.bind(provider)
.all<{ id: number; source_region_id: number }>();
if (!anvilRegionsResult.success || anvilRegionsResult.results.length === 0) {
this.logger.info('No anvil_regions found for provider', { provider });
return 0;
}
const anvilRegions = anvilRegionsResult.results;
this.logger.info('Found anvil_regions for transfer pricing', {
provider,
count: anvilRegions.length
});
// Step 3: Calculate retail price (cost × 1.21 for 21% margin)
const retailPricePerGb = costPerGb * 1.21;
// Step 4: Prepare upsert data for all regions
const transferPricingData = anvilRegions.map(region => ({
anvil_region_id: region.id,
price_per_gb: retailPricePerGb,
}));
// Step 5: Batch upsert using repository
const upsertCount = await this.repos.anvilTransferPricing.upsertMany(transferPricingData);
this.logger.info('Anvil transfer pricing sync completed', {
provider,
cost_per_gb: costPerGb,
retail_price_per_gb: retailPricePerGb,
regions_updated: upsertCount,
});
return upsertCount;
} catch (error) {
this.logger.error('Anvil transfer pricing sync failed', {
provider,
error: error instanceof Error ? error.message : String(error)
});
throw error;
}
}
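Note: for concreteness, the 21% margin works out to $0.00605/GB for Linode (0.005 × 1.21), $0.0121/GB for Vultr (0.01 × 1.21), and $0.1089/GB for AWS (0.09 × 1.21). The repository's upsertMany is not shown in this diff; below is a hedged sketch of the kind of D1 upsert it might run, refreshing updated_at on conflict (table, column, and conflict-target names are assumptions inferred from the surrounding code):

// Assumed shape of anvilTransferPricing.upsertMany — not confirmed by this diff.
async function upsertTransferPricing(
  db: D1Database,
  rows: { anvil_region_id: number; price_per_gb: number }[]
): Promise<number> {
  const stmt = db.prepare(`
    INSERT INTO anvil_transfer_pricing (anvil_region_id, price_per_gb)
    VALUES (?, ?)
    ON CONFLICT (anvil_region_id) DO UPDATE SET
      price_per_gb = excluded.price_per_gb,
      updated_at = CURRENT_TIMESTAMP
  `);
  // One bound statement per row, executed as a single D1 batch.
  await db.batch(rows.map(r => stmt.bind(r.anvil_region_id, r.price_per_gb)));
  return rows.length;
}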
  /**
   * Create connector for a specific provider
   *
@@ -1519,7 +1910,8 @@ export class SyncOrchestrator {
        instanceTypeIds,
        regionIds,
        dbInstanceMap,
-        rawInstanceMap
+        rawInstanceMap,
+        this.env
      );

      // Process batches incrementally

View File

@@ -416,6 +416,17 @@ export interface Env {
  ENVIRONMENT?: string;
}

+/**
+ * Hono context variables
+ * Shared across all request contexts
+ */
+export interface HonoVariables {
+  /** Unique request ID for tracing */
+  requestId: string;
+  /** Authentication status (set by auth middleware) */
+  authenticated?: boolean;
+}
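Note: registering Variables in the Hono generics is what makes c.set / c.get type-safe across middleware and handlers. A brief sketch; the middleware below is illustrative and not the project's actual hono-adapters code:

import { Hono } from 'hono';

const app = new Hono<{ Bindings: Env; Variables: HonoVariables }>();

// Illustrative request-ID middleware: stores a value under the typed Variables key.
app.use('*', async (c, next) => {
  c.set('requestId', crypto.randomUUID());
  await next();
});

// Handlers read it back with full typing.
app.get('/health', (c) => c.json({ ok: true, requestId: c.get('requestId') }));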
// ============================================================
// Synchronization Types
// ============================================================