refactor: observability, log & i18n support #39
Conversation
Elysia's `.derive` can't pass context to `onError`, so we have to use a global map (`.state`) with traceID to pass context used by `onError` for logging.
- Extract env, logger, and observability into separate packages: `@cvsa/env`, `@cvsa/logger` and `@cvsa/observability`
- Restructure `packages/core` from common/modules layout to `src/` layout with clear public API boundaries
- Update all consumers to import from new package locations
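The trace-ID context map described in the PR description can be sketched as a minimal, self-contained pattern. This is an illustration only, not code from this PR; names like `RequestContext`, `rememberContext`, and `takeContext` are hypothetical.

```typescript
// Hypothetical sketch: since .derive context is not visible to onError,
// per-request data is stashed in shared state keyed by trace ID and
// looked up (then cleaned up) inside the error handler.
type RequestContext = { path: string; startedAt: number };

const contextByTraceId = new Map<string, RequestContext>();

// Called from .derive (or onRequest): record the context for this trace.
function rememberContext(traceId: string, path: string): void {
	contextByTraceId.set(traceId, { path, startedAt: Date.now() });
}

// Called from onError: fetch the context for logging and drop the entry
// so the map does not leak completed requests.
function takeContext(traceId: string): RequestContext | undefined {
	const ctx = contextByTraceId.get(traceId);
	contextByTraceId.delete(traceId);
	return ctx;
}
```

The delete-on-read step matters: without it, every request that errors (or never errors) would leave an entry in the shared map for the lifetime of the process.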
ℹ️ Review info: Configuration used: Repository UI (base), Organization UI (inherited) · Review profile: CHILL · Plan: Pro
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (3)
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/src/search/catalog/song.ts (1)
54-83: ⚠️ Potential issue | 🟠 Major

Handle the embedding API's failure path explicitly.
Both indexing and query search treat `data?.embeddings[0]` as optional, so a failed embedding request becomes silent partial indexing or a hybrid search with an undefined vector. This should either fail fast or fall back to pure lexical search deliberately.

Also applies to: 123-127
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/search/catalog/song.ts` around lines 54 - 83, The embedding API call in embeddingManager.embeddings.post is treated as optional and silent on failure; update the code that builds the catalog entry (the return object with id/.../_vectors) to explicitly handle the failure path: detect when vectors.data?.embeddings[0] is missing and either (a) throw an error to fail fast during indexing or (b) record a clear lexical-only marker (e.g., set a fallback flag or metadata like vectorsMissing: true and ensure downstream search logic uses pure lexical search), and add logging around the embedding request so failures are visible; ensure the same explicit handling is applied to the other occurrence referenced (lines 123-127) so hybrid search never receives undefined vectors.
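A fail-fast guard along the lines this comment suggests could look like the following. This is a sketch, not code from the PR; `requireEmbedding` is a hypothetical helper name.

```typescript
// Hypothetical guard: reject a missing or empty embedding vector instead
// of letting an undefined vector flow into indexing or hybrid search.
function requireEmbedding(embeddings: number[][] | undefined): number[] {
	const vector = embeddings?.[0];
	if (!vector || vector.length === 0) {
		throw new Error("Embedding request returned no vector; refusing to index silently");
	}
	return vector;
}
```

The alternative path the comment mentions is to return a lexical-only marker instead of throwing, so downstream search deliberately skips the vector rather than receiving `undefined`.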
🟡 Minor comments (9)
apps/backend/src/handlers/catalog/song/update.ts-20-22 (1)
20-22: ⚠️ Potential issue | 🟡 Minor

Add 422 status code to response schema for param validation errors.

When Zod validates the `id` parameter, invalid values trigger a VALIDATION error in the global error handler, which returns 422 (not 400). The response schema should document this status code to accurately reflect possible responses to clients.

Suggested response contract adjustment

```diff
 response: {
+	422: ErrorResponseSchema,
 	200: SongSchema,
```

Also applies to lines 28-33.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/handlers/catalog/song/update.ts` around lines 20 - 22, The route's parameter Zod schema (params: z.object({ id: z.coerce.number().int().positive() })) can trigger the global VALIDATION error which returns HTTP 422, so update the handler's response contract to include a 422 response for param validation errors; locate the response schema defined alongside the update handler (and the other similar handler referenced around the params block at lines 28-33) and add a 422 entry with an appropriate error shape/description to document validation failures.

packages/observability/src/init.ts-6-18 (1)
6-18: ⚠️ Potential issue | 🟡 Minor

`initOtelSdk()` is currently unused dead code, but fix before integration.

This function is exported but never called in the codebase—the backend uses `@elysiajs/opentelemetry` middleware instead. However, if this function is integrated in the future, the stated issues must be addressed: the SDK instance is discarded without being returned or stored for shutdown, there are no SIGTERM/SIGINT handlers, and `sdk.start()` is not awaited despite the function being marked `async`.

Either remove this unused export or refactor it following the suggested pattern (return the instance, register process shutdown handlers, and await `sdk.start()`) before integrating it into the application.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/observability/src/init.ts` around lines 6 - 18, The exported initOtelSdk() currently discards the NodeSDK instance, doesn't await startup, and lacks shutdown handlers; either remove the unused export or refactor initOtelSdk to create and await sdk.start(), return the NodeSDK instance (sdk) so callers can manage lifecycle, and register SIGINT/SIGTERM handlers that call sdk.shutdown() (and await it) to ensure proper shutdown; update references to use the returned sdk when integrating or delete the export if not needed.

packages/db/src/prisma.ts-50-58 (1)
50-58: ⚠️ Potential issue | 🟡 Minor

Query events are configured but no handler is attached.

The `query` log level is set to `emit: "event"`, but without a corresponding `prisma.$on('query', ...)` handler, query logs will be silently dropped. Either attach an event handler or change to `emit: "stdout"` if you want queries logged.

```diff
 export const prisma = new PrismaClient({
 	adapter,
 	log: [
-		{ level: "query", emit: "event" },
+		{ level: "query", emit: "stdout" },
 		{ level: "info", emit: "stdout" },
 		{ level: "warn", emit: "stdout" },
 		{ level: "error", emit: "stdout" },
 	],
 });
```

Alternatively, add an event handler:
```ts
prisma.$on('query', (e) => {
	console.log(`Query: ${e.query}`);
	console.log(`Duration: ${e.duration}ms`);
});
```
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/prisma.ts` around lines 50 - 58, The PrismaClient is configured to emit query logs as events but no handler is registered, so query events are dropped; either change the log config for queries to use emit: "stdout" or register a query event handler by calling prisma.$on('query', ...) (referencing the prisma constant and PrismaClient instantiation) and log e.query/e.duration as needed to surface query text and duration; implement one of these fixes next to where prisma is exported so queries are actually logged.

apps/backend/src/index.ts-24-28 (1)
24-28: ⚠️ Potential issue | 🟡 Minor

Add graceful shutdown handling for the batch span processor to avoid losing spans on process termination.

Currently, `BatchSpanProcessor` buffers spans in memory before export. Without shutdown handling, accumulated spans will be lost when the server terminates abruptly. To fix this, store the span processor as a variable and call its `shutdown()` method during graceful shutdown:

Example implementation

```ts
const spanProcessor = new BatchSpanProcessor(new OTLPTraceExporter());

const app = new Elysia({
	serve: { hostname: host },
	prefix: "/v2",
})
	.use(opentelemetry({ spanProcessors: [spanProcessor] }))
	// ... rest of middleware
	.listen(16412);

process.on('SIGTERM', async () => {
	await spanProcessor.shutdown();
	process.exit(0);
});
```

Alternatively, if using Elysia's lifecycle hooks, attach the shutdown to the app stop event via `@opentelemetry/sdk-node` or a similar OTEL SDK instance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/index.ts` around lines 24 - 28, The BatchSpanProcessor used in the opentelemetry middleware should be created and stored in a variable (e.g., spanProcessor = new BatchSpanProcessor(new OTLPTraceExporter())) instead of an inline array so you can call spanProcessor.shutdown() during graceful termination; update the opentelemetry usage to pass spanProcessors: [spanProcessor] and add a shutdown handler that awaits spanProcessor.shutdown() on process signals (SIGINT/SIGTERM) or via the app stop lifecycle hook to flush buffered spans before exiting.

packages/observability/src/tracer.ts-6-17 (1)
6-17: ⚠️ Potential issue | 🟡 Minor

Set span status to OK after successful execution, not before.

The span status is set to `OK` on line 9 before `fn()` executes. While this technically works because the catch block overwrites it to `ERROR` on failure, it's non-idiomatic and misleading. The standard pattern is to set `OK` only after successful completion.

Proposed fix

```diff
 export async function traceTask<T>(name: string, fn: () => T, startTime?: number): Promise<T> {
 	return tracer.startActiveSpan(name, { startTime }, async (span) => {
 		try {
-			span.setStatus({ code: SpanStatusCode.OK });
-			return await fn();
+			const result = await fn();
+			span.setStatus({ code: SpanStatusCode.OK });
+			return result;
 		} catch (e) {
 			span.setStatus({ code: SpanStatusCode.ERROR });
 			throw e;
 		} finally {
 			span.end();
 		}
 	});
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/observability/src/tracer.ts` around lines 6 - 17, The span status in traceTask is set to OK before executing the provided function; move the span.setStatus({ code: SpanStatusCode.OK }) so it runs only after fn() completes successfully. In the traceTask function (the tracer.startActiveSpan callback), remove the pre-execution OK set and add span.setStatus({ code: SpanStatusCode.OK }) immediately after the await fn() returns (before returning its value), keeping the existing catch block that sets SpanStatusCode.ERROR and the final span.end() in finally.

packages/logger/src/pinoSetup.ts-28-46 (1)
28-46: ⚠️ Potential issue | 🟡 Minor

`LOG_LEVEL` is ignored in test and development environments.

In test mode the level is hardcoded to `"warn"`, and in development it's hardcoded to `"debug"`, overriding `env.LOG_LEVEL`. If this is intentional, consider documenting it. Otherwise, respect the configured level or use it as a minimum.

Proposed fix to respect LOG_LEVEL as minimum

```diff
 if (isTest) {
 	return pino({
 		...baseOptions,
-		level: "warn",
+		level: baseOptions.level,
 		transport: {
 			target: "pino-pretty",
 			options: { colorize: true },
 		},
 	});
 }
 return pino({
 	...baseOptions,
-	level: "debug",
+	level: baseOptions.level,
 	transport: {
 		target: "pino-pretty",
 		options: { colorize: true },
 	},
 });
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/pinoSetup.ts` around lines 28 - 46, The code currently hardcodes level values inside the two pino(...) calls (when isTest and in dev), ignoring process.env.LOG_LEVEL; update the pinoSetup to compute an effectiveLevel before calling pino by reading process.env.LOG_LEVEL and using it as the configured minimum (or as the desired level if present), then pass that effectiveLevel into the pino(...) calls instead of the hardcoded "warn"/"debug"; modify the branches around isTest and the final return to use that computed effective level (refer to symbols isTest, baseOptions, and the pino(...) invocations) so tests/dev respect LOG_LEVEL or treat it as a minimum.

packages/embedding/src/index.ts-4-4 (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Change the import to remove the `type` keyword so `treaty` is available as a value.

Line 4 imports `treaty` as `import type`, which makes it unavailable in `typeof` expressions. The correct pattern is to import `treaty` as a regular value: `import { treaty } from "@elysiajs/eden"`. Then `ReturnType<typeof treaty<EmbeddingApp>>` will properly resolve to the client type.

Suggested fix

```diff
-import type { treaty } from "@elysiajs/eden";
+import { treaty } from "@elysiajs/eden";
```
Verify each finding against the current code and only fix it if needed. In `@packages/embedding/src/index.ts` at line 4, The import currently uses "import type { treaty }" which makes treaty unavailable as a runtime value and breaks type expressions like ReturnType<typeof treaty<EmbeddingApp>>; change the import to a normal value import (remove the "type" keyword) so treaty is imported as a value (e.g., import { treaty } from "@elysiajs/eden") so ReturnType<typeof treaty<EmbeddingApp>> resolves correctly.

packages/logger/src/appLogger.ts-97-98 (1)
97-98: ⚠️ Potential issue | 🟡 Minor

Potential type mismatch when casting `attrs` to OTel `Attributes`.

`attrs` may contain nested objects or arrays, which are not valid OpenTelemetry `Attributes` values (only primitives and arrays of primitives are allowed). This cast could cause runtime issues when OTel attempts to serialize the attributes.
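A sanitizer of the kind this comment asks for might look like the following. This is a sketch: `sanitizeAttributes` is a hypothetical name, and the choice to JSON-stringify nested objects (rather than drop them) is an assumption.

```typescript
type AttrPrimitive = string | number | boolean;
type AttrValue = AttrPrimitive | AttrPrimitive[];

function isPrimitive(v: unknown): v is AttrPrimitive {
	return typeof v === "string" || typeof v === "number" || typeof v === "boolean";
}

// Walk attrs and keep only OTel-compatible values: primitives pass
// through, array elements are stringified when non-primitive, nested
// objects are JSON-stringified, null/undefined entries are dropped.
function sanitizeAttributes(attrs: Record<string, unknown>): Record<string, AttrValue> {
	const out: Record<string, AttrValue> = {};
	for (const [key, value] of Object.entries(attrs)) {
		if (value === null || value === undefined) continue;
		if (isPrimitive(value)) {
			out[key] = value;
		} else if (Array.isArray(value)) {
			out[key] = value.map((v) => (isPrimitive(v) ? v : JSON.stringify(v)));
		} else {
			out[key] = JSON.stringify(value);
		}
	}
	return out;
}
```

The call site would then become `activeSpan.addEvent(msg, sanitizeAttributes(attrs))`, removing the unchecked cast entirely.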
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/appLogger.ts` around lines 97 - 98, The call to activeSpan.addEvent(msg, attrs as Attributes) can pass nested objects/arrays invalid for OpenTelemetry; replace the unchecked cast by sanitizing/serializing attrs before calling addEvent — implement or invoke a sanitizer (e.g., sanitizeAttributes or normalizeAttributes) that walks the attrs object, converts non-primitive values to strings or flattens arrays to arrays of primitives, drops unsupported types, and returns a safe Attributes map; then call activeSpan.addEvent(msg, sanitizedAttrs as Attributes) (update the code around activeSpan.addEvent in appLogger.ts and ensure the sanitizer is used wherever attrs are forwarded to OTel).

apps/backend/src/errorHandler.ts-106-112 (1)
106-112: ⚠️ Potential issue | 🟡 Minor

Incomplete handling for `ElysiaCustomStatusResponse`.

Only status code `404` is handled. Other custom status responses (e.g., `403`, `500`) will fall through to the `else` block and be logged as unknown 500 errors, which may misrepresent the actual error state.
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/errorHandler.ts` around lines 106 - 112, The handler only treats ElysiaCustomStatusResponse for code 404, causing other custom codes to be misclassified as 500; update the ElysiaCustomStatusResponse branch in errorHandler.ts to switch on error.code and map common status codes (e.g., 401 -> "UNAUTHORIZED", 403 -> "FORBIDDEN", 404 -> "NOT_FOUND", 409 -> "CONFLICT", 500 -> "INTERNAL_SERVER_ERROR") and call setErrorResponse(error.code, { code: <mapped_string>, message: error.message || "error.<lowercase_mapped>" }) or, for unknown codes, call setErrorResponse(error.code, { code: `HTTP_${error.code}`, message: error.message || "error.unknown" }) so the real status and message from ElysiaCustomStatusResponse are preserved instead of falling through to the generic 500 path; reference the ElysiaCustomStatusResponse type, the error.code property, and the setErrorResponse function to locate where to implement this mapping.
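The mapping the prompt describes can be sketched as a plain lookup table plus a fallback. Names here are hypothetical; the real handler would feed the result into its existing `setErrorResponse`.

```typescript
// Hypothetical status-name table covering the common custom codes.
const statusNames: Record<number, string> = {
	401: "UNAUTHORIZED",
	403: "FORBIDDEN",
	404: "NOT_FOUND",
	409: "CONFLICT",
	500: "INTERNAL_SERVER_ERROR",
};

// Map any ElysiaCustomStatusResponse-style code to an error payload,
// falling back to HTTP_<code> for codes not in the table, so no custom
// status is misreported as a generic 500.
function mapCustomStatus(code: number, message?: string): { code: string; message: string } {
	const name = statusNames[code] ?? `HTTP_${code}`;
	return { code: name, message: message || `error.${name.toLowerCase()}` };
}
```

A usage would be `setErrorResponse(error.code, mapCustomStatus(error.code, error.message))`, preserving the real status instead of falling through to the 500 path.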
🧹 Nitpick comments (27)
.opencode/commands/commit.md (1)
7-7: Consider adding a brief explanation of the 50/70 rule.

The 50/70 rule is referenced but not defined. While this may be understood in context, adding a brief explanation would improve clarity and maintainability.

📝 Suggested enhancement

```diff
-The commit message should follow the 50/70 rule and describe both the motivation for the commit and the changes it involves.
+The commit message should follow the 50/70 rule (max 50 characters for the subject line, max 70 characters per line in the body) and describe both the motivation for the commit and the changes it involves.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/commands/commit.md at line 7, Add a short definition of the "50/70 rule" next to the existing reference so readers immediately understand the formatting expectations: state that the commit subject line should be <=50 characters and the commit message body lines should be wrapped at ~70 characters, and provide a one-sentence example of what belongs in the subject (brief intent) versus the body (motivation and detailed changes). Update the guidance that the subject must summarize the change and the body must describe both the motivation and the specific changes, referencing the existing phrasing about motivation and changes so maintainers know where to apply it (the commit message guidance section).

.agents/skills/jujutsu-vcs/SKILL.md (1)
91-95: Lockfile exclusion pattern is too narrow and cwd-dependent.
`~*.lock` only filters cwd-relative `*.lock` files. It won't exclude common lockfiles like `package-lock.json`, `pnpm-lock.yaml`, or `yarn.lock`, and behavior changes by current directory. Use workspace-root patterns instead.

Suggested doc update

```diff
-# We exclude lock files using filesets syntax
-jj diff --git '~*.lock'                  # Show diff of working copy vs parent
-jj diff --git -r x '~*.lock'             # Show diff between x and its parent
+# We exclude common lockfiles using workspace-root filesets syntax
+jj diff --git '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)'       # Show diff of working copy vs parent
+jj diff --git -r x '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)'  # Show diff between x and its parent
 jj diff --git [paths]                    # Show diff for specific paths
-jj diff --git --from x --to y '~*.lock'  # Show diff from x to y
+jj diff --git --from x --to y '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)'  # Show diff from x to y
```
Verify each finding against the current code and only fix it if needed. In @.agents/skills/jujutsu-vcs/SKILL.md around lines 91 - 95, The example uses the cwd-relative pattern "~*.lock" which only excludes lockfiles relative to the current directory and misses common root lockfiles; update each occurrence of "~*.lock" in the examples (the jj diff --git lines) to use workspace-root anchored patterns that explicitly match common lockfiles (e.g., package-lock.json, pnpm-lock.yaml, yarn.lock, etc.) so exclusion is directory-independent and consistent across the repository; ensure all four example lines that call jj diff --git are changed to reference the new workspace-root lockfile pattern.

packages/env/package.json (1)
10-12: Pin `@types/bun` to a specific version.

Using `"latest"` for `@types/bun` can lead to inconsistent builds and unexpected type errors when the types are updated. Consider pinning to a specific version for reproducibility.

♻️ Suggested fix

```diff
 "devDependencies": {
-	"@types/bun": "latest",
+	"@types/bun": "^1.2.13",
 	"@cvsa/typescript-config": "workspace:*"
 },
```
Verify each finding against the current code and only fix it if needed. In `@packages/env/package.json` around lines 10 - 12, The devDependency "@types/bun" is pinned to "latest" in package.json which can cause inconsistent builds; update the package.json devDependencies entry for "@types/bun" to a specific, tested semantic version (e.g., replace "latest" with a concrete version string like "0.x.y") so installs are reproducible and CI/typechecks remain stable—edit the "devDependencies" block where "@types/bun" is declared to set the chosen fixed version.

packages/logger/tsconfig.json (1)
1-4: Minor: Redundant include pattern.

The pattern `"src/**/*.ts"` already matches files in the root `src/` directory, making `"src/*.ts"` redundant. This is purely cosmetic but could be simplified.

♻️ Optional simplification

```diff
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/tsconfig.json` around lines 1 - 4, The tsconfig.json includes a redundant pattern in the "include" array: "src/*.ts" is unnecessary because "src/**/*.ts" already matches those files; update the "include" array in tsconfig.json to remove "src/*.ts" and keep only "src/**/*.ts" to simplify the config.

packages/env/tsconfig.json (1)
1-4: Redundant glob pattern in `include`.

The pattern `src/**/*.ts` already matches all `.ts` files in the `src/` directory and its subdirectories, making `src/*.ts` redundant.

♻️ Suggested simplification

```diff
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/env/tsconfig.json` around lines 1 - 4, The include array in tsconfig.json contains a redundant glob: "src/**/*.ts" already covers top-level files so remove "src/*.ts" from the include list; update the include property to only ["src/**/*.ts"] to simplify and avoid duplication (look for the include array and the two patterns "src/**/*.ts" and "src/*.ts" in the current tsconfig.json).

packages/db/tsconfig.json (1)
3-3: Non-recursive glob pattern may miss nested files.

The pattern `./src/*.ts` only matches files directly in `src/`, not in subdirectories. For consistency with `packages/env/tsconfig.json` (which uses `src/**/*.ts`) and to accommodate future growth, consider using a recursive pattern.

♻️ Suggested recursive pattern

```diff
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["./src/*.ts"]
+	"include": ["./src/**/*.ts"]
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/db/tsconfig.json` at line 3, The tsconfig "include" pattern in packages/db/tsconfig.json currently uses a non-recursive glob ("./src/*.ts") which will miss files in subdirectories; update the "include" value in that tsconfig (the include field) to a recursive glob such as "./src/**/*.ts" so all .ts files under src, including nested folders, are compiled (mirror the pattern used in packages/env/tsconfig.json).

packages/embedding/tsconfig.json (1)
1-4: Verify glob patterns match intended file structure.

The patterns `./src/*.ts` and `./tests/*.ts` only match `.ts` files directly in these directories, not in subdirectories. If the package grows to have nested directories, those files won't be included in compilation.

Consider using `**/*.ts` patterns for consistency with other packages and future-proofing:

♻️ Suggested future-proof patterns

```diff
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["./src/*.ts", "./tests/*.ts"]
+	"include": ["./src/**/*.ts", "./tests/**/*.ts"]
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/embedding/tsconfig.json` around lines 1 - 4, The tsconfig include globs currently use "./src/*.ts" and "./tests/*.ts", which only match top-level files; update the include patterns in tsconfig.json to recursive globs (e.g. "./src/**/*.ts" and "./tests/**/*.ts") so nested TypeScript files are picked up and to keep this package consistent with other packages' patterns.

packages/embedding/tests/e2e.test.ts (1)
13-16: Strengthen assertion to avoid silent pass on undefined.

`data?.embeddings[0]` could be `undefined` if the response is malformed. Bun's `toHaveLength` may throw on undefined, but it's safer to explicitly assert existence first:

♻️ Proposed fix

```diff
 const { data, status } = await api.embeddings.post(payload);
 expect(status).toBe(200);
+expect(data?.embeddings).toBeDefined();
+expect(data?.embeddings).toHaveLength(1);
 expect(data?.embeddings[0]).toHaveLength(256);
```
Verify each finding against the current code and only fix it if needed. In `@packages/embedding/tests/e2e.test.ts` around lines 13 - 16, The test currently dereferences data?.embeddings[0] which can be undefined and may cause a silent or brittle failure; after calling api.embeddings.post(payload) assert the response shape before checking length: explicitly assert that data is defined, that data.embeddings is an array (or defined), and that data.embeddings[0] is defined, then assert its length equals 256 — update the assertions around api.embeddings.post, payload, data and embeddings to perform these presence checks before calling toHaveLength on data.embeddings[0].

packages/observability/tsconfig.json (1)
1-4: Redundant include pattern.

The `src/*.ts` pattern is already covered by `src/**/*.ts` (which matches any depth including the root of `src/`). Consider simplifying:

♻️ Suggested simplification

```diff
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/observability/tsconfig.json` around lines 1 - 4, The include array in tsconfig.json contains a redundant pattern: "src/*.ts" is unnecessary because "src/**/*.ts" already matches files at any depth including the src root; update the include property to only ["src/**/*.ts"] (remove "src/*.ts") so the include patterns are simplified and non-redundant, editing the "include" key in the provided tsconfig.json.

apps/backend/src/startMessage.ts (1)
38-44: Inconsistent environment variable access.

`logStartup` uses `env.NODE_ENV` (line 22) from the imported `@cvsa/env`, but `getBindingInfo` reads `process.env.NODE_ENV` directly. This inconsistency could cause unexpected behavior if `@cvsa/env` applies validation or defaults differently than raw `process.env`.

♻️ Proposed fix for consistency

```diff
 export function getBindingInfo(): [string, number] {
 	const DEFAULT_PORT = 16412;
-	const NODE_ENV = process.env.NODE_ENV || "production";
-	const HOST = process.env.HOST ?? (NODE_ENV === "development" ? "0.0.0.0" : "127.0.0.1");
+	const HOST = process.env.HOST ?? (env.NODE_ENV === "development" ? "0.0.0.0" : "127.0.0.1");
 	const PORT = process.env.PORT ? parseInt(process.env.PORT, 10) : DEFAULT_PORT;
 	return [HOST, PORT];
 }
```
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/startMessage.ts` around lines 38 - 44, getBindingInfo currently reads process.env directly which is inconsistent with logStartup using the imported env from `@cvsa/env`; change getBindingInfo to use the same env object (e.g., env.NODE_ENV, env.HOST, env.PORT) instead of process.env so validation/defaults remain consistent, keep DEFAULT_PORT and parseInt behavior for PORT, and update HOST selection logic to reference env.NODE_ENV when deciding between "0.0.0.0" and "127.0.0.1".

apps/backend/src/handlers/dev/index.ts (1)
5-7: Add a single env guard at `devHandler` level.

Right now each dev route must remember to self-guard. A parent-level gate is safer for future additions.

Suggested refactor

```diff
 import { Elysia } from "elysia";
+import { env } from "@cvsa/env";
 import { getHeapSnapshotHandler } from "./snapshot";
 import { gcHandler } from "./gc";

 export const devHandler = new Elysia({ name: "devHandler" })
+	.onBeforeHandle(({ status }) => {
+		if (env.NODE_ENV !== "development") {
+			throw status(404);
+		}
+	})
 	.use(getHeapSnapshotHandler)
 	.use(gcHandler);
```
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/handlers/dev/index.ts` around lines 5 - 7, Add a parent-level environment guard to the devHandler Elysia instance so dev routes no longer must self-guard: modify the devHandler construction to include a middleware or .guard-like check that inspects the environment (e.g. NODE_ENV or a dedicated ENABLE_DEV_ROUTES flag) and short-circuits requests (403 or next-less response) when not in the allowed environment, while still chaining the existing getHeapSnapshotHandler and gcHandler; locate and change the devHandler creation (the new Elysia({ name: "devHandler" }) instance) to perform this check before invoking getHeapSnapshotHandler and gcHandler.

packages/core/turbo.json (1)
5-5: Use `src/**/*.ts` for standard recursive glob matching.

The pattern `src/**.ts` does match nested paths like `src/catalog/song/service.ts` and will not cause stale caches. However, `src/**/*.ts` is the standard and more readable convention for recursive globbing in Turborepo. Update this pattern (and corresponding patterns on lines 11 and 19) for consistency with best practices.
Verify each finding against the current code and only fix it if needed. In `@packages/core/turbo.json` at line 5, Update the recursive glob patterns in turbo.json by replacing the nonstandard "src/**.ts" occurrences with the conventional "src/**/*.ts" (and likewise change the matching patterns used elsewhere that follow the same form) so Turborepo uses the standard recursive glob; search for the exact string "src/**.ts" in the file and replace it with "src/**/*.ts" (also update the same style patterns mentioned in the comment) to ensure consistent, readable recursive matching.

packages/embedding/package.json (1)
16-21: Use explicit semver versions instead of "latest" in dependencies.

`"elysia": "latest"` and `"bun-types": "latest"` make installs non-deterministic across environments and CI. Pin to explicit semver ranges (e.g., `"^1.24.3"`) for reproducible builds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/embedding/package.json` around lines 16 - 21, Replace the non-deterministic "latest" pins in package.json by specifying explicit semver versions: change the "elysia": "latest" dependency to a concrete semver range (e.g., "^<appropriate-version>") and change the "bun-types": "latest" devDependency to a concrete semver range (e.g., "^<appropriate-version>"). Update the versions to the desired tested releases, run a fresh install to verify compatibility, and commit the updated package.json (refer to the dependency keys "elysia" and "bun-types" to locate the entries).

packages/observability/package.json (2)
16-16: Minor version mismatch with `@opentelemetry/api` in backend package.

This package uses `^1.9.0` while `apps/backend/package.json` uses `^1.9.1`. While semver ranges may resolve this, consider aligning versions across the monorepo to avoid potential conflicts.
Verify each finding against the current code and only fix it if needed. In `@packages/observability/package.json` at line 16, Update the `@opentelemetry/api` dependency so the monorepo uses a consistent minor version: change the entry for "@opentelemetry/api" currently set to "^1.9.0" to match the backend's "^1.9.1" (or pick a single agreed-upon version) in the observability package.json and ensure other package.json files use the same version to avoid conflicts.
11-11: Pin `@types/bun` to a specific version.

Using `"latest"` can cause non-reproducible builds and unexpected breakages when the package is updated. Consider pinning to a specific version.
Verify each finding against the current code and only fix it if needed. In `@packages/observability/package.json` at line 11, The dependency "@types/bun" is pinned to "latest", which causes non-reproducible builds; replace the "latest" value with a specific version number (an exact semver string) that you have tested in CI for compatibility (e.g., "0.x.y"), update the lockfile (npm/shrinkwrap/pnpm-lock) and commit the change so builds are deterministic; look for the "@types/bun" entry in package.json to make this edit.

apps/backend/src/common/i18n.ts (2)
19-19: Top-level await blocks module loading — ensure this is acceptable.

The top-level `await i18nRuntime.load()` blocks the module from being fully loaded until translations are fetched. This is typically fine for server startup, but be aware it will delay application readiness. Consider adding error handling for load failures.

```diff
-await i18nRuntime.load();
+try {
+	await i18nRuntime.load();
+} catch (error) {
+	console.error("Failed to load i18n bundles:", error);
+	throw error;
+}
```
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/common/i18n.ts` at line 19, The file currently uses a top-level await on i18nRuntime.load(), which blocks module evaluation and can delay app readiness and lacks error handling; change this by moving the call out of the top-level to an explicit async initializer (e.g., export an async initI18n or call i18nRuntime.load() inside an async startup routine) or wrap the top-level await in a try/catch to handle load failures and log/propagate errors from i18nRuntime.load() so failures are visible and the app can decide how to proceed.
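The deferred-initialization alternative hinted at in this comment can be sketched without any i18n dependency. This is a generic pattern sketch, not code from the repository: `createOnceInitializer` and `ensureLoaded` are illustrative names, and in the backend the `load` callback would wrap `i18nRuntime.load()`.

```typescript
// A minimal once-only async initializer: callers await ensureLoaded()
// instead of relying on top-level await, and a failed load can be
// retried on a later call instead of caching the rejection forever.
export function createOnceInitializer(load: () => Promise<void>) {
	let pending: Promise<void> | null = null;

	return function ensureLoaded(): Promise<void> {
		if (!pending) {
			pending = load().catch((error) => {
				// Reset so the next call retries the load.
				pending = null;
				throw error;
			});
		}
		return pending;
	};
}
```

A startup hook (or the first request) would then `await ensureLoaded()` before serving translated responses.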
5-11: Verify production bundle path and consider explicit environment handling.

The condition `env.NODE_ENV !== "production"` means the `test` environment uses the development path. This may be intentional, but consider being explicit:

```diff
 const getBundleLocation = () => {
-	if (env.NODE_ENV !== "production") {
+	if (env.NODE_ENV === "development" || env.NODE_ENV === "test") {
 		return path.join(import.meta.dir, "../../../../locale/");
-	} else {
+	} else if (env.NODE_ENV === "production") {
 		return path.join(import.meta.dir, "./locale/");
+	} else {
+		throw new Error(`Unknown NODE_ENV: ${env.NODE_ENV}`);
 	}
 };
```

Also verify that `./locale/` relative to `import.meta.dir` resolves correctly in your production deployment — the locale folder must be copied alongside the built output.
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/common/i18n.ts` around lines 5 - 11, The getBundleLocation function uses env.NODE_ENV !== "production" which treats "test" as dev; update getBundleLocation to explicitly branch on env.NODE_ENV values (e.g., "development", "production", and a fallback for "test"/other) so behavior is deterministic, and ensure the production branch uses the correct path relative to import.meta.dir (import.meta.dir + "./locale/") while documenting or adding a build step that copies the locale folder into the production output; reference getBundleLocation, env.NODE_ENV, and import.meta.dir when making the change.apps/backend/src/handlers/auth/currentUser.ts (1)
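The explicit branching suggested above can also be isolated into a pure function so it is testable without `import.meta` or the filesystem. This is a sketch under assumptions: `resolveBundleDir` is a hypothetical name, path joining is simplified to string concatenation, and the `test` branch and thrown error come from the review's suggestion, not existing behaviour.

```typescript
type NodeEnv = "development" | "test" | "production";

// Pure resolver: given NODE_ENV and the module directory, return the
// bundle directory, failing loudly on an unrecognized environment.
export function resolveBundleDir(nodeEnv: string, moduleDir: string): string {
	switch (nodeEnv as NodeEnv) {
		case "development":
		case "test":
			return `${moduleDir}/../../../../locale/`;
		case "production":
			return `${moduleDir}/locale/`;
		default:
			throw new Error(`Unknown NODE_ENV: ${nodeEnv}`);
	}
}
```

`getBundleLocation` could then delegate to this with `env.NODE_ENV` and `import.meta.dir`.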
13-15: Consider whether tracing a synchronous DTO mapping adds value.

`betterAuthToCurrentUserInfoDto` appears to be a simple synchronous transformation. Wrapping it in `traceTask` with an async callback adds overhead for minimal observability benefit. Consider removing the trace wrapper for trivial synchronous operations, or ensure `traceTask` has a synchronous variant for such cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/handlers/auth/currentUser.ts` around lines 13 - 15, The current code wraps a simple synchronous DTO mapper in an async traceTask: traceTask("getCurrentUser", async () => betterAuthToCurrentUserInfoDto(session.user)), which adds unnecessary overhead; either call betterAuthToCurrentUserInfoDto(session.user) directly inside getCurrentUser (remove the traceTask wrapper) or replace it with a synchronous tracing helper (e.g., traceTaskSync or a new sync overload) and use that so the synchronous transformation remains synchronous; locate the call to betterAuthToCurrentUserInfoDto and update it accordingly in the getCurrentUser handler.apps/backend/src/index.ts (1)
24-28: OTLPTraceExporter relies on environment variables for endpoint configuration.

The exporter is instantiated without explicit configuration, meaning it will use the `OTEL_EXPORTER_OTLP_ENDPOINT` or `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` environment variables. Ensure these are documented and configured in your deployment environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/index.ts` around lines 24 - 28, The OTLPTraceExporter is being constructed with new OTLPTraceExporter() inside the opentelemetry(...) setup (wrapped by BatchSpanProcessor) which means it will default to reading OTEL_EXPORTER_OTLP_ENDPOINT / OTEL_EXPORTER_OTLP_TRACES_ENDPOINT from the environment; update the deployment docs and runtime configuration to ensure those environment variables are set to the correct collector endpoint (or alternatively change the initialization of OTLPTraceExporter in the opentelemetry(...) call to pass an explicit url/headers), and verify that BatchSpanProcessor(new OTLPTraceExporter()) will use those values in your environment.packages/core/src/search/interface.ts (1)
6-14: Consider renaming to `AbstractSearchService` or `SearchServiceBase`.

The `I` prefix conventionally denotes interfaces, but this is an abstract class. Using a name like `AbstractSearchService` or `SearchServiceBase` would better communicate its nature.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/search/interface.ts` around lines 6 - 14, The class name ISearchService is misleading because the I prefix implies an interface; rename the abstract class ISearchService to a clearer name such as AbstractSearchService or SearchServiceBase (pick one consistently) and update all references and imports to that new symbol (e.g., change class declaration "ISearchService<T>" to "AbstractSearchService<T>" and update any subclass extends, type annotations, and exported names that reference ISearchService, including constructor signature references to IRepositoryWithGetDetails, SearchManager, and EmbeddingAppApi); ensure exports and any tests or consumers import the new class name so behavior remains identical.packages/logger/src/pinoSetup.ts (1)
11-11: Consider redacting nested paths for request/response objects.

The current redaction only covers top-level fields. If log entries include nested objects like `req.headers.authorization` or `body.password`, they won't be redacted. Consider using nested path patterns if such structures are logged.

Example nested redaction paths:

```diff
-	const redact = ["authorization", "cookie", "password", "token"];
+	const redact = [
+		"authorization",
+		"cookie",
+		"password",
+		"token",
+		"*.authorization",
+		"*.cookie",
+		"*.password",
+		"*.token",
+		"req.headers.authorization",
+		"req.headers.cookie",
+		"body.password",
+	];
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/pinoSetup.ts` at line 11, The current redact array only lists top-level keys and will miss nested secrets; update the redact constant in pinoSetup.ts (the redact variable) to include nested paths and wildcard patterns (for example add entries like "req.headers.authorization", "req.cookies", "req.cookies.*", "req.body.password", "request.body.password", "res.headers.set-cookie" and/or generic patterns like "**.password" and "**.authorization") so that Pino's redaction will remove sensitive values inside nested request/response objects and arrays.packages/logger/package.json (1)
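For context, the nested-path suggestion above might land in the pino options as a fragment like the following. This is an illustrative configuration sketch, not the project's actual setup: the exact path list and the `censor` string are assumptions, and since pino's redaction (via fast-redact) supports exact dotted paths plus single-level `*` wildcards, each nesting level that can carry secrets gets its own entry rather than a deep `**` wildcard.

```typescript
// Sketch of a pino redact configuration covering top-level and
// nested secret-bearing fields.
const redactPaths = [
	"authorization",
	"cookie",
	"password",
	"token",
	"*.authorization",
	"*.password",
	"*.token",
	"req.headers.authorization",
	"req.headers.cookie",
	"body.password",
];

// Passed as part of the options object given to pino(...).
const pinoOptions = {
	redact: { paths: redactPaths, censor: "[REDACTED]" },
};
```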
10-12: Consider pinning `@types/bun` to a specific version.

Using `"latest"` for `@types/bun` can cause unexpected build failures when breaking changes are released. Pin to a specific version for reproducible builds.

Proposed fix:

```diff
 "devDependencies": {
-	"@types/bun": "latest",
+	"@types/bun": "^1.2.13",
 	"@cvsa/typescript-config": "workspace:*"
 },
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/package.json` around lines 10 - 12, In package.json, replace the floating "latest" entry for the devDependency "@types/bun" with a pinned exact semver (e.g. "x.y.z") in the "devDependencies" object so builds are reproducible; update the lockfile (npm/yarn/pnpm) by running install and commit the updated lockfile and package.json; verify the project builds and tests pass.apps/backend/src/middlewares/requestLogger.ts (1)
7-9: Unused interface definition.

`RequestLoggerOptions` is defined but never used in `createRequestLoggerMiddleware`. If path exclusion is planned for a future iteration, consider removing this interface until it's implemented to avoid dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/middlewares/requestLogger.ts` around lines 7 - 9, Remove the unused RequestLoggerOptions interface to eliminate dead code: delete the RequestLoggerOptions declaration and any unused imports or references, or alternatively wire it into createRequestLoggerMiddleware by accepting an options parameter (e.g., createRequestLoggerMiddleware(options: RequestLoggerOptions)) and implementing excludePaths handling; reference the RequestLoggerOptions symbol and the createRequestLoggerMiddleware function to locate where to remove or integrate the interface.packages/logger/src/appLogger.ts (2)
58-74: Inverted Pino logger call signature.

Pino's standard signature is `logger.info(obj, msg)` (object first, message second), but here you're calling `pinoMethod(msg, mergedAttrs)` (message first). While Pino accepts both patterns, mixing them can cause inconsistent log output formatting.

♻️ Suggested fix to use standard Pino signature:

```diff
-	const pinoMethod = pinoLogger[level].bind(pinoLogger) as (
-		msg: string,
-		...args: unknown[]
-	) => void;
 	if (level === "error" || level === "fatal") {
 		const err = attrs && "error" in attrs ? (attrs.error as Error) : undefined;
 		if (err instanceof Error) {
-			pinoMethod(msg, {
+			pinoLogger[level]({
 				...mergedAttrs,
 				err,
-			});
+			}, msg);
 		} else {
-			pinoMethod(msg, mergedAttrs);
+			pinoLogger[level](mergedAttrs, msg);
 		}
 	} else {
-		pinoMethod(msg, mergedAttrs);
+		pinoLogger[level](mergedAttrs, msg);
 	}
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/appLogger.ts` around lines 58 - 74, The pino calls in appLogger (the pinoMethod invocations inside the logger level branch) use message-first ordering; change them to use Pino's standard object-first signature so the metadata object (mergedAttrs, and mergedAttrs including err when present) is passed as the first argument and msg as the second; update all places where pinoMethod(msg, mergedAttrs) or pinoMethod(msg, { ...mergedAttrs, err }) is used (the pinoMethod bound from pinoLogger[level]) to call pinoMethod(mergedAttrs, msg) and pinoMethod({ ...mergedAttrs, err }, msg) respectively to ensure consistent formatting.
10-17: Duplicate `ApplicationLogger` interface definition.

This file defines `ApplicationLogger` locally (lines 10-17), but `packages/logger/src/types.ts` also exports an `ApplicationLogger` interface (lines 22-29) that includes a `child(initialAttrs)` method. The local interface lacks this method.

Consider removing the local interface and importing from `./types` to ensure consistency, or implement the `child()` method here if required.

♻️ Proposed fix:

```diff
 import type { Logger as PinoLogger } from "pino";
 import { type Logger, SeverityNumber } from "@opentelemetry/api-logs";
 import { trace, SpanStatusCode, type Attributes } from "@opentelemetry/api";
 import { pinoLogger } from "./pinoSetup";
 import { otelLogger } from "@cvsa/observability";
+import type { ApplicationLogger } from "./types";

 type PinoLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
 type LogAttrs = Record<string, unknown> | undefined;
-
-interface ApplicationLogger {
-	trace(msg: string, attrs?: LogAttrs): void;
-	debug(msg: string, attrs?: LogAttrs): void;
-	info(msg: string, attrs?: LogAttrs): void;
-	warn(msg: string, attrs?: LogAttrs): void;
-	error(msg: string, attrs?: LogAttrs): void;
-	fatal(msg: string, attrs?: LogAttrs): void;
-}
```

Also applies to: 22-29
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/appLogger.ts` around lines 10 - 17, Remove the duplicated local ApplicationLogger interface and use the shared definition: import the exported ApplicationLogger from the types module (the shared interface that includes child(initialAttrs)). Replace the local interface declaration with the imported type, or if you prefer to keep a local type, add the child(initialAttrs?: Record<string, any>): ApplicationLogger method to match the shared definition so signatures stay consistent; update any usages of ApplicationLogger accordingly (e.g., places referencing trace/debug/info/warn/error/fatal and child) to rely on the single canonical interface.packages/logger/src/types.ts (1)
3-9: Redundant index signature in `LogAttributes`.

The explicit `[key: string]: unknown` on line 8 is redundant because the interface already `extends Record<string, unknown>`, which provides the same index signature.

♻️ Proposed cleanup:

```diff
 export interface LogAttributes extends Record<string, unknown> {
 	userId?: string;
 	requestId?: string;
 	traceId?: string;
 	spanId?: string;
-	[key: string]: unknown;
 }
```
Verify each finding against the current code and only fix it if needed. In `@packages/logger/src/types.ts` around lines 3 - 9, The LogAttributes interface declares an explicit index signature that duplicates the Record<string, unknown> from the extends clause; remove the redundant `[key: string]: unknown;` entry in the LogAttributes definition so the interface simply extends Record<string, unknown> while keeping the optional properties userId, requestId, traceId, and spanId intact.packages/core/src/catalog/song/service.ts (1)
36-37: Replace `console.error` with structured logging.

The TODO comment on line 36 notes this. Using `console.error` bypasses the observability infrastructure being introduced in this PR. Consider injecting a logger or using `appLogger` for consistency with the rest of the codebase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/catalog/song/service.ts` around lines 36 - 37, Replace the three plain console.error calls in packages/core/src/catalog/song/service.ts with structured logging via the existing observability logger (e.g., appLogger) or an injected Logger instance: update the error sites (inside the functions where console.error(e) appears) to call appLogger.error (or this.logger.error) and include both a contextual message (e.g., "Failed to <operation> for song" with song id/params) and the caught error object so the stack and details are preserved; if those functions (or the module) don't currently have access to appLogger, add it as a dependency (constructor parameter or function arg) or import the shared appLogger used elsewhere to keep logging consistent and remove the TODO comment.
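The logger-injection option suggested above can be sketched without pino or OpenTelemetry. `SongServiceSketch`, the `Logger` interface shape, and the `fetch` callback are illustrative stand-ins for the real service in `packages/core/src/catalog/song/service.ts`, not its actual API.

```typescript
// Minimal structured-logging interface the service depends on.
interface Logger {
	error(msg: string, attrs?: Record<string, unknown>): void;
}

class SongServiceSketch {
	constructor(private readonly logger: Logger) {}

	async getSong(
		id: string,
		fetch: (id: string) => Promise<string>
	): Promise<string | null> {
		try {
			return await fetch(id);
		} catch (error) {
			// Structured context instead of a bare console.error(e).
			this.logger.error("Failed to load song", { songId: id, error });
			return null;
		}
	}
}
```

In tests this makes failures observable: a collecting fake logger can assert that the error path attached the song id.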
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 7dd5d6be-d5cf-4ae3-8f87-b65cfc1eb471
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (107)
- .agents/skills/jujutsu-vcs/SKILL.md
- .opencode/commands/commit.md
- apps/backend/package.json
- apps/backend/src/common/error/getErrorResponse.ts
- apps/backend/src/common/i18n.ts
- apps/backend/src/common/log.ts
- apps/backend/src/common/middlewares/index.ts
- apps/backend/src/common/trace.ts
- apps/backend/src/errorHandler.ts
- apps/backend/src/handlers/auth/currentUser.ts
- apps/backend/src/handlers/auth/index.ts
- apps/backend/src/handlers/auth/login.ts
- apps/backend/src/handlers/auth/logout.ts
- apps/backend/src/handlers/auth/signup.ts
- apps/backend/src/handlers/catalog/song/create.ts
- apps/backend/src/handlers/catalog/song/delete.ts
- apps/backend/src/handlers/catalog/song/details.ts
- apps/backend/src/handlers/catalog/song/index.ts
- apps/backend/src/handlers/catalog/song/search.ts
- apps/backend/src/handlers/catalog/song/update.ts
- apps/backend/src/handlers/dev/gc.ts
- apps/backend/src/handlers/dev/index.ts
- apps/backend/src/handlers/dev/snapshot.ts
- apps/backend/src/handlers/index.ts
- apps/backend/src/index.ts
- apps/backend/src/middlewares/auth.ts
- apps/backend/src/middlewares/i18n.ts
- apps/backend/src/middlewares/index.ts
- apps/backend/src/middlewares/requestLogger.ts
- apps/backend/src/onAfterHandle.ts
- apps/backend/src/startMessage.ts
- apps/backend/tests/auth.test.ts
- apps/backend/tests/song.test.ts
- apps/backend/tsconfig.json
- locale/manifest.json
- locale/modules/backend/manifest.json
- locale/modules/backend/messages/en.json
- locale/modules/backend/messages/zh.json
- packages/core/common/index.ts
- packages/core/common/internal.ts
- packages/core/common/schema/index.ts
- packages/core/common/types/repository.ts
- packages/core/index.ts
- packages/core/modules/catalog/song/service.ts
- packages/core/modules/internal.ts
- packages/core/modules/ml/index.ts
- packages/core/modules/search/search.interface.ts
- packages/core/package.json
- packages/core/src/auth/betterAuth.ts
- packages/core/src/auth/index.ts
- packages/core/src/auth/user/dto.ts
- packages/core/src/auth/user/index.ts
- packages/core/src/catalog/index.ts
- packages/core/src/catalog/song/container.ts
- packages/core/src/catalog/song/dto.ts
- packages/core/src/catalog/song/index.ts
- packages/core/src/catalog/song/repository.interface.ts
- packages/core/src/catalog/song/repository.ts
- packages/core/src/catalog/song/service.ts
- packages/core/src/error/AppError.ts
- packages/core/src/error/index.ts
- packages/core/src/error/schema.ts
- packages/core/src/index.ts
- packages/core/src/internal.ts
- packages/core/src/search/catalog/index.ts
- packages/core/src/search/catalog/song.ts
- packages/core/src/search/config.ts
- packages/core/src/search/container.ts
- packages/core/src/search/index.ts
- packages/core/src/search/interface.ts
- packages/core/src/search/manager.ts
- packages/core/src/types/index.ts
- packages/core/src/types/repository.ts
- packages/core/src/types/service.ts
- packages/core/src/utils/deepEqualUnordered.ts
- packages/core/src/utils/index.ts
- packages/core/src/utils/randomId.ts
- packages/core/tests/integration/SongRepository.test.ts
- packages/core/tests/unit/SongService.test.ts
- packages/core/tsconfig.json
- packages/core/turbo.json
- packages/db/index.ts
- packages/db/package.json
- packages/db/src/index.ts
- packages/db/src/prisma.ts
- packages/db/src/types.ts
- packages/db/tsconfig.json
- packages/embedding/package.json
- packages/embedding/src/embedding.ts
- packages/embedding/src/index.ts
- packages/embedding/tests/e2e.test.ts
- packages/embedding/tsconfig.json
- packages/env/package.json
- packages/env/src/index.ts
- packages/env/tsconfig.json
- packages/logger/package.json
- packages/logger/src/appLogger.ts
- packages/logger/src/index.ts
- packages/logger/src/pinoSetup.ts
- packages/logger/src/types.ts
- packages/logger/tsconfig.json
- packages/observability/package.json
- packages/observability/src/index.ts
- packages/observability/src/init.ts
- packages/observability/src/tracer.ts
- packages/observability/tsconfig.json
- turbo.json
💤 Files with no reviewable changes (12)
- apps/backend/src/common/middlewares/index.ts
- packages/core/modules/ml/index.ts
- packages/core/src/types/index.ts
- packages/core/common/schema/index.ts
- packages/core/index.ts
- packages/core/common/index.ts
- packages/db/index.ts
- packages/core/modules/internal.ts
- packages/core/common/types/repository.ts
- packages/core/common/internal.ts
- packages/core/modules/catalog/song/service.ts
- packages/core/modules/search/search.interface.ts
```ts
export function formatGinLog(data: GinLogData = {}): string {
	const parts: string[] = [];

	if (data.method) {
		const m = data.method.toUpperCase();
		let methodBg = chalk.bgBlue.white.bold;
		if (m === "POST") methodBg = chalk.bgCyan.black.bold;
		else if (m === "PUT") methodBg = chalk.bgYellow.black.bold;
		else if (m === "DELETE") methodBg = chalk.bgRed.white.bold;
		else if (m === "PATCH") methodBg = chalk.bgGreen.white.bold;
		parts.push(methodBg(` ${m.padEnd(6)} `));
	}

	if (data.status) {
		let statusColor = chalk.green;
		if (data.status >= 400 && data.status < 500) statusColor = chalk.yellow;
		else if (data.status >= 500) statusColor = chalk.red;
		parts.push(statusColor(data.status.toString().padStart(3)));
	}

	if (data.latency !== undefined) {
		parts.push(chalk.magenta(data.latency.padStart(10)));
	}

	if (data.ip) {
		parts.push(chalk.gray(data.ip.padStart(15)));
	}

	if (data.path) {
		parts.push(`"${data.path}"`);
	}

	return parts.join(" | ");
```
Sanitize user-controlled fields before formatting the log line.

`path` is written verbatim into the terminal output. A crafted request containing newlines or ANSI escapes can forge or mangle log lines.

Suggested fix:

```diff
 export function formatGinLog(data: GinLogData = {}): string {
+	const sanitize = (value: string) =>
+		value.replace(/[\u0000-\u001f\u007f-\u009f]/g, "");
+
 	const parts: string[] = [];
@@
 	if (data.ip) {
-		parts.push(chalk.gray(data.ip.padStart(15)));
+		parts.push(chalk.gray(sanitize(data.ip).padStart(15)));
 	}
 	if (data.path) {
-		parts.push(`"${data.path}"`);
+		parts.push(JSON.stringify(sanitize(data.path)));
 	}
```
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/common/log.ts` around lines 17 - 49, formatGinLog writes
user-controlled fields (e.g., data.path and data.ip) verbatim to the terminal
allowing newline or ANSI escape injection; sanitize these fields before
formatting by stripping or escaping ANSI control sequences and
removing/normalizing newlines and control chars (for example use a stripAnsi
utility or a regex to remove \x1b... sequences, replace CR/LF with a single
space, and escape quotes) in the formatGinLog function so that parts.push uses
the cleaned values (refer to formatGinLog, data.path, data.ip).
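A standalone sanitizer along the lines of this suggestion can be written and tested without chalk or the handler code. This is a sketch under assumptions: `sanitizeLogField` is a hypothetical helper name, and stripping whole ANSI escape sequences before the control-character pass is an addition beyond the review's single regex (which would leave the printable `[31m` payload behind after removing the ESC byte).

```typescript
// Strips ANSI escape sequences and remaining C0/C1 control characters
// so request-controlled values (path, ip) cannot forge or recolor
// log lines.
export function sanitizeLogField(value: string): string {
	return value
		// Drop full ANSI escape sequences first (ESC '[' params letter).
		.replace(/\u001b\[[0-9;]*[A-Za-z]/g, "")
		// Then drop any remaining C0 and C1 control characters.
		.replace(/[\u0000-\u001f\u007f-\u009f]/g, "");
}
```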
```ts
export async function traceTask<T>(name: string, fn: () => T): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			span.setStatus({ code: SpanStatusCode.OK });
			return await fn();
		} catch (e) {
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});
```
🧩 Analysis chain
🏁 Script executed (repository: project-cvsa/cvsa):

```shell
git ls-files | grep -E "trace\.ts$"
```

🏁 Script executed:

```shell
cat -n apps/backend/src/common/trace.ts
```

🏁 Script executed:

```shell
rg "recordException|setStatus" apps/backend/src --type ts -B 2 -A 2
```

🏁 Script executed:

```shell
rg "traceTask" apps/backend/src --type ts -l
```

🏁 Script executed:

```shell
cat -n apps/backend/src/handlers/auth/login.ts
```

🏁 Script executed:

```shell
rg "traceTask" apps/backend/src/handlers --type ts -B 2 -A 5
```
🌐 Web query:
OpenTelemetry Node.js Span recordException method API
💡 Result:
The OpenTelemetry Node.js `Span.recordException` method is part of the `@opentelemetry/api` package (latest version 1.9.1 as of March 2026). It records an exception as a span event named `"exception"` with automatically extracted attributes like `exception.type`, `exception.message`, and `exception.stacktrace`.

Signature: `recordException(exception: Exception, time?: TimeInput): void`
- `exception`: string or Error object (required)
- `time`: optional timestamp for the event (defaults to current time)

Usage example:

```js
const { trace, SpanStatusCode } = require('@opentelemetry/api');
const span = trace.getTracer('example').startSpan('my-span');
try {
	// code that might throw
} catch (error) {
	span.recordException(error);
	span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
}
span.end();
```

This creates a span event with semantic attributes per the OpenTelemetry specs. Note: the API does not support passing custom attributes directly to `recordException` (proposals have been discussed but not implemented in the stable API; use `span.addEvent` or `setAttributes` separately if needed). Always pair it with `setStatus` for complete error reporting, and ensure the span is active (`isRecording === true`).
Citations:
- 1: https://open-telemetry.github.io/opentelemetry-js/interfaces/_opentelemetry_api._opentelemetry_api.Span.html
- 2: https://github.com/open-telemetry/opentelemetry-js/blob/main/packages/opentelemetry-sdk-trace-base/src/Span.ts
- 3: https://oneuptime.com/blog/post/2026-01-25-exception-recording-opentelemetry/view
- 4: https://github.com/open-telemetry/opentelemetry-js/blob/main/doc/tracing.md
Record the exception and fix the callback typing.
The function currently accepts `async () => ...` callbacks but declares `fn: () => T`, which causes TypeScript to infer `T` as `Promise<U>`, creating a misleading `Promise<Promise<U>>` return type. Additionally, failed spans only receive the ERROR status without recording the actual exception, limiting observability for debugging.
Suggested fix:

```diff
-export async function traceTask<T>(name: string, fn: () => T): Promise<T> {
+export async function traceTask<T>(
+	name: string,
+	fn: () => Promise<T> | T
+): Promise<T> {
 	return tracer.startActiveSpan(name, async (span) => {
 		try {
-			span.setStatus({ code: SpanStatusCode.OK });
-			return await fn();
+			const result = await fn();
+			span.setStatus({ code: SpanStatusCode.OK });
+			return result;
 		} catch (e) {
+			span.recordException(e instanceof Error ? e : new Error(String(e)));
 			span.setStatus({ code: SpanStatusCode.ERROR });
 			throw e;
 		} finally {
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export async function traceTask<T>(
	name: string,
	fn: () => Promise<T> | T
): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			const result = await fn();
			span.setStatus({ code: SpanStatusCode.OK });
			return result;
		} catch (e) {
			span.recordException(e instanceof Error ? e : new Error(String(e)));
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/common/trace.ts` around lines 6 - 17, The traceTask helper's
callback typing and error handling are wrong: change the fn signature from () =>
T to () => Promise<T> (so traceTask<T>(name: string, fn: () => Promise<T>):
Promise<T>) to avoid Promise<Promise<U>> inference, and inside the catch block
call span.recordException(e) before setting span.setStatus({ code:
SpanStatusCode.ERROR }) (and then rethrow) so the span records the actual
exception; locate these changes around traceTask, tracer.startActiveSpan and the
span.setStatus usage.
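The fixed signature and record-then-rethrow error path can be exercised without the OpenTelemetry SDK by injecting a span-like recorder. This is a dependency-free analogue, not the project's `traceTask`: `SpanLike` and `traceTaskSketch` are hypothetical names, and the string statuses stand in for `SpanStatusCode.OK` / `SpanStatusCode.ERROR`.

```typescript
interface SpanLike {
	status?: "OK" | "ERROR";
	exception?: Error;
	ended: boolean;
}

// fn may be sync or async; `await fn()` handles both, so T is inferred
// as the resolved value rather than a nested Promise.
export async function traceTaskSketch<T>(
	span: SpanLike,
	fn: () => Promise<T> | T
): Promise<T> {
	try {
		const result = await fn();
		span.status = "OK";
		return result;
	} catch (e) {
		// Record the actual exception before marking the span failed.
		span.exception = e instanceof Error ? e : new Error(String(e));
		span.status = "ERROR";
		throw e;
	} finally {
		span.ended = true; // mirrors span.end() in the finally block
	}
}
```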
```ts
.onRequest(function setRequestMeta({ store }) {
	store.startTime = {};
	store.error = {};
	const traceId = getTraceId() ?? "";
	if (traceId) {
		store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
	}
```

```ts
	if (AUTH_INVALID_CODES.includes(bodyCode as AuthInvalidCode)) {
		return response.betterAuthInvalidCred(status);
})
```
There was a problem hiding this comment.
Race condition when `traceId` is empty.

When `traceId` is an empty string, all concurrent requests will share the same store entry at key `""`. Lines 48-50 guard `startTime` writes with `if (traceId)`, but lines 60-68 unconditionally write to `store.error[traceId]`. This can cause cross-request data corruption.

🐛 Proposed fix: generate a fallback request ID

```diff
 .onRequest(function setRequestMeta({ store }) {
 	store.startTime = {};
 	store.error = {};
-	const traceId = getTraceId() ?? "";
+	const traceId = getTraceId() ?? crypto.randomUUID();
 	if (traceId) {
 		store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
 	}
 })
```

Or store the generated ID for later use:

```diff
+	.state("requestId", {} as Record<string, string>)
 	.onRequest(function setRequestMeta({ store, request }) {
 		store.startTime = {};
 		store.error = {};
-		const traceId = getTraceId() ?? "";
-		if (traceId) {
-			store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
-		}
+		const traceId = getTraceId() ?? crypto.randomUUID();
+		// Store mapping for this request
+		store.requestId[request] = traceId;
+		store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
 	})
```

Also applies to: 59-68
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/errorHandler.ts` around lines 44 - 51, The current
setRequestMeta handler uses getTraceId() and writes to store.startTime[traceId]
only if traceId is truthy, but later unconditionally writes to
store.error[traceId], causing different requests to share the empty-string key
when getTraceId() returns "". Modify setRequestMeta (and the same pattern around
lines 59-68) to ensure a non-empty per-request id: call getTraceId(), and if it
returns an empty string generate and persist a fallback id (e.g., a short UUID
or timestamp-based id) into the request store so both store.startTime and
store.error always use that same non-empty id; alternatively, guard writes to
store.error with the same truthy check or use the persisted fallback id
consistently across setRequestMeta and later error handling code (referencing
setRequestMeta, store.startTime, store.error, and getTraceId).
```ts
.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
	const traceId = getTraceId() ?? "";
	const { request, locale, data, status, rawError, known } = store.error[traceId];
	const startTime = store.startTime[traceId];
```
Missing null-safety check before destructuring store data.

Line 123 destructures `store.error[traceId]` without verifying the entry exists. If the first `onError` hook didn't populate the store (e.g., due to an unexpected code path), this will throw a runtime error.

🐛 Proposed fix:

```diff
 .onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
 	const traceId = getTraceId() ?? "";
+	const errorData = store.error[traceId];
+	if (!errorData) {
+		// Fallback for unexpected error paths
+		pinoLogger.error("Error handler invoked without error data in store");
+		return;
+	}
-	const { request, locale, data, status, rawError, known } = store.error[traceId];
+	const { request, locale, data, status, rawError, known } = errorData;
```
+ const { request, locale, data, status, rawError, known } = errorData;📝 Committable suggestion
```ts
.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
	const traceId = getTraceId() ?? "";
	const errorData = store.error[traceId];
	if (!errorData) {
		// Fallback for unexpected error paths
		pinoLogger.error("Error handler invoked without error data in store");
		return;
	}
	const { request, locale, data, status, rawError, known } = errorData;
	const startTime = store.startTime[traceId];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/backend/src/errorHandler.ts` around lines 121 - 124, The onError hook
handler logErrorResponse destructures store.error[traceId] without checking it
exists (traceId from getTraceId), which can throw if the entry is missing;
modify logErrorResponse to first retrieve the errorEntry =
store.error?.[traceId] (or guard with if (!traceId || !store?.error ||
!store.error[traceId]) ) and handle the missing case (e.g., set defaults or
return/continue) before destructuring request, locale, data, status, rawError,
known and using startTime from store.startTime[traceId]; ensure you reference
getTraceId(), logErrorResponse, store.error and startTime guards in the change.
```diff
 const vectors = await this.embeddingManager.embeddings.post({
 	texts: [
 		`Name: ${song.name ?? ""}
 Lyrics: ${song.lyrics ?? ""}
 Description: ${getDesc() ?? ""}
 Singers: ${getSingers().join(", ")}
 Artists: ${getArtists().join(", ")}
 `,
-	]);
+	],
```
Use lyric text in the embedding payload.
song.lyrics is an array here, so this template string becomes "[object Object],..." instead of searchable lyrics.
Suggested fix
```diff
-		`Name: ${song.name ?? ""}
-Lyrics: ${song.lyrics ?? ""}
-Description: ${getDesc() ?? ""}
-Singers: ${getSingers().join(", ")}
-Artists: ${getArtists().join(", ")}
-`,
+		`Name: ${song.name ?? ""}
+Lyrics: ${getLyrics() ?? ""}
+Description: ${getDesc() ?? ""}
+Singers: ${getSingers().join(", ")}
+Artists: ${getArtists().join(", ")}
+`,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/search/catalog/song.ts` around lines 54 - 62, The embedding
payload is using song.lyrics (which is an array) directly inside the template
string, producing "[object Object]" instead of real lyric text; update the code
around embeddingManager.embeddings.post (the vectors creation) to convert
song.lyrics into a single string before interpolation — e.g., map the array to
the lyric text field or toString and join with "\n" (or another delimiter) so
the template includes actual lyrics (reference song.lyrics, the vectors
variable, and the embeddingManager.embeddings.post call).
```diff
 expect(songService.getDetails(999)).rejects.toThrow(AppError);
-expect(songService.getDetails(999)).rejects.toThrow("Song not found");
+expect(songService.getDetails(999)).rejects.toThrow("error.song.notfound");
 expect(songService.getDetails(999)).rejects.toMatchObject({
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "SongService.test.ts" | head -20
```

Repository: project-cvsa/cvsa
Length of output: 107

🏁 Script executed:

```shell
cat -n ./packages/core/tests/unit/SongService.test.ts
```

Repository: project-cvsa/cvsa
Length of output: 2684
Add await to the rejection assertion chains to ensure assertions execute properly.
Lines 60–62 call songService.getDetails(999) without awaiting the .rejects chains. In Bun test (and Jest), the .rejects matchers return promises that must be awaited; without awaiting, the test completes before assertions run, making it unreliable.
Fix
```diff
-	expect(songService.getDetails(999)).rejects.toThrow(AppError);
-	expect(songService.getDetails(999)).rejects.toThrow("error.song.notfound");
-	expect(songService.getDetails(999)).rejects.toMatchObject({
+	const result = songService.getDetails(999);
+	await expect(result).rejects.toThrow(AppError);
+	await expect(result).rejects.toThrow("error.song.notfound");
+	await expect(result).rejects.toMatchObject({
 		code: "NOT_FOUND",
 		statusCode: 404,
 	});
```

📝 Committable suggestion
```ts
const result = songService.getDetails(999);
await expect(result).rejects.toThrow(AppError);
await expect(result).rejects.toThrow("error.song.notfound");
await expect(result).rejects.toMatchObject({
	code: "NOT_FOUND",
	statusCode: 404,
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/tests/unit/SongService.test.ts` around lines 60 - 62, The three
rejection assertions for songService.getDetails(999) must be awaited so the
promise matchers run; update the tests to prefix each expect(...).rejects chain
with await (i.e., await
expect(songService.getDetails(999)).rejects.toThrow(AppError), await
expect(...).rejects.toThrow("error.song.notfound"), and await
expect(...).rejects.toMatchObject({...})) so the assertions actually execute.
```ts
const response = await fetch(url);
if (!response.ok) {
	appLogger.error(`Failed to download model: ${response.status} ${response.statusText}`);
	return;
}

if (!response.body) {
	appLogger.error("Response body is null");
	return;
}

const reader = response.body.getReader();
const chunks: Uint8Array[] = [];

while (true) {
	const { done, value } = await reader.read();
	if (done) break;
	downloaded += value.byteLength;
	const percent = size > 0 ? Math.round((downloaded / size) * 100) : 0;
	appLogger.info(`Embedding model download: ${percent}% (${downloaded}/${size} bytes)`);
	chunks.push(value);
}

const buffer = new Uint8Array(size);
let offset = 0;
for (const chunk of chunks) {
	buffer.set(chunk, offset);
	offset += chunk.byteLength;
}

await Bun.write(modelFile, buffer);
appLogger.info("Embedding model downloaded.");
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd packages/embedding/src && cat -n embedding.ts | sed -n '60,140p'
```

Repository: project-cvsa/cvsa
Length of output: 2621
Add timeout to the fetch and validate downloaded bytes before writing.
The fetch at line 92 has no timeout and can hang startup indefinitely if the Hugging Face connection stalls. Additionally, if the stream ends prematurely with fewer than size bytes received, a zero-padded buffer is written to disk. Since exists() checks only file presence, not integrity, this corrupt model passes validation on the next startup. Add an AbortController timeout to the fetch and verify that downloaded === size before writing the model file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/embedding/src/embedding.ts` around lines 92 - 123, The fetch that
downloads the embedding model must be made abortable and the downloaded byte
count must be validated before writing: wrap fetch(url) with an AbortController
and a setTimeout to call controller.abort() after a configurable timeout, pass
controller.signal into fetch, and clear the timeout once the response is
received; during streaming, if aborted ensure the reader is cancelled/closed and
handle the thrown error; after the while loop check that downloaded === size
(and size > 0) and if not, log an error via appLogger, do not call
Bun.write(modelFile, buffer), and return/throw to avoid writing a truncated file
(also consider deleting any partial file if created); update references around
response.body.getReader(), downloaded, size, modelFile, and Bun.write
accordingly.
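The comment above stops at prose, so here is a minimal sketch of the recommended shape. It assumes the standard `fetch`/`AbortController` APIs available in Bun; the helper names (`fetchWithTimeout`, `assembleChunks`) are illustrative, not from the codebase.

```typescript
// Abortable fetch: give up if no response arrives within timeoutMs.
async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
	const controller = new AbortController();
	const timer = setTimeout(() => controller.abort(), timeoutMs);
	try {
		return await fetch(url, { signal: controller.signal });
	} finally {
		clearTimeout(timer);
	}
}

// Pure helper: assemble chunks and verify the byte count before writing.
// Throws instead of silently zero-padding a truncated download.
function assembleChunks(chunks: Uint8Array[], expectedSize: number): Uint8Array {
	const downloaded = chunks.reduce((n, c) => n + c.byteLength, 0);
	if (expectedSize <= 0 || downloaded !== expectedSize) {
		throw new Error(`truncated download: got ${downloaded} of ${expectedSize} bytes`);
	}
	const buffer = new Uint8Array(expectedSize);
	let offset = 0;
	for (const chunk of chunks) {
		buffer.set(chunk, offset);
		offset += chunk.byteLength;
	}
	return buffer;
}
```

The download loop would then call `assembleChunks(chunks, size)` and only reach `Bun.write(modelFile, ...)` when the byte count matches, so a partial stream can never be persisted as a seemingly valid model file.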
```ts
async ({ body, status }) => {
	const { texts } = body;
	const embeddings = await embeddingManager?.getEmbedding(texts);
	if (!embeddings) {
		return status(500, { message: "cannot generate embedding" });
	}
	return status(200, {
		embeddings,
	});
},
```
Preserve the declared 500 response shape when embedding generation throws.
embeddingManager?.getEmbedding(texts) can still reject, and that currently escapes the route instead of returning the { message } payload declared in response.500.
Suggested fix
```diff
 async ({ body, status }) => {
-	const { texts } = body;
-	const embeddings = await embeddingManager?.getEmbedding(texts);
-	if (!embeddings) {
-		return status(500, { message: "cannot generate embedding" });
-	}
-	return status(200, {
-		embeddings,
-	});
+	try {
+		const { texts } = body;
+		const embeddings = await embeddingManager?.getEmbedding(texts);
+		if (!embeddings) {
+			return status(500, { message: "cannot generate embedding" });
+		}
+		return status(200, {
+			embeddings,
+		});
+	} catch {
+		return status(500, { message: "cannot generate embedding" });
+	}
 },
```

📝 Committable suggestion
```ts
async ({ body, status }) => {
	try {
		const { texts } = body;
		const embeddings = await embeddingManager?.getEmbedding(texts);
		if (!embeddings) {
			return status(500, { message: "cannot generate embedding" });
		}
		return status(200, {
			embeddings,
		});
	} catch {
		return status(500, { message: "cannot generate embedding" });
	}
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/embedding/src/index.ts` around lines 9 - 18, The handler currently
awaits embeddingManager?.getEmbedding(texts) without catching rejections, so
thrown errors escape the route; wrap the call in a try/catch around the await to
catch any exceptions from embeddingManager.getEmbedding and on failure return
status(500, { message: "cannot generate embedding" }) (you can also log the
error); target the anonymous async handler and the
embeddingManager?.getEmbedding call and ensure the catch returns the declared
500 payload shape.
```diff
 export const env = createEnv({
 	server: {
-		DATABASE_URL: z.url(),
+		DATABASE_URL: z.url().optional(),
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check how DATABASE_URL is used in the codebase to verify if undefined handling exists
rg -n "DATABASE_URL|env\.DATABASE_URL" --type=ts -C2
```

Repository: project-cvsa/cvsa
Length of output: 836

🏁 Script executed:

```shell
cat -n packages/db/src/prisma.ts
```

Repository: project-cvsa/cvsa
Length of output: 1809

🏁 Script executed:

```shell
cat -n packages/db/prisma.config.ts
```

Repository: project-cvsa/cvsa
Length of output: 459

🏁 Script executed:

```shell
# Check if DATABASE_URL is wrapped in any error handling or conditional logic
rg -n "DATABASE_URL" --type=ts -B5 -A5
```

Repository: project-cvsa/cvsa
Length of output: 1760

🏁 Script executed:

```shell
# Check for .env files and documentation
find . -name ".env*" -o -name "README*" | head -20
```

Repository: project-cvsa/cvsa
Length of output: 419

🏁 Script executed:

```shell
# Look for test setup or configuration that might handle missing DATABASE_URL
rg -n "DATABASE_URL|process.env.DATABASE_URL" --type=ts -l | head -10
```

Repository: project-cvsa/cvsa
Length of output: 141

🏁 Script executed:

```shell
# Check if there's any guard logic around prisma initialization
rg -n "prisma|PrismaClient" --type=ts -B2 -A2 | grep -A2 -B2 "if\|try\|catch"
```

Repository: project-cvsa/cvsa
Length of output: 328

🏁 Script executed:

```shell
cat ./packages/db/.env.example
```

Repository: project-cvsa/cvsa
Length of output: 118

🏁 Script executed:

```shell
cat ./apps/backend/.env.example
```

Repository: project-cvsa/cvsa
Length of output: 188

🏁 Script executed:

```shell
# Check if there's any conditional initialization or error handling at app startup
rg -n "DATABASE_URL|prisma|PrismaClient" apps/backend --type=ts -A3 | head -40
```

Repository: project-cvsa/cvsa
Length of output: 2098
Fix missing error handling for optional DATABASE_URL.
Making DATABASE_URL optional is unsafe. The code passes env.DATABASE_URL directly to PrismaPg constructor (line 6 of packages/db/src/prisma.ts) and Prisma config (line 10 of packages/db/prisma.config.ts) without any null/undefined checks. If DATABASE_URL is missing, Prisma initialization will fail with a runtime error. Either revert DATABASE_URL to required, skip Prisma initialization when the URL is unavailable, or add explicit error handling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/env/src/index.ts` at line 6, DATABASE_URL is marked optional but is
consumed unguarded by Prisma, causing runtime failures; either make DATABASE_URL
required in the zod schema (change DATABASE_URL to a required url) or add
explicit guards where Prisma is initialized (check env.DATABASE_URL before
calling the PrismaPg constructor and before using it in prisma config/prisma
client setup) and throw a clear descriptive error or skip Prisma initialization
when missing; reference the DATABASE_URL schema entry and the PrismaPg
constructor / prisma config initialization points to implement the chosen fix.
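For the "add explicit guards" option the prompt describes, a minimal sketch of the guard follows. `env` and `PrismaPg` refer to the packages discussed above; the helper name `requireEnv` is ours, not part of the codebase.

```typescript
// Fail fast with a descriptive error instead of letting Prisma
// blow up later on an undefined connection string.
function requireEnv(name: string, value: string | undefined): string {
	if (!value) {
		throw new Error(`${name} is not set; refusing to initialize Prisma`);
	}
	return value;
}

// Usage at the Prisma init site (sketch, assuming the adapter takes a
// connection string):
// const adapter = new PrismaPg({
// 	connectionString: requireEnv("DATABASE_URL", env.DATABASE_URL),
// });
```

This keeps `DATABASE_URL` optional in the schema (so non-database consumers of `@cvsa/env` still load) while making the failure mode explicit at the only place that actually needs the URL.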
force-pushed from 8303412 to 5ab461a (Compare)

force-pushed from 5ab461a to a5341e4 (Compare)
@coderabbitai Can you write a commit message body that describes the motivation and the changes involved?

Tip: For best results, initiate chat on the files or code changes.
Observability & Logging Infrastructure

- Extracted logging (@cvsa/logger), observability (@cvsa/observability), and environment configuration (@cvsa/env) into dedicated packages
- Introduced a unified application logger (appLogger) that integrates with both Pino and OpenTelemetry logs, including span context extraction

Backend Error Handling & Tracing

- Reworked errorHandler from a single function into an Elysia middleware plugin with dedicated error and logging hooks
- Added trace ID propagation (X-Trace-ID headers) and locale-aware i18n translation
- Introduced structured error codes (error.unauthorized, error.not-found, error.internal) with English and Chinese translations

Middleware & Request Processing

- Added i18nMiddleware that derives the request locale from the x-locale header or accept-language, supporting Chinese (default) and English
- Added requestLoggerMiddleware with request/response metadata logging, IP extraction, and formatted Gin-style logs
- Moved middlewares into a /middlewares directory with a barrel index

Service Layer & Tracing Integration

- Added traceTask for distributed tracing
- Moved SongService from the core package to a backend-specific implementation with tracing support
- Added path aliases (@/handlers, @/middlewares)

Database Package Reorganization

- Split the database layer into @cvsa/db with explicit logging configuration
- Restructured @cvsa/core exports: removed the common and modules subpaths, consolidated via a new internal.ts entry
- Updated consumers to import from the new package locations (@cvsa/db, @cvsa/env, @cvsa/core/internal)

Environment Configuration

- Added an @cvsa/env package with environment validation using Zod
- Made DATABASE_URL optional with proper validation

Embedding Service

- Added an @cvsa/embedding package with an Elysia-based REST API for text embeddings
- Exposed an /embeddings endpoint with a typed API contract

Development Tools

- Added /gc-result and /heap-snapshot endpoints for memory management and debugging

Package Metadata & Configuration

- Bumped the version from 0.0.1 to 2.1.0
- Updated path aliases (removed @modules/*, added @handlers/*)
- Updated the lint and test tasks and added a new typecheck task
- Added NODE_ENV variables to package dev/test scripts

Locale & Internationalization

- Added locale resources (locale/manifest.json, locale/modules/backend/)
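The i18nMiddleware behavior summarized above (x-locale wins, then accept-language, with Chinese as the default) can be sketched as a plain function. The name `resolveLocale` and the header-map signature are illustrative, not the actual middleware API:

```typescript
type Locale = "zh" | "en";

// Resolve the request locale: an explicit x-locale header wins, otherwise
// the first supported language in accept-language, otherwise Chinese.
function resolveLocale(headers: Record<string, string | undefined>): Locale {
	const explicit = headers["x-locale"]?.toLowerCase();
	if (explicit === "en" || explicit === "zh") return explicit;
	const accept = headers["accept-language"] ?? "";
	// accept-language entries look like "en-US,en;q=0.9,zh;q=0.8"
	for (const part of accept.split(",")) {
		const tag = part.split(";")[0].trim().toLowerCase();
		if (tag === "en" || tag.startsWith("en-")) return "en";
		if (tag === "zh" || tag.startsWith("zh-")) return "zh";
	}
	return "zh";
}
```

In Elysia this would live in a `.derive()` hook so downstream handlers (and the error handler, via the trace-ID-keyed store described in the PR body) can read the locale from context.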