
refactor: observability, log & i18n support#39

Merged
alikia2x merged 8 commits into develop from ref/observability
Apr 4, 2026

Conversation


@alikia2x alikia2x commented Apr 4, 2026

Observability & Logging Infrastructure

  • Added new packages for logging (@cvsa/logger), observability (@cvsa/observability), and environment configuration (@cvsa/env)
  • Introduced OpenTelemetry integration with tracer instances and trace task wrappers for async operations
  • Implemented structured logging with Pino logger supporting multiple log levels and pretty-printing in non-production environments
  • Added application-level logger (appLogger) that integrates with both Pino and OpenTelemetry logs, including span context extraction
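
The span-context extraction described above can be sketched without pulling in the real packages. This is a minimal, dependency-free illustration of merging the active span's IDs into log attributes; `SpanContext` and `withSpanContext` are stand-ins for the OpenTelemetry API used by the actual appLogger, not its real names.

```typescript
// Stand-in for the OpenTelemetry SpanContext used by the real logger.
interface SpanContext {
	traceId: string;
	spanId: string;
}

type LogAttrs = Record<string, string | number | boolean>;

// Merge the active span's IDs into log attributes so log lines can be
// correlated with traces; pass attributes through unchanged otherwise.
function withSpanContext(attrs: LogAttrs, ctx: SpanContext | undefined): LogAttrs {
	if (!ctx) return attrs;
	return { ...attrs, trace_id: ctx.traceId, span_id: ctx.spanId };
}
```

In the real appLogger the span context would come from the OTel API's active span rather than being passed in explicitly.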

Backend Error Handling & Tracing

  • Refactored errorHandler from a single function to an Elysia middleware plugin with dedicated error and logging hooks
  • Enhanced error responses with trace IDs (X-Trace-ID headers) and locale-aware i18n translation
  • Added error message i18n keys (e.g., error.unauthorized, error.not-found, error.internal) with English and Chinese translations
  • Updated error context to track request metadata (path, latency, user info) and conditional logging based on error classification
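
The error-to-response mapping can be illustrated with a small sketch. It reuses the i18n keys named above (`error.unauthorized`, `error.not-found`, `error.internal`); the `ErrorKind` union and `classifyError` function are hypothetical names, not the backend's actual API.

```typescript
// Illustrative error classification: map a classified error to an HTTP
// status and the i18n key the handler would translate per-locale.
type ErrorKind = "UNAUTHORIZED" | "NOT_FOUND" | "INTERNAL";

interface ErrorResponse {
	status: number;
	i18nKey: string;
}

function classifyError(kind: ErrorKind): ErrorResponse {
	switch (kind) {
		case "UNAUTHORIZED":
			return { status: 401, i18nKey: "error.unauthorized" };
		case "NOT_FOUND":
			return { status: 404, i18nKey: "error.not-found" };
		default:
			// Anything unclassified is reported as an internal error.
			return { status: 500, i18nKey: "error.internal" };
	}
}
```

The real middleware additionally attaches the trace ID as an `X-Trace-ID` response header and logs request metadata alongside the classification.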

Middleware & Request Processing

  • Added i18nMiddleware that derives request locale from x-locale header or accept-language, supporting Chinese (default) and English
  • Implemented requestLoggerMiddleware with request/response metadata logging, IP extraction, and formatted Gin-style logs
  • Updated auth middleware to use i18n error messages instead of hardcoded strings
  • Reorganized middleware structure with new /middlewares directory and barrel index
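
The locale derivation in i18nMiddleware can be sketched as a pure function: prefer an explicit `x-locale` header, then the first supported tag in `Accept-Language`, falling back to Chinese. The header names follow the description above; the function itself is illustrative.

```typescript
type Locale = "zh" | "en";

// Negotiate the request locale: explicit x-locale header first, then
// Accept-Language, defaulting to Chinese per the middleware description.
function resolveLocale(headers: Record<string, string | undefined>): Locale {
	const explicit = headers["x-locale"]?.toLowerCase();
	if (explicit === "en" || explicit === "zh") return explicit;

	// Accept-Language is a comma-separated, optionally q-weighted list,
	// e.g. "en-US,en;q=0.9,zh;q=0.8". Take the first supported base tag.
	const accept = headers["accept-language"] ?? "";
	for (const part of accept.split(",")) {
		const tag = (part.split(";")[0] ?? "").trim().toLowerCase();
		if (tag.startsWith("en")) return "en";
		if (tag.startsWith("zh")) return "zh";
	}
	return "zh";
}
```

A fuller implementation would respect q-weights rather than first-match order; this sketch keeps only the precedence described in the bullet.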

Service Layer & Tracing Integration

  • Wrapped all service operations (song create/update/delete/search, auth sign-in/up/out, user retrieval) in traceTask for distributed tracing
  • Moved SongService from core package to backend-specific implementation with tracing support
  • Updated handlers to import from reorganized module paths (@/handlers, @/middlewares)

Database Package Reorganization

  • Extracted Prisma configuration from core package to @cvsa/db with explicit logging configuration
  • Reorganized @cvsa/core exports: removed common and modules subpaths, consolidated via new internal.ts entry
  • Updated all imports across backend and tests to use new package boundaries (@cvsa/db, @cvsa/env, @cvsa/core/internal)

Environment Configuration

  • Created @cvsa/env package with environment validation using Zod
  • Added OTEL service name/version and log level environment variables
  • Made DATABASE_URL optional with proper validation
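
The real `@cvsa/env` package validates with Zod; the dependency-free sketch below shows the same shape of validation, with `DATABASE_URL` optional but checked when present and `LOG_LEVEL` constrained to known levels. The specific checks (the `postgres` URL prefix, the level list, the `"backend"` default) are assumptions for illustration, not the package's actual schema.

```typescript
// Illustrative equivalent of a Zod env schema, written by hand.
interface Env {
	DATABASE_URL?: string;
	LOG_LEVEL: "debug" | "info" | "warn" | "error";
	OTEL_SERVICE_NAME: string;
}

function parseEnv(raw: Record<string, string | undefined>): Env {
	const level = raw.LOG_LEVEL ?? "info";
	if (!["debug", "info", "warn", "error"].includes(level)) {
		throw new Error(`invalid LOG_LEVEL: ${level}`);
	}
	// DATABASE_URL is optional, but validated when provided (the prefix
	// check here is an assumed placeholder).
	if (raw.DATABASE_URL !== undefined && !raw.DATABASE_URL.startsWith("postgres")) {
		throw new Error("DATABASE_URL must be a postgres connection string");
	}
	return {
		DATABASE_URL: raw.DATABASE_URL,
		LOG_LEVEL: level as Env["LOG_LEVEL"],
		OTEL_SERVICE_NAME: raw.OTEL_SERVICE_NAME ?? "backend",
	};
}
```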

Embedding Service

  • Added new @cvsa/embedding package with Elysia-based REST API for text embeddings
  • Implemented configurable ONNX model support with Hugging Face integration
  • Exposed embedding functionality via POST /embeddings endpoint with typed API contract
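
A consumer of the `POST /embeddings` endpoint can defend against malformed responses with a type guard over the implied response shape (an array of numeric vectors). The `embeddings` field name follows the e2e test in this PR; the guard itself is a sketch, not the package's actual typed contract.

```typescript
interface EmbeddingResponse {
	embeddings: number[][];
}

// Runtime check that an unknown value matches the expected response
// shape: an object whose `embeddings` is an array of numeric arrays.
function isEmbeddingResponse(value: unknown): value is EmbeddingResponse {
	if (typeof value !== "object" || value === null) return false;
	const v = value as { embeddings?: unknown };
	return (
		Array.isArray(v.embeddings) &&
		v.embeddings.every(
			(row) => Array.isArray(row) && row.every((n) => typeof n === "number")
		)
	);
}
```

With the Eden treaty client the response is already typed at compile time; a runtime guard like this only matters at trust boundaries such as tests or proxies.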

Development Tools

  • Added development handler with /gc-result and /heap-snapshot endpoints for memory management and debugging
  • These endpoints are restricted to the development environment only
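N/A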

Package Metadata & Configuration

  • Updated backend package version from 0.0.1 to 2.1.0
  • Reorganized TypeScript path aliases (removed @modules/*, added @handlers/*)
  • Updated Turbo configuration with explicit lint, test, and new typecheck tasks
  • Added NODE_ENV variables to package dev/test scripts

Locale & Internationalization

  • Created locale manifest structure (locale/manifest.json, locale/modules/backend/)
  • Added English and Chinese message translations for backend error scenarios
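
Message lookup against such a locale structure can be sketched in a few lines. The keys match the error keys listed earlier in this PR; the translations here are illustrative placeholders, not the repository's actual locale files.

```typescript
// Locale-keyed message tables with an English-then-raw-key fallback.
const messages: Record<"en" | "zh", Record<string, string>> = {
	en: {
		"error.not-found": "Resource not found",
		"error.unauthorized": "Unauthorized",
	},
	zh: {
		"error.not-found": "资源不存在",
		"error.unauthorized": "未授权",
	},
};

function t(key: string, locale: "en" | "zh"): string {
	// Fall back to English, then to the raw key, when a translation
	// is missing.
	return messages[locale][key] ?? messages.en[key] ?? key;
}
```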

alikia2x added 7 commits April 4, 2026 22:41
Elysia's .derive can't pass context to onError,
so we have to use a global map (.state) with traceID
to pass context used by onError for logging.

Extract env, logger, and observability into separate packages:
@cvsa/env, @cvsa/logger and @cvsa/observability

Restructure packages/core from common/modules layout to src/ layout
with clear public API boundaries

Update all consumers to import from new package locations

coderabbitai Bot commented Apr 4, 2026

Warning

Rate limit exceeded

@alikia2x has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 4 minutes and 24 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 4 minutes and 24 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 4b6f66c4-66c7-419c-9656-5ceea3f57933

📥 Commits

Reviewing files that changed from the base of the PR and between 87a448f and a5341e4.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • packages/core/package.json
  • packages/core/src/error/schema.ts
  • packages/logger/src/pinoSetup.ts


@alikia2x alikia2x changed the base branch from main to develop April 4, 2026 17:03
@project-cvsa project-cvsa deleted a comment from chatgpt-codex-connector Bot Apr 4, 2026
@alikia2x alikia2x changed the title refactor: add observability & log support refactor: observability, log & i18n support Apr 4, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 16

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/search/catalog/song.ts (1)

54-83: ⚠️ Potential issue | 🟠 Major

Handle the embedding API's failure path explicitly.

Both indexing and query search treat data?.embeddings[0] as optional, so a failed embedding request becomes silent partial indexing or a hybrid search with an undefined vector. This should either fail fast or fall back to pure lexical search deliberately.

Also applies to: 123-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/search/catalog/song.ts` around lines 54 - 83, The embedding
API call in embeddingManager.embeddings.post is treated as optional and silent
on failure; update the code that builds the catalog entry (the return object
with id/.../_vectors) to explicitly handle the failure path: detect when
vectors.data?.embeddings[0] is missing and either (a) throw an error to fail
fast during indexing or (b) record a clear lexical-only marker (e.g., set a
fallback flag or metadata like vectorsMissing: true and ensure downstream search
logic uses pure lexical search), and add logging around the embedding request so
failures are visible; ensure the same explicit handling is applied to the other
occurrence referenced (lines 123-127) so hybrid search never receives undefined
vectors.
🟡 Minor comments (9)
apps/backend/src/handlers/catalog/song/update.ts-20-22 (1)

20-22: ⚠️ Potential issue | 🟡 Minor

Add 422 status code to response schema for param validation errors.

When Zod validates the id parameter, invalid values trigger a VALIDATION error in the global error handler, which returns 422 (not 400). The response schema should document this status code to accurately reflect possible responses to clients.

Suggested response contract adjustment
 			response: {
+				422: ErrorResponseSchema,
 				200: SongSchema,

Also applies to lines 28-33.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/handlers/catalog/song/update.ts` around lines 20 - 22, The
route's parameter Zod schema (params: z.object({ id:
z.coerce.number().int().positive() })) can trigger the global VALIDATION error
which returns HTTP 422, so update the handler's response contract to include a
422 response for param validation errors; locate the response schema defined
alongside the update handler (and the other similar handler referenced around
the params block at lines 28-33) and add a 422 entry with an appropriate error
shape/description to document validation failures.
packages/observability/src/init.ts-6-18 (1)

6-18: ⚠️ Potential issue | 🟡 Minor

initOtelSdk() is currently unused dead code, but fix before integration.

This function is exported but never called in the codebase—the backend uses @elysiajs/opentelemetry middleware instead. However, if this function is integrated in the future, the stated issues must be addressed: the SDK instance is discarded without being returned or stored for shutdown, there are no SIGTERM/SIGINT handlers, and sdk.start() is not awaited despite the function being marked async.

Either remove this unused export or refactor it following the suggested pattern (return the instance, register process shutdown handlers, and await sdk.start()) before integrating it into the application.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/observability/src/init.ts` around lines 6 - 18, The exported
initOtelSdk() currently discards the NodeSDK instance, doesn't await startup,
and lacks shutdown handlers; either remove the unused export or refactor
initOtelSdk to create and await sdk.start(), return the NodeSDK instance (sdk)
so callers can manage lifecycle, and register SIGINT/SIGTERM handlers that call
sdk.shutdown() (and await it) to ensure proper shutdown; update references to
use the returned sdk when integrating or delete the export if not needed.
packages/db/src/prisma.ts-50-58 (1)

50-58: ⚠️ Potential issue | 🟡 Minor

Query events are configured but no handler is attached.

The query log level is set to emit: "event", but without a corresponding prisma.$on('query', ...) handler, query logs will be silently dropped. Either attach an event handler or change to emit: "stdout" if you want queries logged.

 export const prisma = new PrismaClient({
 	adapter,
 	log: [
-		{ level: "query", emit: "event" },
+		{ level: "query", emit: "stdout" },
 		{ level: "info", emit: "stdout" },
 		{ level: "warn", emit: "stdout" },
 		{ level: "error", emit: "stdout" },
 	],
 });

Alternatively, add an event handler:

prisma.$on('query', (e) => {
  console.log(`Query: ${e.query}`);
  console.log(`Duration: ${e.duration}ms`);
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/prisma.ts` around lines 50 - 58, The PrismaClient is
configured to emit query logs as events but no handler is registered, so query
events are dropped; either change the log config for queries to use emit:
"stdout" or register a query event handler by calling prisma.$on('query', ...)
(referencing the prisma constant and PrismaClient instantiation) and log
e.query/e.duration as needed to surface query text and duration; implement one
of these fixes next to where prisma is exported so queries are actually logged.
apps/backend/src/index.ts-24-28 (1)

24-28: ⚠️ Potential issue | 🟡 Minor

Add graceful shutdown handling for the batch span processor to avoid losing spans on process termination.

Currently, BatchSpanProcessor buffers spans in memory before export. Without shutdown handling, accumulated spans will be lost when the server terminates abruptly. To fix this, store the span processor as a variable and call its shutdown() method during graceful shutdown:

Example implementation
const spanProcessor = new BatchSpanProcessor(new OTLPTraceExporter());

const app = new Elysia({
  serve: { hostname: host },
  prefix: "/v2",
})
  .use(opentelemetry({ spanProcessors: [spanProcessor] }))
  // ... rest of middleware
  .listen(16412);

process.on('SIGTERM', async () => {
  await spanProcessor.shutdown();
  process.exit(0);
});

Alternatively, if using Elysia's lifecycle hooks, attach the shutdown to the app stop event via @opentelemetry/sdk-node or a similar OTEL SDK instance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/index.ts` around lines 24 - 28, The BatchSpanProcessor used
in the opentelemetry middleware should be created and stored in a variable
(e.g., spanProcessor = new BatchSpanProcessor(new OTLPTraceExporter())) instead
of an inline array so you can call spanProcessor.shutdown() during graceful
termination; update the opentelemetry usage to pass spanProcessors:
[spanProcessor] and add a shutdown handler that awaits spanProcessor.shutdown()
on process signals (SIGINT/SIGTERM) or via the app stop lifecycle hook to flush
buffered spans before exiting.
packages/observability/src/tracer.ts-6-17 (1)

6-17: ⚠️ Potential issue | 🟡 Minor

Set span status to OK after successful execution, not before.

The span status is set to OK on line 9 before fn() executes. While this technically works because the catch block overwrites it to ERROR on failure, it's non-idiomatic and misleading. The standard pattern is to set OK only after successful completion.

Proposed fix
 export async function traceTask<T>(name: string, fn: () => T, startTime?: number): Promise<T> {
 	return tracer.startActiveSpan(name, { startTime }, async (span) => {
 		try {
-			span.setStatus({ code: SpanStatusCode.OK });
-			return await fn();
+			const result = await fn();
+			span.setStatus({ code: SpanStatusCode.OK });
+			return result;
 		} catch (e) {
 			span.setStatus({ code: SpanStatusCode.ERROR });
 			throw e;
 		} finally {
 			span.end();
 		}
 	});
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/observability/src/tracer.ts` around lines 6 - 17, The span status in
traceTask is set to OK before executing the provided function; move the
span.setStatus({ code: SpanStatusCode.OK }) so it runs only after fn() completes
successfully. In the traceTask function (the tracer.startActiveSpan callback),
remove the pre-execution OK set and add span.setStatus({ code: SpanStatusCode.OK
}) immediately after the await fn() returns (before returning its value),
keeping the existing catch block that sets SpanStatusCode.ERROR and the final
span.end() in finally.
packages/logger/src/pinoSetup.ts-28-46 (1)

28-46: ⚠️ Potential issue | 🟡 Minor

LOG_LEVEL is ignored in test and development environments.

In test mode the level is hardcoded to "warn", and in development it's hardcoded to "debug", overriding env.LOG_LEVEL. If this is intentional, consider documenting it. Otherwise, respect the configured level or use it as a minimum.

Proposed fix to respect LOG_LEVEL as minimum
 	if (isTest) {
 		return pino({
 			...baseOptions,
-			level: "warn",
+			level: baseOptions.level,
 			transport: {
 				target: "pino-pretty",
 				options: { colorize: true },
 			},
 		});
 	}

 	return pino({
 		...baseOptions,
-		level: "debug",
+		level: baseOptions.level,
 		transport: {
 			target: "pino-pretty",
 			options: { colorize: true },
 		},
 	});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/pinoSetup.ts` around lines 28 - 46, The code currently
hardcodes level values inside the two pino(...) calls (when isTest and in dev),
ignoring process.env.LOG_LEVEL; update the pinoSetup to compute an
effectiveLevel before calling pino by reading process.env.LOG_LEVEL and using it
as the configured minimum (or as the desired level if present), then pass that
effectiveLevel into the pino(...) calls instead of the hardcoded "warn"/"debug";
modify the branches around isTest and the final return to use that computed
effective level (refer to symbols isTest, baseOptions, and the pino(...)
invocations) so tests/dev respect LOG_LEVEL or treat it as a minimum.
packages/embedding/src/index.ts-4-4 (1)

4-4: ⚠️ Potential issue | 🟡 Minor

Change the import to remove type keyword so treaty is available as a value.

Line 4 imports treaty as import type, which makes it unavailable in typeof expressions. The correct pattern is to import treaty as a regular value: import { treaty } from "@elysiajs/eden". Then ReturnType<typeof treaty<EmbeddingApp>> will properly resolve to the client type.

Suggested fix
-import type { treaty } from "@elysiajs/eden";
+import { treaty } from "@elysiajs/eden";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/src/index.ts` at line 4, The import currently uses "import
type { treaty }" which makes treaty unavailable as a runtime value and breaks
type expressions like ReturnType<typeof treaty<EmbeddingApp>>; change the import
to a normal value import (remove the "type" keyword) so treaty is imported as a
value (e.g., import { treaty } from "@elysiajs/eden") so ReturnType<typeof
treaty<EmbeddingApp>> resolves correctly.
packages/logger/src/appLogger.ts-97-98 (1)

97-98: ⚠️ Potential issue | 🟡 Minor

Potential type mismatch when casting attrs to OTel Attributes.

attrs may contain nested objects or arrays, which are not valid OpenTelemetry Attributes values (only primitives and arrays of primitives are allowed). This cast could cause runtime issues when OTel attempts to serialize the attributes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/appLogger.ts` around lines 97 - 98, The call to
activeSpan.addEvent(msg, attrs as Attributes) can pass nested objects/arrays
invalid for OpenTelemetry; replace the unchecked cast by sanitizing/serializing
attrs before calling addEvent — implement or invoke a sanitizer (e.g.,
sanitizeAttributes or normalizeAttributes) that walks the attrs object, converts
non-primitive values to strings or flattens arrays to arrays of primitives,
drops unsupported types, and returns a safe Attributes map; then call
activeSpan.addEvent(msg, sanitizedAttrs as Attributes) (update the code around
activeSpan.addEvent in appLogger.ts and ensure the sanitizer is used wherever
attrs are forwarded to OTel).
apps/backend/src/errorHandler.ts-106-112 (1)

106-112: ⚠️ Potential issue | 🟡 Minor

Incomplete handling for ElysiaCustomStatusResponse.

Only status code 404 is handled. Other custom status responses (e.g., 403, 500) will fall through to the else block and be logged as unknown 500 errors, which may misrepresent the actual error state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/errorHandler.ts` around lines 106 - 112, The handler only
treats ElysiaCustomStatusResponse for code 404, causing other custom codes to be
misclassified as 500; update the ElysiaCustomStatusResponse branch in
errorHandler.ts to switch on error.code and map common status codes (e.g., 401
-> "UNAUTHORIZED", 403 -> "FORBIDDEN", 404 -> "NOT_FOUND", 409 -> "CONFLICT",
500 -> "INTERNAL_SERVER_ERROR") and call setErrorResponse(error.code, { code:
<mapped_string>, message: error.message || "error.<lowercase_mapped>" }) or, for
unknown codes, call setErrorResponse(error.code, { code: `HTTP_${error.code}`,
message: error.message || "error.unknown" }) so the real status and message from
ElysiaCustomStatusResponse are preserved instead of falling through to the
generic 500 path; reference the ElysiaCustomStatusResponse type, the error.code
property, and the setErrorResponse function to locate where to implement this
mapping.
🧹 Nitpick comments (27)
.opencode/commands/commit.md (1)

7-7: Consider adding a brief explanation of the 50/70 rule.

The 50/70 rule is referenced but not defined. While this may be understood in context, adding a brief explanation would improve clarity and maintainability.

📝 Suggested enhancement
-The commit message should follow the 50/70 rule and describe both the motivation for the commit and the changes it involves.
+The commit message should follow the 50/70 rule (max 50 characters for the subject line, max 70 characters per line in the body) and describe both the motivation for the commit and the changes it involves.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.opencode/commands/commit.md at line 7, Add a short definition of the "50/70
rule" next to the existing reference so readers immediately understand the
formatting expectations: state that the commit subject line should be <=50
characters and the commit message body lines should be wrapped at ~70
characters, and provide a one-sentence example of what belongs in the subject
(brief intent) versus the body (motivation and detailed changes). Update the
guidance that the subject must summarize the change and the body must describe
both the motivation and the specific changes, referencing the existing phrasing
about motivation and changes so maintainers know where to apply it (the commit
message guidance section).
.agents/skills/jujutsu-vcs/SKILL.md (1)

91-95: Lockfile exclusion pattern is too narrow and cwd-dependent.

~*.lock only filters cwd-relative *.lock files. It won't exclude common lockfiles like package-lock.json, pnpm-lock.yaml, or yarn.lock, and behavior changes by current directory. Use workspace-root patterns instead.

Suggested doc update
-# We exclude lock files using filesets syntax
-jj diff --git '~*.lock' # Show diff of working copy vs parent
-jj diff --git -r x '~*.lock' # Show diff between x and its parent
+# We exclude common lockfiles using workspace-root filesets syntax
+jj diff --git '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)' # Show diff of working copy vs parent
+jj diff --git -r x '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)' # Show diff between x and its parent
 jj diff --git [paths] # Show diff for specific paths
-jj diff --git --from x --to y '~*.lock' # Show diff from x to y
+jj diff --git --from x --to y '~(/*.lock | /package-lock.json | /pnpm-lock.yaml | /yarn.lock)' # Show diff from x to y
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/skills/jujutsu-vcs/SKILL.md around lines 91 - 95, The example uses
the cwd-relative pattern "~*.lock" which only excludes lockfiles relative to the
current directory and misses common root lockfiles; update each occurrence of
"~*.lock" in the examples (the jj diff --git lines) to use workspace-root
anchored patterns that explicitly match common lockfiles (e.g.,
package-lock.json, pnpm-lock.yaml, yarn.lock, etc.) so exclusion is
directory-independent and consistent across the repository; ensure all four
example lines that call jj diff --git are changed to reference the new
workspace-root lockfile pattern.
packages/env/package.json (1)

10-12: Pin @types/bun to a specific version.

Using "latest" for @types/bun can lead to inconsistent builds and unexpected type errors when the types are updated. Consider pinning to a specific version for reproducibility.

♻️ Suggested fix
 	"devDependencies": {
-		"@types/bun": "latest",
+		"@types/bun": "^1.2.13",
 		"@cvsa/typescript-config": "workspace:*"
 	},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/env/package.json` around lines 10 - 12, The devDependency
"@types/bun" is pinned to "latest" in package.json which can cause inconsistent
builds; update the package.json devDependencies entry for "@types/bun" to a
specific, tested semantic version (e.g., replace "latest" with a concrete
version string like "0.x.y") so installs are reproducible and CI/typechecks
remain stable—edit the "devDependencies" block where "@types/bun" is declared to
set the chosen fixed version.
packages/logger/tsconfig.json (1)

1-4: Minor: Redundant include pattern.

The pattern "src/**/*.ts" already matches files in the root src/ directory, making "src/*.ts" redundant. This is purely cosmetic but could be simplified.

♻️ Optional simplification
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/tsconfig.json` around lines 1 - 4, The tsconfig.json includes
a redundant pattern in the "include" array: "src/*.ts" is unnecessary because
"src/**/*.ts" already matches those files; update the "include" array in
tsconfig.json to remove "src/*.ts" and keep only "src/**/*.ts" to simplify the
config.
packages/env/tsconfig.json (1)

1-4: Redundant glob pattern in include.

The pattern src/**/*.ts already matches all .ts files in the src/ directory and its subdirectories, making src/*.ts redundant.

♻️ Suggested simplification
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/env/tsconfig.json` around lines 1 - 4, The include array in
tsconfig.json contains a redundant glob: "src/**/*.ts" already covers top-level
files so remove "src/*.ts" from the include list; update the include property to
only ["src/**/*.ts"] to simplify and avoid duplication (look for the include
array and the two patterns "src/**/*.ts" and "src/*.ts" in the current
tsconfig.json).
packages/db/tsconfig.json (1)

3-3: Non-recursive glob pattern may miss nested files.

The pattern ./src/*.ts only matches files directly in src/, not in subdirectories. For consistency with packages/env/tsconfig.json (which uses src/**/*.ts) and to accommodate future growth, consider using a recursive pattern.

♻️ Suggested recursive pattern
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["./src/*.ts"]
+	"include": ["./src/**/*.ts"]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/tsconfig.json` at line 3, The tsconfig "include" pattern in
packages/db/tsconfig.json currently uses a non-recursive glob ("./src/*.ts")
which will miss files in subdirectories; update the "include" value in that
tsconfig (the include field) to a recursive glob such as "./src/**/*.ts" so all
.ts files under src, including nested folders, are compiled (mirror the pattern
used in packages/env/tsconfig.json).
packages/embedding/tsconfig.json (1)

1-4: Verify glob patterns match intended file structure.

The patterns ./src/*.ts and ./tests/*.ts only match .ts files directly in these directories, not in subdirectories. If the package grows to have nested directories, those files won't be included in compilation.

Consider using **/*.ts patterns for consistency with other packages and future-proofing:

♻️ Suggested future-proof patterns
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["./src/*.ts", "./tests/*.ts"]
+	"include": ["./src/**/*.ts", "./tests/**/*.ts"]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/tsconfig.json` around lines 1 - 4, The tsconfig include
globs currently use "./src/*.ts" and "./tests/*.ts", which only match top-level
files; update the include patterns in tsconfig.json to recursive globs (e.g.
"./src/**/*.ts" and "./tests/**/*.ts") so nested TypeScript files are picked up
and to keep this package consistent with other packages' patterns.
packages/embedding/tests/e2e.test.ts (1)

13-16: Strengthen assertion to avoid silent pass on undefined.

data?.embeddings[0] could be undefined if the response is malformed. Bun's toHaveLength may throw on undefined, but it's safer to explicitly assert existence first:

♻️ Proposed fix
 		const { data, status } = await api.embeddings.post(payload);

 		expect(status).toBe(200);
+		expect(data?.embeddings).toBeDefined();
+		expect(data?.embeddings).toHaveLength(1);
 		expect(data?.embeddings[0]).toHaveLength(256);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/tests/e2e.test.ts` around lines 13 - 16, The test
currently dereferences data?.embeddings[0] which can be undefined and may cause
a silent or brittle failure; after calling api.embeddings.post(payload) assert
the response shape before checking length: explicitly assert that data is
defined, that data.embeddings is an array (or defined), and that
data.embeddings[0] is defined, then assert its length equals 256 — update the
assertions around api.embeddings.post, payload, data and embeddings to perform
these presence checks before calling toHaveLength on data.embeddings[0].
packages/observability/tsconfig.json (1)

1-4: Redundant include pattern.

The src/*.ts pattern is already covered by src/**/*.ts (which matches any depth including the root of src/). Consider simplifying:

♻️ Suggested simplification
 {
 	"extends": "@cvsa/typescript-config/base.json",
-	"include": ["src/**/*.ts", "src/*.ts"]
+	"include": ["src/**/*.ts"]
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/observability/tsconfig.json` around lines 1 - 4, The include array
in tsconfig.json contains a redundant pattern: "src/*.ts" is unnecessary because
"src/**/*.ts" already matches files at any depth including the src root; update
the include property to only ["src/**/*.ts"] (remove "src/*.ts") so the include
patterns are simplified and non-redundant, editing the "include" key in the
provided tsconfig.json.
apps/backend/src/startMessage.ts (1)

38-44: Inconsistent environment variable access.

logStartup uses env.NODE_ENV (line 22) from the imported @cvsa/env, but getBindingInfo reads process.env.NODE_ENV directly. This inconsistency could cause unexpected behavior if @cvsa/env applies validation or defaults differently than raw process.env.

♻️ Proposed fix for consistency
 export function getBindingInfo(): [string, number] {
 	const DEFAULT_PORT = 16412;
-	const NODE_ENV = process.env.NODE_ENV || "production";
-	const HOST = process.env.HOST ?? (NODE_ENV === "development" ? "0.0.0.0" : "127.0.0.1");
-	const PORT = process.env.PORT ? parseInt(process.env.PORT, 10) : DEFAULT_PORT;
+	const HOST = process.env.HOST ?? (env.NODE_ENV === "development" ? "0.0.0.0" : "127.0.0.1");
+	const PORT = process.env.PORT ? parseInt(process.env.PORT, 10) : DEFAULT_PORT;
 	return [HOST, PORT];
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/startMessage.ts` around lines 38 - 44, getBindingInfo
currently reads process.env directly which is inconsistent with logStartup using
the imported env from `@cvsa/env`; change getBindingInfo to use the same env
object (e.g., env.NODE_ENV, env.HOST, env.PORT) instead of process.env so
validation/defaults remain consistent, keep DEFAULT_PORT and parseInt behavior
for PORT, and update HOST selection logic to reference env.NODE_ENV when
deciding between "0.0.0.0" and "127.0.0.1".
apps/backend/src/handlers/dev/index.ts (1)

5-7: Add a single env guard at devHandler level.

Right now each dev route must remember to self-guard. A parent-level gate is safer for future additions.

Suggested refactor
 import { Elysia } from "elysia";
+import { env } from "@cvsa/env";
 import { getHeapSnapshotHandler } from "./snapshot";
 import { gcHandler } from "./gc";
 
 export const devHandler = new Elysia({ name: "devHandler" })
+	.onBeforeHandle(({ status }) => {
+		if (env.NODE_ENV !== "development") {
+			throw status(404);
+		}
+	})
 	.use(getHeapSnapshotHandler)
 	.use(gcHandler);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/handlers/dev/index.ts` around lines 5 - 7, Add a
parent-level environment guard to the devHandler Elysia instance so dev routes
no longer must self-guard: modify the devHandler construction to include a
middleware or .guard-like check that inspects the environment (e.g. NODE_ENV or
a dedicated ENABLE_DEV_ROUTES flag) and short-circuits requests (403 or
next-less response) when not in the allowed environment, while still chaining
the existing getHeapSnapshotHandler and gcHandler; locate and change the
devHandler creation (the new Elysia({ name: "devHandler" }) instance) to perform
this check before invoking getHeapSnapshotHandler and gcHandler.
packages/core/turbo.json (1)

5-5: Use src/**/*.ts for standard recursive glob matching.

The pattern src/**.ts does match nested paths like src/catalog/song/service.ts and will not cause stale caches. However, src/**/*.ts is the standard and more readable convention for recursive globbing in Turborepo. Update this pattern (and corresponding patterns on lines 11 and 19) for consistency with best practices.
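For reference, a task block using the conventional pattern might look like this (the task names and surrounding keys are illustrative, not the actual contents of packages/core/turbo.json):

```json
{
	"$schema": "https://turbo.build/schema.json",
	"tasks": {
		"build": {
			"inputs": ["src/**/*.ts"],
			"outputs": ["dist/**"]
		}
	}
}
```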

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/turbo.json` at line 5, Update the recursive glob patterns in
turbo.json by replacing the nonstandard "src/**.ts" occurrences with the
conventional "src/**/*.ts" (and likewise change the matching patterns used
elsewhere that follow the same form) so Turborepo uses the standard recursive
glob; search for the exact string "src/**.ts" in the file and replace it with
"src/**/*.ts" (also update the same style patterns mentioned in the comment) to
ensure consistent, readable recursive matching.
packages/embedding/package.json (1)

16-21: Use explicit semver versions instead of "latest" in dependencies.

"elysia": "latest" and "bun-types": "latest" make installs non-deterministic across environments and CI. Pin to explicit semver ranges (e.g., "^1.24.3") for reproducible builds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/package.json` around lines 16 - 21, Replace the
non-deterministic "latest" pins in package.json by specifying explicit semver
versions: change the "elysia": "latest" dependency to a concrete semver range
(e.g., "^<appropriate-version>") and change the "bun-types": "latest"
devDependency to a concrete semver range (e.g., "^<appropriate-version>").
Update the versions to the desired tested releases, run a fresh install to
verify compatibility, and commit the updated package.json (refer to the
dependency keys "elysia" and "bun-types" to locate the entries).
packages/observability/package.json (2)

16-16: Minor version mismatch with @opentelemetry/api in backend package.

This package uses ^1.9.0 while apps/backend/package.json uses ^1.9.1. While semver ranges may resolve this, consider aligning versions across the monorepo to avoid potential conflicts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/observability/package.json` at line 16, Update the
`@opentelemetry/api` dependency so the monorepo uses a consistent minor version:
change the entry for "@opentelemetry/api" currently set to "^1.9.0" to match the
backend's "^1.9.1" (or pick a single agreed-upon version) in the observability
package.json and ensure other package.json files use the same version to avoid
conflicts.

11-11: Pin @types/bun to a specific version.

Using "latest" can cause non-reproducible builds and unexpected breakages when the package is updated. Consider pinning to a specific version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/observability/package.json` at line 11, The dependency "@types/bun"
is pinned to "latest", which causes non-reproducible builds; replace the
"latest" value with a specific version number (an exact semver string) that you
have tested in CI for compatibility (e.g., "0.x.y"), update the lockfile
(npm/shrinkwrap/pnpm-lock) and commit the change so builds are deterministic;
look for the "@types/bun" entry in package.json to make this edit.
apps/backend/src/common/i18n.ts (2)

19-19: Top-level await blocks module loading — ensure this is acceptable.

The top-level await i18nRuntime.load() blocks the module from being fully loaded until translations are fetched. This is typically fine for server startup, but be aware it will delay application readiness. Consider adding error handling for load failures.

-await i18nRuntime.load();
+try {
+	await i18nRuntime.load();
+} catch (error) {
+	console.error("Failed to load i18n bundles:", error);
+	throw error;
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/common/i18n.ts` at line 19, The file currently uses a
top-level await on i18nRuntime.load(), which blocks module evaluation and can
delay app readiness and lacks error handling; change this by moving the call out
of the top-level to an explicit async initializer (e.g., export an async
initI18n or call i18nRuntime.load() inside an async startup routine) or wrap the
top-level await in a try/catch to handle load failures and log/propagate errors
from i18nRuntime.load() so failures are visible and the app can decide how to
proceed.

5-11: Verify production bundle path and consider explicit environment handling.

The condition env.NODE_ENV !== "production" means test environment uses the development path. This may be intentional, but consider being explicit:

 const getBundleLocation = () => {
-	if (env.NODE_ENV !== "production") {
+	if (env.NODE_ENV === "development" || env.NODE_ENV === "test") {
 		return path.join(import.meta.dir, "../../../../locale/");
-	} else {
+	} else if (env.NODE_ENV === "production") {
 		return path.join(import.meta.dir, "./locale/");
+	} else {
+		throw new Error(`Unknown NODE_ENV: ${env.NODE_ENV}`);
 	}
 };

Also verify that ./locale/ relative to import.meta.dir resolves correctly in your production deployment — the locale folder must be copied alongside the built output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/common/i18n.ts` around lines 5 - 11, The getBundleLocation
function uses env.NODE_ENV !== "production" which treats "test" as dev; update
getBundleLocation to explicitly branch on env.NODE_ENV values (e.g.,
"development", "production", and a fallback for "test"/other) so behavior is
deterministic, and ensure the production branch uses the correct path relative
to import.meta.dir (import.meta.dir + "./locale/") while documenting or adding a
build step that copies the locale folder into the production output; reference
getBundleLocation, env.NODE_ENV, and import.meta.dir when making the change.
apps/backend/src/handlers/auth/currentUser.ts (1)

13-15: Consider whether tracing a synchronous DTO mapping adds value.

betterAuthToCurrentUserInfoDto appears to be a simple synchronous transformation. Wrapping it in traceTask with an async callback adds overhead for minimal observability benefit. Consider removing the trace wrapper for trivial synchronous operations, or ensure traceTask has a synchronous variant for such cases.
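If a synchronous variant is preferred, a minimal sketch could look like the following. `traceTaskSync` is a hypothetical name; `MinimalSpan` is a stand-in for the `@opentelemetry/api` `Span` type, and the status codes mirror `SpanStatusCode.OK`/`ERROR`:

```typescript
// Hypothetical synchronous tracing helper. In the real codebase the span
// would come from tracer.startActiveSpan; here it is injected so the
// sketch stays self-contained.
interface MinimalSpan {
	setStatus(status: { code: number }): void;
	recordException(e: Error): void;
	end(): void;
}

// Mirrors @opentelemetry/api SpanStatusCode values
const SPAN_OK = 1;
const SPAN_ERROR = 2;

function traceTaskSync<T>(span: MinimalSpan, fn: () => T): T {
	try {
		const result = fn();
		span.setStatus({ code: SPAN_OK });
		return result;
	} catch (e) {
		span.recordException(e instanceof Error ? e : new Error(String(e)));
		span.setStatus({ code: SPAN_ERROR });
		throw e;
	} finally {
		span.end();
	}
}
```

Unlike the async traceTask, no Promise is allocated, so wrapping a trivial mapper like betterAuthToCurrentUserInfoDto costs only the span bookkeeping.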

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/handlers/auth/currentUser.ts` around lines 13 - 15, The
current code wraps a simple synchronous DTO mapper in an async traceTask:
traceTask("getCurrentUser", async () =>
betterAuthToCurrentUserInfoDto(session.user)), which adds unnecessary overhead;
either call betterAuthToCurrentUserInfoDto(session.user) directly inside
getCurrentUser (remove the traceTask wrapper) or replace it with a synchronous
tracing helper (e.g., traceTaskSync or a new sync overload) and use that so the
synchronous transformation remains synchronous; locate the call to
betterAuthToCurrentUserInfoDto and update it accordingly in the getCurrentUser
handler.
apps/backend/src/index.ts (1)

24-28: OTLPTraceExporter relies on environment variables for endpoint configuration.

The exporter is instantiated without explicit configuration, meaning it will use OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variables. Ensure these are documented and configured in your deployment environment.
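To make the fallback explicit, the endpoint resolution the OTLP/HTTP trace exporter performs can be sketched as follows (based on the OpenTelemetry SDK environment-variable spec; port 4318 is the OTLP/HTTP default — treat this as an illustration, not the exporter's actual source):

```typescript
// Sketch of the OTLP/HTTP traces endpoint fallback chain:
// traces-specific var > generic var + "/v1/traces" > localhost default.
function resolveTracesEndpoint(env: Record<string, string | undefined>): string {
	return (
		env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT ??
		(env.OTEL_EXPORTER_OTLP_ENDPOINT
			? `${env.OTEL_EXPORTER_OTLP_ENDPOINT.replace(/\/$/, "")}/v1/traces`
			: "http://localhost:4318/v1/traces")
	);
}
```

Alternatively, passing an explicit `url` to the exporter's constructor sidesteps the env-var dependency entirely.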

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/index.ts` around lines 24 - 28, The OTLPTraceExporter is
being constructed with new OTLPTraceExporter() inside the opentelemetry(...)
setup (wrapped by BatchSpanProcessor) which means it will default to reading
OTEL_EXPORTER_OTLP_ENDPOINT / OTEL_EXPORTER_OTLP_TRACES_ENDPOINT from the
environment; update the deployment docs and runtime configuration to ensure
those environment variables are set to the correct collector endpoint (or
alternatively change the initialization of OTLPTraceExporter in the
opentelemetry(...) call to pass an explicit url/headers), and verify that
BatchSpanProcessor(new OTLPTraceExporter()) will use those values in your
environment.
packages/core/src/search/interface.ts (1)

6-14: Consider renaming to AbstractSearchService or SearchServiceBase.

The I prefix conventionally denotes interfaces, but this is an abstract class. Using a name like AbstractSearchService or SearchServiceBase would better communicate its nature.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/search/interface.ts` around lines 6 - 14, The class name
ISearchService is misleading because the I prefix implies an interface; rename
the abstract class ISearchService to a clearer name such as
AbstractSearchService or SearchServiceBase (pick one consistently) and update
all references and imports to that new symbol (e.g., change class declaration
"ISearchService<T>" to "AbstractSearchService<T>" and update any subclass
extends, type annotations, and exported names that reference ISearchService,
including constructor signature references to IRepositoryWithGetDetails,
SearchManager, and EmbeddingAppApi); ensure exports and any tests or consumers
import the new class name so behavior remains identical.
packages/logger/src/pinoSetup.ts (1)

11-11: Consider redacting nested paths for request/response objects.

The current redaction only covers top-level fields. If log entries include nested objects like req.headers.authorization or body.password, they won't be redacted. Consider using nested path patterns if such structures are logged.

Example nested redaction paths
-	const redact = ["authorization", "cookie", "password", "token"];
+	const redact = [
+		"authorization",
+		"cookie",
+		"password",
+		"token",
+		"*.authorization",
+		"*.cookie",
+		"*.password",
+		"*.token",
+		"req.headers.authorization",
+		"req.headers.cookie",
+		"body.password",
+	];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/pinoSetup.ts` at line 11, The current redact array only
lists top-level keys and will miss nested secrets; update the redact constant in
pinoSetup.ts (the redact variable) to include nested paths and wildcard patterns
(for example add entries like "req.headers.authorization", "req.cookies",
"req.cookies.*", "req.body.password", "request.body.password",
"res.headers.set-cookie" and/or generic patterns like "**.password" and
"**.authorization") so that Pino's redaction will remove sensitive values inside
nested request/response objects and arrays.
packages/logger/package.json (1)

10-12: Consider pinning @types/bun to a specific version.

Using "latest" for @types/bun can cause unexpected build failures when breaking changes are released. Pin to a specific version for reproducible builds.

Proposed fix
 	"devDependencies": {
-		"@types/bun": "latest",
+		"@types/bun": "^1.2.13",
 		"@cvsa/typescript-config": "workspace:*"
 	},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/package.json` around lines 10 - 12, In package.json, replace
the floating "latest" entry for the devDependency "@types/bun" with a pinned
exact semver (e.g. "x.y.z") in the "devDependencies" object so builds are
reproducible; update the lockfile (npm/yarn/pnpm) by running install and commit
the updated lockfile and package.json; verify the project builds and tests pass.
apps/backend/src/middlewares/requestLogger.ts (1)

7-9: Unused interface definition.

RequestLoggerOptions is defined but never used in createRequestLoggerMiddleware. If path exclusion is planned for a future iteration, consider removing this interface until it's implemented to avoid dead code.
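If the interface is kept and wired in, the path-exclusion check could be as small as this sketch (names taken from the finding; the Elysia hook wiring is omitted):

```typescript
interface RequestLoggerOptions {
	// Path prefixes that should not be logged, e.g. health checks
	excludePaths?: string[];
}

function shouldLog(path: string, options: RequestLoggerOptions = {}): boolean {
	return !(options.excludePaths ?? []).some((prefix) => path.startsWith(prefix));
}
```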

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/middlewares/requestLogger.ts` around lines 7 - 9, Remove the
unused RequestLoggerOptions interface to eliminate dead code: delete the
RequestLoggerOptions declaration and any unused imports or references, or
alternatively wire it into createRequestLoggerMiddleware by accepting an options
parameter (e.g., createRequestLoggerMiddleware(options: RequestLoggerOptions))
and implementing excludePaths handling; reference the RequestLoggerOptions
symbol and the createRequestLoggerMiddleware function to locate where to remove
or integrate the interface.
packages/logger/src/appLogger.ts (2)

58-74: Inverted Pino logger call signature.

Pino's standard signature is logger.info(obj, msg) (object first, message second), but here you're calling pinoMethod(msg, mergedAttrs) (message first). While Pino accepts both patterns, mixing them can cause inconsistent log output formatting.

♻️ Suggested fix to use standard Pino signature
-		const pinoMethod = pinoLogger[level].bind(pinoLogger) as (
-			msg: string,
-			...args: unknown[]
-		) => void;
 		if (level === "error" || level === "fatal") {
 			const err = attrs && "error" in attrs ? (attrs.error as Error) : undefined;
 			if (err instanceof Error) {
-				pinoMethod(msg, {
+				pinoLogger[level]({
 					...mergedAttrs,
 					err,
-				});
+				}, msg);
 			} else {
-				pinoMethod(msg, mergedAttrs);
+				pinoLogger[level](mergedAttrs, msg);
 			}
 		} else {
-			pinoMethod(msg, mergedAttrs);
+			pinoLogger[level](mergedAttrs, msg);
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/appLogger.ts` around lines 58 - 74, The pino calls in
appLogger (the pinoMethod invocations inside the logger level branch) use
message-first ordering; change them to use Pino's standard object-first
signature so the metadata object (mergedAttrs, and mergedAttrs including err
when present) is passed as the first argument and msg as the second; update all
places where pinoMethod(msg, mergedAttrs) or pinoMethod(msg, { ...mergedAttrs,
err }) is used (the pinoMethod bound from pinoLogger[level]) to call
pinoMethod(mergedAttrs, msg) and pinoMethod({ ...mergedAttrs, err }, msg)
respectively to ensure consistent formatting.

10-17: Duplicate ApplicationLogger interface definition.

This file defines ApplicationLogger locally (lines 10-17), but packages/logger/src/types.ts also exports an ApplicationLogger interface (lines 22-29) that includes a child(initialAttrs) method. The local interface lacks this method.

Consider removing the local interface and importing from ./types to ensure consistency, or implement the child() method here if required.

♻️ Proposed fix
 import type { Logger as PinoLogger } from "pino";
 import { type Logger, SeverityNumber } from "@opentelemetry/api-logs";
 import { trace, SpanStatusCode, type Attributes } from "@opentelemetry/api";
 import { pinoLogger } from "./pinoSetup";
 import { otelLogger } from "@cvsa/observability";
+import type { ApplicationLogger } from "./types";

 type PinoLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
 type LogAttrs = Record<string, unknown> | undefined;
-
-interface ApplicationLogger {
-	trace(msg: string, attrs?: LogAttrs): void;
-	debug(msg: string, attrs?: LogAttrs): void;
-	info(msg: string, attrs?: LogAttrs): void;
-	warn(msg: string, attrs?: LogAttrs): void;
-	error(msg: string, attrs?: LogAttrs): void;
-	fatal(msg: string, attrs?: LogAttrs): void;
-}

Also applies to: 22-29

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/appLogger.ts` around lines 10 - 17, Remove the duplicated
local ApplicationLogger interface and use the shared definition: import the
exported ApplicationLogger from the types module (the shared interface that
includes child(initialAttrs)). Replace the local interface declaration with the
imported type, or if you prefer to keep a local type, add the
child(initialAttrs?: Record<string, any>): ApplicationLogger method to match the
shared definition so signatures stay consistent; update any usages of
ApplicationLogger accordingly (e.g., places referencing
trace/debug/info/warn/error/fatal and child) to rely on the single canonical
interface.
packages/logger/src/types.ts (1)

3-9: Redundant index signature in LogAttributes.

The explicit [key: string]: unknown on line 8 is redundant because the interface already extends Record<string, unknown>, which provides the same index signature.

♻️ Proposed cleanup
 export interface LogAttributes extends Record<string, unknown> {
 	userId?: string;
 	requestId?: string;
 	traceId?: string;
 	spanId?: string;
-	[key: string]: unknown;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/logger/src/types.ts` around lines 3 - 9, The LogAttributes interface
declares an explicit index signature that duplicates the Record<string, unknown>
from the extends clause; remove the redundant `[key: string]: unknown;` entry in
the LogAttributes definition so the interface simply extends Record<string,
unknown> while keeping the optional properties userId, requestId, traceId, and
spanId intact.
packages/core/src/catalog/song/service.ts (1)

36-37: Replace console.error with structured logging.

The TODO comment on line 36 notes this. Using console.error bypasses the observability infrastructure being introduced in this PR. Consider injecting a logger or using appLogger for consistency with the rest of the codebase.

Also applies to: 53-54, 70-70
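A minimal sketch of the replacement, with a stub standing in for the shared appLogger (the real one forwards to Pino and OpenTelemetry logs; the operation name and attribute keys here are illustrative):

```typescript
// Stub for the shared logger; the real appLogger comes from @cvsa/logger.
const appLogger = {
	error(msg: string, attrs?: Record<string, unknown>) {
		console.error(msg, attrs);
	},
};

function updateSongOrLog(songId: string, update: () => void): boolean {
	try {
		update();
		return true;
	} catch (e) {
		// Contextual message plus the error object, so stack and details
		// reach the structured pipeline instead of a bare console.error(e)
		appLogger.error("Failed to update song", { songId, error: e });
		return false;
	}
}
```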

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/catalog/song/service.ts` around lines 36 - 37, Replace the
three plain console.error calls in packages/core/src/catalog/song/service.ts
with structured logging via the existing observability logger (e.g., appLogger)
or an injected Logger instance: update the error sites (inside the functions
where console.error(e) appears) to call appLogger.error (or this.logger.error)
and include both a contextual message (e.g., "Failed to <operation> for song"
with song id/params) and the caught error object so the stack and details are
preserved; if those functions (or the module) don't currently have access to
appLogger, add it as a dependency (constructor parameter or function arg) or
import the shared appLogger used elsewhere to keep logging consistent and remove
the TODO comment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 7dd5d6be-d5cf-4ae3-8f87-b65cfc1eb471

📥 Commits

Reviewing files that changed from the base of the PR and between fdace6b and 87a448f.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (107)
  • .agents/skills/jujutsu-vcs/SKILL.md
  • .opencode/commands/commit.md
  • apps/backend/package.json
  • apps/backend/src/common/error/getErrorResponse.ts
  • apps/backend/src/common/i18n.ts
  • apps/backend/src/common/log.ts
  • apps/backend/src/common/middlewares/index.ts
  • apps/backend/src/common/trace.ts
  • apps/backend/src/errorHandler.ts
  • apps/backend/src/handlers/auth/currentUser.ts
  • apps/backend/src/handlers/auth/index.ts
  • apps/backend/src/handlers/auth/login.ts
  • apps/backend/src/handlers/auth/logout.ts
  • apps/backend/src/handlers/auth/signup.ts
  • apps/backend/src/handlers/catalog/song/create.ts
  • apps/backend/src/handlers/catalog/song/delete.ts
  • apps/backend/src/handlers/catalog/song/details.ts
  • apps/backend/src/handlers/catalog/song/index.ts
  • apps/backend/src/handlers/catalog/song/search.ts
  • apps/backend/src/handlers/catalog/song/update.ts
  • apps/backend/src/handlers/dev/gc.ts
  • apps/backend/src/handlers/dev/index.ts
  • apps/backend/src/handlers/dev/snapshot.ts
  • apps/backend/src/handlers/index.ts
  • apps/backend/src/index.ts
  • apps/backend/src/middlewares/auth.ts
  • apps/backend/src/middlewares/i18n.ts
  • apps/backend/src/middlewares/index.ts
  • apps/backend/src/middlewares/requestLogger.ts
  • apps/backend/src/onAfterHandle.ts
  • apps/backend/src/startMessage.ts
  • apps/backend/tests/auth.test.ts
  • apps/backend/tests/song.test.ts
  • apps/backend/tsconfig.json
  • locale/manifest.json
  • locale/modules/backend/manifest.json
  • locale/modules/backend/messages/en.json
  • locale/modules/backend/messages/zh.json
  • packages/core/common/index.ts
  • packages/core/common/internal.ts
  • packages/core/common/schema/index.ts
  • packages/core/common/types/repository.ts
  • packages/core/index.ts
  • packages/core/modules/catalog/song/service.ts
  • packages/core/modules/internal.ts
  • packages/core/modules/ml/index.ts
  • packages/core/modules/search/search.interface.ts
  • packages/core/package.json
  • packages/core/src/auth/betterAuth.ts
  • packages/core/src/auth/index.ts
  • packages/core/src/auth/user/dto.ts
  • packages/core/src/auth/user/index.ts
  • packages/core/src/catalog/index.ts
  • packages/core/src/catalog/song/container.ts
  • packages/core/src/catalog/song/dto.ts
  • packages/core/src/catalog/song/index.ts
  • packages/core/src/catalog/song/repository.interface.ts
  • packages/core/src/catalog/song/repository.ts
  • packages/core/src/catalog/song/service.ts
  • packages/core/src/error/AppError.ts
  • packages/core/src/error/index.ts
  • packages/core/src/error/schema.ts
  • packages/core/src/index.ts
  • packages/core/src/internal.ts
  • packages/core/src/search/catalog/index.ts
  • packages/core/src/search/catalog/song.ts
  • packages/core/src/search/config.ts
  • packages/core/src/search/container.ts
  • packages/core/src/search/index.ts
  • packages/core/src/search/interface.ts
  • packages/core/src/search/manager.ts
  • packages/core/src/types/index.ts
  • packages/core/src/types/repository.ts
  • packages/core/src/types/service.ts
  • packages/core/src/utils/deepEqualUnordered.ts
  • packages/core/src/utils/index.ts
  • packages/core/src/utils/randomId.ts
  • packages/core/tests/integration/SongRepository.test.ts
  • packages/core/tests/unit/SongService.test.ts
  • packages/core/tsconfig.json
  • packages/core/turbo.json
  • packages/db/index.ts
  • packages/db/package.json
  • packages/db/src/index.ts
  • packages/db/src/prisma.ts
  • packages/db/src/types.ts
  • packages/db/tsconfig.json
  • packages/embedding/package.json
  • packages/embedding/src/embedding.ts
  • packages/embedding/src/index.ts
  • packages/embedding/tests/e2e.test.ts
  • packages/embedding/tsconfig.json
  • packages/env/package.json
  • packages/env/src/index.ts
  • packages/env/tsconfig.json
  • packages/logger/package.json
  • packages/logger/src/appLogger.ts
  • packages/logger/src/index.ts
  • packages/logger/src/pinoSetup.ts
  • packages/logger/src/types.ts
  • packages/logger/tsconfig.json
  • packages/observability/package.json
  • packages/observability/src/index.ts
  • packages/observability/src/init.ts
  • packages/observability/src/tracer.ts
  • packages/observability/tsconfig.json
  • turbo.json
💤 Files with no reviewable changes (12)
  • apps/backend/src/common/middlewares/index.ts
  • packages/core/modules/ml/index.ts
  • packages/core/src/types/index.ts
  • packages/core/common/schema/index.ts
  • packages/core/index.ts
  • packages/core/common/index.ts
  • packages/db/index.ts
  • packages/core/modules/internal.ts
  • packages/core/common/types/repository.ts
  • packages/core/common/internal.ts
  • packages/core/modules/catalog/song/service.ts
  • packages/core/modules/search/search.interface.ts

Comment thread apps/backend/src/common/log.ts
Comment on lines +17 to +49
export function formatGinLog(data: GinLogData = {}): string {
	const parts: string[] = [];

	if (data.method) {
		const m = data.method.toUpperCase();
		let methodBg = chalk.bgBlue.white.bold;
		if (m === "POST") methodBg = chalk.bgCyan.black.bold;
		else if (m === "PUT") methodBg = chalk.bgYellow.black.bold;
		else if (m === "DELETE") methodBg = chalk.bgRed.white.bold;
		else if (m === "PATCH") methodBg = chalk.bgGreen.white.bold;
		parts.push(methodBg(` ${m.padEnd(6)} `));
	}

	if (data.status) {
		let statusColor = chalk.green;
		if (data.status >= 400 && data.status < 500) statusColor = chalk.yellow;
		else if (data.status >= 500) statusColor = chalk.red;
		parts.push(statusColor(data.status.toString().padStart(3)));
	}

	if (data.latency !== undefined) {
		parts.push(chalk.magenta(data.latency.padStart(10)));
	}

	if (data.ip) {
		parts.push(chalk.gray(data.ip.padStart(15)));
	}

	if (data.path) {
		parts.push(`"${data.path}"`);
	}

	return parts.join(" | ");

⚠️ Potential issue | 🟠 Major

Sanitize user-controlled fields before formatting the log line.

path is written verbatim into the terminal output. A crafted request containing newlines or ANSI escapes can forge or mangle log lines.

Suggested fix
 export function formatGinLog(data: GinLogData = {}): string {
+	const sanitize = (value: string) =>
+		value.replace(/[\u0000-\u001f\u007f-\u009f]/g, "");
+
 	const parts: string[] = [];
@@
 	if (data.ip) {
-		parts.push(chalk.gray(data.ip.padStart(15)));
+		parts.push(chalk.gray(sanitize(data.ip).padStart(15)));
 	}
 
 	if (data.path) {
-		parts.push(`"${data.path}"`);
+		parts.push(JSON.stringify(sanitize(data.path)));
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/common/log.ts` around lines 17 - 49, formatGinLog writes
user-controlled fields (e.g., data.path and data.ip) verbatim to the terminal
allowing newline or ANSI escape injection; sanitize these fields before
formatting by stripping or escaping ANSI control sequences and
removing/normalizing newlines and control chars (for example use a stripAnsi
utility or a regex to remove \x1b... sequences, replace CR/LF with a single
space, and escape quotes) in the formatGinLog function so that parts.push uses
the cleaned values (refer to formatGinLog, data.path, data.ip).

Comment on lines +6 to +17
export async function traceTask<T>(name: string, fn: () => T): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			span.setStatus({ code: SpanStatusCode.OK });
			return await fn();
		} catch (e) {
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

git ls-files | grep -E "trace\.ts$"

Repository: project-cvsa/cvsa

Length of output: 93


🏁 Script executed:

cat -n apps/backend/src/common/trace.ts

Repository: project-cvsa/cvsa

Length of output: 951


🏁 Script executed:

rg "recordException|setStatus" apps/backend/src --type ts -B 2 -A 2

Repository: project-cvsa/cvsa

Length of output: 550


🏁 Script executed:

rg "traceTask" apps/backend/src --type ts -l

Repository: project-cvsa/cvsa

Length of output: 507


🏁 Script executed:

cat -n apps/backend/src/handlers/auth/login.ts

Repository: project-cvsa/cvsa

Length of output: 2431


🏁 Script executed:

rg "traceTask" apps/backend/src/handlers --type ts -B 2 -A 5

Repository: project-cvsa/cvsa

Length of output: 9955


🌐 Web query:

OpenTelemetry Node.js Span recordException method API

💡 Result:

The OpenTelemetry Node.js Span.recordException method is part of the @opentelemetry/api package (latest version 1.9.1 as of March 2026). It records an exception as a span event named "exception" with automatically extracted attributes such as exception.type, exception.message, and exception.stacktrace.

Signature: recordException(exception: Exception, time?: TimeInput): void
  - exception: string or Error object (required)
  - time: optional timestamp for the event (defaults to current time)

Usage example:

const { trace, SpanStatusCode } = require('@opentelemetry/api');
const span = trace.getTracer('example').startSpan('my-span');
try {
	// code that might throw
} catch (error) {
	span.recordException(error);
	span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
}
span.end();

This creates a span event with semantic attributes per the OpenTelemetry spec. Note: the API does not support passing custom attributes directly to recordException (proposals have been discussed but not implemented in the stable API; use span.addEvent or setAttributes separately if needed). Always pair it with setStatus for complete error reporting, and ensure the span is active (isRecording === true).


Record the exception and fix the callback typing.

The function currently accepts async () => ... callbacks but declares fn: () => T, which causes TypeScript to infer T as Promise<U>, creating a misleading Promise<Promise<U>> return type. Additionally, failed spans only receive the ERROR status without recording the actual exception, limiting observability for debugging.

Suggested fix
-export async function traceTask<T>(name: string, fn: () => T): Promise<T> {
+export async function traceTask<T>(
+	name: string,
+	fn: () => Promise<T> | T
+): Promise<T> {
 	return tracer.startActiveSpan(name, async (span) => {
 		try {
-			span.setStatus({ code: SpanStatusCode.OK });
-			return await fn();
+			const result = await fn();
+			span.setStatus({ code: SpanStatusCode.OK });
+			return result;
 		} catch (e) {
+			span.recordException(e instanceof Error ? e : new Error(String(e)));
 			span.setStatus({ code: SpanStatusCode.ERROR });
 			throw e;
 		} finally {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export async function traceTask<T>(name: string, fn: () => T): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			span.setStatus({ code: SpanStatusCode.OK });
			return await fn();
		} catch (e) {
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});
export async function traceTask<T>(
	name: string,
	fn: () => Promise<T> | T
): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			const result = await fn();
			span.setStatus({ code: SpanStatusCode.OK });
			return result;
		} catch (e) {
			span.recordException(e instanceof Error ? e : new Error(String(e)));
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/common/trace.ts` around lines 6 - 17, The traceTask helper's
callback typing and error handling are wrong: change the fn signature from () =>
T to () => Promise<T> (so traceTask<T>(name: string, fn: () => Promise<T>):
Promise<T>) to avoid Promise<Promise<U>> inference, and inside the catch block
call span.recordException(e) before setting span.setStatus({ code:
SpanStatusCode.ERROR }) (and then rethrow) so the span records the actual
exception; locate these changes around traceTask, tracer.startActiveSpan and the
span.setStatus usage.
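A self-contained sketch of the corrected helper (the tracer here is a stub standing in for OpenTelemetry's startActiveSpan, so the typing and control flow can be exercised without the SDK): with fn: () => Promise<T> | T, an async callback's awaited result is T rather than a nested promise.

```typescript
// Stub tracer/span so the corrected traceTask can run without the OTel SDK.
enum SpanStatusCode { OK = 1, ERROR = 2 }

class StubSpan {
	status?: SpanStatusCode;
	exception?: Error;
	ended = false;
	setStatus(s: { code: SpanStatusCode }) { this.status = s.code; }
	recordException(e: Error) { this.exception = e; }
	end() { this.ended = true; }
}

const tracer = {
	startActiveSpan<R>(_name: string, cb: (span: StubSpan) => Promise<R>): Promise<R> {
		return cb(new StubSpan());
	},
};

// fn may be sync or async; awaiting normalizes both to T.
async function traceTask<T>(name: string, fn: () => Promise<T> | T): Promise<T> {
	return tracer.startActiveSpan(name, async (span) => {
		try {
			const result = await fn();
			span.setStatus({ code: SpanStatusCode.OK });
			return result;
		} catch (e) {
			span.recordException(e instanceof Error ? e : new Error(String(e)));
			span.setStatus({ code: SpanStatusCode.ERROR });
			throw e;
		} finally {
			span.end();
		}
	});
}

traceTask("demo", async () => 41 + 1).then((n) => {
	console.log("result:", n); // n is a number here, not a Promise<number>
});
```

A synchronous throw inside fn is also caught, recorded on the span, and rethrown, so callers observe the same rejection either way.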

Comment on lines +44 to +51
.onRequest(function setRequestMeta({ store }) {
store.startTime = {};
store.error = {};
const traceId = getTraceId() ?? "";
if (traceId) {
store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
}

})

⚠️ Potential issue | 🔴 Critical

Race condition when traceId is empty.

When traceId is an empty string, all concurrent requests will share the same store entry at key "". Lines 48-50 guard startTime writes with if (traceId), but lines 60-68 unconditionally write to store.error[traceId]. This can cause cross-request data corruption.

🐛 Proposed fix: generate a fallback request ID
 	.onRequest(function setRequestMeta({ store }) {
 		store.startTime = {};
 		store.error = {};
-		const traceId = getTraceId() ?? "";
+		const traceId = getTraceId() ?? crypto.randomUUID();
 		if (traceId) {
 			store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
 		}
 	})

Or store the generated ID for later use:

+	.state("requestId", {} as Record<string, string>)
 	.onRequest(function setRequestMeta({ store, request }) {
 		store.startTime = {};
 		store.error = {};
-		const traceId = getTraceId() ?? "";
-		if (traceId) {
-			store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
-		}
+		const traceId = getTraceId() ?? crypto.randomUUID();
+		// Store mapping for this request
+		store.requestId[request] = traceId;
+		store.startTime[traceId] = Bun.nanoseconds() / 1_000_000;
 	})

Also applies to: 59-68

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/errorHandler.ts` around lines 44 - 51, The current
setRequestMeta handler uses getTraceId() and writes to store.startTime[traceId]
only if traceId is truthy, but later unconditionally writes to
store.error[traceId], causing different requests to share the empty-string key
when getTraceId() returns "". Modify setRequestMeta (and the same pattern around
lines 59-68) to ensure a non-empty per-request id: call getTraceId(), and if it
returns an empty string generate and persist a fallback id (e.g., a short UUID
or timestamp-based id) into the request store so both store.startTime and
store.error always use that same non-empty id; alternatively, guard writes to
store.error with the same truthy check or use the persisted fallback id
consistently across setRequestMeta and later error handling code (referencing
setRequestMeta, store.startTime, store.error, and getTraceId).
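The failure mode is easy to reproduce in isolation. In this sketch, getTraceId is a stand-in that always returns undefined, mimicking requests with no active span: the first pattern collides on the "" key, the fallback pattern does not.

```typescript
import { randomUUID } from "node:crypto";

// Stand-in for the real getTraceId(): returns undefined when no span is active.
const getTraceId = (): string | undefined => undefined;

const startTime: Record<string, number> = {};

// Without a fallback, every traceless request lands on the same "" key:
const sharedKey = getTraceId() ?? "";
startTime[sharedKey] = 1;
startTime[sharedKey] = 2; // a second request silently overwrites the first

// With a per-request fallback ID, entries stay distinct:
const idA = getTraceId() ?? randomUUID();
const idB = getTraceId() ?? randomUUID();
startTime[idA] = 1;
startTime[idB] = 2;
```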

Comment on lines +121 to +124
.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
const traceId = getTraceId() ?? "";
const { request, locale, data, status, rawError, known } = store.error[traceId];
const startTime = store.startTime[traceId];

⚠️ Potential issue | 🟠 Major

Missing null-safety check before destructuring store data.

Line 123 destructures store.error[traceId] without verifying the entry exists. If the first onError hook didn't populate the store (e.g., due to an unexpected code path), this will throw a runtime error.

🐛 Proposed fix
 	.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
 		const traceId = getTraceId() ?? "";
+		const errorData = store.error[traceId];
+		if (!errorData) {
+			// Fallback for unexpected error paths
+			pinoLogger.error("Error handler invoked without error data in store");
+			return;
+		}
-		const { request, locale, data, status, rawError, known } = store.error[traceId];
+		const { request, locale, data, status, rawError, known } = errorData;
📝 Committable suggestion


Suggested change
.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
const traceId = getTraceId() ?? "";
const { request, locale, data, status, rawError, known } = store.error[traceId];
const startTime = store.startTime[traceId];
.onError({ as: "global" }, function logErrorResponse({ store, status: statusFunc, ip }) {
const traceId = getTraceId() ?? "";
const errorData = store.error[traceId];
if (!errorData) {
// Fallback for unexpected error paths
pinoLogger.error("Error handler invoked without error data in store");
return;
}
const { request, locale, data, status, rawError, known } = errorData;
const startTime = store.startTime[traceId];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/errorHandler.ts` around lines 121 - 124, The onError hook
handler logErrorResponse destructures store.error[traceId] without checking it
exists (traceId from getTraceId), which can throw if the entry is missing;
modify logErrorResponse to first retrieve the errorEntry =
store.error?.[traceId] (or guard with if (!traceId || !store?.error ||
!store.error[traceId]) ) and handle the missing case (e.g., set defaults or
return/continue) before destructuring request, locale, data, status, rawError,
known and using startTime from store.startTime[traceId]; ensure you reference
getTraceId(), logErrorResponse, store.error and startTime guards in the change.

Comment on lines +54 to +62
const vectors = await this.embeddingManager.embeddings.post({
texts: [
`Name: ${song.name ?? ""}
Lyrics: ${song.lyrics ?? ""}
Description: ${getDesc() ?? ""}
Singers: ${getSingers().join(", ")}
Artists: ${getArtists().join(", ")}
`,
]);
],

⚠️ Potential issue | 🟠 Major

Use lyric text in the embedding payload.

song.lyrics is an array here, so this template string becomes "[object Object],..." instead of searchable lyrics.

Suggested fix
 				`Name: ${song.name ?? ""}
-Lyrics: ${song.lyrics ?? ""}
+Lyrics: ${getLyrics() ?? ""}
 Description: ${getDesc() ?? ""}
 Singers: ${getSingers().join(", ")}
 Artists: ${getArtists().join(", ")}
 `,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/search/catalog/song.ts` around lines 54 - 62, The embedding
payload is using song.lyrics (which is an array) directly inside the template
string, producing "[object Object]" instead of real lyric text; update the code
around embeddingManager.embeddings.post (the vectors creation) to convert
song.lyrics into a single string before interpolation — e.g., map the array to
the lyric text field or toString and join with "\n" (or another delimiter) so
the template includes actual lyrics (reference song.lyrics, the vectors
variable, and the embeddingManager.embeddings.post call).
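The underlying behavior is plain JavaScript string coercion: an interpolated array calls toString() on each element. A small reproduction (the Lyric shape is assumed for illustration, not the project's actual type):

```typescript
// The Lyric shape is assumed for illustration, not the project's actual type.
type Lyric = { text: string };
const lyrics: Lyric[] = [{ text: "line one" }, { text: "line two" }];

// Array.prototype.toString joins elements with commas, and each object
// coerces to "[object Object]".
const broken = `Lyrics: ${lyrics}`;
// Mapping to the text field first produces searchable lyric text.
const fixed = `Lyrics: ${lyrics.map((l) => l.text).join("\n")}`;

console.log(broken); // Lyrics: [object Object],[object Object]
console.log(fixed);
```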

Comment on lines 60 to 62
expect(songService.getDetails(999)).rejects.toThrow(AppError);
expect(songService.getDetails(999)).rejects.toThrow("Song not found");
expect(songService.getDetails(999)).rejects.toThrow("error.song.notfound");
expect(songService.getDetails(999)).rejects.toMatchObject({

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "SongService.test.ts" | head -20

Repository: project-cvsa/cvsa

Length of output: 107


🏁 Script executed:

cat -n ./packages/core/tests/unit/SongService.test.ts

Repository: project-cvsa/cvsa

Length of output: 2684


Add await to the rejection assertion chains to ensure assertions execute properly.

Lines 60–62 call songService.getDetails(999) without awaiting the .rejects chains. In Bun test (and Jest), the .rejects matchers return promises that must be awaited; without awaiting, the test completes before assertions run, making it unreliable.

Fix
-			expect(songService.getDetails(999)).rejects.toThrow(AppError);
-			expect(songService.getDetails(999)).rejects.toThrow("error.song.notfound");
-			expect(songService.getDetails(999)).rejects.toMatchObject({
+			const result = songService.getDetails(999);
+			await expect(result).rejects.toThrow(AppError);
+			await expect(result).rejects.toThrow("error.song.notfound");
+			await expect(result).rejects.toMatchObject({
 				code: "NOT_FOUND",
 				statusCode: 404,
 			});
📝 Committable suggestion


Suggested change
expect(songService.getDetails(999)).rejects.toThrow(AppError);
expect(songService.getDetails(999)).rejects.toThrow("Song not found");
expect(songService.getDetails(999)).rejects.toThrow("error.song.notfound");
expect(songService.getDetails(999)).rejects.toMatchObject({
const result = songService.getDetails(999);
await expect(result).rejects.toThrow(AppError);
await expect(result).rejects.toThrow("error.song.notfound");
await expect(result).rejects.toMatchObject({
code: "NOT_FOUND",
statusCode: 404,
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/tests/unit/SongService.test.ts` around lines 60 - 62, The three
rejection assertions for songService.getDetails(999) must be awaited so the
promise matchers run; update the tests to prefix each expect(...).rejects chain
with await (i.e., await
expect(songService.getDetails(999)).rejects.toThrow(AppError), await
expect(...).rejects.toThrow("error.song.notfound"), and await
expect(...).rejects.toMatchObject({...})) so the assertions actually execute.

Comment on lines +92 to +123
const response = await fetch(url);
if (!response.ok) {
appLogger.error(`Failed to download model: ${response.status} ${response.statusText}`);
return;
}

if (!response.body) {
appLogger.error("Response body is null");
return;
}

const reader = response.body.getReader();
const chunks: Uint8Array[] = [];

while (true) {
const { done, value } = await reader.read();
if (done) break;
downloaded += value.byteLength;
const percent = size > 0 ? Math.round((downloaded / size) * 100) : 0;
appLogger.info(`Embedding model download: ${percent}% (${downloaded}/${size} bytes)`);
chunks.push(value);
}

const buffer = new Uint8Array(size);
let offset = 0;
for (const chunk of chunks) {
buffer.set(chunk, offset);
offset += chunk.byteLength;
}

await Bun.write(modelFile, buffer);
appLogger.info("Embedding model downloaded.");

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd packages/embedding/src && cat -n embedding.ts | sed -n '60,140p'

Repository: project-cvsa/cvsa

Length of output: 2621


Add timeout to the fetch and validate downloaded bytes before writing.

The fetch at line 92 has no timeout and can hang startup indefinitely if the Hugging Face connection stalls. Additionally, if the stream ends prematurely with fewer than size bytes received, a zero-padded buffer is written to disk. Since exists() checks only file presence, not integrity, this corrupt model passes validation on the next startup. Add an AbortController timeout to the fetch and verify that downloaded === size before writing the model file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/src/embedding.ts` around lines 92 - 123, The fetch that
downloads the embedding model must be made abortable and the downloaded byte
count must be validated before writing: wrap fetch(url) with an AbortController
and a setTimeout to call controller.abort() after a configurable timeout, pass
controller.signal into fetch, and clear the timeout once the response is
received; during streaming, if aborted ensure the reader is cancelled/closed and
handle the thrown error; after the while loop check that downloaded === size
(and size > 0) and if not, log an error via appLogger, do not call
Bun.write(modelFile, buffer), and return/throw to avoid writing a truncated file
(also consider deleting any partial file if created); update references around
response.body.getReader(), downloaded, size, modelFile, and Bun.write
accordingly.
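A hedged sketch of both fixes together (downloadModel, fetchImpl, and expectedSize are illustrative names; the real code derives the size from the model metadata): AbortSignal.timeout bounds the request, and a byte-count check refuses to persist a truncated buffer.

```typescript
// Illustrative sketch: downloadModel, fetchImpl, and expectedSize are assumed
// names; the real code reads the target size from the model's metadata.
async function downloadModel(
	url: string,
	expectedSize: number,
	fetchImpl: typeof fetch = fetch,
	timeoutMs = 30_000
): Promise<Uint8Array> {
	// Bound the request so a stalled connection cannot hang startup forever.
	const response = await fetchImpl(url, { signal: AbortSignal.timeout(timeoutMs) });
	if (!response.ok || !response.body) {
		throw new Error(`download failed: ${response.status}`);
	}

	const reader = response.body.getReader();
	const chunks: Uint8Array[] = [];
	let downloaded = 0;
	while (true) {
		const { done, value } = await reader.read();
		if (done || !value) break;
		downloaded += value.byteLength;
		chunks.push(value);
	}

	// Refuse to persist a truncated (and otherwise zero-padded) buffer.
	if (downloaded !== expectedSize) {
		throw new Error(`truncated download: got ${downloaded} of ${expectedSize} bytes`);
	}

	const buffer = new Uint8Array(expectedSize);
	let offset = 0;
	for (const chunk of chunks) {
		buffer.set(chunk, offset);
		offset += chunk.byteLength;
	}
	return buffer;
}
```

The caller would then hand the buffer to Bun.write(modelFile, buffer) only after the size check passes, so exists() on the next startup never sees a corrupt file.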

Comment on lines +9 to +18
async ({ body, status }) => {
const { texts } = body;
const embeddings = await embeddingManager?.getEmbedding(texts);
if (!embeddings) {
return status(500, { message: "cannot generate embedding" });
}
return status(200, {
embeddings,
});
},

⚠️ Potential issue | 🟠 Major

Preserve the declared 500 response shape when embedding generation throws.

embeddingManager?.getEmbedding(texts) can still reject, and that currently escapes the route instead of returning the { message } payload declared in response.500.

Suggested fix
 		async ({ body, status }) => {
-			const { texts } = body;
-			const embeddings = await embeddingManager?.getEmbedding(texts);
-			if (!embeddings) {
-				return status(500, { message: "cannot generate embedding" });
-			}
-			return status(200, {
-				embeddings,
-			});
+			try {
+				const { texts } = body;
+				const embeddings = await embeddingManager?.getEmbedding(texts);
+				if (!embeddings) {
+					return status(500, { message: "cannot generate embedding" });
+				}
+				return status(200, {
+					embeddings,
+				});
+			} catch {
+				return status(500, { message: "cannot generate embedding" });
+			}
 		},
📝 Committable suggestion


Suggested change
async ({ body, status }) => {
const { texts } = body;
const embeddings = await embeddingManager?.getEmbedding(texts);
if (!embeddings) {
return status(500, { message: "cannot generate embedding" });
}
return status(200, {
embeddings,
});
},
async ({ body, status }) => {
try {
const { texts } = body;
const embeddings = await embeddingManager?.getEmbedding(texts);
if (!embeddings) {
return status(500, { message: "cannot generate embedding" });
}
return status(200, {
embeddings,
});
} catch {
return status(500, { message: "cannot generate embedding" });
}
},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/embedding/src/index.ts` around lines 9 - 18, The handler currently
awaits embeddingManager?.getEmbedding(texts) without catching rejections, so
thrown errors escape the route; wrap the call in a try/catch around the await to
catch any exceptions from embeddingManager.getEmbedding and on failure return
status(500, { message: "cannot generate embedding" }) (you can also log the
error); target the anonymous async handler and the
embeddingManager?.getEmbedding call and ensure the catch returns the declared
500 payload shape.

Comment thread packages/env/src/index.ts
export const env = createEnv({
server: {
DATABASE_URL: z.url(),
DATABASE_URL: z.url().optional(),

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check how DATABASE_URL is used in the codebase to verify if undefined handling exists
rg -n "DATABASE_URL|env\.DATABASE_URL" --type=ts -C2

Repository: project-cvsa/cvsa

Length of output: 836


🏁 Script executed:

cat -n packages/db/src/prisma.ts

Repository: project-cvsa/cvsa

Length of output: 1809


🏁 Script executed:

cat -n packages/db/prisma.config.ts

Repository: project-cvsa/cvsa

Length of output: 459


🏁 Script executed:

# Check if DATABASE_URL is wrapped in any error handling or conditional logic
rg -n "DATABASE_URL" --type=ts -B5 -A5

Repository: project-cvsa/cvsa

Length of output: 1760


🏁 Script executed:

# Check for .env files and documentation
find . -name ".env*" -o -name "README*" | head -20

Repository: project-cvsa/cvsa

Length of output: 419


🏁 Script executed:

# Look for test setup or configuration that might handle missing DATABASE_URL
rg -n "DATABASE_URL|process.env.DATABASE_URL" --type=ts -l | head -10

Repository: project-cvsa/cvsa

Length of output: 141


🏁 Script executed:

# Check if there's any guard logic around prisma initialization
rg -n "prisma|PrismaClient" --type=ts -B2 -A2 | grep -A2 -B2 "if\|try\|catch"

Repository: project-cvsa/cvsa

Length of output: 328


🏁 Script executed:

cat ./packages/db/.env.example

Repository: project-cvsa/cvsa

Length of output: 118


🏁 Script executed:

cat ./apps/backend/.env.example

Repository: project-cvsa/cvsa

Length of output: 188


🏁 Script executed:

# Check if there's any conditional initialization or error handling at app startup
rg -n "DATABASE_URL|prisma|PrismaClient" apps/backend --type=ts -A3 | head -40

Repository: project-cvsa/cvsa

Length of output: 2098


Fix missing error handling for optional DATABASE_URL.

Making DATABASE_URL optional is unsafe. The code passes env.DATABASE_URL directly to PrismaPg constructor (line 6 of packages/db/src/prisma.ts) and Prisma config (line 10 of packages/db/prisma.config.ts) without any null/undefined checks. If DATABASE_URL is missing, Prisma initialization will fail with a runtime error. Either revert DATABASE_URL to required, skip Prisma initialization when the URL is unavailable, or add explicit error handling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/env/src/index.ts` at line 6, DATABASE_URL is marked optional but is
consumed unguarded by Prisma, causing runtime failures; either make DATABASE_URL
required in the zod schema (change DATABASE_URL to a required url) or add
explicit guards where Prisma is initialized (check env.DATABASE_URL before
calling the PrismaPg constructor and before using it in prisma config/prisma
client setup) and throw a clear descriptive error or skip Prisma initialization
when missing; reference the DATABASE_URL schema entry and the PrismaPg
constructor / prisma config initialization points to implement the chosen fix.
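If DATABASE_URL stays optional in the schema, one option is to fail fast with a descriptive error at the point of use. requireEnv below is an illustrative helper, not part of @cvsa/env:

```typescript
// requireEnv is an illustrative helper, not part of `@cvsa/env`: it throws a
// descriptive error the moment an optional variable is actually needed,
// instead of letting the Prisma constructor fail opaquely.
function requireEnv(name: string, value: string | undefined): string {
	if (!value) {
		throw new Error(`${name} must be set to initialize the database client`);
	}
	return value;
}

// e.g. new PrismaPg({ connectionString: requireEnv("DATABASE_URL", env.DATABASE_URL) })
const url = requireEnv("DATABASE_URL", "postgres://localhost:5432/cvsa");
console.log(url);
```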

@alikia2x alikia2x force-pushed the ref/observability branch from 8303412 to 5ab461a Compare April 4, 2026 17:28
@alikia2x alikia2x force-pushed the ref/observability branch from 5ab461a to a5341e4 Compare April 4, 2026 17:29
@alikia2x
Contributor Author

alikia2x commented Apr 4, 2026

@coderabbitai Can you write a commit message body that describes the motivation and the changes involved?
Make it less than 10 sentences please.

@coderabbitai

coderabbitai Bot commented Apr 4, 2026

Tip

For best results, initiate chat on the files or code changes.

@alikia2x Here's a commit message body:

Introduce distributed tracing and structured logging to improve observability
and debuggability of the backend service.

Extract environment configuration, logging, and observability concerns into
dedicated packages (`@cvsa/env`, `@cvsa/logger`, `@cvsa/observability`), and move
the embedding inference server into its own `@cvsa/embedding` package. Add an
OpenTelemetry SDK setup with a traceTask helper, and wrap all backend request
handlers with it to emit spans. Introduce a Pino-based request logger with
gin-style formatting and an application logger that bridges to OpenTelemetry.
Add an i18n middleware for per-request locale detection, replace hard-coded
error message strings with i18n keys, and include trace IDs in error responses
via X-Trace-ID headers. Refactor the error handler into a composable Elysia
plugin and restructure packages/core source files under a src/ directory.
Add locale bundles for English and Chinese backend error messages, and expose
development-only endpoints for heap snapshot inspection and GC triggering.

@alikia2x alikia2x merged commit 99b8d49 into develop Apr 4, 2026
2 checks passed
@alikia2x alikia2x deleted the ref/observability branch April 4, 2026 17:36