fix: address round-2 review findings + document deliberate decisions (#2)
Conversation
Fixes:
- 7 uppercase error strings → lowercase (content, mcp, notion, oreilly)
- 4 packages add ErrConflict sentinel (auth, monitor, session, task)
- mcp: merge 4 duplicate store field pairs (projectWriter→projects, etc.)
- mcp: rename Get-prefix input types (GetSessionNotesInput→SessionNotesInput)
- auth: generateState panic → return (string, error)
- pipeline: NewContentSync 7 params → ContentSyncDeps struct
- topic: remove compile-time assertion (concrete coupling)
- note.Note, tag.Tag, tag.Alias: add JSON struct tags
- db: add doc.go package documentation

Deliberate decision docs (to stop review cycle):
- mcp/server.go: package doc explains transport gateway role
- pipeline/pipeline.go: package doc explains orchestration + sub-struct design
- notion/notion.go: package doc explains unified package rationale
- content/store.go: document tx-in-store exception with rationale
- api/api.go: document why Response.Data is any (not generic)
```go
// Flip a character in the signature portion
tampered := state[:len(state)-1] + "X"
if h.validateState(tampered) {
```
The test for a tampered signature modifies only the last character of the state string, which may not always guarantee a failed HMAC validation, especially if the last character is padding or does not affect the decoded bytes. For a more robust test, consider flipping a random byte in the base64-decoded signature portion to ensure the HMAC validation fails reliably.
Recommended solution:
```go
// Tamper with the signature by flipping a byte in the decoded signature
parts := strings.Split(state, ".")
sig, _ := base64.URLEncoding.DecodeString(parts[2])
sig[0] ^= 0xFF // flip bits in the first byte
parts[2] = base64.URLEncoding.EncodeToString(sig)
tampered := strings.Join(parts, ".")
if h.validateState(tampered) {
	t.Errorf("validateState(tampered) = true, want false")
}
```

```diff
 h := newTestHandler(t)
 h.oauthCfg = buildOAuthConfig("cid", "csec", "https://example.com/cb")

-state := h.generateState()
+state, err := h.generateState()
+if err != nil {
+	t.Fatalf("generateState() unexpected error: %v", err)
+}
 req := httptest.NewRequest(http.MethodGet, "/api/auth/google/callback?state="+state, http.NoBody)
 w := httptest.NewRecorder()
 h.GoogleCallback(w, req)
```
Missing HTTP Status Code Assertion
The test for GoogleCallback with missing code does not verify the HTTP status code returned by the handler. This omission could allow regressions where the handler returns an incorrect status (e.g., 500 or 200 instead of the expected redirect status). To improve robustness, add an assertion for the expected status code:
```go
if w.Code != http.StatusOK {
	t.Errorf("GoogleCallback(missing code) status = %d, want %d", w.Code, http.StatusOK)
}
```

```diff
 const n = 20
 seen := make(map[string]struct{}, n)
 for range n {
-	state := h.generateState()
+	state, err := h.generateState()
+	if err != nil {
+		t.Fatalf("generateState() unexpected error: %v", err)
+	}
 	if _, dup := seen[state]; dup {
 		t.Errorf("generateState() produced duplicate: %q", state)
 	}
```
Loop Syntax: 'for range n'
The loop `for range n` over an integer is only valid on Go 1.22 and later; on older toolchains it does not compile. If the module targets an earlier Go version, replace it with a standard counted loop:

```go
for i := 0; i < n; i++ {
	// ...
}
```

This ensures the test actually runs n iterations and checks for uniqueness as intended.
```go
if pgErr, ok := errors.AsType[*pgconn.PgError](err); ok && pgErr.Code == pgerrcode.UniqueViolation {
	return ErrConflict
```
Non-idiomatic error type assertion:
The code uses errors.AsType[*pgconn.PgError](err) to check for a Postgres unique violation. The idiomatic Go approach is to use errors.As(err, &pgErr) for error type extraction. This improves maintainability and clarity:
```go
var pgErr *pgconn.PgError
if errors.As(err, &pgErr) && pgErr.Code == pgerrcode.UniqueViolation {
	return ErrConflict
}
```

Recommend updating to the idiomatic pattern.
```diff
 pool, ok := s.dbtx.(interface {
 	Begin(ctx context.Context) (pgx.Tx, error)
 })
 if !ok {
-	return nil, fmt.Errorf("CreateContent requires a connection with Begin support")
+	return nil, fmt.Errorf("create content requires a connection with begin support")
 }

 tx, err := pool.Begin(ctx)
```
Code Duplication:
The transaction pool assertion and begin logic are repeated in both CreateContent and UpdateContent methods. This repetition increases maintenance burden and risk of inconsistency if changes are required in the future.
Recommendation:
Extract the transaction initialization logic into a private helper method, e.g., beginTx(ctx), to centralize transaction handling and improve maintainability.
```diff
 func (s *Server) readOReillyChapter(ctx context.Context, _ *mcp.CallToolRequest, input *ReadChapterInput) (*mcp.CallToolResult, ReadChapterOutput, error) {
 	if s.oreilly == nil {
-		return nil, ReadChapterOutput{}, fmt.Errorf("O'Reilly search is not configured (ORM_JWT not set)")
+		return nil, ReadChapterOutput{}, fmt.Errorf("oreilly search is not configured (ORM_JWT not set)")
 	}
 	if input.ArchiveID == "" || input.Filename == "" {
 		return nil, ReadChapterOutput{}, fmt.Errorf("archive_id and filename are required")
```
Missing format validation for ArchiveID and Filename
The function checks for non-empty values but does not validate the format of input.ArchiveID and input.Filename. This could lead to backend errors or security issues if malformed or malicious values are passed. Implement format validation for both fields before making backend calls.
Recommended solution:
```go
if !isValidArchiveID(input.ArchiveID) || !isValidFilename(input.Filename) {
	return nil, ReadChapterOutput{}, fmt.Errorf("valid archive_id and filename are required")
}
```

Where isValidArchiveID and isValidFilename enforce the expected formats.
```diff
 func (s *Server) searchSemanticNotes(ctx context.Context, query string, limit int) ([]note.SimilarityResult, error) {
-	if s.queryEmbedder == nil || s.semanticNotes == nil {
+	if s.queryEmbedder == nil || s.notes == nil {
 		return nil, nil
 	}
 	vec, err := s.queryEmbedder.EmbedQuery(ctx, query)
 	if err != nil {
 		return nil, err
 	}
-	return s.semanticNotes.SearchBySimilarity(ctx, vec, limit)
+	return s.notes.SearchBySimilarity(ctx, vec, limit)
 }
```
The searchSemanticNotes function returns (nil, nil) if s.queryEmbedder or s.notes is nil. This can mask configuration or initialization errors, making it difficult to detect misconfiguration during runtime. It is preferable to return an explicit error in this case, e.g.:
```go
if s.queryEmbedder == nil || s.notes == nil {
	return nil, errors.New("semantic search not configured: missing embedder or notes store")
}
```

This approach improves observability and aids debugging.
```diff
 		Annotations: readOnly,
 	}, s.getCollectionStats)

-	if s.systemStatus != nil {
+	if s.stats != nil {
 		addTool(s, &mcp.Tool{
 			Name:        "get_system_status",
 			Description: "Get system observability: flow run stats, feed health, pipeline summaries, and recent flow runs. Scopes: summary (flow stats + feed health), pipelines (per-flow-name aggregation), flows (recent individual runs).",
```
The get_system_status tool is only registered if s.stats != nil, which is correct. However, for defensive programming, consider adding a nil check for s.stats inside the s.getSystemStatus handler itself. This ensures that if the handler is ever invoked when s.stats is nil (due to future code changes or registration bugs), it will fail gracefully rather than panic.
Recommended solution:

```go
func (s *Server) getSystemStatus(...) ... {
	if s.stats == nil {
		return ... // return an error or empty result
	}
	// existing logic
}
```

```go
	params.Project = s.resolveProjectSlug(ctx, *updated.ProjectID)
}
```
If s.resolveProjectSlug fails and returns nil, the params.Project field will be set to nil, which may reduce the usefulness of the audit trail for downstream consumers expecting a project slug. Consider logging a warning when the project slug cannot be resolved to aid in debugging missing project attributions.
Recommended solution:
```go
if updated.ProjectID != nil {
	params.Project = s.resolveProjectSlug(ctx, *updated.ProjectID)
	if params.Project == nil {
		s.logger.Warn("complete_task: project slug not found for activity event", "project_id", *updated.ProjectID)
	}
}
```

```go
if pgErr, ok := errors.AsType[*pgconn.PgError](err); ok && pgErr.Code == pgerrcode.UniqueViolation {
	return nil, ErrConflict
```
Non-standard error type matching:
The generic errors.AsType[*pgconn.PgError](err) helper is not available in the standard errors package on older Go toolchains, so the code may fail to build there, and any substitute utility with different semantics could miss unique violation errors, resulting in inconsistent error handling.
Recommended solution:
Use the standard errors.As function for error type matching:

```go
var pgErr *pgconn.PgError
if errors.As(err, &pgErr) && pgErr.Code == pgerrcode.UniqueViolation {
	return nil, ErrConflict
}
```

This ensures reliable error type matching and consistent handling of unique constraint violations.
🤖 Augment PR Summary

Summary: This PR applies round-2 audit fixes and adds design notes to document intentional tradeoffs.

Changes:

Technical Notes: Error strings are normalized to lowercase per project conventions, and transaction-in-store behavior is explicitly documented where used.
```go
pool:    deps.Pool,
content: deps.Content,
topics:  deps.Topics,
fetcher: deps.Fetcher,
```
NewContentSync stores deps.Topics/deps.Fetcher without validation, but methods like SyncAllFromGitHub/resolveTopics call cs.fetcher/cs.topics unconditionally, so missing deps can panic at runtime. Other locations where this applies: backend/internal/server/server_test.go:162.
Severity: medium
```diff
 // writes remain opt-in at the wiring site.
 func WithProjectWriter(w *project.Store) ServerOption {
-	return func(s *Server) { s.projectWriter = w }
+	return func(s *Server) { s.projects = w }
```
After merging projectWriter into projects, the “writes remain opt-in” note is misleading: s.projects is required by read tools and update_project_status is always registered, so the nil gate can’t realistically trigger. Other locations where this applies: backend/internal/mcp/write.go:857.
Severity: medium
```diff
@@ -0,0 +1,4 @@
+// Package db contains sqlc-generated query and model types for PostgreSQL.
+// All files in this package are generated by sqlc — do not edit by hand.
```

```diff
 // Seed with realistic values
-f.Add(h.generateState())
+seed, _ := h.generateState()
```
- db/doc.go: 'All files' → 'All other files' (doc.go itself is hand-written)
- mcp/server.go: WithProjectWriter comment updated after store field merge
Dashboard (new page at /admin/dashboard):
- Trends analytics answering "is the system heading in the right direction?" (not stat cards)
- 4-card grid: execution velocity, goal health, learning signals, content+inbox
- Every metric shows trend direction + comparison with last period
- Compact directive health footer
- DashboardTrends model + DashboardService with mock data
- Route + sidebar nav added (between Today and Inbox)

Schema alignment fixes (round 2):
- #1: GoalBreadcrumb simplified — removed milestone_id/title (no FK exists)
- #2: streak marked as derived (computed from sessions, not stored)
- #3: Added missing learning plan management endpoints (add/remove/reorder/update)
- #4: DirectiveLifecycle simplified to 3 values + has_report boolean
- #5: Domain examples changed from 'algorithms' to 'leetcode'
- #6: Added GET /api/admin/learn/review-queue (FSRS detail endpoint)
- #7: estimated_minutes marked as derived from tasks table
Summary
Round-2 audit findings (Claude Code + Codex) — all actionable items fixed, deliberate decisions documented to stop review cycle.
Fixes (10 items)
Deliberate decision docs (stop review cycle)
- mcp/server.go: transport gateway role, sub-packaging rejected
- pipeline/pipeline.go: orchestration + sub-struct design
- notion/notion.go: unified package rationale
- content/store.go: tx-in-store exception with rationale
- api/api.go: why Response.Data is `any` (not generic)
- `go build ./...` ✅
- `go vet ./...` ✅
- `go test ./...`: 35 packages all pass ✅