Refactor/admin v2 alignment #4
- Add complete implementation plan for next week's session_notes strengthening
- Document current validation gaps: directive/report metadata unvalidated
- Research findings: outbox pattern, SKIP LOCKED, CloudEvents, JSONB CHECK
- Decision log: CHECK + Go validation dual-layer, consumed_at columns, expression index on target, no pg_jsonschema/LISTEN-NOTIFY/saga
- Update known issues table with fix status and scheduled items
- Backward compatibility strategy for existing data
- Add decision doc from Design Review Board (4 AI reviewers, 6 reports)
- Add discussion context doc for cross-AI review
- Incorporate Claude Code implementation validation:
  - Relax report in_response_to to nullable (soft validate in Go)
  - Drop redundant source CHECK before adding actor FK
  - Server auto-generates correlation_id (not agent responsibility)
- Update migration SQL, Go validation, and agent protocol accordingly
- Actor table: add role column (department vs executor) to resolve semantic mixing of departments and execution identities
- Replace NOT VALID with backfill: table has hundreds of rows, no production traffic, NOT VALID solves a problem that doesn't exist
- correlation_id: define as thread-level (not pair-level), server auto-managed, write semantic boundary into doc
- Add deferred items: table splitting, report_kind, actor role FK enforcement, note_type lookup table
- Audit all 20 tables for semantic errors similar to session_notes
- collected_data → feed_entries (aligns with Go package + Miniflux convention)
- tracking_topics: confirmed in use (monitor package), needs COMMENT not DROP
- Unified participant model: department vs agent vs human roles
- Empty string CHECK → nullable (tasks.energy/priority, projects.cadence)
- 8 questions for Koopa to confirm before migration
- P0-P6 severity grading with full change list
…mplification

- Normalize topics: feeds.topics TEXT[] → feed_topics junction table
- Remove feed_entry_topics: entries inherit topics from feed via JOIN
- tracking_topics → topic_monitors FK to topics (name from topics, not duplicated)
- Remove collected_data.topics column (redundant with feed inheritance)
- Consolidate all 19 changes into ordered migration plan
- 9 questions for Koopa to confirm before implementation
Complete schema redesign incorporating all audit findings:

Tables renamed:
- session_notes → messages + journal + insights (IPC split)
- collected_data → feed_entries
- tracking_topics → topic_monitors (FK to topics)
- obsidian_notes → notes (platform-agnostic)
- notion_sources → sources (platform-agnostic)
- activity_events → events
- fsrs_cards → review_cards
- fsrs_review_logs → review_logs
- task_skip_log → task_skips
- collected_status enum → feed_entry_status

New tables:
- platform + participant (hierarchical identity model)
- feed_topics (junction, replaces feeds.topics TEXT[])
- topic_monitors (replaces tracking_topics, FK to topics)

Structural changes:
- Empty string CHECK → nullable (tasks.energy/priority/recur_unit, projects.expected_cadence)
- tasks.assignee → FK to participant
- insights: hypothesis/status promoted from JSONB to columns
- events.event_type: CHECK constraint with 11 enumerated values
- notes.difficulty: CHECK constraint (easy/medium/hard)
- All forward-reference ALTERs eliminated via table ordering
- Seed data separated into 002_seed (schema/data split)
- INSERT with correct values (no INSERT + UPDATE pattern)
Messages IPC constraints added:
- chk_report_no_priority: reports cannot have priority
- chk_no_self_target: cannot send directive to self
- chk_response_only_report: only reports can have in_response_to
- chk_ack_must_be_target: only target can acknowledge

Tasks state consistency:
- chk_completed_at_consistency: done ↔ completed_at paired
- chk_recurrence_pair: recur_interval ↔ recur_unit paired

Naming fixes:
- sources.database_id → external_id (provider-agnostic)
- notes.tags → raw_tags (ingestion snapshot, not canonical)
- review_cards.tag → tag_id UUID FK to tags (canonical reference)
- feed_entries.url_hash: DEFAULT '' → nullable

Added COMMENTs:
- messages.target/acknowledged_by: Go validates platform=claude-cowork
- tasks.assignee: notes participant seed dependency
- tasks.updated_at: explicitly application-managed, no trigger
- insights.source: FK to participant
…al, dedup contract

Review findings incorporated:
- events.event_type: TEXT CHECK → CREATE TYPE ENUM (closed contract)
- notes.context: COMMENT updated to QUASI-CANONICAL — actively used for MCP search, not a pure raw field
- feed_entries.url_hash: nullable → NOT NULL (dedup contract, not optional metadata — pipeline must compute before INSERT)
- feed_entries TABLE COMMENT: explicit that the topic retroactive change is a deliberate product-semantics choice, not a small trade-off
- users.role COMMENT: placeholder with deletion trigger condition
- goals.area, goals.quarter: NOT NULL DEFAULT '' → nullable
- projects.area, projects.role: NOT NULL DEFAULT '' → nullable

Schema header: three governing principles documented:
1. Raw ingestion vs canonical truth vs quasi-canonical
2. Closed contract (ENUM) vs evolving taxonomy (CHECK) vs open (TEXT)
3. Absence = NULL, never empty string
Blocker 1 — messages.in_response_to parent-kind: the DB cannot cross-reference rows in CHECK constraints, and a trigger would violate project rules. The COMMENT now explicitly documents the limitation: Go MUST validate that the referenced parent is a directive. If the invariant is violated, the causal model is corrupted. Splitting into directives/reports tables is noted as the future resolution path.

Blocker 2 — feeds empty string violations: etag, last_modified, last_error, disabled_reason all changed from NOT NULL DEFAULT '' to nullable. COMMENTs explain the NULL semantics for each. Schema now consistent with principle 3.

Non-blocking improvements:
- review_cards: COALESCE sentinel UUID → two partial unique indexes (WHERE tag_id IS NULL / IS NOT NULL). Cleaner.
- updated_at: global policy added to schema header (principle 4). Application-managed, no triggers, project-wide convention.
…t doc

Split messages table into directives and reports:
- directives: target NOT NULL, priority NOT NULL, acknowledged_at/by
- reports: in_response_to FK to directives(id) — DB now guarantees the parent is a directive. No more cross-row CHECK limitation.
- reports have no target, no priority, no acknowledgment fields (clean separation, no NULL waste)

Added docs/SCHEMA-V2-MIGRATION-IMPACT.md:
- All Go package rename/split impacts
- All MCP tool renames (save_session_note → send_directive etc)
- All sqlc query.sql changes needed
- Cowork instructions update checklist
- Retention cron simplification
- 8-step implementation order
Every table now has TABLE and COLUMN level COMMENTs covering:
- Semantic purpose of each table
- Every nullable column explains when/why NULL
- All FK columns explain CASCADE/SET NULL behavior
- All JSONB columns document expected structure
- All updated_at columns marked as application-managed
- All quasi-canonical fields marked explicitly
- Junction tables document the relationship they represent
Q1: report has no target — deliberate. Recipients are implicit via
in_response_to (directive source) or morning briefing (all).
Q2: directive source not restricted to HQ — cross-department
directives are a valid future scenario. Go validates platform.
Q3: directive:report cardinality is 1:N. Progress reports, completion
reports, follow-ups all valid. Completion = metadata-level signal.
Route A chosen: directives are scoped to the claude-cowork platform. Cross-platform work dispatch (HQ → Claude Code) uses tasks.assignee.
- directives TABLE COMMENT: explicit scope declaration
- directives.source COMMENT: any Cowork department can issue (not HQ-only)
- directives.target COMMENT: claude-cowork only, cross-platform out of scope
- reports TABLE COMMENT: completion signal upgrade path documented
- Removed contradictory hq → koopa0.dev example from design rationale
- participant.name COMMENT: 'messages' → directives/reports/journal/insights
- reports.source COMMENT: explicitly scoped to claude-cowork platform, matching directives scope. Go validates source.platform = claude-cowork.
…, seed safety

- reports.source COMMENT: 'Scoped to claude-cowork' → 'Currently limited by Go validation. May expand if cross-platform reporting needed.'
- directives.note_date COMMENT: documents inherited naming, semantically the issued date
- reports.note_date COMMENT: same, semantically the reported date
- Identity model section: IMPORTANT note that platform/participant INSERTs are schema-critical, not optional seed data
001_initial.down.sql: verified correct — 35 tables, 11 types, 2 views, 1 extension, all matching up.sql exactly.
002_seed.up.sql: verified complete — topics, tags, feeds (with correct priority inline), feed_topics junction (DO block). No seed data needed for sources/topic_monitors (runtime inserts).
002_seed.down.sql: correct — cleans seed data in dependency order.
docs/COWORK-SYSTEM.md: marked IPC sections as outdated. Points to migrations/001_initial.up.sql as source of truth. Full rewrite deferred to after Go code migration.
- sources.poll_interval: DB now validates interval format natively
- review_logs.state: CHECK BETWEEN 0 AND 3, consistent with rating BETWEEN 1 AND 4
- Capability flags on participant (5 booleans) replace platform hard-coding
- participant_schedules table with execution_backend, trigger_type, missed_run_policy
- Seed values for all 8 participants + 4 known schedules
- Execution backend capability matrix (documented, not schema-enforced)
- Directive/report COMMENT updates: Cowork-internal → capability-driven
- Go validation changes: platform check → capability check
- Upgrade path: when to promote to a routing_policies table
- Clear separation: locked (first-level semantics) vs deferred (second-level policy)
participant table:
- Add 5 capability flags: can_issue_directives, can_receive_directives,
can_write_reports, task_assignable, can_own_schedules
- Seed values set per participant (not per platform)
- platform COMMENT: informational only, capabilities drive routing
New tables:
- participant_schedules: participant-owned standing instructions with
trigger_type, execution_backend, missed_run_policy, expected_outputs
- schedule_runs: append-only execution history (avoids consecutive_failures
pattern from feeds)
Directive/report COMMENTs:
- All 'Cowork-internal' / 'platform = claude-cowork' references replaced
with capability-driven language (can_issue_directives = true, etc.)
- No snapshot descriptions ('Currently: X') — only check semantics
execution_backend: cowork_desktop, claude_code (merged), github_actions,
koopa_native. Claude Code cloud/desktop/loop merged — split when needed.
Design doc updated with review feedback:
- task_assignable naming (precision > consistency)
- schedule_runs from day one
- expected_outputs convention documented in COMMENT
- cross-table invariant (can_own_schedules cascade) documented
participant_schedules:
- Add UNIQUE (participant, name) — no duplicate schedule names per participant
- Add chk_manual_no_expr — manual triggers must have NULL schedule_expr
- schedule_expr COMMENT: specify Go time.Duration format for intervals, fix misleading '1h, 30m' example (comma implied two values)
- missed_run_policy COMMENT: add 'backend support may vary' caveat
- schedule_runs.status COMMENT: success = no execution error, not guaranteed output completeness (separate monitoring concern)

Design doc:
- Seed matrix header: recv_task → task_assignable (naming consistency)
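The chk_manual_no_expr constraint and the schedule_expr format spec can be mirrored in the Go layer. A sketch under the assumption that trigger types are plain strings and schedule_expr maps to a nullable *string; names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// validateSchedule mirrors chk_manual_no_expr plus the schedule_expr
// format spec: interval triggers must carry a Go time.Duration string.
func validateSchedule(triggerType string, scheduleExpr *string) error {
	switch triggerType {
	case "manual":
		// chk_manual_no_expr: manual triggers must have NULL schedule_expr.
		if scheduleExpr != nil {
			return errors.New("manual triggers must have NULL schedule_expr")
		}
	case "interval":
		if scheduleExpr == nil {
			return errors.New("interval triggers require schedule_expr")
		}
		// "1h" or "30m" parse; "1h, 30m" (the misleading example) does not.
		if _, err := time.ParseDuration(*scheduleExpr); err != nil {
			return fmt.Errorf("schedule_expr is not a Go duration: %w", err)
		}
	}
	return nil
}

func main() {
	expr := "1h"
	fmt.Println(validateSchedule("interval", &expr)) // <nil>
	fmt.Println(validateSchedule("manual", &expr))   // error
}
```

Note how the comma example from the COMMENT fix is rejected by time.ParseDuration itself, so the format spec needs no extra parser.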
Design doc was out of sync with 001_initial.up.sql after last round. Now matches: UNIQUE(participant,name), chk_manual_no_expr, schedule_expr format spec (Go time.Duration), missed_run_policy backend caveat, schedule_runs.status precision, seed matrix header task_assignable.
Silent fallback is the hardest bug to debug. COMMENT now explicitly sets the expectation that Go layer logs when backend cannot honor the requested missed_run_policy.
All four capability-gated FK columns now have consistent COMMENTs:
- directives.source: Go validates can_issue_directives = true
- directives.target: Go validates can_receive_directives = true
- reports.source: Go validates can_write_reports = true
- tasks.assignee: Go validates task_assignable = true
DDL constraint fixes:
- All 63 FKs now have explicit ON DELETE (RESTRICT/CASCADE/SET NULL)
- schedule_runs: add chk_error_on_failure CHECK
- review_queue: bidirectional reviewed_at/status CHECK
- project_aliases.project_id: NOT NULL
- learning_item_concepts: add created_at column
- reports.in_response_to: ON DELETE SET NULL
- Down migration: fix FK ordering (review_cards before learning_items)

New indexes (8):
- goals.area_id, projects.area_id (PARA dashboard)
- directives unacked (morning briefing)
- tasks assignee/created_by (GTD operational)
- feed_entries unread+relevance (feed dashboard)

Comment additions (12):
- flow_runs: full column coverage (was zero comments)
- review_cards.tag_id: CASCADE reasoning documented
- note_tags, event_tags: table comments added
- review_queue: all columns documented
- attempts: append-only noted
- notes: nullable fields convention, source clarification
- search_vector: simple config rationale
- feed_entries: title, original_content documented
- Cross-domain and contradiction invariants on concepts, item_relations

Document sync:
- Sign-off: 46 tables, 0 views, four-party review recorded
- Design doc §10: index count corrected
- Review prompt removed (served its purpose)
Covers all 7 new tables + the review_cards modification:
- Entity semantics and required store operations
- Session lifecycle write path (the primary data flow)
- Domain validation shared constant set
- Go-enforced invariants (cross-domain, contradictions, acyclicity)
- Concept merge procedure (RESTRICT-aware)
- MCP tool migration map (JSONB → first-class tables)
- Key queries with index paths
- Backfill strategy from existing notes
- Updated implementation order (steps 12-18)

Does not prescribe package structure — learning may be refactored.
Covers the full authentication gap in the IPC protocol:
- Problem: source param is self-declared and unvalidated
- 3-step validation chain: authentication → binding → authorization
- Platform identity by transport (stdio flag, static tokens, OAuth binding)
- Platform selection page for OAuth flows
- Participant store design (reads existing schema)
- MCP admin tools for participant management
- 5-phase implementation plan
- Anthropic infra constraints (token is the only identity signal)
- Cross-references to all relevant source files
v1 (rejected): OAuth platform binding, per-platform tokens, platform selection page, --platform flag. Over-engineered because the MCP protocol has no project identity and Anthropic proxies obscure the caller.

v2 (approved): source param validation + capability flags at the tool-call level. Trust model: all callers are the owner's Claude instances; identity comes from system prompt convention. One DB query per IPC write: participant exists + has capability. Implementation: participant store + IPC tool hardening + admin tools. No OAuth changes, no transport changes, no schema changes.
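The "one DB query per IPC write" check can be sketched as a single lookup plus a capability gate. The participant lookup is shown here as an in-memory map standing in for a SELECT on the participant table; the capability flag names come from the schema notes above, everything else is illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// Participant carries the capability flags from the participant table.
type Participant struct {
	Name                 string
	CanIssueDirectives   bool
	CanReceiveDirectives bool
	CanWriteReports      bool
	TaskAssignable       bool
}

// authorize is the single per-write check: the self-declared source must
// resolve to a known participant that holds the capability the tool needs.
// The map stands in for one SELECT against the participant table.
func authorize(byName map[string]Participant, source, capability string) error {
	p, ok := byName[source]
	if !ok {
		return fmt.Errorf("unknown participant: %s", source)
	}
	allowed := map[string]bool{
		"can_issue_directives":   p.CanIssueDirectives,
		"can_receive_directives": p.CanReceiveDirectives,
		"can_write_reports":      p.CanWriteReports,
		"task_assignable":        p.TaskAssignable,
	}[capability]
	if !allowed {
		return errors.New(source + " lacks " + capability)
	}
	return nil
}

func main() {
	ps := map[string]Participant{
		"hq":      {Name: "hq", CanIssueDirectives: true},
		"library": {Name: "library", CanWriteReports: true},
	}
	fmt.Println(authorize(ps, "hq", "can_issue_directives")) // <nil>
	fmt.Println(authorize(ps, "library", "can_issue_directives"))
}
```

This keeps the trust model honest: the source string is still self-declared, but an unknown name or a missing flag fails closed before any write.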
Connects daily plan items to their planning reasoning journal entry. Symmetric with learning_sessions.journal_id — session produces journal entry, journal_id links back. Enables quarterly review queries: which reasoning drove which task selections, especially when re-planning happens on the same day.
…ools

Phase 1 tools: morning_context, reflection_context, search_knowledge, capture_inbox, advance_work, plan_day, write_journal.

New stores: internal/journal/, internal/daily/.
Refactored: internal/task/ (PostgreSQL-native, no Notion dependency, inbox/someday status, created_by field).
Deleted: internal/session/ (→journal), internal/notion/ (removed), internal/monitor/, internal/learning/, internal/pipeline/, internal/reconcile/, internal/server/, cmd/app/.

Query files fixed for v2 schema: 11 files with table/column renames (obsidian_notes→notes, activity_events→events, collected_data→feed_entries, tasks.my_day removed, projects.area→area_id, etc).

Design: .claude/plans/mcp-v2-design.md
Policy: .claude/rules/mcp-decision-policy.md
…alidate task state transitions

DeleteByDate → DeletePlannedByDate: only removes 'planned' items, preserving done/deferred/dropped as historical records.

advance_work now validates current task status before transitioning:
- inbox: only clarify, defer
- todo: start, complete, defer
- in-progress: complete, defer
- someday: clarify, start
- done: no transitions allowed
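The transition table above can be encoded directly as data rather than branching logic. A minimal sketch; the status and action names are taken from the list, the function name is illustrative:

```go
package main

import "fmt"

// allowedTransitions encodes the advance_work state table: each current
// status maps to the set of permitted actions. Absent statuses and
// absent actions both read as false, so "done" is naturally terminal.
var allowedTransitions = map[string]map[string]bool{
	"inbox":       {"clarify": true, "defer": true},
	"todo":        {"start": true, "complete": true, "defer": true},
	"in-progress": {"complete": true, "defer": true},
	"someday":     {"clarify": true, "start": true},
	"done":        {}, // terminal: no transitions allowed
}

func canTransition(status, action string) bool {
	return allowedTransitions[status][action]
}

func main() {
	fmt.Println(canTransition("inbox", "clarify"))  // true
	fmt.Println(canTransition("done", "start"))     // false
	fmt.Println(canTransition("inbox", "complete")) // false: inbox cannot skip to done
}
```

A data-shaped table like this also makes the 20-case test matrix mentioned later trivial to generate: iterate statuses × actions and compare against expectations.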
… IPC tools

Phase 2 tools: propose_commitment (readOnly, HMAC-signed token), commit_proposal (creates entity from verified token), goal_progress, file_report, acknowledge_directive, track_insight.

New stores: internal/directive/, internal/report/, internal/insight/.
Extended: internal/goal/ (CreateGoal, CreateMilestone, ActiveGoals, GoalByID).

Proposal flow: stateless HMAC-SHA256 signed token with 10-min TTL. Token encodes entity type + resolved fields + nonce + expiry. commit_proposal verifies signature and expiry before writing.

Total registered tools: 13 (7 Phase 1 + 6 Phase 2).
…me mapping

Phase 3 tools: start_session, record_attempt, end_session, learning_dashboard.
New store: internal/learnsession/ (sessions, attempts, observations, concepts, learning items — all via sqlc queries against the v2 schema).

Key features:
- Semantic outcome mapping: 'got it' → solved_independent (practice mode), 'got it' → completed (reading mode). Raw enums also accepted.
- Observation confidence tiers: high-confidence written directly, low-confidence returned as pending_observations for user confirmation.
- Auto-create leaf concepts within session domain.
- Session orchestration: one active session at a time, attempts scoped to session.

Total registered tools: 17 (7 Phase 1 + 6 Phase 2 + 4 Phase 3).
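The semantic outcome mapping can be sketched as a mode-keyed lookup with raw-enum passthrough. The 'got it' mappings come from the text above; the 'needed help' → solved_with_help pairing and the raw enum set are assumed names for illustration:

```go
package main

import "fmt"

// mapOutcome resolves a natural phrase to a mode-specific outcome enum;
// raw enum values pass through unchanged. Unknown inputs are errors
// rather than guesses.
func mapOutcome(input, mode string) (string, error) {
	semantic := map[string]map[string]string{
		"practice": {
			"got it":      "solved_independent",
			"needed help": "solved_with_help", // assumed mapping
		},
		"reading": {
			"got it": "completed",
		},
	}
	if out, ok := semantic[mode][input]; ok {
		return out, nil
	}
	// Raw enum passthrough (enum set assumed for this sketch).
	raw := map[string]bool{
		"solved_independent": true, "solved_with_help": true,
		"completed": true, "failed": true,
	}
	if raw[input] {
		return input, nil
	}
	return "", fmt.Errorf("unknown outcome %q for mode %q", input, mode)
}

func main() {
	out, _ := mapOutcome("got it", "practice")
	fmt.Println(out) // solved_independent
}
```

The same phrase resolving differently per mode is the whole point: 'got it' is an independence signal during practice but merely completion during reading.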
Phase 4 tools: manage_content (create/update/publish multiplexer), manage_feeds (list/add/update/remove multiplexer), system_status (stub). Feed store wired via WithFeedStore option (optional dependency). Content store uses existing CreateContent/UpdateContent methods. Total registered tools: 20 (7 + 6 + 4 + 3). All 4 phases complete. MCP v2 tool surface fully implemented.
- morning_context: wire active goals, unacked directives, unverified insights
- advance_work: transactional consistency via pgx.Tx for task + daily plan
- search_knowledge: wire note search (merges content + note results)
- system_status: wire stats.Store.Overview
- commit_proposal: implement project creation
- directive: participant capability validation (can_issue/can_receive)
- rename insight_tool.go → insight_track.go
- go mod tidy

All 20 tools are now fully functional. Zero stubs remaining.
…ns, outcome mapping
Proposal token: sign/verify round-trip (6 cases), expiry rejection,
tamper detection (6 cases including wrong secret, malformed, empty).
State transitions: 20 cases covering all 5 statuses × 4 actions.
Verifies inbox cannot skip to done, done cannot transition, etc.
Outcome mapping: 34 cases covering semantic input ('got it', 'needed help'),
raw enum passthrough, mode-specific mapping (practice vs reading),
and error cases (unknown input, empty string).
…n detail

All stores are derived from the same pool. Callers don't choose stores — they're internal to the server. 13 positional params → 2 + variadic opts. Extracted encodeToken helper from signProposal for test use. Fixed lint: shadow in proposal_test, suppress gocognit on test func.
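The "13 positional params → 2 + variadic opts" change is the functional options pattern. A minimal sketch, with WithFeedStore borrowed from the Phase 4 notes and every other name illustrative (strings stand in for the real pool/logger types):

```go
package main

import "fmt"

// FeedStore stands in for the real feed store type.
type FeedStore struct{ name string }

// Server keeps its stores internal; callers never pass them in.
type Server struct {
	pool      string // stands in for *pgxpool.Pool
	logger    string // stands in for *slog.Logger
	feedStore *FeedStore
}

// Option mutates the server during construction.
type Option func(*Server)

// WithFeedStore wires the optional feed dependency.
func WithFeedStore(fs *FeedStore) Option {
	return func(s *Server) { s.feedStore = fs }
}

// New takes only the two required params; everything else is an Option.
func New(pool, logger string, opts ...Option) *Server {
	s := &Server{pool: pool, logger: logger}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := New("pool", "logger", WithFeedStore(&FeedStore{name: "feeds"}))
	fmt.Println(s.feedStore != nil) // true
}
```

Options keep call sites readable and let optional dependencies (like the feed store) default to nil without growing the positional signature again.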
Deleted 51 archived Go files (cmd/app, internal/server, internal/pipeline, internal/reconcile, internal/learning, task handlers, notion, retrieval). Retrieval will be rebuilt for Phase 3 learning_dashboard retrieval view.
…alidation, cleanup

- morning_context: wire RSS highlights via feedEntries.HighPriorityRecent
- morning_context: add 'rss' section to available sections
- file_report: validate source participant has can_write_reports capability
- directive.Store: expose ParticipantByName for cross-package capability checks
- Remove _archive/ (51 files) and internal/retrieval/ (5 files)
- Fix lint: extract fillMorningSections to reduce morningContext complexity
Wire all surviving internal handlers (content, project, topic, feed, goal, tag, stats, activity, review, auth, upload, note) into a new cmd/app/ entry point. Middleware chain: recovery → requestID → CORS → logging → securityHeaders. Health endpoints: /healthz (liveness), /readyz (readiness with DB ping). Auth middleware guards all /api/admin/* routes. Dropped v1 handlers that no longer exist: session, learning, retrieval, monitor, notion, pipeline, reconcile, webhook.
…l, timeline, variations)

Add 5 new sqlc queries:
- ConceptMastery: per-concept signal counts from attempt_observations
- WeaknessAnalysis: cross-pattern weakness breakdown by category/severity
- RetrievalQueue: items due for spaced review from review_cards
- SessionTimeline: sessions with attempt/success counts
- ItemVariations: problem relationship graph from item_relations

Add corresponding store methods and update the MCP handler to route by view parameter instead of returning overview-only data.
Add handler_test.go with:
- Input validation tests for capture_inbox, advance_work, plan_day, write_journal, propose_commitment, commit_proposal, file_report, acknowledge_directive, track_insight, start_session, record_attempt, end_session, learning_dashboard, manage_content
- Happy path test for propose_commitment (goal preview + token)
- Comprehensive outcome mapping tests (34 cases across all modes)
- JSON schema generation smoke test for all input types

Total: 85 test cases across existing + new test files.
…on workflows
Three testcontainers-go workflow tests:
- TestDailyCycleWorkflow: morning_context → capture → clarify → plan_day
→ complete → write_journal → reflection_context
- TestProposalWorkflow: propose_commitment(goal) → commit_proposal → verify
- TestLearningSessionWorkflow: start_session → record_attempt(×2) →
end_session → learning_dashboard (overview, mastery, weaknesses, timeline)
Also fixes:
- Migration ordering: daily_plan_items.journal_id FK deferred via ALTER TABLE
(journal table is created after daily_plan_items)
- attempts.metadata: default to '{}' when nil (NOT NULL constraint)
session_delta: cross-session context bridge — tasks created/completed, journal entries, and learning session count since a given date (default 24h). Use at session start to understand what happened since last time.

weekly_summary: week retrospective aggregation — completed tasks, journal entries, learning sessions, and concept mastery for a given week (default current). Use for weekly reviews.

Tool count: 20 → 22.
Remove 8 orphaned packages (unreachable from any cmd/ binary):
- internal/ai/ (29 files) — Genkit pipeline cluster
- internal/github/ — GitHub API client (only used by ai/)
- internal/budget/ — token budget counter (only used by ai/)
- internal/obsidian/ — Obsidian markdown parser (zero imports)
- internal/oreilly/ — O'Reilly API client (zero imports)
- internal/event/ — in-process event bus (zero imports)
- internal/webhook/ — HMAC signature verifier (zero imports)
- internal/concept/ — empty directory

Remove Notion sync adapters from live packages:
- goal/notion.go, goal/store.go Notion methods, goal/query.sql Notion queries
- project/notion.go, project/store.go Notion methods, project/query.sql Notion queries
- project/project.go UpsertByNotionParams type
- goal/goal.go UpsertByNotionParams type

Remove stale artifacts:
- docs/MCP-TOOLS-REFERENCE.md (52 v1 tools, none match v2)
- sqlc.yaml ai/exec/query.sql reference
- Fix mcp.go comment: 20 → 22 tools

go mod tidy: removed 14 unused dependencies.
Total: -11,916 lines across 80 files.
- statusRecorder: add Unwrap() method for Go 1.20+ response writer interface discovery (http.Flusher, http.Hijacker). Without this, streaming/SSE responses break when wrapped by the logging middleware.
- LearningDashboardOutput: replace 'any' fields with concrete types ([]Session, []ConceptMasteryRow, []WeaknessRow, etc.). Restores type safety at the JSON serialization boundary.
…n/HTTPHandler

Package rename:
- internal/learnsession/ → internal/learning/
- Package name learnsession → learning
- File learnsession.go → learning.go
- Error prefix learnsession: → learning:
- All imports and type references updated

API improvement:
- Remove MCPServer() getter that exposed the internal mcp.Server
- Add Server.Run(ctx, transport) — blocks until ctx cancelled
- Add Server.HTTPHandler() — returns http.Handler for Streamable HTTP
- cmd/mcp uses the new methods directly
MCP-TOOLS-v2.md: 22-tool inventory for Claude Desktop users. Includes the tool table, learning_dashboard views, proposal/commit flow, outcome semantic mapping, participant resolution, and decision principles.

HTTP-API-v2.md: 65-route reference for the Angular frontend. Includes the public API, admin CRUD, auth, response format, and env config. Explicitly lists all removed v1 APIs with their MCP replacements so the frontend developer knows what needs redesign.
feed.Store.alerts field was never set (SetAlerts never called from cmd/app/main.go), so the auto-disable notification branch was dead. Removed: SetAlerts method, alerts field, notification block in autoDisableFeed, and the entire internal/notify package (LINE, Telegram, Multi, Noop — all now orphaned). Auto-disable still works — the DB query runs, just without external notifications.
Add PARA/GTD/Learning domain model explanation so frontend developer understands the semantic architecture, not just endpoint paths. Includes: content types with display guidance, GTD task lifecycle, learning engine concepts, IPC model, page design suggestions, and explicit list of removed v1 pages with reasons.
- Remove 11 admin pages moved to MCP (today, tasks, journal, insights, knowledge-metrics, pipeline, flow-runs, notion-sources, tracking, session-notes, planning)
- Remove 8 services no longer needed (task, insight, session-note, learning-analytics, pipeline, flow-run, notion-source, tracking)
- Remove pipeline-actions shared component
- Restructure sidebar navigation: 5 groups/15 items → 3 groups/8 items
- Redesign dashboard around content production (remove task/insight/capacity/pipeline widgets, add active projects panel)
- Add 7 shared UI components: LoadingSpinner, StatusBadge, EmptyState, PageHeader, Modal, FormField, DataTable
- Rewrite Tags page using shared components, merge duplicate alias logic
- Polish Goals, Activity, Project Editor with shared components
- Fix Tailwind v4 compliance across all admin templates (rounded-sm → rounded-xs, outline-none → outline-hidden)
 func requireEnv(key string, logger *slog.Logger) string {
 	v := os.Getenv(key)
 	if v == "" {
-		logger.Error(key + " is required")
+		logger.Error("required environment variable not set", "key", key)
 		os.Exit(1)
 	}
Abrupt Termination with os.Exit(1) in requireEnv
The use of os.Exit(1) in the requireEnv function causes the application to terminate immediately if a required environment variable is missing. This approach prevents any deferred cleanup, resource release, or higher-level error handling. It is generally preferable to return an error and handle it at the top level, allowing for more graceful shutdown and better observability.
Recommended Solution:
Return an error from requireEnv and handle it in loadConfig or the main function, e.g.:
func requireEnv(key string, logger *slog.Logger) (string, error) {
	v := os.Getenv(key)
	if v == "" {
		logger.Error("required environment variable not set", "key", key)
		return "", fmt.Errorf("missing required env var: %s", key)
	}
	return v, nil
}

Then handle the error appropriately in the calling code.
cfg := config{
	Port:               envOr("PORT", "8080"),
	CORSOrigin:         envOr("CORS_ORIGIN", "http://localhost:4200"),
	DatabaseURL:        requireEnv("DATABASE_URL", logger),
	JWTSecret:          requireEnv("JWT_SECRET", logger),
	GoogleClientID:     os.Getenv("GOOGLE_CLIENT_ID"),
	GoogleClientSecret: os.Getenv("GOOGLE_CLIENT_SECRET"),
	GoogleRedirectURI:  os.Getenv("GOOGLE_REDIRECT_URI"),
	AdminEmail:         os.Getenv("ADMIN_EMAIL"),
	R2Endpoint:         os.Getenv("R2_ENDPOINT"),
	R2AccessKeyID:      os.Getenv("R2_ACCESS_KEY_ID"),
	R2SecretAccessKey:  os.Getenv("R2_SECRET_ACCESS_KEY"),
	R2Bucket:           os.Getenv("R2_BUCKET"),
	R2PublicURL:        os.Getenv("R2_PUBLIC_URL"),
	SiteURL:            envOr("SITE_URL", "https://koopa0.dev"),
}
Lack of Validation for Critical Configuration Values
The configuration loader does not validate the format or strength of critical values such as JWTSecret, DatabaseURL, or OAuth credentials. This could lead to misconfiguration or security vulnerabilities if, for example, a weak JWT secret is used or a malformed URL is provided.
Recommended Solution:
Add validation logic after loading configuration values to ensure they meet expected criteria (e.g., minimum length for secrets, valid URL formats). Return an error or terminate gracefully if validation fails.
feedCollector := collector.New(entryStore, feedStore, logger)
defer feedCollector.Stop()
The feedCollector is always initialized and its Stop() method is deferred, but there is no error handling for its initialization or stopping. If the collector's initialization or shutdown fails, this could lead to resource leaks or silent failures.
Recommendation: Ensure that collector.New returns an error if initialization fails, and handle it appropriately. Also, consider logging or handling errors from feedCollector.Stop() if it can fail.
	logger: logger,
}

authMid := auth.Middleware(cfg.JWTSecret)
The authentication middleware (auth.Middleware(cfg.JWTSecret)) is always added to the middleware chain, even if authentication is not configured (i.e., when cfg.GoogleClientID is empty and authHandler is nil). This could result in requests being rejected or in undefined behavior if the middleware expects a valid configuration.
Recommendation: Only add the authentication middleware to the chain if authentication is configured. For example:
var authMid func(http.Handler) http.Handler = func(h http.Handler) http.Handler { return h }
if cfg.GoogleClientID != "" {
	authMid = auth.Middleware(cfg.JWTSecret)
}

This ensures that the middleware chain behaves as expected in both authenticated and unauthenticated modes.
_, _ = rand.Read(b)
return hex.EncodeToString(b)
Potential Security and Logic Issue: Ignoring Error from rand.Read
The call to rand.Read(b) ignores the error, which could result in a non-random or zeroed request ID if the random source fails. This may compromise request traceability and security.
Recommended Solution:
Check the error and handle it appropriately, e.g.:
if _, err := rand.Read(b); err != nil {
	// fallback or log error
	return ""
}

	achieved: all.filter((g) => g.status === 'done').length,
	abandoned: all.filter((g) => g.status === 'abandoned').length,
};
});
Potential Inconsistency: Duplicated Filtering Logic
The filtering logic for tab counts in tabCounts is duplicated from the logic in filteredGoals. If the filtering criteria change in one place but not the other, this could lead to inconsistent UI state or incorrect tab counts.
Recommendation:
Refactor to centralize the filtering logic, e.g., by extracting the filter predicate into a shared function or by deriving tab counts from the already filtered goals.
 protected deadlineDays(goal: ApiGoal): number | null {
-	if (!goal.deadline || goal.status === 'done' || goal.status === 'abandoned') {
+	if (
+		!goal.deadline ||
+		goal.status === 'done' ||
+		goal.status === 'abandoned'
+	) {
 		return null;
 	}
 	const deadline = new Date(goal.deadline + 'T00:00:00');
Lack of Deadline Format Validation
The deadlineDays method assumes that goal.deadline is always in the 'YYYY-MM-DD' format and appends 'T00:00:00' for parsing. If the deadline is missing or in an unexpected format, this can result in NaN or incorrect calculations.
Recommendation:
Add validation for the deadline format before parsing, and handle invalid formats gracefully to avoid runtime errors or misleading results.
```diff
 id="role"
 type="text"
 formControlName="role"
-class="w-full rounded-sm border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-100 outline-hidden placeholder:text-zinc-500 focus:border-zinc-500"
+class="w-full rounded-xs border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-100 outline-hidden placeholder:text-zinc-500 focus:border-zinc-500"
 placeholder="Full-Stack Developer"
 />
 </div>
```
The Role field is marked as required (with a red asterisk), but there is no error message displayed if the field is invalid. This is inconsistent with other required fields (e.g., Title, Slug, Description), which show validation errors using getFieldError.
Recommendation:
Add an error message display similar to:

```html
@if (getFieldError('role')) {
  <p class="mt-1 text-xs text-red-400">{{ getFieldError('role') }}</p>
}
```

This will provide consistent validation feedback to users.
```diff
 <select
   id="status"
   formControlName="status"
-  class="w-full rounded-sm border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-300 outline-hidden focus:border-zinc-500"
+  class="w-full rounded-xs border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-300 outline-hidden focus:border-zinc-500"
 >
   @for (option of statusOptions; track option.value) {
     <option [value]="option.value">{{ option.label }}</option>
```
The Status field is also marked as required, but there is no error message shown if the field is invalid. This can lead to confusion if the form is invalid and the user does not know why.
Recommendation:
Add an error message display for the Status field, for example:

```html
@if (getFieldError('status')) {
  <p class="mt-1 text-xs text-red-400">{{ getFieldError('status') }}</p>
}
```

This will ensure consistent validation feedback across all required fields.
```diff
 status: ['in-progress' as ProjectStatus, Validators.required],
 });

-this.projectForm.valueChanges
-  .pipe(takeUntilDestroyed())
-  .subscribe(() => {
-    this.isFormDirty.set(true);
-  });
+this.projectForm.valueChanges.pipe(takeUntilDestroyed()).subscribe(() => {
+  this.isFormDirty.set(true);
+});
 }

 hasUnsavedChanges(): boolean {
```
Potential inconsistency in default 'status' value:
The form initialization sets the default value for the 'status' field to 'in-progress' (line 87). If the available status options change in the future, this hardcoded value may become invalid or inconsistent with the statusOptions array.
Recommended solution:
Derive the default value from the statusOptions array to ensure consistency:

```ts
status: [this.statusOptions[0].value, Validators.required],
```

This approach ensures the default value always matches the first available option.
```go
cfg := config{
	Port:               envOr("PORT", "8080"),
	CORSOrigin:         envOr("CORS_ORIGIN", "http://localhost:4200"),
	DatabaseURL:        requireEnv("DATABASE_URL", logger),
	JWTSecret:          requireEnv("JWT_SECRET", logger),
	GoogleClientID:     os.Getenv("GOOGLE_CLIENT_ID"),
	GoogleClientSecret: os.Getenv("GOOGLE_CLIENT_SECRET"),
	GoogleRedirectURI:  os.Getenv("GOOGLE_REDIRECT_URI"),
	AdminEmail:         os.Getenv("ADMIN_EMAIL"),
	R2Endpoint:         os.Getenv("R2_ENDPOINT"),
	R2AccessKeyID:      os.Getenv("R2_ACCESS_KEY_ID"),
	R2SecretAccessKey:  os.Getenv("R2_SECRET_ACCESS_KEY"),
	R2Bucket:           os.Getenv("R2_BUCKET"),
	R2PublicURL:        os.Getenv("R2_PUBLIC_URL"),
	SiteURL:            envOr("SITE_URL", "https://koopa0.dev"),
}
```
There is inconsistent handling of required and optional environment variables. For example, DatabaseURL and JWTSecret are required (using requireEnv), but other potentially critical variables (such as GoogleClientID, GoogleClientSecret, and the R2 credentials) are loaded with os.Getenv and may silently default to empty strings if not set. This can lead to subtle misconfigurations that are hard to debug.
Recommendation:
Clearly distinguish required and optional variables, and enforce presence checks for all required configuration values.
```go
logger.Error("required environment variable not set", "key", key)
os.Exit(1)
}
```
The use of os.Exit(1) in requireEnv is abrupt and does not allow for any cleanup or deferred operations to run. In more complex applications, this can lead to resource leaks or incomplete logging.
Recommendation:
Consider returning an error and handling shutdown logic at a higher level, or at least ensure that all necessary cleanup is performed before exiting.
```go
feedCollector := collector.New(entryStore, feedStore, logger)
defer feedCollector.Stop()
```
The feedCollector is always initialized and passed to feedHandler if not nil, but there is no error handling for potential initialization failures. If collector.New can return a nil or invalid collector (e.g., due to dependency issues), this could result in a nil pointer dereference later.
Recommendation: Ensure that collector.New cannot fail silently, or add explicit error handling for its initialization. If initialization can fail, handle the error and avoid passing a nil collector to downstream handlers.
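A sketch of a fail-fast constructor, assuming hypothetical store interfaces (the actual collector.New signature in this repo differs and may not need this):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the real stores; illustrative only.
type EntryStore interface{ SaveEntry(title string) error }
type FeedStore interface{ ListFeeds() ([]string, error) }

type Collector struct {
	entries EntryStore
	feeds   FeedStore
}

// NewCollector validates its dependencies up front and returns an error
// instead of a half-initialized value, so callers can never receive a
// collector that will nil-pointer later.
func NewCollector(entries EntryStore, feeds FeedStore) (*Collector, error) {
	if entries == nil || feeds == nil {
		return nil, errors.New("collector: nil store dependency")
	}
	return &Collector{entries: entries, feeds: feeds}, nil
}

func main() {
	if _, err := NewCollector(nil, nil); err != nil {
		fmt.Println("construction rejected:", err)
	}
}
```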
```go
poolCfg.MaxConns = 10
poolCfg.MinConns = 2
poolCfg.MaxConnIdleTime = 5 * time.Minute
poolCfg.HealthCheckPeriod = 30 * time.Second
```
The database connection pool parameters (MaxConns, MinConns, MaxConnIdleTime, HealthCheckPeriod) are hardcoded. This reduces flexibility and may not be optimal for all deployment environments, especially under varying load or resource constraints.
Recommendation: Make these pool configuration values configurable via environment variables or configuration files to allow tuning without code changes.
```go
_, _ = rand.Read(b)
return hex.EncodeToString(b)
```
Error Handling in generateID:
The error returned by rand.Read is ignored:

```go
_, _ = rand.Read(b)
```

If rand.Read fails, the generated request ID may be predictable or zeroed, undermining traceability and potentially security.
Recommended Solution:
Check the error and handle it appropriately, e.g.:

```go
if _, err := rand.Read(b); err != nil {
	// handle error, e.g., log and fall back to a less secure ID, or panic
}
```

```diff
 const all = this.goals();
 return {
   all: all.length,
-  active: all.filter((g) => g.status === 'in-progress' || g.status === 'not-started').length,
+  active: all.filter(
+    (g) => g.status === 'in-progress' || g.status === 'not-started',
+  ).length,
   achieved: all.filter((g) => g.status === 'done').length,
   abandoned: all.filter((g) => g.status === 'abandoned').length,
 };
 });
```
The filtering logic for tab counts is duplicated from the 'filteredGoals' computed property. This duplication can lead to inconsistencies and increases maintenance overhead if the filtering criteria change.
Recommendation:
Centralize the filtering logic in a single utility function or method, and reuse it in both 'filteredGoals' and 'tabCounts'. For example:

```ts
private filterGoalsByTab(goals: ApiGoal[], tab: FilterTab): ApiGoal[] { ... }
```

This will improve maintainability and reduce the risk of errors.
```diff
 }

 protected deadlineDays(goal: ApiGoal): number | null {
-  if (!goal.deadline || goal.status === 'done' || goal.status === 'abandoned') {
+  if (
+    !goal.deadline ||
+    goal.status === 'done' ||
+    goal.status === 'abandoned'
+  ) {
     return null;
   }
   const deadline = new Date(goal.deadline + 'T00:00:00');
```
The 'deadlineDays' method assumes that the 'goal.deadline' string is always in a valid 'YYYY-MM-DD' format. If the deadline is missing or malformed, 'new Date()' may produce an invalid date, resulting in unexpected behavior.
Recommendation:
Add validation for the deadline format before parsing, and handle invalid dates explicitly. For example:

```ts
const deadline = new Date(goal.deadline + 'T00:00:00');
if (isNaN(deadline.getTime())) return null;
```

This will make the code more robust against malformed input.
```diff
 id="live_url"
 type="url"
 formControlName="live_url"
-class="w-full rounded-sm border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-100 outline-hidden placeholder:text-zinc-500 focus:border-zinc-500"
+class="w-full rounded-xs border border-zinc-700 bg-zinc-900 px-3 py-2 text-sm text-zinc-100 outline-hidden placeholder:text-zinc-500 focus:border-zinc-500"
 placeholder="https://..."
 />
 </div>
 </div>
 </div>

 <!-- Tech Stack -->
-<div class="rounded-sm border border-zinc-800 bg-zinc-900/50 p-6">
+<div class="rounded-xs border border-zinc-800 bg-zinc-900/50 p-6">
 <h2 class="mb-4 text-sm font-medium text-zinc-400">Tech Stack</h2>

 <div class="mb-3 flex gap-2">
```
The tech stack input and add button do not prevent adding duplicate or empty entries at the template level. If not handled in the TypeScript logic (addTech()), users may add the same technology multiple times or add empty strings, leading to data inconsistency.
Recommendation:
Ensure that addTech() in the component checks for duplicates and trims/validates input before adding to the list:

```ts
addTech() {
  const tech = this.newTech().trim();
  if (!tech || this.techStack.includes(tech)) return;
  this.techStack.push(tech);
  // ...
}
```

```diff
 </div>

 <!-- Highlights -->
-<div class="rounded-sm border border-zinc-800 bg-zinc-900/50 p-6">
+<div class="rounded-xs border border-zinc-800 bg-zinc-900/50 p-6">
 <h2 class="mb-4 text-sm font-medium text-zinc-400">Highlights</h2>

 <div class="mb-3 flex gap-2">
```
The highlights input and add button do not prevent adding duplicate or empty entries at the template level. If not handled in the TypeScript logic (addHighlight()), users may add the same highlight multiple times or add empty strings, which can reduce data quality.
Recommendation:
Ensure that addHighlight() in the component checks for duplicates and trims/validates input before adding to the list:

```ts
addHighlight() {
  const highlight = this.newHighlight().trim();
  if (!highlight || this.highlights.includes(highlight)) return;
  this.highlights.push(highlight);
  // ...
}
```

```diff
 status: ['in-progress' as ProjectStatus, Validators.required],
 });

-this.projectForm.valueChanges
-  .pipe(takeUntilDestroyed())
-  .subscribe(() => {
-    this.isFormDirty.set(true);
-  });
+this.projectForm.valueChanges.pipe(takeUntilDestroyed()).subscribe(() => {
+  this.isFormDirty.set(true);
+});
 }

 hasUnsavedChanges(): boolean {
```
Potential inconsistency in form dirty state tracking
The custom isFormDirty signal is set to true on every form value change, but it is only reset after a successful save. If the form is patched with new values (e.g., when loading a project for editing), isFormDirty remains true, which may not accurately reflect whether the user has unsaved changes. Additionally, this approach does not leverage the form's built-in dirty property, which could lead to inconsistencies and maintenance challenges.
Recommended solution:
- Consider using the form's built-in `dirty` property to track unsaved changes, or reset `isFormDirty` when patching values (e.g., after loading a project). Example:

```ts
this.projectForm.patchValue(...);
this.projectForm.markAsPristine(); // Resets dirty state
this.isFormDirty.set(false);
```

- Alternatively, update `hasUnsavedChanges()` to check both the signal and the form's `dirty` property for more robust detection.
HIGH fixes:
- #1-2: GoalDetail projects are goal siblings, not milestone children (projects.goal_id → goals.id, no milestone_id FK exists). Restructured GoalDetail: milestones[] + projects[] as separate arrays

MEDIUM fixes:
- #3: signal_type uses weakness/improvement/mastery (not misconception)
- #4: outcome uses the schema's 7 values (incomplete, not partial)
- #5: duration_minutes, not duration_seconds
- #6: removed ease_rating (FSRS rating is in review_logs, not attempts)
- #8: mastery is aggregated from observations, not a stored score
- #12: drop only affects daily_plan_item status, doesn't delete the task
- #13: ClarifyAsInsight requires invalidation_condition (schema NOT NULL)

LOW fixes:
- #7: session start uses session_mode; focus_concepts noted as metadata
- #9: concept drilldown uses observation_trend, not mastery_trend
- #10: removed stale directive schema note (resolved_at already exists)

NEW:
- #11: Added Learning Plans API section (§7.5) — plans list, detail, item status update with mandatory reason for completions
- Added PlanStatus, PlanItemStatus, LearningPlanSummary, LearningPlanDetail, PlanItemDetail, PlanProgress types to admin.model.ts
Dashboard (new page at /admin/dashboard):
- Trends analytics answering "Is the system heading in the right direction?" (not stat cards)
- 4-card grid: execution velocity, goal health, learning signals, content+inbox
- Every metric shows trend direction + comparison with the last period
- Compact directive health footer
- DashboardTrends model + DashboardService with mock data
- Route + sidebar nav added (between Today and Inbox)

Schema alignment fixes (round 2):
- #1: GoalBreadcrumb simplified — removed milestone_id/title (no FK exists)
- #2: streak marked as derived (computed from sessions, not stored)
- #3: Added missing learning plan management endpoints (add/remove/reorder/update)
- #4: DirectiveLifecycle simplified to 3 values + has_report boolean
- #5: Domain examples changed from 'algorithms' to 'leetcode'
- #6: Added GET /api/admin/learn/review-queue (FSRS detail endpoint)
- #7: estimated_minutes marked as derived from the tasks table
No description provided.