Fix/hq audit bugs #5
Conversation
… timeout
- Add partial indexes on feeds(priority=high) and feed_entries(status=unread) to eliminate full table scans in the HighPriorityRecent query
- Run all 6 morning_context sections in parallel using sync.WaitGroup.Go
- Each section gets a 15s context timeout so one slow query cannot block others
- Previously, all sections ran synchronously, causing 60s+ timeouts when the RSS query hit the unindexed feeds table with 20+ enabled feeds
- Add UnackedIssuedBySource and UnresolvedIssuedBySource SQL queries so the issuer (e.g. HQ) can see directives they sent
- Split the morning_context directive section into directives_received (targeting the caller) and directives_issued (sent by the caller)
- Previously, only target-direction queries existed, making it impossible for the source/issuer to track their own outbound directives
- Change commitGoal to set status='in-progress' instead of 'not-started', since committing a proposal semantically means "I confirm I am doing this"
- Add a GoalsByOptionalStatus SQL query with a nullable status filter
- Add a status parameter to the goal_progress tool input (default: in-progress; accepts 'all' or a specific status like 'not-started')
- Previously, committed goals were invisible because goal_progress only queried in-progress goals while commit set not-started
- Add p0, critical, urgent to the normalizePriority mapping (all → high); previously p0 passed through unmapped and hit a DB check constraint error
- Add CreatedAt and UpdatedAt to the planItemToTaskDetail conversion; previously these were zero-initialized, producing 0001-01-01 timestamps in morning_context overdue_tasks for yesterday's unfinished plan items
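The alias mapping described in the first bullet can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code: only the p0/critical/urgent → high mapping is stated in the PR; the lowercase normalization and pass-through behavior are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePriority maps common aliases onto the canonical value the DB
// check constraint accepts. Unknown values pass through unchanged, so the
// constraint still rejects genuinely invalid input.
func normalizePriority(p string) string {
	switch strings.ToLower(p) {
	case "p0", "critical", "urgent", "high":
		return "high"
	default:
		return p
	}
}

func main() {
	fmt.Println(normalizePriority("p0"))  // high
	fmt.Println(normalizePriority("low")) // low
}
```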
…plete
- Validate task status in plan_day: reject inbox tasks with a clear error message directing the user to clarify first
- Change UpdateItemStatusByTask from :exec to :execrows to return the affected row count
- CompleteByTask now returns (bool, error) — true only when a matching plan item was actually found and updated
- Previously, plan_day accepted any task status and advanceComplete always reported plan_item_updated:true, even for unplanned tasks
…peline
- manage_content: add list action (filter by status/type, paginated)
- manage_content: add read action (full content by ID, with tags)
- manage_content: update action now supports a status parameter (draft→review)
- manage_content: output now returns ContentDetail/ContentSummary structs
- search_knowledge: remove the published-only filter from InternalSearch queries; drafts and all statuses are now searchable via MCP tools
- morning_context: add a content_pipeline section showing draft/review items
- morning_context: refactor fillMorningSections to reduce cyclomatic complexity
- Add feed.Scheduler with a 15-min tick, schedule-aware fetching (hourly_4/daily/weekly), and flow_runs audit recording
- Wire the scheduler into cmd/app with a sync.WaitGroup.Go() lifecycle
- Add an InsertFlowRun sqlc query for pipeline monitoring
- Fix ipc.md: document content as a required field for insight
- Fix tools.md: add search_knowledge query syntax docs
- Fix research.md: add mixed research, the metadata convention, insight tracking, and query tips
…e archived
- mcListContent: unified path via AdminContents with a Go-level status filter; status and content_type now work together
- fillContentPipeline: symmetric error handling; collect partial results from the draft and review queries independently
- InternalSearch SQL: exclude archived content (status != 'archived'); archived is logically deleted and should not appear in MCP search
- bookmark_rss: takes entry_id, fetches the RSS entry metadata, creates bookmark content with source URL + excerpt + personal comment, and marks the RSS entry as curated via entry.Curate()
- manage_content is now at 6 actions (the multiplexer ceiling)
- Update docs: content.md and tools.md reflect all 6 actions
- golangci: raise the hugeParam threshold to 96 for MCP input structs (the addTool[I,O] generic constrains value types)
- Remove a stale nolint directive in capture.go
var wg sync.WaitGroup
feedScheduler := feed.NewScheduler(feedStore, feedCollector, db.New(pool), logger)
wg.Go(func() { feedScheduler.Run(ctx) })
The wg.Go(func() { feedScheduler.Run(ctx) }) call relies on sync.WaitGroup.Go, which only exists in the standard library as of Go 1.25; on earlier toolchains this is a compilation error. If the module must build with an older Go version, use wg.Add(1) before starting the goroutine and defer wg.Done() inside it:

wg.Add(1)
go func() {
    defer wg.Done()
    feedScheduler.Run(ctx)
}()

This ensures proper synchronization and shutdown handling.
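The Add/Done pattern can be exercised end to end with a stub scheduler. This is a self-contained sketch: fakeScheduler and runAndShutdown are invented stand-ins for feed.Scheduler and the cmd/app wiring, not the project's actual code.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// fakeScheduler stands in for feed.Scheduler; Run blocks until ctx is done.
type fakeScheduler struct{ stopped bool }

func (s *fakeScheduler) Run(ctx context.Context) {
	<-ctx.Done() // block until shutdown is signaled
	s.stopped = true
}

// runAndShutdown starts the scheduler with the Add/Done pattern and waits
// for it to exit; it reports whether Run observed the cancellation.
func runAndShutdown() bool {
	ctx, cancel := context.WithCancel(context.Background())
	sched := &fakeScheduler{}

	var wg sync.WaitGroup
	wg.Add(1) // register before starting the goroutine
	go func() {
		defer wg.Done() // mark completion on every exit path
		sched.Run(ctx)
	}()

	cancel()  // signal shutdown
	wg.Wait() // returns only after Run has exited
	return sched.stopped
}

func main() {
	fmt.Println(runAndShutdown()) // true
}
```

Because wg.Wait() establishes a happens-before edge with the goroutine's exit, reading sched.stopped afterwards is race-free.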
// Feed collector for manual fetch
// Feed collector for manual fetch + scheduled fetch
feedCollector := collector.New(entryStore, feedStore, logger)
defer feedCollector.Stop()
The unconditional defer feedCollector.Stop() immediately after initialization means Stop() runs even when a later setup step fails and the function returns early. The collector itself cannot be nil here (it is constructed on the preceding line), but Stop() may then be invoked on a collector whose dependent components never finished initializing. Consider deferring feedCollector.Stop() only after all initialization steps that could fail have completed, or add a guard in the deferred function.
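A guarded-defer variant of this idea can be sketched as follows. The collector type, setup function, and failure mode are all invented for illustration; the point is only that the deferred cleanup checks its preconditions before running.

```go
package main

import "fmt"

// collector is a stand-in for the feed collector.
type collector struct{ stopped bool }

func (c *collector) Stop() { c.stopped = true }

// setup defers Stop inside a guard: the deferred function only stops the
// collector when setup actually failed AND the collector exists, so an
// early error return before construction cannot dereference nil.
func setup(fail bool) (c *collector, err error) {
	defer func() {
		if err != nil && c != nil {
			c.Stop() // clean up only on failed setup
		}
	}()
	if fail {
		return nil, fmt.Errorf("init failed before collector was created")
	}
	c = &collector{}
	return c, nil
}

func main() {
	c, err := setup(false)
	fmt.Println(c != nil, err == nil, c.stopped) // true true false
	_, err = setup(true)                         // no panic despite early failure
	fmt.Println(err != nil)                      // true
}
```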
Issue directives, file reports, and acknowledge work through Claude
</p>
}

<!-- Directive detail modal -->
@if (selectedDirective(); as d) {
  <app-modal
    [title]="'Directive #' + d.id"
    maxWidth="lg"
    titleId="directive-detail-title"
    (closed)="closeDetail()"
  >
    <!-- Meta row -->
    <div class="mb-4 flex flex-wrap items-center gap-3">
      <span
        class="rounded-sm border px-1.5 py-0.5 text-xs"
        [class]="getLifecycleColor(d.lifecycle_status)"
      >
        {{ d.lifecycle_status }}
      </span>
      <span class="text-xs font-medium" [class]="getPriorityColor(d.priority)">
        {{ d.priority }}
      </span>
      <span
        class="flex items-center gap-1 text-xs"
        [class]="getAgeColor(d.age_days)"
      >
        <lucide-icon [img]="ClockIcon" [size]="10" />
        {{ d.age_days }}d old
      </span>
    </div>

    <!-- Participants -->
    <div class="mb-4 grid grid-cols-2 gap-4 text-sm">
      <div>
        <span class="text-xs uppercase tracking-wider text-zinc-500">From</span>
        <p class="mt-0.5 text-zinc-200">{{ d.source }}</p>
      </div>
      <div>
        <span class="text-xs uppercase tracking-wider text-zinc-500">To</span>
        <p class="mt-0.5 text-zinc-200">{{ d.target }}</p>
      </div>
    </div>

    <!-- Dates -->
    <div class="mb-4 grid grid-cols-2 gap-4 text-sm sm:grid-cols-3">
      <div>
        <span class="text-xs uppercase tracking-wider text-zinc-500"
          >Issued</span
        >
        <p class="mt-0.5 text-zinc-300">
          {{ d.issued_date | date: 'MMM d, y' }}
        </p>
      </div>
      @if (d.acknowledged_at) {
        <div>
          <span class="text-xs uppercase tracking-wider text-zinc-500"
            >Acknowledged</span
          >
          <p class="mt-0.5 text-zinc-300">
            {{ d.acknowledged_at | date: 'MMM d, y' }}
          </p>
        </div>
      } @if (d.resolved_at) {
        <div>
          <span class="text-xs uppercase tracking-wider text-zinc-500"
            >Resolved</span
          >
          <p class="mt-0.5 text-zinc-300">
            {{ d.resolved_at | date: 'MMM d, y' }}
          </p>
        </div>
      } @if (d.days_to_resolution) {
        <div>
          <span class="text-xs uppercase tracking-wider text-zinc-500"
            >Resolution time</span
          >
          <p class="mt-0.5 text-zinc-300">{{ d.days_to_resolution }}d</p>
        </div>
      }
    </div>

    <!-- Content -->
    <div class="rounded-lg border border-zinc-800 bg-zinc-950/50 p-4">
      <span class="mb-2 block text-xs uppercase tracking-wider text-zinc-500"
        >Content</span
      >
      <p class="whitespace-pre-wrap text-sm leading-relaxed text-zinc-200">
        {{ d.content }}
      </p>
    </div>

    <!-- Related reports -->
    @if (getRelatedReports(d.id).length > 0) {
      <div class="mt-4">
        <span class="mb-2 block text-xs uppercase tracking-wider text-zinc-500">
          Related Reports
        </span>
        <div class="space-y-2">
          @for (report of getRelatedReports(d.id); track report.id) {
            <div class="rounded-lg border border-zinc-800 bg-zinc-950/50 p-3">
              <div class="mb-1 flex items-center justify-between">
                <span class="text-xs text-zinc-400">from {{ report.source }}</span>
                <span class="text-xs text-zinc-600"
                  >{{ report.reported_date | date: 'MMM d' }}</span
                >
              </div>
              <p class="whitespace-pre-wrap text-sm text-zinc-300">
                {{ report.content }}
              </p>
            </div>
          }
        </div>
      </div>
    }

    <!-- Resolution report indicator -->
    @if (d.resolution_report_id) {
      <div
        class="mt-4 flex items-center gap-1.5 rounded-lg border border-emerald-900/30 bg-emerald-950/20 p-3 text-xs text-emerald-400"
      >
        <lucide-icon [img]="FileTextIcon" [size]="12" />
        Resolution report #{{ d.resolution_report_id }} filed
      </div>
    }

    <ng-container modal-footer>
      <button
        type="button"
        class="rounded-sm bg-zinc-800 px-4 py-2 text-sm text-zinc-300 transition-colors hover:bg-zinc-700"
        (click)="closeDetail()"
      >
        Close
      </button>
    </ng-container>
  </app-modal>
}
</div>
Potential XSS risk in content rendering
Directive and report content are rendered using Angular interpolation ({{ d.content }} and {{ report.content }}) in the modal detail view. If these fields can contain user-supplied HTML or script, there is a risk of cross-site scripting (XSS) if they are ever rendered with [innerHTML] or similar mechanisms. Ensure that all content is strictly treated as plain text and never rendered as HTML, or sanitize content appropriately before rendering.
Recommendation:
- Confirm that all interpolated content is plain text and never rendered with [innerHTML].
- If HTML rendering is required, use Angular's DomSanitizer and sanitize content before display.
@if (getRelatedReports(d.id).length > 0) {
  <div class="mt-4">
    <span class="mb-2 block text-xs uppercase tracking-wider text-zinc-500">
      Related Reports
    </span>
    <div class="space-y-2">
      @for (report of getRelatedReports(d.id); track report.id) {
        <div class="rounded-lg border border-zinc-800 bg-zinc-950/50 p-3">
          <div class="mb-1 flex items-center justify-between">
            <span class="text-xs text-zinc-400">from {{ report.source }}</span>
            <span class="text-xs text-zinc-600"
              >{{ report.reported_date | date: 'MMM d' }}</span
            >
          </div>
          <p class="whitespace-pre-wrap text-sm text-zinc-300">
            {{ report.content }}
          </p>
        </div>
      }
    </div>
  </div>
Performance inefficiency: repeated calls to getRelatedReports(d.id)
In the modal detail view, getRelatedReports(d.id) is called in both the @if condition and the @for loop. If getRelatedReports is a non-trivial computation or involves data fetching, this can lead to unnecessary repeated work and degrade performance.
Recommendation:
- Cache the result of getRelatedReports(d.id) in a local variable within the template or component, and reuse it for both the condition and the loop.
- Example:

@let relatedReports = getRelatedReports(d.id);
@if (relatedReports.length > 0) {
  @for (report of relatedReports; ...) { ... }
}
protected getRelatedReports(directiveId: number) {
  return this.reports().filter((r) => r.in_response_to === directiveId);
}
The getRelatedReports method filters the entire reports array each time it is called. If the reports array is large or this method is invoked frequently (such as within a template loop), this could negatively impact performance.
Recommendation:
If performance profiling indicates this is a bottleneck, consider caching the results or restructuring the data to allow for more efficient lookups, such as indexing reports by in_response_to.
// Side effect: update today's daily_plan_item if exists.
today := s.today()
if completeErr := txDayplan.CompleteByTask(ctx, taskID, today); completeErr == nil {
out.PlanItemUpdated = true
if updated, completeErr := txDayplan.CompleteByTask(ctx, taskID, today); completeErr == nil {
out.PlanItemUpdated = updated
}

// Side effect: handle recurring task.
Silent Error Handling for Side Effects
In the advanceComplete function, errors from txDayplan.CompleteByTask and txTasks.ResetRecurring are ignored (lines 164 and 173). This can result in silent failures where side effects are not applied, but the transaction still commits, leading to inconsistent state between tasks and daily plan items or recurring task schedules.
Recommendation:
Handle errors explicitly and decide whether to abort the transaction or at least log the errors and surface them to the caller. For example:
if updated, completeErr := txDayplan.CompleteByTask(ctx, taskID, today); completeErr != nil {
    // handle or log error
}

Failing silently should be avoided for critical side effects.
pos := item.Position
if pos == 0 {
pos = i
Ambiguous Position Assignment in planDay
The logic for assigning pos (lines 255-257) defaults to the loop index if item.Position is zero. However, zero may be a valid position (i.e., the first item in the plan). This can lead to incorrect ordering if the caller intends to set the position to zero.
Recommendation:
Use a pointer or a sentinel value (e.g., -1) to distinguish between an unset position and a valid zero value. For example:
if item.Position < 0 {
    pos = i
} else {
    pos = item.Position
}

This ensures that zero is treated as a valid position.
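The pointer variant of the same fix distinguishes "unset" from an explicit zero without reserving a sentinel. A minimal sketch (resolvePosition is a hypothetical helper, not code from the PR):

```go
package main

import "fmt"

// resolvePosition uses a nil pointer to mean "position not provided", so an
// explicit zero remains a valid first-item position; i is the fallback
// loop index used when the caller omitted a position.
func resolvePosition(p *int, i int) int {
	if p == nil {
		return i // unset: fall back to list order
	}
	return *p // zero is a legitimate explicit position
}

func main() {
	zero := 0
	fmt.Println(resolvePosition(nil, 3))   // 3
	fmt.Println(resolvePosition(&zero, 3)) // 0
}
```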
for _, sec := range requested {
    has[sec] = true
}
if all || has["tasks"] {
    s.fillMorningTasks(ctx, date, out)
}
if all || has["goals"] {
    s.fillGoals(ctx, out)
}
if all || has["directives"] {
    s.fillDirectives(ctx, out)
}
if all || has["insights"] {
    s.fillInsights(ctx, out)
}
if all || has["rss"] {
    s.fillRSSHighlights(ctx, date, out)
}
if all || has["plan_history"] {
    s.fillPlanHistory(ctx, date, out)

// runSection launches a fill function with a per-section timeout.
// Each section writes to disjoint fields in out, so no mutex needed.
var wg sync.WaitGroup
runSection := func(name string, fn func(context.Context)) {
    if all || has[name] {
        wg.Go(func() {
            secCtx, cancel := context.WithTimeout(ctx, sectionTimeout)
            defer cancel()
            fn(secCtx)
        })
    }
}

runSection("tasks", func(c context.Context) { s.fillMorningTasks(c, date, out) })
runSection("goals", func(c context.Context) { s.fillGoals(c, out) })
runSection("directives", func(c context.Context) { s.fillDirectives(c, out) })
runSection("insights", func(c context.Context) { s.fillInsights(c, out) })
runSection("rss", func(c context.Context) { s.fillRSSHighlights(c, date, out) })
runSection("plan_history", func(c context.Context) { s.fillPlanHistory(c, date, out) })
runSection("content_pipeline", func(c context.Context) { s.fillContentPipeline(c, out) })
wg.Wait()
}

func (s *Server) fillMorningTasks(ctx context.Context, date time.Time, out *MorningContextOutput) {
In fillMorningSections, each section is launched in a separate goroutine and writes to a field of the shared out struct. The "disjoint fields" comment is load-bearing: under the Go memory model, distinct struct fields are distinct memory locations, and wg.Wait() establishes the happens-before edge needed before out is read, so the pattern is race-free as written. But nothing enforces that invariant — if two sections ever write the same field, or if out is read elsewhere before Wait returns (e.g. for logging or inspection), a data race appears silently.
Recommended Solution:
- Document and test the disjoint-field invariant (the race detector will catch violations), or
- Refactor to collect results in local variables within each goroutine and assign them to the struct after all goroutines complete, or
- Protect the writes with a mutex if sections may ever share fields.
Example (using local variables):

var (
    overdueTasks []task.PendingTaskDetail
    // ... other fields
)
runSection("tasks", func(c context.Context) { overdueTasks = s.fillMorningTasks(c, date) })
// ...
out.OverdueTasks = overdueTasks

Alternatively, run the section fills sequentially if the performance benefit of concurrency is not significant.
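For reference, a minimal, self-contained sketch of the disjoint-field pattern (output and fillParallel are invented for illustration). Because each goroutine writes a different field and wg.Wait() synchronizes before the read, this passes `go run -race`; the danger is only in future edits that break the disjointness.

```go
package main

import (
	"fmt"
	"sync"
)

// output mimics the morning-context struct: each section owns one field.
type output struct {
	tasks []string
	goals []string
}

// fillParallel writes disjoint fields of out from separate goroutines.
// Distinct struct fields are distinct memory locations, and wg.Wait()
// provides the happens-before edge, so reading out afterwards is race-free
// as long as nothing reads out before Wait returns.
func fillParallel(out *output) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); out.tasks = []string{"t1"} }()
	go func() { defer wg.Done(); out.goals = []string{"g1"} }()
	wg.Wait()
}

func main() {
	var out output
	fillParallel(&out)
	fmt.Println(out.tasks[0], out.goals[0]) // t1 g1
}
```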
func (s *Server) fillDirectives(ctx context.Context, out *MorningContextOutput) {
    caller := s.callerIdentity(ctx)

    // Received: directives targeting the caller (unacked + unresolved).
    if dirs, err := s.directives.UnackedForTarget(ctx, caller); err == nil {
        out.UnackedDirectives = dirs
        out.DirectivesReceived = append(out.DirectivesReceived, dirs...)
    } else {
        s.logger.Warn("morning_context: unacked directives", "error", err)
        s.logger.Warn("morning_context: unacked directives received", "error", err)
    }
    if dirs, err := s.directives.UnresolvedForTarget(ctx, caller); err == nil {
        out.UnresolvedDirectives = dirs
        out.DirectivesReceived = append(out.DirectivesReceived, dirs...)
    } else {
        s.logger.Warn("morning_context: unresolved directives", "error", err)
        s.logger.Warn("morning_context: unresolved directives received", "error", err)
    }

    // Issued: directives the caller sent that are still open.
    if dirs, err := s.directives.UnackedIssuedBySource(ctx, caller); err == nil {
        out.DirectivesIssued = append(out.DirectivesIssued, dirs...)
    } else {
        s.logger.Warn("morning_context: unacked directives issued", "error", err)
    }
    if dirs, err := s.directives.UnresolvedIssuedBySource(ctx, caller); err == nil {
        out.DirectivesIssued = append(out.DirectivesIssued, dirs...)
    } else {
        s.logger.Warn("morning_context: unresolved directives issued", "error", err)
    }
}
In fillDirectives, the code appends results from both UnackedForTarget and UnresolvedForTarget (and similarly for issued directives) to the same output slices. If a directive appears in both queries, it will be duplicated in the output, which may confuse consumers and lead to incorrect downstream processing.
Recommended Solution:
- Deduplicate directives before appending to the output slices. This can be done by tracking directive IDs in a map and only appending unique entries.
Example:
seen := map[int64]struct{}{}
for _, d := range dirs {
    if _, ok := seen[d.ID]; !ok {
        out.DirectivesReceived = append(out.DirectivesReceived, d)
        seen[d.ID] = struct{}{}
    }
}

Apply similar logic for issued directives. (Key the map by whatever type the directive ID actually is; the IDs appear to be numeric given the frontend's directiveId: number.)
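The same dedup logic can be factored into a small generic helper so both the received and issued slices share one implementation. A sketch (dedupeByID and the directive type are illustrative, not project code):

```go
package main

import "fmt"

// dedupeByID keeps the first occurrence of each ID, preserving order.
// Generic over the ID type, so it works whether directive IDs are ints
// or strings.
func dedupeByID[T any, K comparable](items []T, id func(T) K) []T {
	seen := make(map[K]struct{}, len(items))
	out := make([]T, 0, len(items))
	for _, it := range items {
		k := id(it)
		if _, ok := seen[k]; ok {
			continue // already emitted this directive
		}
		seen[k] = struct{}{}
		out = append(out, it)
	}
	return out
}

type directive struct{ ID int }

func main() {
	dirs := []directive{{1}, {2}, {1}, {3}}
	deduped := dedupeByID(dirs, func(d directive) int { return d.ID })
	fmt.Println(len(deduped)) // 3
}
```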
addTool(s, &mcp.Tool{
    Name: "manage_content",
    Description: "Content lifecycle: create (draft), update (draft/review fields), publish (→published). Requires content_id for update/publish. Use for managing knowledge content.",
    Description: "Content lifecycle: create (draft), update (fields+status), publish (→published), list (filter by status/type), read (full content by ID), bookmark_rss (RSS entry → bookmark). Requires content_id for update/publish/read, entry_id for bookmark_rss.",
    Annotations: additive,
}, s.manageContent)
The manage_content tool is registered with the additive annotation, but its supported actions (such as publish) can be destructive. Using an annotation that does not reflect the most destructive possible action may mislead clients or downstream systems about the tool's side effects, potentially resulting in improper usage or lack of necessary safeguards.
Recommendation:
- Consider using the destructive annotation for this tool, or splitting it into separate tools for destructive and non-destructive actions, to ensure accurate annotation and safer usage.
🤖 Augment PR Summary
Summary: This PR fixes several HQ audit issues across the backend MCP tools, feed system, and the admin directives UI.
Changes:
Technical Notes: The feed scheduler runs on a 15-minute tick and uses best-effort audit recording; MCP tool docs were updated to reflect the expanded tool surface area.
- **Exclusion**: `-draft` — exclude results containing "draft"
- **Fallback**: if the primary query returns nothing, retries with OR semantics across all words

Empty results mean "not found in published content" — not a query syntax issue.
This doc says empty search_knowledge results mean “not found in published content”, but the backend internal search was changed to status != 'archived' (so drafts/review can also match). That makes the guidance misleading for users diagnosing empty results.
Other locations where this applies: skills/koopa-system/references/research.md:126
Severity: low
</div>

<!-- Related reports -->
@if (getRelatedReports(d.id).length > 0) {
getRelatedReports(d.id) is called twice from the template, which re-filters on every change detection and can potentially diverge between the @if and @for if reports() changes. Consider caching this as a computed value instead of a method call. (Rule: frontend/AGENTS.md)
Severity: low
if input.Title != nil && *input.Title != "" {
    title = *input.Title
}
slug := strings.ToLower(strings.ReplaceAll(title, " ", "-"))
slug generation for bookmark_rss is a simple space→hyphen replacement, so RSS entries with identical/similar titles can easily collide and cause CreateContent to fail with a duplicate-slug conflict. It may also leave punctuation in the slug, producing awkward URLs compared to a dedicated slugify routine.
Severity: medium
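A dedicated slugify routine of the kind the review suggests might look like this. It is a sketch, not the project's helper: it strips punctuation and collapses separator runs, but uniqueness (e.g. appending the RSS entry ID on collision) would still need to be handled separately.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// nonAlnum matches runs of characters that should become a single hyphen.
var nonAlnum = regexp.MustCompile(`[^a-z0-9]+`)

// slugify lowercases the title, replaces every run of non-alphanumeric
// characters with one hyphen, and trims leading/trailing hyphens.
func slugify(title string) string {
	s := strings.ToLower(title)
	s = nonAlnum.ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

func main() {
	fmt.Println(slugify("Hello, World!")) // hello-world
	fmt.Println(slugify("Go  1.25"))      // go-1-25
}
```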
<div class="rounded-lg border border-zinc-800 bg-zinc-900/50 p-4">
<button
  type="button"
  class="w-full cursor-pointer rounded-lg border border-zinc-800 bg-zinc-900/50 p-4 text-left transition-colors hover:border-zinc-700 hover:bg-zinc-800/50"
Inside this button block, the lifecycle badge uses the Tailwind class rounded-sm, but the repo's Tailwind v4 rules expect rounded-xs (in v4, rounded-sm corresponds to what plain rounded meant before). This can lead to an inconsistent border radius across the UI. (Rule: frontend/AGENTS.md)
Other locations where this applies: frontend/src/app/admin/commitments/directives.html:217, frontend/src/app/admin/commitments/directives.html:331
Severity: low
No description provided.