Conversation
Co-Authored-By: Virgil <virgil@lethean.io>
Replace all forge.lthn.ai import paths in Go source files with dappco.re equivalents. Update go.mod deps to dappco.re versions as specified. Co-Authored-By: Virgil <virgil@lethean.io>
All RegisterTools and internal register*Tool methods updated from *mcp.Server to *coremcp.Service. Tool registration calls updated to use svc.Server() for SDK AddTool calls. Monitor subsystem updated to store *coremcp.Service and access Server() for Sessions/ResourceUpdated. Tests updated to create coremcp.Service via New() instead of raw SDK server. Co-Authored-By: Virgil <virgil@lethean.io>
…andard - Replace broken registerMCPService with mcp.Register (fixes nil ServiceRuntime panic) - Remove dead mcp_service.go, update tests to use mcp.Register directly - Add setTestWorkspace() helper to clear workspaceRootOverride between tests - Fix 40+ test files with workspace state poisoning from loadAgentConfig - Fix forge.lthn.ai → dappco.re in findConsumersList test - Fix BranchWorkspaceCount test to use isolated temp dir - Add CLI test standard: 32 tests across 19 subsystems (tests/cli/) - All 9 packages pass, 0 failures Co-Authored-By: Virgil <virgil@lethean.io>
- Add isLEMProfile(): codex:lemmy/lemer/lemma/lemrd use --profile not --model - Add isNativeAgent(): Claude agents run natively (not in Docker) - Update localAgentCommandScript for LEM profile support - 12 new tests (Good/Bad/Ugly for profiles, native agent, codex variants) Co-Authored-By: Virgil <virgil@lethean.io>
Gate non-essential MCP tools behind CORE_MCP_FULL=1 env var. Core factory tools (dispatch, status, plan, issue, PR, scan, mirror, watch, brain, files) always registered. Extended tools (session, sprint, state, phase, task, template, message, content, platform, epic, remote, review-queue, setup, metrics, RAG, webview) only when full mode enabled. 189 → 82 tools in default mode. Fixes slow MCP startup and tool registration timeout in Claude Code. Co-Authored-By: Virgil <virgil@lethean.io>
…re primitives Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath, errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim, core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(), core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives. Framework boundary exceptions preserved where stdlib types are required by external interfaces (Gin, net/http, CGo, Wails, bubbletea). Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
…gaps - paths.go: resolve relative workspace_root against $HOME/Code so workspaces land in the conventional location regardless of launch cwd (MCP stdio vs CLI) - dispatch.go: container mounts use /home/agent (matches DEV_USER), plus runtime-aware dispatch (apple/docker/podman) with GPU toggle per RFC §15.5 - queue.go / runner/queue.go: DispatchConfig adds Runtime/Image/GPU fields; AgentIdentity parsing for the agents: block (RFC §10/§11) - pr.go / commands_forge.go / actions.go: agentic_delete_branch tool + branch/delete CLI (RFC §7) - brain/tools.go / provider.go: Org + IndexedAt fields on Memory (RFC §4) - config/agents.yaml: document new dispatch fields, fix identity table - tests: dispatch_runtime_test.go (21), expanded pr_test.go + queue_test.go, new CLI fixtures for branch/delete and pr/list Co-Authored-By: Virgil <virgil@lethean.io>
- sync.go: syncBackoffSchedule (1s/5s/15s/60s/5min) with per-push Attempts and NextAttempt honoured on retry (RFC §16.5) - runSyncFlushLoop: ticks every minute from OnStartup when API key present, drains the queue without re-scanning workspaces - SyncPushInput.QueueOnly: lets flush loop drain without triggering a full workspace scan (prevents duplicate pushes) - Sync ledger at .core/sync/ledger.json: fingerprints keyed by workspace name + (updated_at, runs); skips already-synced workspaces until fresh activity - docs/RFC-AGENT.md: synced from plans/code/core/agent/RFC.md with latest AgentPlan status enum, complete capability, pr.close/branch.delete, indexed_at/org brain fields Co-Authored-By: Virgil <virgil@lethean.io>
Introduce an optional go-store persistence layer for the three state groups described in RFC §15.3 — queue, concurrency, registry — plus runtime_state and dispatch_history used by the sync pipeline. - statestore.go lazily opens `.core/db.duckdb` via go-store when available; nil-safe helpers return cleanly so in-memory/file-based fallbacks survive when the store cannot open (graceful degradation, RFC §15.6) - prep.go tracks the store reference on the subsystem and closes it on shutdown; hydrateWorkspaces now consults the registry group before the filesystem scan so ghost agents are marked failed across restarts, and TrackWorkspace mirrors updates back into the cache - runtime_state.go persists backoff + fail-count snapshots into the go-store runtime group so dispatch backoff survives restarts even when the JSON file rotates - commit.go writes the completed dispatch record into dispatch_history for RFC §16.3 sync push to drain without rescanning workspaces - statestore_test.go covers lazy-once init, restore/delete round trip, ghost-agent failure marking, and runtime-state replay across subsystem instances Co-Authored-By: Virgil <virgil@lethean.io>
…store
- prep.go TrackWorkspace mirrors into queue + concurrency store groups
(previously only registry); hydrateWorkspaces reaps filesystem ghosts
(dead PID → failed, persisted back to status.json) so cmdStatus and
out-of-process consumers see coherent state (RFC §15.3)
- sync.go queue read/write goes through go-store first per RFC §16.5
("Queue persists across restarts in db.duckdb"), file remains fallback
for graceful degradation
- statestore.go stateStoreGet helper for go-store-first reads
- tests/cli/restart — new CLI test for RFC §15.7 "dispatch → kill →
restart → no ghost agents" dead-PID reap flow
- 4 new statestore tests: queue group mirror, concurrency refresh,
sync queue persistence, fs ghost reap with disk write-back
Co-Authored-By: Virgil <virgil@lethean.io>
Implements `core login CODE` — exchanges a 6-digit pairing code generated at app.lthn.ai/device for an AgentApiKey, persisted to ~/.claude/brain.key. Pairing code is the proof, so the POST is unauthenticated. - auth.go: AuthLoginInput/Output DTOs + handleAuthLogin handler - commands_platform.go: login / auth/login / agentic:login CLI commands with cmdAuthLogin persisting the returned key - prep.go: registered agentic.auth.login / agent.auth.login actions - auth_test.go / commands_platform_test.go / prep_test.go: Good/Bad/Ugly triads per repo convention, including key persistence verification Co-Authored-By: Virgil <virgil@lethean.io>
The runQA handler now captures every lint finding, tool run, build, vet
and test result into a go-store workspace buffer and commits the cycle
to the journal. Intelligence survives in the report and the journal per
RFC §7 Completion Pipeline.
- qa.go: QAFinding / QAToolRun / QASummary / QAReport DTOs mirroring
lint.Report shape; DispatchReport struct written to .meta/report.json;
runQAWithReport opens NewWorkspace("qa-<workspace>"), invokes
core-lint run --output json via c.Process().RunIn(), records every
finding + tool + stage result, then commits
- runQALegacy preserved for graceful degradation when go-store is
unavailable (RFC §15.6)
- dispatch.go: runQA now delegates to runQAWithReport, bool contract
unchanged for existing call sites
- qa_test.go: Good/Bad/Ugly triads per repo convention
Poindexter clustering from RFC §7 Post-Run Analysis remains open —
needs its own RFC pass for the package boundary.
Co-Authored-By: Virgil <virgil@lethean.io>
Extends DispatchReport with the three RFC §7 diff lists (New, Resolved, Persistent) and a Clusters list that groups findings by tool/severity/ category/rule_id. runQAWithReport now queries the SQLite journal for up to persistentThreshold previous cycles of the same workspace, computes the diff against the current cycle, and populates .meta/report.json before ws.Commit(). The full findings payload is also pushed to the journal via CommitToJournal so later cycles have findings-level data to compare against (workspace.Commit only stores aggregated counts). Matches RFC §7 Post-Run Analysis without pulling in Poindexter as a direct dependency — uses straightforward deterministic clustering so agent stays inside the core/go-* dependency tier. Co-Authored-By: Virgil <virgil@lethean.io>
Adds `.core/workspace/db.duckdb` — the permanent record of dispatch cycles described in RFC §15.5. Stats rows persist BEFORE workspace directories are deleted so "what happened in the last 50 dispatches" queries survive cleanup and sync drain. - `workspace_stats.go` — lazy go-store handle for the parent stats DB, build/record/filter/list helpers, report payload projection - `commit.go` — writes a stats row as part of the completion pipeline so every committed dispatch carries forward into the permanent record - `commands_workspace.go` — `workspace/clean` captures stats before deleting, new `workspace/stats` command + `agentic.workspace.stats` action answer the spec's "query on the parent" use case Co-Authored-By: Virgil <virgil@lethean.io>
Adds `recoverStateOrphans` per RFC §15.5 — startup scans `.core/state/`
for leftover QA workspace buffers from dispatches that crashed before
commit, and discards them so partial cycles do not poison the diff
history described in RFC §7.
- `statestore.go` — new `recoverStateOrphans` wrapper around go-store's
`RecoverOrphans("")` so the agent inherits the store's configured
state directory
- `prep.go` — wires the recovery into OnStartup immediately after
`hydrateWorkspaces` so the registry, queue, and buffers all come back
into a consistent state on restart
- `statestore_test.go` — Good/Bad/Ugly coverage, includes the cwd
redirect guard so the go-store default relative path cannot leak test
artefacts into the package working tree
Co-Authored-By: Virgil <virgil@lethean.io>
runWorkspaceLanguagePrep now appends `GOWORK=` (empty) to the env passed to `go work sync` so inherited `GOWORK=off` from a test runner or CI environment doesn't short-circuit the workspace lookup. The extracted workspace template includes a go.work referencing ./repo; without this override the sync fails even though the file is right there. Converged pass — no new features found this sample. Co-Authored-By: Virgil <virgil@lethean.io>
Replace os/exec.LookPath with process.Program.Find() — keeps dispatch runtime detection in line with the repo's process-execution convention and removes the os/exec import from pkg/agentic/dispatch.go. Convergence-pass from spark-medium — no new features found on this sample, confirms core/agent and go-store RFC parity is complete. Co-Authored-By: Virgil <virgil@lethean.io>
go-process's OnStartup re-registers process.start/run/kill with string-ID variants, clobbering the agent's custom handlers that return *process.Process. This broke pid/queue helpers and 7 tests that need the rich handle (TestPid_ProcessAlive_Good, TestQueue_CanDispatchAgent_Bad_AgentAtLimit, etc). Register a Startable override service that reapplies the agent handlers after every service finishes OnStartup — since services run in registration order, "agentic.process-overrides" always runs last and wins. Co-Authored-By: Virgil <virgil@lethean.io>
…ations Three load-bearing gaps between the agent RFC and the MCP surface: - RFC §9 Fleet Mode describes the 6-digit pairing-code bootstrap as the primary way an unauthenticated node provisions its first AgentApiKey. `handleAuthLogin` existed as an Action but never surfaced as an MCP tool, so IDE/CLI callers had to shell out. Adds `agentic_auth_login` under `registerPlatformTools` with a thin wrapper over the existing handler so the platform contract stays single-sourced. - `RegisterTools` was double-registering `agentic_scan` (bare `mcp.AddTool` before the CORE_MCP_FULL gate, then again via `AddToolRecorded` inside the gate). The second call silently replaced the first and bypassed tool-registry accounting, so REST bridging and metrics saw a zero for scan. Collapses both into a single recorded registration before the gate. - `registerPlanTools` and `registerWatchTool` were also fired twice in the CORE_MCP_FULL branch. Removes the duplicates so the extended registration list mirrors the always-on list exactly once. - Switches `agentic_prep_workspace` from bare `mcp.AddTool` to `AddToolRecorded` so prep-workspace participates in the same accounting as every other RFC §6 tool. TestPrep_RegisterTools_Good_RegistersCompletionTool now asserts all three `agentic_auth_*` tools surface, covering the new login registration alongside provision/revoke. Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
One caveat: - This repo did not contain the requested `.core/TODO.md`; the actionable TODO list present here was `php/TODO.md`. Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Add warnings for silent filesystem write/delete failures in agentic persistence helpers and record two adjacent hardening gaps for follow-up.\n\nCo-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
Read-only audit deliverable per ticket scope (no code changes).
docs/flow-audit-2026-04-25.md findings:
* Only 2 canonical RFC YAML flows present in pkg/lib/flow/
(upgrade/v080-plan.yaml + upgrade/v080-implement.yaml).
* 13 of the 15 canonical RFC §3.1 flows are missing.
* pr/merge.yaml appears only as an extra RFC example path.
* Runner feature support:
- flow: directive — preview-only, REJECTED in execution
- when: conditional — NOT implemented
- parallel: fan-out — preview-only
- --dry-run — exists but is effectively a preview alias
- embedded RFC YAML path lookup — missing (mounted lookup forces
Markdown slugs instead of RFC directory structure)
* CLI probe: `./core-agent run/flow pkg/lib/flow/upgrade/v080-plan.yaml
--dry-run` exits 0 with 5-step preview. `./core-agent run/flow
upgrade/v080-plan --dry-run` exits 1 — confirms embedded path
lookup gap.
Per the ticket: output is the audit + child ticket list, not impl.
Supervisor files the 13 missing-flow + 4 runner-feature child tickets
into Mantis.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=544
Per RFC.pipeline.md "core pipeline Command Tree": full audit/epic/ monitor/fix/onboard/budget/training subcommand tree wired into core-agent. Lands across 18 files: * pkg/agentic/pipeline_commands.go — registry: audit, epic/create|run| status|sync, monitor, fix/reviews|conflicts|format|threads, onboard, budget/*, training/* * pkg/agentic/commands.go — pipeline registration wired in * pkg/agentic/pipeline_audit.go — audit issue expansion + bug fix: audit-created implementation issues no longer carry the 'audit' label, so epic + onboard see them as implementation candidates * pkg/agentic/pipeline_epic.go — epic group/run/sync * pkg/agentic/pipeline_monitor.go — open-PR watcher * pkg/agentic/pipeline_fix.go — reviews/conflicts/format/threads helpers * pkg/agentic/pipeline_onboard.go — chained audit → epic → dispatch * pkg/agentic/pipeline_budget.go + pipeline_training.go — stubs returning blocked-on-sibling pointer (sibling tickets own deep impl) * pkg/agentic/pipeline_*_test.go — AX-10 per handler * tests/cli/pipeline/Taskfile.yaml — CLI smoke coverage go test could not complete in this sandbox due to wider workspace go.sum/private-module issues outside this ticket; supervisor catches in clean workspace. Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=535
Per RFC §7 Post-Run Analysis: analyseWorkspace() builds 5D Poindexter
points (tool_id, severity_score, file_hash, category_id, frequency),
clusters by distance 0.15, diffs against previous journal entries to
classify New / Resolved / Persistent (≥5 consecutive cycles).
Lands:
* pkg/agentic/qa_analysis.go — analyseWorkspace, DispatchReport,
findingToPoint, diffFindings, persistentFindings; integrates with
forge.lthn.ai/Snider/Poindexter (canonical path per memory)
* pkg/agentic/qa.go — wires analysis into runQAWithReport before
ws.Commit() (sync.go untouched — ws.Commit lives in runQAWithReport
in this branch)
* journal publication extended so summary text + analysis fields travel
with the report
* qa_analysis_test.go — TestAnalyseWorkspace_{Good_EmptyFindings,
Good_FiveClusters,Bad_NilWorkspace,Ugly_PoindexterPanic}; the panic
test uses a panic-injecting clusterer override and asserts graceful
recovery
* go.mod — adds forge.lthn.ai/Snider/Poindexter (canonical, NOT
dappco.re — Poindexter is OG load-bearing math primitive)
Sandbox go test blocked by pre-existing unrelated issues in
commands_forge.go / fetch_loop.go / commands_flow_test.go (out of
allowlist); supervisor catches in clean workspace.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=538
Per RFC §7 Post-Completion Repo Sync: workspace push → IPC event →
local clone fetch+reset so the supervisor's working tree always
matches Forge.
Lands:
* pkg/agentic/repo_sync.go — registers sync.fetch + sync.reset core
actions; subscribes to WorkspacePushed IPC; exposes core-agent
repo/sync --repo <name> [--reset] manual command
* commands.go — wires the subscriber + actions at startup
* pkg/agentic/repo_sync_test.go — AX-10: WorkspacePushed handler,
branch-switch/reset behaviour, command path
* tests/cli/sync/{Taskfile.yaml,repo/Taskfile.yaml} — end-to-end
smoke proving local clone matches Forge HEAD after sync
The existing 5min fallback fetch loop in fetch_loop.go is reused
unchanged — this lane fills the event-driven half of the contract.
Sandbox blocked from go test / go build by pre-existing go.work
dappco.re/go/api replacement conflict; supervisor's clean workspace
catches.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=546
… success paths (#545)
Per RFC §15.5.1: agent branches (agent/*) must be deleted from Forge
after successful push or merge — stale branches pollute workspace prep
and cause clone confusion.
Lands:
* pkg/agentic/branch_cleanup.go (NEW) — cleanupBranch(ctx, repo, branch)
helper. Refuses main/dev/master regardless of input (defensive).
Normalises refs/heads/* prefix. Treats missing-remote-branch as
harmless cleanup-success (idempotent).
* pkg/agentic/branch_cleanup_test.go (NEW) — AX-10 TestCleanupBranch_
{Good_DeletesAgentBranch, Bad_RefuseProtected, Ugly_DeleteFailsForge}.
* pkg/agentic/pr.go — createPR success-on-push path now calls cleanupBranch.
* pkg/agentic/commands.go — cmdComplete success path also calls cleanupBranch.
* tests/cli/branch/Taskfile.yaml — end-to-end smoke + AX-10 unit hook.
agentic.branch.delete action was already registered in prep.go; this lane
routes the actual delete behaviour through the new helper instead of
editing the registration site.
Sandbox blocked from go test by outer go.work conflicting replacements;
gofmt clean. Supervisor's clean workspace catches.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=545
…#536) Replaces the #535 stubs with full impl per RFC.pipeline.md. Lands: * pkg/agentic/pipeline_budget.go (extend) — budget/plan reads pool/rate config, counts logged dispatches from .core/db.duckdb (JSONL fallback), prints per-pool remaining budget. budget/log appends to .core/journal/dispatch.jsonl + mirrors to state store. * pkg/agentic/pipeline_training.go (extend) — training/capture pulls PR meta via MetaReader, captures PR diff via Forge PR-diff endpoint with `git show` fallback, records structural CodeRabbit-equivalent finding counts from review-thread totals, appends to .core/training/journal.jsonl. training/stats aggregates totals + zero-finding counts by repo. training/export filters to zero-finding entries → .core/training/export.jsonl (clean LEM training data). * pkg/agentic/training_journal.go (NEW) — shared journal helpers * AX-10 tests replace stubs (pipeline_budget_test.go + pipeline_training_test.go) * tests/cli/pipeline/Taskfile.yaml — end-to-end smoke covers all 5 subcommands against isolated temp workspace + local Forge stub LEM training data pipeline now feedable: merged PRs → training/capture → journal.jsonl → training/export (zero-finding filter) → ready for next LEK iteration. Sandbox blocked from go test by go.work + private-dep resolution; gofmt clean. Forge PR diff endpoint shape verified against Gitea API docs (1.19). Co-authored-by: Codex <noreply@openai.com> Closes tasks.lthn.sh/view.php?id=536
…kdb (#537)
Per RFC §15.3: restart was losing in-flight queue + workspace registry.
"Ghost agents" and "lost queue" pain now fixed.
Lands:
* pkg/agentic/persist.go (NEW):
- OnStartup(ctx, c): opens .core/db.duckdb via go-store, restores
registry/queue/concurrency groups
- Dead-PID detection: registry entries with status=running but
!pidAlive(PID) → marked failed with question="dead worker on
restart"; status.json files re-written to disk
- Orphaned workspace cleanup: walk .core/workspace/, dir-exists +
registry-says-completed → delete
- OnShutdown(ctx): flushes in-memory registry + queue back to store
before close
* pkg/agentic/prep.go — PrepSubsystem.OnStartup/OnShutdown wired
* pkg/agentic/persist_test.go — AX-10 covering queue restore,
dead-worker reaping, shutdown persistence, invalid-store-payload,
orphan cleanup
* tests/cli/restart/Taskfile.yaml — extended smoke seeds DuckDB state
for queued workspace + dead running worker, asserts status.json
reflects restore correctly
Sandbox blocked from go test by go.work conflicting dappco.re/go/api
replacements (pre-existing); gofmt clean. Supervisor's clean workspace
catches.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=537
Per RFC §9 Fleet Mode: device pairing + SSE-with-poll-fallback +
heartbeat + status reporting now wired.
Lands:
* pkg/agentic/fleet_login.go — `core login CODE` POSTs /v1/device/pair
with the 6-digit code; writes {agent_api_key, agent_id, expires_at}
to ~/.core/agent.key (mode 0600). Errors clean (no panic) on invalid
code / network fail.
* pkg/agentic/fleet_connect.go — Connect(ctx) opens SSE to
/v1/fleet/events with Bearer auth; reconnect backoff 1s→2s→4s→8s→
16s→30s. PollFallback via /v1/fleet/task/next every 30s when SSE
keeps failing. Heartbeat goroutine POSTs /v1/fleet/heartbeat every
60s with {agent_id, status}. Persists last-known fleet snapshot to
~/.core/fleet.status.json so fleet/status survives restart.
* pkg/agentic/fleet_mode.go — `core fleet` top-level + `fleet/nodes`
(lists registered nodes) + `fleet/status` (connection state, last
heartbeat, last task). All exit cleanly on API-unreachable.
* commands.go — registerFleetCommands wired into registerCommands.
* AX-10 tests + CLI Taskfiles for login + nodes (unreachable-API
asserted clean-exit, no panic).
Sandbox blocked from go test by go.work + private-module-graph
(pre-existing); gofmt clean.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=539
… trigger (#543) Implements pkg/agentic/content_seo.go: SEORevision, ScheduleRevision, GetPendingRevisions, OnGooglebotVisit, Gin middleware hook, and content_seo_schedule MCP tool. ScheduledAt assigned random 8-62m window on first Googlebot hit. Persistence via .core/db.duckdb seo_revisions group. AX-10 coverage in content_seo_test.go (Good/Bad/Ugly + MCP tool registration). Closes tasks.lthn.sh/view.php?id=543 Co-authored-by: Codex <noreply@openai.com>
Adds Phase 3 foundation: agent_profiles table (Postgres-compatible, FK-free),
AgentProfile Eloquent model with encrypted api_key_cipher cast, array
capability_tags cast, datetime last_dispatched_at cast, active() and
forClass(string) local scopes.
AX-10 Pest Feature test covers:
- active() returns only enabled+headroom>0 rows
- active()->forClass('A') chains correctly
- api_key_cipher round-trips via encrypted cast
- capability_tags round-trips as array
Codex note: php -l clean on all 3 files; pest+artisan unavailable at
repo root for runtime verification (composer + bootstrap live downstream
in lab/lthn.ai).
Closes tasks.lthn.sh/view.php?id=825
Co-authored-by: Codex <noreply@openai.com>
…or (#828)
Phase 3 lane: queueable Job that normalises Hermes fetch/event payloads,
extracts the first commit SHA via ShaExtractor (matches both forge URL
and bare 7-40 hex forms), builds a close-note with optional
?\Mod\AgentProfile profile reference, posts the Mantis note via
MantisClient->note(), then transitions the ticket to closed/fixed via
MantisClient->close().
ShaExtractor: extract($modelOutput): {?sha, ?repo, ?forge_url} regex
matcher for forge.lthn.sh commit URLs + bare SHAs.
MantisClient: thin Guzzle wrapper around tasks.lthn.sh REST API
(note + close), Authorization header, base URL via config.
Pest Feature test: 8 tests / 17 assertions covering forge URL parsing,
bare SHA parsing, note/close ordering, repo-hint fallback, missing-SHA
RuntimeException path. Verified via temp Composer harness (no checked-in
vendor/ at this repo level).
Closes tasks.lthn.sh/view.php?id=828
Co-authored-by: Codex <noreply@openai.com>
… 3 (#826) ShapeClassifier::classify($ticket) returns 'A'|'B'|'C' per policy v1: - (severity=critical OR priority=urgent) → A - has tag in ['security','crypto','core'] → A - (severity=major OR priority=high) → B - everything else → C ProfileSelector::pickFor($ticket) walks AgentProfile::active(), matches capability tags case-insensitively against ticket.tags: - Class A: cheapest matching profile (cost_class alphabetic order) - Class B: any active profile with quota_headroom_pct >= 25 - Class C: deterministic round-robin via last_dispatched_at Pest Unit tests cover Good (matching profile picked), Bad (no match → null), Ugly (all profiles disabled → null), plus class A/B headroom gating + class C round-robin determinism. Codex note: php -l clean; pest skipped — no vendor/ at this repo root (downstream lab/lthn.ai owns composer install). Closes tasks.lthn.sh/view.php?id=826 Co-authored-by: Codex <noreply@openai.com>
Migration 2026_04_25_000002 adds nullable plugin_cc_name string column to agent_profiles. AgentProfile::$fillable extended to allow it. agentic:sync-plugins-cc artisan command: - Scans ~/.claude/plugins via Storage::disk(...) local disk (Finder fallback) for directories with plugin.json - Maps to enabled AgentProfile by name first, plugin_cc_name second - Upserts plugin_cc_name on matches; emits mapped/unmapped table Pest Feature test fakes HOME, creates plugin dirs, verifies both mapping paths + disabled/non-matching profiles stay null. Codex note: php -l clean; pest skipped (no vendor/). Boot.php command registration deferred — new test registers the command directly to verify behaviour; Boot wiring belongs to a follow-up that touches existing Boot file. Closes tasks.lthn.sh/view.php?id=837 Co-authored-by: Codex <noreply@openai.com>
…tches table (#827)
Phase 3 lane: queueable Job that resolves a profile via ProfileSelector,
posts POST {gateway}/v1/responses to the chosen Hermes gateway, persists
ticket_id/profile_id/response_id/run_id/status in agent_dispatches, and
chains CaptureDispatchResultJob.
Migration 2026_04_25_000003 creates agent_dispatches table (FK-free,
Postgres-compatible).
HermesClient: thin Laravel HTTP wrapper around the Hermes /v1/responses
endpoint with Authorization header + JSON body.
DispatchMantisTicketJob behaviour:
- Resolves profile via ProfileSelector::pickFor()
- Null-profile → log warn + ->release(60) requeue
- Otherwise POSTs to gateway, persists AgentDispatch row, queues
CaptureDispatchResultJob
AgentDispatch Eloquent model with minimal $fillable.
Pest Feature test (Http::fake): verifies request shape, persisted row,
downstream capture-job queueing, and the no-profile requeue path.
Test file conditionally aliases minimal stubs for sibling-lane services
so this file remains runnable before #826/#828 fully land in dev.
Codex note: php -l clean; pest skipped (no vendor/).
Closes tasks.lthn.sh/view.php?id=827
Co-authored-by: Codex <noreply@openai.com>
Console\Kernel registers agentic:sync-profiles hourly + agentic:dispatch-queue --limit=3 every 5 minutes. Api\MantisWebhookController accepts POST /api/agentic/mantis-webhook, authenticates via X-Mantis-Webhook-Secret header (config-driven), validates payload, dispatches DispatchMantisTicketJob immediately for issue.opened (when ProfileSelector finds a profile), 204 for issue.closed/other events, 401 wrong secret, 422 malformed body. Pest Feature test covers all four cases (200 + dispatch, 401, 204, 422). Codex note: php -l clean; pest skipped (no vendor/). OUT OF SCOPE for this narrowed lane: OpenBrain memory writes + Langfuse trace observability (track separately as #830 follow-up). Closes tasks.lthn.sh/view.php?id=830 (narrowed — observability is followup) Co-authored-by: Codex <noreply@openai.com>
…ue (#829)
agentic:sync-profiles iterates AgentProfile rows, calls GET {gateway}/v1/models
via Http::withToken, infers capability_tags from exposed model ids
(claude-opus → handoff/analysis/core; gpt-5.4-mini → dispatch/cheap;
embedding-* → embedding), leaves last_dispatched_at untouched.
agentic:dispatch-queue uses extended MantisClient->listOpen() (new
small wrapper), skips assigned tickets + 5min in-flight markers, runs
ProfileSelector::pickFor, adds suppression note via MantisClient->note,
queues DispatchMantisTicketJob up to --limit (default 3). Both
commands emit progress via Log.
Pest Feature tests use Http::fake + Queue::fake. Tests register the
new commands directly (Boot.php registration is a deferred follow-up
per #837 lane note).
Codex note: php -l clean; pest blocked by unrelated repo migration
infra (dedicated brain connection + SQLite-incompatible agent_sessions
rename).
Closes tasks.lthn.sh/view.php?id=829
Co-authored-by: Codex <noreply@openai.com>
…add missing TestPrep_EnsureWorkspaceTaskFile_Bad test
github/dev tree was reduced to 2 files (pkg/agentic/prep{,_test}.go) after
an experimental branch reset. Cumulative net diff vs merge-base b338e12
is only those 2 files. Local has been growing the full tree for 120
commits and absorbed every substantive prep.go change in passing —
dispatch sync hooks, AddToolRecorded migrations, ensureWorkspaceTaskFile
wiring, atomic CODEX.md write. prep_test.go missing only the
TestPrep_EnsureWorkspaceTaskFile_Bad case; appended.
The ~1,500 "deleted by them" entries are artifacts of github's branch
reset, not a delete intent. Kept canonical local tree.
Co-authored-by: Hephaestus <hephaestus@cladius>
|
Warning Rate limit exceeded
To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing. ⌛ How to resolve this issue?After the wait time has elapsed, a review can be triggered using the We recommend that you space out your commits to avoid hitting the rate limit. 🚦 How do rate limits work?CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information. ℹ️ Review info⚙️ Run configurationConfiguration used: Organization UI Review profile: CHILL Plan: Pro Run ID: 📒 Files selected for processing (13)
📝 WalkthroughWalkthroughRenamed/rebranded agent manifests and MCP invocations to "core", added multiple Python MCP servers (Camofox, Hermes), large PHP agentic subsystem (fleet, credits, sessions, MCP infra), tightened filesystem permissions, and removed legacy Gemini CLI integrations. Changes
Sequence Diagram(s)sequenceDiagram
participant Client
participant MCP as Camofox MCP (stdio)
participant Gateway as Camofox HTTP API
Client->>MCP: tools/call navigate(url)
MCP->>Gateway: POST /navigate
Gateway-->>MCP: JSON {status, ...}
MCP-->>Client: JSON result
Client->>MCP: tools/call screenshot()
MCP->>Gateway: GET /screenshot
Gateway-->>MCP: PNG bytes
MCP->>MCP: base64_encode(bytes)
MCP-->>Client: {screenshot: "base64..."}
sequenceDiagram
participant Client
participant MCP as Hermes MCP (stdio)
participant Gateway as Hermes Gateway
Client->>MCP: hermes_dispatch(runner, task)
MCP->>Gateway: POST /dispatch
Gateway-->>MCP: {run_id, status_url}
MCP-->>Client: {run_id, status_url}
Client->>MCP: hermes_status(run_id)
MCP->>Gateway: GET /runs/{run_id}
Gateway-->>MCP: {state, progress}
MCP-->>Client: normalized status
sequenceDiagram
participant Agent
participant FleetAPI as Fleet API (PHP)
participant DB
participant CreditSvc as CreditService
Agent->>FleetAPI: dispatch(workspace, task)
FleetAPI->>DB: create FleetTask
DB-->>FleetAPI: task_id
FleetAPI->>DB: lock FleetNode (lockForUpdate)
DB-->>FleetAPI: locked node
FleetAPI->>CreditSvc: award(..., fleet_task_id)
CreditSvc->>DB: lock latest CreditEntry
DB-->>CreditSvc: previous balance
CreditSvc->>DB: create CreditEntry (balance_after)
DB-->>CreditSvc: entry
CreditSvc-->>FleetAPI: CreditTransaction
FleetAPI-->>Agent: assigned task response
|
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
.core/reference/runtime.go (1)
156-167:⚠️ Potential issue | 🟠 Major
ServiceShutdownlacks nil-receiver guard unlikeServiceStartup.Line 156 correctly guards against nil receiver with
if r == nil || r.Core == nil. However, line 164 only checksr.Core != nil, meaning(*Runtime)(nil).ServiceShutdown(ctx)will panic. Mirror the startup guard to ensure consistency.Proposed fix
func (r *Runtime) ServiceShutdown(ctx context.Context) Result { - if r.Core != nil { - return r.Core.ServiceShutdown(ctx) - } - return Result{OK: true} + if r == nil || r.Core == nil { + return Result{OK: true} + } + return r.Core.ServiceShutdown(ctx) }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.core/reference/runtime.go around lines 156 - 167, ServiceShutdown currently checks only r.Core and will panic if called on a nil receiver; mirror the startup guard used in ServiceStartup by checking both r == nil || r.Core == nil before accessing r.Core. Update the Runtime.ServiceShutdown method to return Result{OK: true} when r is nil or r.Core is nil, then otherwise delegate to r.Core.ServiceShutdown(ctx), referencing the Runtime type and its ServiceShutdown and ServiceStartup methods to locate the change.php/Migrations/0001_01_01_000008_create_brain_memories_table.php (1)
1-3:⚠️ Potential issue | 🟡 MinorMissing SPDX license header.
Per coding guidelines, PHP source files should include
// SPDX-License-Identifier: EUPL-1.2as a header comment.Proposed fix
<?php +// SPDX-License-Identifier: EUPL-1.2 + declare(strict_types=1);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Migrations/0001_01_01_000008_create_brain_memories_table.php` around lines 1 - 3, Add the SPDX license header comment before the declare statement: insert the single-line comment "// SPDX-License-Identifier: EUPL-1.2" immediately after the opening <?php tag and before declare(strict_types=1); in the migration file so the file begins with the required license header while preserving the existing declare(strict_types=1) line and class CreateBrainMemoriesTable contents.php/Migrations/0001_01_01_000009_drop_brain_memories_workspace_fk.php (1)
1-3:⚠️ Potential issue | 🟡 MinorMissing SPDX license header.
Per coding guidelines, PHP source files should include
// SPDX-License-Identifier: EUPL-1.2as a header comment.Proposed fix
<?php +// SPDX-License-Identifier: EUPL-1.2 + declare(strict_types=1);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Migrations/0001_01_01_000009_drop_brain_memories_workspace_fk.php` around lines 1 - 3, Add the required SPDX license header to the top of the migration file by inserting a single-line comment with the exact text "// SPDX-License-Identifier: EUPL-1.2" immediately after the opening <?php and declare(strict_types=1); line (i.e., at the very top of 0001_01_01_000009_drop_brain_memories_workspace_fk.php) so the file begins with the SPDX comment followed by the existing declare statement.php/Mcp/Tools/Agent/Brain/BrainList.php (1)
1-6:⚠️ Potential issue | 🟡 MinorMissing SPDX license header.
As per coding guidelines, all PHP files must include the SPDX license identifier.
🔧 Proposed fix
<?php +// SPDX-License-Identifier: EUPL-1.2 + declare(strict_types=1);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Tools/Agent/Brain/BrainList.php` around lines 1 - 6, Add the required SPDX license header to the top of BrainList.php: insert a single-line SPDX comment (for example "// SPDX-License-Identifier: MIT" or the project's chosen identifier) immediately after the "<?php" line and before "declare(strict_types=1);" to satisfy the coding guidelines and ensure the file in namespace Core\Mod\Agentic\Mcp\Tools\Agent\Brain contains the license identifier.
🟠 Major comments (27)
php/Mcp/Services/DataRedactor.php-65-79 (1)
65-79:⚠️ Potential issue | 🟠 MajorObjects bypass redaction in both public APIs.
On Line 79 and Line 127, any non-array/non-string input is returned unchanged. That includes object payloads, which can carry secrets and PII without any masking.
Proposed fix
final class DataRedactor { @@ public function redact(mixed $data, int $maxDepth = 10): mixed { + if (is_object($data)) { + $data = get_object_vars($data); + } + if ($maxDepth <= 0) { return '[MAX_DEPTH_EXCEEDED]'; } @@ public function summarize(mixed $data, int $maxDepth = 3): mixed { + if (is_object($data)) { + $data = get_object_vars($data); + } + if ($maxDepth <= 0) { return '[...]'; }Based on learnings: Redact secrets and minimise sensitive data exposure by default.
Also applies to: 82-128
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/DataRedactor.php` around lines 65 - 79, The redact() method currently returns any non-array/non-string input unchanged, which skips objects and can leak PII; add handling for objects by introducing a private redactObject(object $obj, int $maxDepth) and call it from redact() when is_object($data), ensuring you decrement maxDepth like for arrays (pass $maxDepth - 1). redactObject should iterate public properties (or use get_object_vars / JsonSerializable when implemented), redact each property value via existing redact() recursion, and return a sanitized representation (e.g., an object/array of redacted properties or a class-name wrapper) so object payloads are masked consistently with redactArray() and redactString().php/Actions/Forge/ManagePullRequest.php-44-45 (1)
44-45:⚠️ Potential issue | 🟠 MajorUnhandled metadata/merge failures can break the action contract.
If
getPRMeta()ormergePullRequest()throws, callers receive an exception rather than the documented result array. Consider converting failures into explicitreasonvalues.Defensive error handling example
- $prMeta = $metaReader->getPRMeta($prNumber); + try { + $prMeta = $metaReader->getPRMeta($prNumber); + } catch (\Throwable) { + return ['merged' => false, 'reason' => 'meta_unavailable']; + } @@ - $forge->mergePullRequest($owner, $repo, $prNumber); + try { + $forge->mergePullRequest($owner, $repo, $prNumber); + } catch (\Throwable) { + return ['merged' => false, 'reason' => 'merge_failed']; + }Also applies to: 58-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Actions/Forge/ManagePullRequest.php` around lines 44 - 45, The calls to $metaReader->getPRMeta($prNumber) and to mergePullRequest(...) can throw exceptions and currently bubble up; wrap both calls in try/catch blocks inside ManagePullRequest so failures are converted into the documented result array (e.g. return ['success' => false, 'reason' => 'meta_error' or 'merge_error', 'details' => $e->getMessage()]); for getPRMeta catch exceptions and return a consistent failure array instead of throwing, and for mergePullRequest do the same (map different exception types or messages to distinct reason strings) so callers always receive an explicit result rather than an uncaught exception.php/Actions/Forge/ScanForWork.php-72-73 (1)
72-73:⚠️ Potential issue | 🟠 MajorA single metadata fetch failure can terminate the full scan.
getEpicMeta()andgetIssueState()are external reads inside iteration. If either throws, the action exits early instead of returning partial actionable items.Resilience-oriented guard
- $epicMeta = $metaReader->getEpicMeta($epicNumber); + try { + $epicMeta = $metaReader->getEpicMeta($epicNumber); + } catch (\Throwable) { + continue; + } @@ - $issueState = $metaReader->getIssueState($childMeta->issueId); + try { + $issueState = $metaReader->getIssueState($childMeta->issueId); + } catch (\Throwable) { + continue; + }Also applies to: 87-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Actions/Forge/ScanForWork.php` around lines 72 - 73, The loop calling $metaReader->getEpicMeta($epicNumber) and later $metaReader->getIssueState(...) can throw and currently aborts the entire scan; wrap each external read (getEpicMeta and getIssueState) in a try/catch that catches exceptions, logs a non-fatal warning (including the epic/issue id and error message), and then continues to the next epic/issue so the action returns partial actionable items instead of exiting; update the loop in ScanForWork.php (where getEpicMeta and getIssueState are invoked) to handle failures gracefully and ensure any downstream logic handles a null/empty meta or state.php/Console/Commands/AgenticSyncPluginsCcCommand.php-28-31 (1)
28-31:⚠️ Potential issue | 🟠 MajorFail fast when
HOMEis unavailable instead of silently scanning/.claude/plugins.If
HOMEis missing,pluginsPath()resolves to/.claude/plugins,discoverPluginNames()returns an empty list, and the command exits withSUCCESS. In non-interactive environments that turns a configuration error into a silent no-op.💡 Suggested fix
public function handle(): int { - $pluginsPath = $this->pluginsPath(); + try { + $pluginsPath = $this->pluginsPath(); + } catch (\RuntimeException $exception) { + $this->error($exception->getMessage()); + + return self::FAILURE; + } + $pluginNames = $this->discoverPluginNames($pluginsPath); @@ private function pluginsPath(): string { $home = getenv('HOME'); if (! is_string($home) || $home === '') { $home = $_SERVER['HOME'] ?? $_ENV['HOME'] ?? ''; } - return rtrim((string) $home, '/').'/.claude/plugins'; + if (! is_string($home) || $home === '') { + throw new \RuntimeException('Unable to resolve HOME for Claude Code plugin discovery.'); + } + + return rtrim($home, '/').'/.claude/plugins'; }Also applies to: 259-267
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Console/Commands/AgenticSyncPluginsCcCommand.php` around lines 28 - 31, The command currently proceeds silently when HOME is missing (pluginsPath() -> '/.claude/plugins') causing discoverPluginNames() to return empty and exit with SUCCESS; change the behavior to fail fast by checking for a valid HOME before scanning: in AgenticSyncPluginsCcCommand (before calling pluginsPath() / discoverPluginNames()) verify getenv('HOME') or env('HOME') is set and non-empty, and if not call $this->error(...) with a clear message and return self::FAILURE; apply the same check/update to the other identical exit path referenced around the second occurrence (the block currently returning self::SUCCESS at lines 259-267) so both code paths error out instead of silently succeeding.php/Console/Commands/AgenticSyncPluginsCcCommand.php-42-80 (1)
42-80:⚠️ Potential issue | 🟠 MajorLet
plugin_cc_namedisambiguate duplicate name matches before marking them unmapped.A plugin that hits multiple normalised
namematches is dropped here before the second pass runs. That means a uniqueplugin_cc_namematch can never rescue it, so valid mappings stay unmapped whenever two enabled profiles share the same name.💡 Suggested fix
- $pendingPluginNames = []; + $pendingPluginNames = []; foreach ($pluginNames as $pluginName) { $nameMatches = $this->matchProfilesByName($profiles, $pluginName, $claimedProfileIds); if ($nameMatches->count() === 1) { $profile = $nameMatches->first(); $claimedProfileIds[$profile->id] = $pluginName; $report[] = $this->mapPluginToProfile($pluginName, $profile, 'name'); continue; } - if ($nameMatches->count() > 1) { - $report[] = $this->unmappedRow($pluginName, 'ambiguous name match'); - - continue; - } - - $pendingPluginNames[] = $pluginName; + $pendingPluginNames[$pluginName] = $nameMatches->count() > 1 + ? 'ambiguous name match' + : 'no enabled profile'; } - foreach ($pendingPluginNames as $pluginName) { + foreach ($pendingPluginNames as $pluginName => $fallbackReason) { $pluginNameMatches = $this->matchProfilesByPluginCcName($profiles, $pluginName, $claimedProfileIds); if ($pluginNameMatches->count() === 1) { $profile = $pluginNameMatches->first(); $claimedProfileIds[$profile->id] = $pluginName; @@ if ($pluginNameMatches->count() > 1) { $report[] = $this->unmappedRow($pluginName, 'ambiguous plugin_cc_name match'); continue; } - $report[] = $this->unmappedRow($pluginName, 'no enabled profile'); + $report[] = $this->unmappedRow($pluginName, $fallbackReason); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Console/Commands/AgenticSyncPluginsCcCommand.php` around lines 42 - 80, The loop currently drops plugins with nameMatches->count() > 1 before the plugin_cc_name pass can rescue them; change the logic in AgenticSyncPluginsCcCommand so that when $nameMatches->count() > 1 you attempt a plugin_cc_name disambiguation first: call $this->matchProfilesByPluginCcName($profiles, $pluginName, $claimedProfileIds) inside that branch, and if that returns exactly one profile, map it with $this->mapPluginToProfile(..., 'plugin_cc_name') and mark the profile id in $claimedProfileIds; only if the plugin_cc_name match is not unique should you add the ambiguous unmapped row via $this->unmappedRow($pluginName, 'ambiguous name match') (and do not push it onto $pendingPluginNames). This keeps the existing single-match and pending-pass behavior for truly unmatched names.php/Mcp/Services/ToolRateLimiter.php-50-59 (1)
50-59:⚠️ Potential issue | 🟠 MajorNon-atomic first-hit path can lose increments under concurrency.
Line 50 reads the counter and then Line 54 conditionally writes
1; concurrent requests can both observe0and overwrite each other, undercounting attempts.Suggested fix
public function hit(string $identifier, string $toolName): void { if (! config('mcp.rate_limiting.enabled', true)) { return; } $cacheKey = $this->cacheKey($identifier, $toolName); - $current = (int) Cache::get($cacheKey, 0); $decaySeconds = (int) config('mcp.rate_limiting.decay_seconds', 60); - if ($current === 0) { - Cache::put($cacheKey, 1, $decaySeconds); - - return; - } - - Cache::increment($cacheKey); + if (Cache::add($cacheKey, 1, $decaySeconds)) { + return; // created atomically with TTL + } + + Cache::increment($cacheKey); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/ToolRateLimiter.php` around lines 50 - 59, The current read-then-write path using Cache::get followed by Cache::put is racy and can lose increments under concurrency; replace that logic with an atomic "add-if-not-exists" operation (use Cache::add with $cacheKey and $decaySeconds) and only call Cache::increment($cacheKey) when Cache::add reports the key already existed (i.e. add returned false). Update the block around Cache::get/Cache::put/Cache::increment in ToolRateLimiter (the code using $cacheKey and $decaySeconds) to first attempt Cache::add(...) and fallback to Cache::increment(...) to ensure atomic first-hit semantics.php/Jobs/DeleteFromIndex.php-31-35 (1)
31-35:⚠️ Potential issue | 🟠 MajorGuard against deleting indexes for memories that were restored before job execution.
This queued job deletes blindly by ID. If a memory is restored after dispatch but before handling, its index entries can be removed incorrectly.
Suggested fix
@@ namespace Core\Mod\Agentic\Jobs; +use Core\Mod\Agentic\Models\BrainMemory; use Core\Mod\Agentic\Services\BrainService; @@ public function handle(BrainService $brain): void { + $memory = BrainMemory::withTrashed()->find($this->memoryId); + if ($memory instanceof BrainMemory && $memory->deleted_at === null) { + return; + } + $brain->qdrantDelete([$this->memoryId]); $brain->elasticDelete($this->memoryId); } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Jobs/DeleteFromIndex.php` around lines 31 - 35, The job currently deletes index entries unconditionally; load the Memory model inside DeleteFromIndex::handle (e.g., Memory::withTrashed()->find($this->memoryId)) and check its trashed/restored state before calling qdrantDelete and elasticDelete: if the memory exists and is not trashed (i.e., it was restored) skip the deletion, otherwise proceed; update handle to perform this guard using the Memory model and the existing qdrantDelete and elasticDelete calls.php/Actions/Brain/RememberKnowledge.php-88-88 (1)
88-88:⚠️ Potential issue | 🟠 MajorValidate
orgbefore forwarding toBrainService.
orgis now accepted but not validated in this action. Direct callers can still pass invalid types/lengths, which may fail downstream.Suggested fix
@@ $tags = $data['tags'] ?? null; if (is_array($tags)) { @@ } + + $org = $data['org'] ?? null; + if ($org !== null) { + if (! is_string($org) || $org === '') { + throw new \InvalidArgumentException('org must be a non-empty string when provided'); + } + if (mb_strlen($org) > 128) { + throw new \InvalidArgumentException('org must not exceed 128 characters'); + } + } @@ - 'org' => $data['org'] ?? null, + 'org' => $org,🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Actions/Brain/RememberKnowledge.php` at line 88, The action RememberKnowledge.php forwards an unvalidated 'org' value to BrainService which can cause downstream failures; add input validation in the RememberKnowledge handler (e.g., in the method that builds $data before calling BrainService::remember or similar) to ensure 'org' is either null or a string matching the expected constraints (type check string, trim, enforce max length, and optionally allowed pattern/characters) and return a 4xx error or normalize to null when invalid; update the call site that currently sets 'org' => $data['org'] ?? null to use the validated/normalized value and document the constraint so BrainService receives only valid org identifiers.php/Console/Commands/BrainPruneCommand.php-50-56 (1)
50-56:⚠️ Potential issue | 🟠 MajorPotential race condition between job dispatch and record deletion.
DeleteFromIndex::dispatch($memory->id)queues an asynchronous job, but$memory->forceDelete()executes immediately. If the queued job needs to look up theBrainMemoryrecord (e.g. to fetch index keys or metadata), it will fail because the record is already deleted.Consider either:
- Making the deletion part of the job itself (delete from index, then delete from DB)
- Passing all necessary data to the job upfront so it doesn't need to query the record
🐛 Option 1: Let the job handle both index and DB deletion
$query->chunkById($chunkSize, function (Collection $memories) use (&$pruned): void { foreach ($memories as $memory) { - DeleteFromIndex::dispatch($memory->id); - $memory->forceDelete(); + // Pass memory data the job needs, let job handle full cleanup + DeleteFromIndex::dispatch($memory->id, $memory->toArray()); $pruned++; } });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Console/Commands/BrainPruneCommand.php` around lines 50 - 56, The current loop in BrainPruneCommand dispatches DeleteFromIndex::dispatch($memory->id) and then calls $memory->forceDelete(), which can race with the queued job; either move DB deletion into the job or give the job all data it needs so it doesn't query the deleted BrainMemory. Fix by updating the DeleteFromIndex job (or creating a new job) to perform both index removal and the database deletion (use BrainMemory lookup and forceDelete() inside the job) and remove the immediate $memory->forceDelete() from the chunk callback, or alternatively change the dispatch call to pass the necessary memory attributes (e.g., $memory->toArray() or specific index keys/metadata) so the job doesn't need to query BrainMemory by id before the record is removed. Ensure changes reference DeleteFromIndex::dispatch, $memory->forceDelete(), the BrainPruneCommand chunk callback, and BrainMemory.go.mod-140-142 (1)
140-142:⚠️ Potential issue | 🟠 MajorRemove local
replacedirectives before merging to main.These
replacedirectives point to relative filesystem paths (../mcp,../../snider/Poindexter), which will break builds for anyone without identical local directory structures. Verification confirms both modules have published versions available (dappco.re/go/mcpversions include v0.5.5, v0.4.6, etc.;forge.lthn.ai/Snider/Poindexterversions include v0.0.1, v0.0.3, etc.), making the local replace directives unnecessary. Replace them with versioned module references before merging.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` around lines 140 - 142, Remove the local filesystem replace directives in go.mod that point dappco.re/go/mcp => ../mcp and forge.lthn.ai/Snider/Poindexter => ../../snider/Poindexter; instead reference the published module versions (e.g., add or update require entries for dappco.re/go/mcp at a released version such as v0.5.5 and forge.lthn.ai/Snider/Poindexter at an appropriate released version such as v0.0.3) so builds don’t depend on local paths; edit the go.mod entries touching the module names dappco.re/go/mcp and forge.lthn.ai/Snider/Poindexter and remove the two replace lines before merging.php/Mcp/Resources/DatabaseSchema.php-41-53 (1)
41-53:⚠️ Potential issue | 🟠 MajorPotential SQL injection via table name interpolation.
$tableNameis interpolated directly into the SQL query usingsprintf. Although the table names originate fromSHOW TABLESorSchema::getTableListing(), a table with special characters (e.g., containing quotes) could break the query or, in adversarial scenarios, exploit the interpolation.🛡️ Proposed fix using proper escaping
protected function describeTable(string $tableName): array { $driver = DB::getDriverName(); + $escapedName = str_replace(['`', '"'], '', $tableName); try { return array_map(static fn (object $column): array => (array) $column, DB::select(sprintf( - $driver === 'sqlite' ? 'PRAGMA table_info("%s")' : 'DESCRIBE `%s`', - $tableName, + $driver === 'sqlite' ? 'PRAGMA table_info("%s")' : 'DESCRIBE `%s`', + $escapedName, ))); } catch (\Throwable) { return []; } }Alternatively, consider using Laravel's
Schema::getColumnListing($tableName)andSchema::getColumnType($tableName, $column)for a driver-agnostic, safer approach.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Resources/DatabaseSchema.php` around lines 41 - 53, The describeTable method currently interpolates $tableName into raw SQL causing potential injection; replace the raw DB::select approach in describeTable with Laravel Schema APIs: call Schema::getColumnListing($tableName) to get columns and Schema::getColumnType($tableName, $column) (or DB::getSchemaBuilder()->getColumnType) to build the column arrays, and return an array of column metadata instead of executing raw PRAGMA/DESCRIBE queries; this removes direct SQL interpolation and is driver-agnostic for the describeTable implementation.php/Actions/Credits/AwardCredits.php-56-73 (1)
56-73:⚠️ Potential issue | 🟠 MajorLock the task row before the ownership and idempotency checks.
Line 57 reads
FleetTaskwithoutlockForUpdate(), so another transaction can still mutate or reassign that task between the validation here and theCreditEntryinsert below. That leaves the new ownership guard TOCTOU-prone. Fetch the task under the same row lock as the node.🤖 Prompt for AI Agents
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Actions/Credits/AwardCredits.php` around lines 56 - 73: the FleetTask fetch in AwardCredits (the FleetTask::query()->where(...)->find($fleetTaskId) block) must be locked for update to avoid TOCTOU races: replace the find call with a query that uses lockForUpdate() (e.g., FleetTask::query()->where('workspace_id', $workspaceId)->lockForUpdate()->find($fleetTaskId)) and ensure this fetch occurs inside the same DB transaction/lock context as the node lookup and the subsequent CreditEntry insert so ownership and idempotency checks use the same row lock.

php/Agentic/Data/CreditTransaction.php (1)
25-45: ⚠️ Potential issue | 🟠 Major

Fail fast on incomplete ledger rows instead of inventing values.

`fromModel()` currently converts missing required fields into valid-looking defaults (`0`, `''`, and, for Line 34, effectively "now"). That will hide broken selects and can misreport transaction ordering or balances to API consumers. Please treat missing ledger fields as invalid input and throw, rather than synthesising a transaction.
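A minimal sketch of the fail-fast shape, assuming the field list from the prompt below; the constructor argument names are illustrative, not the class's real signature:

```php
use Carbon\CarbonImmutable;

public static function fromModel(object $model): self
{
    // Reject incomplete rows instead of synthesising 0 / '' / "now".
    foreach (['workspace_id', 'task_type', 'amount', 'balance_after', 'created_at'] as $field) {
        if (! isset($model->{$field})) {
            throw new \InvalidArgumentException("Ledger row is missing '{$field}'.");
        }
    }

    return new self(
        workspaceId: (int) $model->workspace_id,   // argument names are
        taskType: (string) $model->task_type,      // illustrative only
        amount: (int) $model->amount,
        balanceAfter: (int) $model->balance_after,
        createdAt: CarbonImmutable::parse($model->created_at),
    );
}
```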
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Agentic/Data/CreditTransaction.php` around lines 25 - 45: the fromModel method is currently masking missing/invalid ledger data by coercing defaults; change CreditTransaction::fromModel to validate required fields (at minimum: workspace_id, task_type, amount, balance_after, created_at) and throw a clear exception (e.g., InvalidArgumentException) when any of these are absent or null/invalid instead of substituting 0/''/'now'; keep the existing CarbonImmutable handling for created_at but if created_at is missing or unparsable throw, and replace the isset/?? fallbacks for id, workspace_id, fleet_node_id, task_type, amount, balance_after, description with explicit presence/type checks that either convert valid values or throw with a message identifying the missing/invalid field.

php/Mcp/Services/OpenApiGenerator.php (1)
42-61: ⚠️ Potential issue | 🟠 Major

Add exception handling for YAML parse failures to match the surrounding fault-tolerance pattern.

Lines 45 and 60 call `Yaml::parseFile()` without catching parse exceptions. The fallback values on these lines and the defensive logic at lines 52–55 suggest this code is designed to tolerate missing or incomplete configuration. However, a malformed YAML file will currently throw an uncaught `ParseException` and crash the spec generator. Wrap both `Yaml::parseFile()` calls in try-catch blocks and fall back to the empty/minimal structures.
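A minimal sketch of the guarded parse for one of the two call sites, assuming the hypothetical `$registryPath` variable and the fallback shape named in the prompt below:

```php
use Symfony\Component\Yaml\Exception\ParseException;
use Symfony\Component\Yaml\Yaml;

try {
    $registry = Yaml::parseFile($registryPath); // $registryPath is illustrative
} catch (ParseException) {
    // Malformed YAML degrades to the same minimal structure used when
    // the file is simply absent.
    $registry = ['servers' => []];
}
```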
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/OpenApiGenerator.php` around lines 42 - 61: wrap both Yaml::parseFile() calls in try-catch blocks to catch Symfony\Component\Yaml\Exception\ParseException and fall back to the current safe defaults: in loadRegistry() catch parse errors and set $this->registry = ['servers' => []]; in loadServers() catch parse errors for each server file and set $this->servers[$id] = ['id' => $id, 'name' => $id]; keep the existing checks around $this->registry['servers'] and the is_array/isset guard in loadServers() so malformed YAML won't throw an uncaught exception.

php/Mcp/Services/OpenApiGenerator.php (1)
213-224: ⚠️ Potential issue | 🟠 Major

Refactor resource identifier to use a query parameter instead of a path segment.

The `/resources/{uri}` endpoint models the resource identifier as a path parameter, but MCP resource URIs commonly contain `:` and `/` characters (e.g. `file:///path/to/resource`, `postgres://database/schema`). Path parameters require URL encoding these characters, making the generated contract awkward and prone to encoding/decoding issues. A query parameter on `/resources` is safer and more appropriate for arbitrary string identifiers: `GET /resources?uri=file%3A%2F%2F%2Fpath`.
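For illustration, the fragment of the generated paths array this would produce; the keys mirror the finding's description, not the generator's verbatim output:

```php
// '/resources/{uri}' becomes '/resources' with a required query parameter,
// so URIs like file:///path or postgres://db/schema need no path encoding.
'/resources' => [
    'get' => [
        'operationId' => 'readResource', // unchanged, so references still match
        'parameters' => [[
            'name' => 'uri',
            'in' => 'query', // was a path parameter
            'required' => true,
            'schema' => ['type' => 'string'],
        ]],
    ],
],
```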
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/OpenApiGenerator.php` around lines 213 - 224: the current OpenAPI path definition uses '/resources/{uri}' and a path parameter named "uri" (operationId 'readResource'), which forces encoding of characters like ':' and '/'; change the path key to '/resources' and move the "uri" parameter from a path parameter to a required query parameter ('in' => 'query') in the 'get' operation; update the parameter schema accordingly and ensure operationId 'readResource' remains unchanged so existing references still match.

php/Jobs/CaptureDispatchResultJob.php (1)
68-75: ⚠️ Potential issue | 🟠 Major

Fix the `\Mod\AgentProfile` namespace on lines 72 and 115.

The type hints use the incomplete namespace `\Mod\AgentProfile` instead of the correct namespace `Core\Mod\Agentic\Models\AgentProfile`. Add the appropriate `use` statement and correct both occurrences:

use Core\Mod\Agentic\Models\AgentProfile;

Then update lines 72 and 115 to use `?AgentProfile` instead of `?\Mod\AgentProfile`.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Jobs/CaptureDispatchResultJob.php` around lines 68 - 75: update the incorrect AgentProfile type hints in CaptureDispatchResultJob: add the import "use Core\Mod\Agentic\Models\AgentProfile;" at the top of the file and replace both occurrences of the fully-qualified "\Mod\AgentProfile" type with the imported "?AgentProfile" (e.g., the constructor property and the other occurrence around the dispatch result handling) so the class uses the correct namespace and nullable type.

php/Controllers/Api/Credits/CreditsController.php (1)
18-27: ⚠️ Potential issue | 🟠 Major

Fail closed when `workspace_id` is missing.

Casting a missing request attribute to `0` lets these endpoints read from a pseudo-workspace and, worse, create manual credit entries under workspace `0` if the upstream middleware is absent or misconfigured. Resolve the attribute once and reject non-positive IDs before continuing.

Suggested fix
```diff
 class CreditsController extends Controller
 {
+    private function workspaceIdFrom(Request $request): int
+    {
+        $workspaceId = (int) $request->attributes->get('workspace_id');
+
+        abort_if($workspaceId <= 0, 400, 'workspace_id attribute is required.');
+
+        return $workspaceId;
+    }
+
     public function balance(Request $request): JsonResponse
     {
-        $workspaceId = (int) $request->attributes->get('workspace_id');
+        $workspaceId = $this->workspaceIdFrom($request);

         $service = $this->resolveCreditService();
@@
     public function deduct(Request $request): JsonResponse
     {
@@
-        $workspaceId = (int) $request->attributes->get('workspace_id');
+        $workspaceId = $this->workspaceIdFrom($request);

         $service = $this->resolveCreditService();
@@
     public function refund(Request $request): JsonResponse
     {
@@
-        $workspaceId = (int) $request->attributes->get('workspace_id');
+        $workspaceId = $this->workspaceIdFrom($request);

         $service = $this->resolveCreditService();
@@
     public function ledger(Request $request): JsonResponse
     {
@@
-        $workspaceId = (int) $request->attributes->get('workspace_id');
+        $workspaceId = $this->workspaceIdFrom($request);

         $limit = (int) ($validated['limit'] ?? 50);
```

Also applies to: 30-61, 64-93
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Controllers/Api/Credits/CreditsController.php` around lines 18 - 27: in CreditsController::balance (and similarly in the other methods noted), stop casting the workspace_id inline; instead fetch the attribute once into a variable, validate that it's a positive integer (>0) and immediately return a JsonResponse error (400 Bad Request) if missing or non-positive before calling resolveCreditService(), fallbackBalance(), or any service methods; ensure you reference the local $workspaceId variable in downstream calls (and avoid creating/using workspace ID 0).

php/Mcp/Services/QueryAuditService.php (1)
22-27: ⚠️ Potential issue | 🟠 Major

`isSafe()` will reject ordinary prose queries.

This regex blocks bare words like `create`, `update`, and `delete` anywhere in the string. Queries such as "create a branch" or "update the docs" will be marked unsafe even though they are not executable statements.
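A minimal sketch of the anchored check, assuming `isSafe()` only needs to catch statements that begin with a write verb (the comment-prefix handling mentioned in the prompt below is omitted for brevity):

```php
public function isSafe(string $query): bool
{
    // Anchoring at the start (after leading whitespace) lets prose such as
    // "create a branch" pass while "DELETE FROM users" is still rejected.
    return preg_match(
        '/^\s*(?:drop|delete|truncate|alter|create|insert|update)\b/i',
        $query,
    ) !== 1;
}
```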
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/QueryAuditService.php` around lines 22 - 27: the current isSafe() regex flags those keywords anywhere in the text and will mark ordinary prose like "create a branch" as unsafe; change the check to only match SQL statements by requiring the keyword appear at the start of the trimmed query (allowing leading whitespace and optional SQL comment prefixes) instead of anywhere. Update the preg_match in isSafe to use a pattern that anchors at the beginning (e.g. start with ^\s* and then (?:drop|delete|truncate|alter|create|insert|update)\b) so only queries that begin with those SQL verbs are considered unsafe.

php/Mcp/Services/QueryAuditService.php (1)
115-145: ⚠️ Potential issue | 🟠 Major

Avoid loading the entire audit table into memory for aggregation.

`aggregate()` does an unbounded `get()` and groups in PHP. Once `mcp_audit_entries` grows, this path will become slow and memory-hungry for dashboards and monitors.
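A minimal sketch of pushing the grouping into SQL, assuming a MySQL-style bucket expression and 0/1 storage for `is_safe`; a per-driver bucket expression (or the `chunkById` fallback from the prompt below) would be needed in practice:

```php
// One grouped query instead of loading every row into PHP.
$rows = McpAuditEntry::query()
    ->selectRaw("DATE_FORMAT(created_at, '%Y-%m-%d') AS bucket")
    ->selectRaw('COUNT(*) AS total')
    ->selectRaw('SUM(CASE WHEN is_safe = 1 THEN 1 ELSE 0 END) AS safe')
    ->selectRaw('SUM(CASE WHEN is_safe = 1 THEN 0 ELSE 1 END) AS unsafe')
    ->selectRaw('ROUND(AVG(duration_ms)) AS average_duration_ms')
    ->selectRaw('SUM(result_count) AS result_count')
    ->groupBy('bucket')
    ->orderBy('bucket')
    ->get();
```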
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/QueryAuditService.php` around lines 115 - 145: the aggregate() method currently pulls all rows via McpAuditEntry::query()->orderBy(...)->get() and groups in PHP, which will OOM for large tables; replace this with DB-side aggregation or streaming. For each resolved period from resolvePeriod(), compute the time-bucket in SQL (use DB::raw/column expression derived from the period, e.g. DATE_TRUNC/DATE_FORMAT depending on your DB) and run a single grouped query on McpAuditEntry that selects the bucket plus COUNT(*) as total, SUM(result_count) as result_count, ROUND(AVG(duration_ms)) as average_duration_ms and conditional SUMs for safe/unsafe (SUM(CASE WHEN is_safe THEN 1 ELSE 0 END)), then map the query result into the existing ['bucket','total','safe','unsafe','average_duration_ms','result_count'] shape; if SQL bucketing isn't available for some DBs, use chunkById on McpAuditEntry and maintain incremental aggregates using bucketFor() to avoid loading the whole table. Ensure you change the code in aggregate(), and keep resolvePeriod(), bucketFor(), and McpAuditEntry references to locate where to implement the DB query or chunked aggregation.

php/Mcp/Services/McpHealthService.php (1)
203-225: ⚠️ Potential issue | 🟠 Major

Always close the process handle after a timeout.

The timeout path kills the child process but skips `proc_close()`. Repeated health checks can leak process handles and leave unreaped children behind.

Suggested fix
```diff
-        $exitCode = $timedOut ? 124 : proc_close($process);
+        $closeCode = proc_close($process);
+        $exitCode = $timedOut ? 124 : $closeCode;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/McpHealthService.php` around lines 203 - 225: the timeout branch currently calls proc_terminate($process, 9) and sets $timedOut = true but never calls proc_close($process), leaking the process handle; update the timeout handling so that after proc_terminate($process, 9) you still call proc_close($process) (assign its return value to a variable if needed) and ensure pipes ($pipes[1], $pipes[2]) are closed before using that exit code, then compute $exitCode using the proc_close result only when appropriate (or explicitly set it to 124 while still calling proc_close to reap the process) so proc_terminate, proc_close, $process, $timedOut and the existing $exitCode logic are all handled safely.

php/Mcp/Services/McpHealthService.php (1)
111-132: ⚠️ Potential issue | 🟠 Major

Resolve placeholders in `command` and `args` as well.

Only `cwd` goes through `resolveEnvVars()`. If a server config uses `${VAR}` in `connection.command` or `connection.args`, this health check will pass the literal placeholder to `proc_open()` and report a healthy server as offline.

Suggested fix
```diff
-        $command = trim((string) ($connection['command'] ?? ''));
+        $command = trim($this->resolveEnvVars((string) ($connection['command'] ?? '')));

         if ($command === '') {
             return $this->buildResult(self::STATUS_OFFLINE, 'No command configured');
         }

-        $args = array_map(static fn (mixed $value): string => (string) $value, (array) ($connection['args'] ?? []));
+        $args = array_map(
+            fn (mixed $value): string => $this->resolveEnvVars((string) $value),
+            (array) ($connection['args'] ?? []),
+        );

         $cwd = $this->resolveEnvVars((string) ($connection['cwd'] ?? getcwd()));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/McpHealthService.php` around lines 111 - 132: the code reads $command and $args from the connection but only resolves environment placeholders for $cwd, so placeholders like ${VAR} in connection['command'] or connection['args'] will be passed unchanged to executeProcess(); fix by running $command and each element of $args through resolveEnvVars() before calling executeProcess() (i.e., call $this->resolveEnvVars(...) on the string for $command and map resolveEnvVars over the $args array) so executeProcess(array_merge([$command], $args), $cwd, $payload.PHP_EOL) receives resolved values.

php/Agentic/Livewire/CreditLedger.php (1)
179-186: ⚠️ Potential issue | 🟠 Major

Validate the agent against the current workspace before mutating credits.

`selectedAgentId` is a public Livewire property, but the server only validates it as a string. A crafted request can therefore refund or deduct credits for an arbitrary agent ID unless the action layer re-checks workspace ownership.

Suggested fix
```diff
+use Illuminate\Validation\Rule;
+
 private function validateAdjustment(): void
 {
     $this->validate([
         'workspaceId' => 'required|integer|min:1',
-        'selectedAgentId' => 'required|string|max:255',
+        'selectedAgentId' => [
+            'required',
+            'string',
+            'max:255',
+            Rule::in(collect($this->agents)->pluck('agent_id')->all()),
+        ],
         'adjustmentAmount' => 'required|integer|min:1|max:100000',
         'adjustmentReason' => 'nullable|string|max:1000',
     ]);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Agentic/Livewire/CreditLedger.php` around lines 179 - 186: the validation in validateAdjustment currently only checks selectedAgentId as a string, allowing forged IDs; update validateAdjustment to ensure the selected agent belongs to the current workspace before any credit mutation (e.g., use a database existence rule or explicit lookup to confirm Agent id = selectedAgentId and workspace_id = workspaceId), and fail validation if no matching agent is found; reference the validateAdjustment method and the public property selectedAgentId when adding the existence check so credits can only be adjusted for agents tied to the provided workspaceId.

php/Mcp/Console/McpAgentServerCommand.php (1)
223-304: ⚠️ Potential issue | 🟠 Major

Do not burn quota on failed tool calls.

Quota is consumed before `ToolRegistry::call()` succeeds. Invalid tool names, bad arguments, and downstream exceptions will still reduce the workspace allowance, which makes malformed requests an easy way to exhaust quota.

Suggested fix
```diff
-        $consumedQuota = $quotaService->consume($workspaceId);
         $startedAt = microtime(true);

         try {
             $result = $toolRegistry->call($toolName, $arguments, [
                 'workspace_id' => $workspaceId,
                 'request_id' => $id,
                 'transport' => 'stdio',
             ]);

+            $consumedQuota = $quotaService->consume($workspaceId);
             $durationMs = (int) round((microtime(true) - $startedAt) * 1000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Console/McpAgentServerCommand.php` around lines 223 - 304: the code currently calls $quotaService->consume($workspaceId) before invoking $toolRegistry->call(), which charges quota even when the tool call fails; move the consumption so it only happens after a successful call. Specifically, remove the early $consumedQuota = $quotaService->consume($workspaceId) and instead call $consumedQuota = $quotaService->consume($workspaceId) inside the try block after $result = $toolRegistry->call(...) succeeds (before building the successResponse), so exceptions caught by the InvalidArgumentException and Throwable handlers do not reduce quota; keep using $consumedQuota->toArray() in the successResponse as before.

php/Agentic/Livewire/FleetOverview.php (1)
151-171: ⚠️ Potential issue | 🟠 Major

Authorise the dispatch target against the current workspace.

`dispatchAgentId` is a public Livewire property, but the server only validates it as a string. A crafted request can therefore dispatch work to an arbitrary agent unless `AssignTask::run()` re-checks workspace ownership.

Suggested fix
```diff
+use Illuminate\Validation\Rule;
+
 public function dispatchTask(): void
 {
     $this->validate([
         'workspaceId' => 'required|integer|min:1',
-        'dispatchAgentId' => 'required|string|max:255',
+        'dispatchAgentId' => [
+            'required',
+            'string',
+            'max:255',
+            Rule::in(collect($this->nodes)->pluck('agent_id')->all()),
+        ],
         'dispatchRepo' => 'required|string|max:255',
         'dispatchTask' => 'required|string|max:10000',
         'dispatchBranch' => 'nullable|string|max:255',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Agentic/Livewire/FleetOverview.php` around lines 151 - 171: the dispatchTask method currently trusts the public Livewire property dispatchAgentId and passes it to AssignTask::run without ensuring the target agent belongs to the current workspace; add an explicit authorization check inside dispatchTask before calling AssignTask::run: load the Agent (or equivalent entity) by the given dispatchAgentId and verify its workspace_id (or workspace relation) equals $this->workspaceId, and if not, throw or abort with a validation/authorization error; alternatively, if you prefer centralized checks, ensure AssignTask::run performs the same workspace ownership validation for the provided agent id (reference dispatchTask(), AssignTask::run, dispatchAgentId, workspaceId).

php/Mcp/Services/McpMetricsService.php (1)
186-187: ⚠️ Potential issue | 🟠 Major

Use a consistent N-day window; current filters can include an extra day.

Line 186, Line 207, and Line 270 use `subDays($days)`, while other methods use `subDays($days - 1)`. This makes cross-widget metrics inconsistent and can overcount by one calendar day.

Suggested fix
```diff
-            ->where('created_at', '>=', CarbonImmutable::now()->subDays($days)->startOfDay()->toDateTimeString())
+            ->where('created_at', '>=', CarbonImmutable::now()->subDays($days - 1)->startOfDay()->toDateTimeString())
```

Apply this to all three occurrences (Line 186, Line 207, Line 270).
Also applies to: 207-208, 270-270
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/McpMetricsService.php` around lines 186 - 187: the code uses CarbonImmutable::now()->subDays($days)->startOfDay()->toDateTimeString() in several queries, which creates an inconsistent N-day window; update each occurrence (the three places that call CarbonImmutable::now()->subDays($days)->startOfDay()->toDateTimeString(), e.g. the where('created_at', '>=', ...) expressions around the groupBy blocks) to use subDays($days - 1) instead so the window matches other methods (replace subDays($days) with subDays($days - 1) in those three spots).

claude/hermes_runner_mcp/README.md (1)
15-17: ⚠️ Potential issue | 🟠 Major

Do not document the API key on the command line.

This example expands `$HERMES_API_KEY` in the shell, so the secret can end up in shell history or any tooling that persists the full command. The server already reads `HERMES_API_KEY`, so the safer example is to omit `--api-key` entirely.

Suggested doc fix
```diff
-claude mcp add hermes-runner -- hermes-runner-mcp --hermes-url=http://localhost:8642 --api-key=$HERMES_API_KEY
+claude mcp add hermes-runner -- hermes-runner-mcp --hermes-url=http://localhost:8642
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude/hermes_runner_mcp/README.md` around lines 15 - 17: the README example exposes a secret by expanding $HERMES_API_KEY on the command line; update the shown command (the "claude mcp add hermes-runner -- hermes-runner-mcp --hermes-url=..." example) to omit the --api-key flag entirely and instead document that the server reads HERMES_API_KEY from the environment, so users should set that variable in their shell or secret manager rather than passing it on the command line.

claude/hermes_runner_mcp/server.py (1)
505-513: ⚠️ Potential issue | 🟠 Major

Return error responses for requests with an `id` but invalid `method`, not silent drops.

When a payload contains an `id` field but has a missing or non-string `method`, the function returns `None` and the caller never receives a response. According to the JSON-RPC 2.0 protocol, this scenario should send an `Invalid Request` error to the client. Only requests without an `id` (notifications) should remain silent.

Suggested fix
```diff
         if not isinstance(method, str):
-            return None
+            return None if "id" not in payload else self._error(request_id, -32600, "Invalid Request")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude/hermes_runner_mcp/server.py` around lines 505 - 513: in _handle_message, when the payload has a non-string or missing method, do not silently return None if the request includes an id; instead check request_id and if it's present return self._error(request_id, -32600, "Invalid Request"), otherwise (notification) continue to return None; update the conditional that currently does "if not isinstance(method, str): return None" to perform this id-aware error response so JSON-RPC error messages are sent for requests but notifications remain silent.
Disposition update for PR #6 review findings:
No open GitHub Advanced Security code-scanning alerts were present.
20+ CHANGES_REQUESTED dispositions across PHP MCP services, Go pkg/agentic, hermes_runner_mcp Python server, plugin shell scripts. Highlights:

- DatabaseSchema.php: identifier quoting
- AwardCredits.php: task row locking order
- CreditTransaction.php: fail-fast row decoding
- OpenApiGenerator.php: YAML parse handling + uri query params
- CaptureDispatchResultJob.php: AgentProfile namespace fix
- CreditsController.php: missing workspace_id fail-closed
- QueryAuditService.php: prose query false positives + unbounded aggregation
- McpHealthService.php: proc_close after timeout + env var resolution
- CreditLedger.php + FleetOverview.php: workspace agent + dispatch target validation
- McpAgentServerCommand.php: quota burn on failed tool calls
- McpMetricsService.php: N-day window consistency
- hermes_runner_mcp: API key off command line + invalid method+id + run_id encoding
- CircuitBreaker.php: extracted CircuitOpenException class with autoload-correct placement
- pkg/agentic + brain + flow: SonarCloud sendMessage/fetchLoopRepoRefs/commitWorkspace/Connect annotations
- shell scripts: removed [[ usage for portability

43 files modified, 1 new (CircuitOpenException.php). Verification: gofmt -w + php -l + python3 -m py_compile + bash -n all clean. Touched-package go test passes (pkg/lib/flow, pkg/lib). Full go test ./... blocked by pre-existing dappco.re module graph drift, out of scope.

Parked for separate work:

- Mantis #1062: go.mod local replace removal (cross-repo architectural)
- Mantis #1063: Sonar residual line-length / duplication quality-gate cluster

Closes findings on #6

Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 14
🧹 Nitpick comments (9)
php/Mcp/Services/McpMetricsService.php (1)
21-48: Missing explicit return type on `getOverview()`.

The method returns an array but lacks a return type declaration. For consistency with the coding guidelines requiring full type hints, add `: array`.

Proposed fix

```diff
- public function getOverview(int $days = 7): array
+ public function getOverview(int $days = 7): array
```

Actually, looking again, the return type `: array` is already present. The implementation is correct. LGTM on method signature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/McpMetricsService.php` around lines 21 - 48: the review flagged a missing return type for getOverview(), but the method Mcp\Services\McpMetricsService::getOverview already declares a return type of : array; no change is needed; leave the function signature and implementation as-is.

php/Mcp/Console/McpAgentServerCommand.php (1)
350-361: Silent exception swallowing for audit failures.

Catching `RuntimeException` silently is documented as intentional for optional audit tables, but this also swallows other `RuntimeException` causes (e.g., database connection issues). Consider catching a more specific exception or logging the failure for observability.

Proposed improvement for observability
```diff
 private function recordAudit(
     QueryAuditService $queryAuditService,
     string $query,
     string $workspaceId,
     string $toolName,
     int $durationMs,
     array $metadata = [],
     ?int $resultCount = null,
 ): void {
     try {
         $queryAuditService->log($query, [
             'workspace_id' => $workspaceId,
             'tool_name' => $toolName,
             'result_count' => $resultCount,
             'duration_ms' => $durationMs,
             'metadata' => $metadata,
         ]);
     } catch (RuntimeException) {
-        // The audit table is optional in this worktree.
+        // The audit table is optional in this worktree.
+        // Consider: Log::debug('Audit logging skipped: table not available');
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Console/McpAgentServerCommand.php` around lines 350 - 361: replace the silent broad RuntimeException catch around queryAuditService->log in McpAgentServerCommand with either a more specific exception type thrown by the audit/data layer or add a logged fallback: catch the specific audit-related exception (or DatabaseException/QueryAuditException if available) and, if that specific exception type is not present, catch RuntimeException but call the logger (e.g., $this->logger or $io) to record the error and context (workspace_id, tool_name) before swallowing; ensure the handler still allows optional audit tables to be ignored but preserves observability by logging the failure instead of silently suppressing all RuntimeExceptions.

php/Mcp/Services/QueryAuditService.php (2)
237-262: Model class should be in a separate file.

The `McpAuditEntry` Eloquent model is defined in the same file as the service. Per PSR-4 autoloading conventions and Laravel best practices, each class should reside in its own file (e.g., `php/Mcp/Models/McpAuditEntry.php`).
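A minimal skeleton of the extracted file, with an assumed namespace; `$fillable` and `$casts` would move across verbatim from the inline class:

```php
<?php

namespace Mcp\Models; // assumed; must match the repository's PSR-4 prefix

use Illuminate\Database\Eloquent\Model;

class McpAuditEntry extends Model
{
    protected $table = 'mcp_audit_entries';

    // $fillable and $casts copied verbatim from the inline definition,
    // which is then deleted from QueryAuditService.php.
}
```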
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/QueryAuditService.php` around lines 237 - 262: the McpAuditEntry Eloquent model is currently defined inline in QueryAuditService.php; move it to its own file (e.g., create php/Mcp/Models/McpAuditEntry.php) with the proper namespace (e.g., namespace Mcp\Models) and use statements (use Illuminate\Database\Eloquent\Model), copy the $table, $fillable and $casts properties there, then remove the McpAuditEntry class from QueryAuditService.php and update that file to import the model (use Mcp\Models\McpAuditEntry) where referenced; ensure the new class namespace matches your PSR-4 autoload config and run composer dump-autoload if needed.
22-32: The `isSafe()` check may miss some SQL attack patterns.

The regex blocks common write statements and dangerous functions, but doesn't catch:

- `UNION SELECT` for data exfiltration
- `INTO OUTFILE` / `INTO DUMPFILE` for file writes
- `;` followed by additional statements (stacked queries)
- `LOAD_FILE()` for reading files

Consider expanding the pattern or documenting the intended scope of protection.

Proposed enhancement for broader coverage
```diff
 public function isSafe(string $query): bool
 {
     $trimmedQuery = ltrim($query);

     $startsWithWriteStatement = preg_match(
-        '/^(?:--[^\n]*\n\s*)*(?:drop|delete|truncate|alter|create|insert|update)\b/i',
+        '/^(?:--[^\n]*\n\s*)*(?:drop|delete|truncate|alter|create|insert|update|grant|revoke)\b/i',
         $trimmedQuery,
     ) === 1;

-    $callsDangerousFunction = preg_match('/(?:exec|system|passthru)\s*\(/i', $query) === 1;
+    $callsDangerousFunction = preg_match('/(?:exec|system|passthru|load_file)\s*\(/i', $query) === 1;
+    $containsDangerousClause = preg_match('/\b(?:into\s+(?:outfile|dumpfile)|union\s+(?:all\s+)?select)\b/i', $query) === 1;

-    return ! $startsWithWriteStatement && ! $callsDangerousFunction;
+    return ! $startsWithWriteStatement && ! $callsDangerousFunction && ! $containsDangerousClause;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Services/QueryAuditService.php` around lines 22 - 32: the isSafe() logic misses UNION SELECT, INTO OUTFILE/DUMPFILE, stacked queries via semicolons, and LOAD_FILE() checks; update isSafe() to add additional preg_match checks (or extend the existing regex) to detect patterns like '\bunion\s+select\b', '\binto\s+(outfile|dumpfile)\b', a semicolon followed by non-whitespace and SQL token (to catch stacked queries), and '\bload_file\s*\('; reference the existing symbols $startsWithWriteStatement and $callsDangerousFunction and either add new booleans (e.g. $containsUnionSelect, $containsIntoOutfile, $hasStackedQueries, $callsLoadFile) or extend the current preg_match usage so the final return remains ! $startsWithWriteStatement && ! $callsDangerousFunction && ! $containsUnionSelect && ! $containsIntoOutfile && ! $hasStackedQueries && ! $callsLoadFile, ensuring regexes are case-insensitive and you trim/normalize input before matching.

php/Mcp/Resources/DatabaseSchema.php (1)
41-54: PostgreSQL compatibility issue in `describeTable()`.

The `DESCRIBE` statement is MySQL-specific. PostgreSQL uses `information_schema` or `\d` commands. When `$driver` is `pgsql`, this will throw an exception and return an empty array, which may be acceptable as a fallback, but could be improved for better PostgreSQL support.

Proposed fix for PostgreSQL support
```diff
 protected function describeTable(string $tableName): array
 {
     $driver = DB::getDriverName();

     try {
-        $statement = $driver === 'sqlite'
-            ? 'PRAGMA table_info('.$this->quoteIdentifier($tableName, $driver).')'
-            : 'DESCRIBE '.$this->quoteIdentifier($tableName, $driver);
+        $statement = match ($driver) {
+            'sqlite' => 'PRAGMA table_info('.$this->quoteIdentifier($tableName, $driver).')',
+            'pgsql' => sprintf(
+                "SELECT column_name, data_type, is_nullable, column_default FROM information_schema.columns WHERE table_name = %s",
+                $this->quoteIdentifier($tableName, $driver),
+            ),
+            default => 'DESCRIBE '.$this->quoteIdentifier($tableName, $driver),
+        };

         return array_map(static fn (object $column): array => (array) $column, DB::select($statement));
     } catch (\Throwable) {
         return [];
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Resources/DatabaseSchema.php` around lines 41 - 54: describeTable currently uses MySQL's DESCRIBE, which breaks for pgsql; update describeTable to detect when $driver === 'pgsql' and build a PostgreSQL-compatible query (e.g., query information_schema.columns or pg_catalog to fetch column metadata for $tableName) instead of DESCRIBE, reusing the existing quoteIdentifier($tableName, $driver) for safe quoting and returning the same array_map conversion; keep the sqlite PRAGMA branch and the catch fallback but ensure pgsql returns real metadata rather than an empty array.

php/Agentic/Livewire/FleetOverview.php (1)
125-144: Consider using `distinct()` at query level for better efficiency.

The current approach filters unique values in PHP after fetching all platforms. Using `->distinct()` in the query would reduce database overhead.

♻️ Optional performance improvement
```diff
 try {
     return FleetNode::query()
         ->where('workspace_id', $this->workspaceId)
+        ->distinct()
         ->orderBy('platform')
         ->pluck('platform')
         ->filter(static fn (mixed $platform): bool => is_string($platform) && $platform !== '')
-        ->unique()
         ->values()
         ->all();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Agentic/Livewire/FleetOverview.php` around lines 125 - 144: replace the current PHP-level uniqueness/filtering with a DB-level distinct query in the platforms() method: modify the FleetNode::query() call to include distinct on the platform column (e.g., use ->distinct('platform') or ->distinct()->pluck('platform')), add a whereNotNull('platform') and/or where('platform', '<>', '') to avoid empty/null platforms at the DB level, then remove the PHP ->filter(...)->unique()->values() steps while keeping the existing where('workspace_id', $this->workspaceId), orderBy('platform') and the try/catch block around the query.

php/Agentic/Livewire/CreditLedger.php (1)
163-247: Consider extracting shared methods to a base trait or class.

The `checkHadesAccess()`, `toast()`, `resolveWorkspaceId()`, and sync pattern methods are duplicated verbatim from `FleetOverview`. This duplication could be reduced with a shared trait.

♻️ Optional: Extract common methods to a trait
Create a trait such as `HasHadesAccess` or `AgenticComponent` containing:

- `checkHadesAccess()`
- `resolveWorkspaceId()`
- `toast()`

Both `FleetOverview` and `CreditLedger` could then `use` this trait.
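A skeleton of the suggested trait; the namespace and method signatures are assumptions, and the bodies would move across unchanged from `FleetOverview`:

```php
<?php

namespace Core\Agentic\Livewire\Concerns; // assumed namespace

trait AgenticComponent
{
    private function checkHadesAccess(): bool
    {
        // ...body moved verbatim from FleetOverview...
    }

    private function resolveWorkspaceId(): int
    {
        // ...body moved verbatim from FleetOverview...
    }

    private function toast(string $message): void
    {
        // ...body moved verbatim from FleetOverview...
    }
}
```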
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@php/Agentic/Livewire/CreditLedger.php` around lines 163 - 247: the duplicated helper methods (checkHadesAccess, toast, resolveWorkspaceId and the syncSelectedAgentId pattern) should be extracted into a shared trait (e.g., AgenticComponent or HasHadesAccess) and both FleetOverview and CreditLedger should use that trait; create a trait containing the private methods (move implementations of checkHadesAccess, toast, resolveWorkspaceId and optionally syncSelectedAgentId if reusable), update both classes to remove their local copies and add "use AgenticComponent;" plus any needed imports (Flux, Workspace, etc.), keep method visibility and behavior unchanged, and run existing tests or smoke the Livewire components to verify no behavior/regression changes.

php/Controllers/Api/Credits/CreditsController.php (1)
64-94: The `total` response key may mislead API consumers.

Line 92 returns `count($entries)` as `total`, which represents the number of entries in the current response, not the total count of all ledger entries in the database. This naming could confuse clients expecting pagination metadata.

♻️ Consider renaming for clarity
```diff
 return response()->json([
     'data' => $entries,
-    'total' => count($entries),
+    'count' => count($entries),
 ]);
```

Alternatively, add a separate `total_entries` field with the actual database count if pagination metadata is needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Controllers/Api/Credits/CreditsController.php` around lines 64 - 94: the response currently returns 'total' => count($entries), which is the number of entries in this response and may mislead clients; update ledger(Request $request) to include an explicit total_entries (the full DB count) and keep total as the response count (or rename total to count if you prefer); compute the full count via CreditEntry::query()->where('workspace_id', $workspaceId)->count() (or call a service count method if resolveCreditService() exposes one), and return both keys in the JSON alongside the existing data produced by $this->formatEntry() and the ledger() iteration.

php/Actions/Brain/RememberKnowledge.php (1)
103-104: Avoid hard-coded org length in validation.

Line 103 duplicates a literal `128`. Extracting it to a class constant will reduce drift with other layers enforcing the same limit.

Proposed refactor
```diff
 class RememberKnowledge
 {
     use Action;

+    private const MAX_ORG_LENGTH = 128;
@@
-    if (mb_strlen($org) > 128) {
-        throw new \InvalidArgumentException('org must not exceed 128 characters');
+    if (mb_strlen($org) > self::MAX_ORG_LENGTH) {
+        throw new \InvalidArgumentException(
+            sprintf('org must not exceed %d characters', self::MAX_ORG_LENGTH)
+        );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Actions/Brain/RememberKnowledge.php` around lines 103 - 104: the org length check in RememberKnowledge (the mb_strlen($org) > 128) uses a hard-coded 128; add a class constant (e.g. const MAX_ORG_LENGTH = 128) to the RememberKnowledge class and replace the literal with self::MAX_ORG_LENGTH in the validation and error message (throw new \InvalidArgumentException("org must not exceed " . self::MAX_ORG_LENGTH . " characters")). Update any other occurrences in this class that repeat 128 to reference the constant to avoid drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@php/Actions/Credits/AwardCredits.php`:
- Around line 81-96: The balance read/update in AwardCredits.php uses
CreditEntry::query()->value('balance_after') (captured as $previousBalance)
without any locking, which allows a race when the same agent_id has multiple
FleetNode entries; wrap the create in a DB transaction and ensure you serialize
by agent_id—either obtain a row-level lock on the agent (e.g., select the latest
CreditEntry for agent_id with FOR UPDATE) or use a DB-level/advisory lock keyed
by the agent_id before reading $previousBalance and inserting the new
CreditEntry—apply this change around the CreditEntry queries so balance_after is
computed from a locked/read-consistent value.
In `@php/Controllers/Api/Credits/CreditsController.php`:
- Around line 121-139: The current recordTransaction function can race on the
first transaction because lockForUpdate() finds no rows; to fix it, obtain a
workspace-level advisory lock at the start of the DB::transaction (before
calling CreditEntry::query()->lockForUpdate()) so only one transaction at a time
computes previousBalance and inserts the first row; specifically, inside
recordTransaction's DB::transaction block call the DB-level advisory lock (e.g.
pg_advisory_xact_lock with the workspaceId) then read previousBalance via
CreditEntry::query()->lockForUpdate()->latest('id')->value('balance_after') and
create the new CreditEntry as before.
In `@php/Jobs/CaptureDispatchResultJob.php`:
- Around line 101-108: The job posts a non-idempotent note via
$mantis->note(...) before calling $mantis->close(), so retries can create
duplicate notes; make the retry path idempotent by either (A) reversing the
order to call $mantis->close($this->ticketId) first and then post the note, or
(B) check for an existing identical note before calling $mantis->note(...) (e.g.
compute a fingerprint from buildCloseNote(...) / firstLineOfSummary() and skip
posting if a matching note exists), or persist a "note posted" flag before/after
posting to prevent duplicate posts; update CaptureDispatchResultJob to use one
of these approaches referencing $this->ticketId, buildCloseNote(),
$mantis->note(), and $mantis->close().
- Around line 69-73: Constructor parameter $response on CaptureDispatchResultJob
is typed as array but DispatchMantisTicketJob dispatches a string response_id
(from extractResponseId()), causing a strict types failure; fix by either (A)
updating the dispatch call in DispatchMantisTicketJob to pass the full parsed
response array (the same data used to create the AgentDispatch record) instead
of $dispatch->response_id, or (B) change CaptureDispatchResultJob::__construct
signature to accept a string $responseId (or ?string) and update the job's
handle() to load the full response via the AgentDispatch model (e.g., find the
dispatch by id and retrieve its parsed response) so types align; locate
CaptureDispatchResultJob::__construct, DispatchMantisTicketJob dispatch site,
extractResponseId(), and AgentDispatch usage to implement the chosen fix.
In `@php/Mcp/Console/McpAgentServerCommand.php`:
- Around line 282-304: Summary: Do not return raw $exception->getMessage() to
external clients; sanitize the message and log the full exception internally.
Fix: in McpAgentServerCommand's catch block (the block that calls
recordAudit(...) and return $this->errorResponse(...)), replace uses of
$exception->getMessage() in the external-facing payloads with a
generic/sanitized string (e.g., "Internal server error" or a short,
non-sensitive code), ensure recordAudit(...) does not leak sensitive details to
external consumers (either store a sanitized error or remove the message there),
and instead log the full exception (message and stack) to an internal logger
(e.g., $this->logger->error) for debugging; update calls to recordAudit and
errorResponse to reference the sanitized variable rather than
$exception->getMessage().
In `@php/Mcp/Services/CircuitBreaker.php`:
- Around line 40-52: The half-open probe lock in CircuitBreaker
(STATE_HALF_OPEN) uses a hard-coded 30s lease in acquireTrialLock which can
expire mid-request and allow a second probe; change acquireTrialLock and its
callers to use a TTL tied to the actual probe lifecycle (pass a configurable
timeout or request-derived lease duration) and ensure the lock is held until the
probe completes and is explicitly released (or renewed while the probe runs)
instead of relying on a fixed 30s constant; update any code that currently
hard-codes 30 seconds to use the new parameter or a config value and make sure
lock release happens after the probe's success/failure handling.
- Around line 130-141: recordSuccess currently decrements the failure counter
using atomicDecrement($this->failureCountKey($service)) which causes the key to
be rewritten with the default COUNTER_TTL and thus overrides any
service-specific failure_window; change the call to pass the service's
configured failure window TTL (e.g.
atomicDecrement($this->failureCountKey($service),
$this->getFailureWindow($service)) or the equivalent accessor for the configured
failure_window) so the failure-count key retains the correct window, and apply
the same fix to the other occurrence noted (lines ~240-248) where the failure
key is updated.
- Around line 55-73: The try currently mixes bookkeeping with the protected
$operation(), so move/guard bookkeeping calls: execute $operation() alone inside
the try and immediately return its result, then call
$this->recordSuccess($service) in a separate try/catch that swallows or logs
bookkeeping errors (so a failure to record success cannot turn a good call into
a failure); likewise, in the catch(Throwable $throwable) keep the original catch
handling but invoke $this->recordFailure($service, $throwable) inside its own
try/catch so any throw from recordFailure does not replace $throwable, then
proceed to call $this->shouldTrip($service)/$this->tripCircuit($service) and the
fallback logic using the original $throwable; ensure any calls to
isRecoverable($throwable) and invoking $fallback() are also protected so
bookkeeping errors never mask or replace the original exception.
In `@php/Mcp/Services/McpMetricsService.php`:
- Around line 264-287: The raw SQL uses `CASE WHEN success = 1` which assumes
success is stored as integer; update the aggregate to use boolean-friendly
syntax in the query that works on PostgreSQL and others (for example `SUM(CASE
WHEN success THEN 1 ELSE 0 END) as success_count` or an equivalent DB::raw
cast), so modify the query in the method that builds the
DB::table($this->callsTable) select (the block that selects 'plan_slug' and
selects success_count) to replace `CASE WHEN success = 1` with a boolean-safe
expression; keep the rest of the grouping, ordering, and the mapping logic that
computes success_rate unchanged.
In `@php/Mcp/Services/OpenApiGenerator.php`:
- Line 226: In the Mcp\Services\OpenApiGenerator class update the OpenAPI
response description for the 401 entry: replace the string "Unauthorized" with
UK English "Unauthorised" (look for the array entry with key '401' =>
['description' => ...] in OpenApiGenerator.php) so the response descriptions
follow the repository's UK spelling guidelines.
- Around line 68-70: The server ID ($id) is used directly in resource_path(...)
which allows path traversal; validate/sanitize $id in OpenApiGenerator before
building $path (e.g. enforce a strict whitelist pattern like /^[A-Za-z0-9_-]+$/
or use basename and reject IDs containing directory separators or ".."), and if
validation fails throw or skip the reference; then use the validated $id to
construct resource_path(sprintf('mcp/servers/%s.yaml', $id)) and proceed with
the existing file_exists(...) logic.
- Around line 33-36: The toJson() method in OpenApiGenerator currently casts
json_encode() to string which hides failures (false -> empty string); update to
call json_encode($this->generate(), JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES |
JSON_THROW_ON_ERROR) (or add JSON_THROW_ON_ERROR to the
existing flags) and remove the blind (string) cast so encoding errors throw a
JsonException; optionally wrap the json_encode call in a try/catch inside
toJson() to catch JsonException and rethrow or convert it to a more specific
exception with context about OpenApiGenerator::toJson().
In `@php/Mcp/Services/QueryAuditService.php`:
- Around line 99-101: The search filter currently injects raw wildcards into the
LIKE pattern; escape SQL LIKE wildcards and backslashes before building the
pattern to avoid unexpected matches and injection-like behaviour. In
QueryAuditService where you handle $filters['search'], replace the direct
interpolation with an escaped pattern: first escape backslash, percent and
underscore (e.g. transform \ -> \\ , % -> \% , _ -> \_), then wrap with % on
both sides to create the search pattern, and use a parameterised query with an
explicit ESCAPE '\\' (e.g. via whereRaw("query_text LIKE ? ESCAPE '\\'",
[$pattern])) or the equivalent builder call so the binding remains parameterised
and the ESCAPE clause is applied.
In `@php/Mcp/Tools/Agent/Brain/BrainList.php`:
- Around line 89-95: The query in BrainList.php builds a BrainMemory query using
the caller-supplied $org without validating org scope; add a call to
assertAuthorisedOrgScope($org) immediately after $org is extracted (after the
existing extraction around line 83) and before the query is constructed so the
method enforces the same org-scope authorization as other BrainService methods
(e.g., remember, recall, discover).
---
Nitpick comments:
In `@php/Actions/Brain/RememberKnowledge.php`:
- Around line 103-104: The org length check in RememberKnowledge (the
mb_strlen($org) > 128) uses a hard-coded 128; add a class constant (e.g. const
MAX_ORG_LENGTH = 128) to the RememberKnowledge class and replace the literal
with self::MAX_ORG_LENGTH in the validation and error message (throw new
\InvalidArgumentException("org must not exceed " . self::MAX_ORG_LENGTH . "
characters")). Update any other occurrences in this class that repeat 128 to
reference the constant to avoid drift.
In `@php/Agentic/Livewire/CreditLedger.php`:
- Around line 163-247: The duplicated helper methods (checkHadesAccess, toast,
resolveWorkspaceId and the syncSelectedAgentId pattern) should be extracted into
a shared trait (e.g., AgenticComponent or HasHadesAccess) and both FleetOverview
and CreditLedger should use that trait; create a trait containing the private
methods (move implementations of checkHadesAccess, toast, resolveWorkspaceId and
optionally syncSelectedAgentId if reusable), update both classes to remove their
local copies and add "use AgenticComponent;" plus any needed imports (Flux,
Workspace, etc.), keep method visibility and behavior unchanged, and run
existing tests or smoke the Livewire components to verify no behavior/regression
changes.
In `@php/Agentic/Livewire/FleetOverview.php`:
- Around line 125-144: Replace the current PHP-level uniqueness/filtering with a
DB-level distinct query in the platforms() method: modify the FleetNode::query()
call to include distinct on the platform column (e.g., use
->distinct('platform') or ->distinct()->pluck('platform')), add a
whereNotNull('platform') and/or where('platform', '<>', '') to avoid empty/null
platforms at the DB level, then remove the PHP ->filter(...)->unique()->values()
steps while keeping the existing where('workspace_id', $this->workspaceId),
orderBy('platform') and the try/catch block around the query.
In `@php/Controllers/Api/Credits/CreditsController.php`:
- Around line 64-94: The response currently returns 'total' => count($entries)
which is the number of entries in this response and may mislead clients; update
ledger(Request $request) to include an explicit total_entries (the full DB
count) and keep total as the response count (or rename total to count if you
prefer); compute the full count via CreditEntry::query()->where('workspace_id',
$workspaceId)->count() (or call a service count method if resolveCreditService()
exposes one), and return both keys in the JSON alongside the existing data
produced by $this->formatEntry() and the ledger() iteration.
In `@php/Mcp/Console/McpAgentServerCommand.php`:
- Around line 350-361: Replace the silent broad RuntimeException catch around
queryAuditService->log in McpAgentServerCommand with either a more specific
exception type thrown by the audit/data layer or add a logged fallback: catch
the specific audit-related exception (or DatabaseException/QueryAuditException
if available) and, if that specific exception type is not present, catch
RuntimeException but call the logger (e.g., $this->logger or $io) to record the
error and context (workspace_id, tool_name) before swallowing; ensure the
handler still allows optional audit tables to be ignored but preserves
observability by logging the failure instead of silently suppressing all
RuntimeExceptions.
In `@php/Mcp/Resources/DatabaseSchema.php`:
- Around line 41-54: describeTable currently uses MySQL's DESCRIBE which breaks
for pgsql; update describeTable to detect when $driver === 'pgsql' and build a
PostgreSQL-compatible query (e.g., query information_schema.columns or
pg_catalog to fetch column metadata for $tableName) instead of DESCRIBE, reusing
the existing quoteIdentifier($tableName, $driver) for safe quoting and returning
the same array_map conversion; keep the sqlite PRAGMA branch and the catch
fallback but ensure pgsql returns real metadata rather than an empty array.
In `@php/Mcp/Services/McpMetricsService.php`:
- Around line 21-48: The review flagged a missing return type for getOverview(),
but the method Mcp\\Services\\McpMetricsService::getOverview already declares a
return type of : array; no change is needed—leave the function signature and
implementation as-is.
In `@php/Mcp/Services/QueryAuditService.php`:
- Around line 237-262: The McpAuditEntry Eloquent model is currently defined
inline in QueryAuditService.php; move it to its own file (e.g., create
php/Mcp/Models/McpAuditEntry.php) with the proper namespace (e.g., namespace
Mcp\Models) and use statements (use Illuminate\Database\Eloquent\Model), copy
the $table, $fillable and $casts properties there, then remove the McpAuditEntry
class from QueryAuditService.php and update that file to import the model (use
Mcp\Models\McpAuditEntry) where referenced; ensure the new class namespace
matches your PSR-4 autoload config and run composer dump-autoload if needed.
- Around line 22-32: The isSafe() logic misses UNION SELECT, INTO
OUTFILE/DUMPFILE, stacked queries via semicolons, and LOAD_FILE() checks; update
isSafe() to add additional preg_match checks (or extend the existing regex) to
detect patterns like '\bunion\s+select\b', '\binto\s+(outfile|dumpfile)\b', a
semicolon followed by non-whitespace and SQL token (to catch stacked queries),
and '\bload_file\s*\('; reference the existing symbols $startsWithWriteStatement
and $callsDangerousFunction and either add new booleans (e.g.
$containsUnionSelect, $containsIntoOutfile, $hasStackedQueries, $callsLoadFile)
or extend the current preg_match usage so the final return remains !
$startsWithWriteStatement && ! $callsDangerousFunction && ! $containsUnionSelect
&& ! $containsIntoOutfile && ! $hasStackedQueries && ! $callsLoadFile, ensuring
regexes are case-insensitive and you trim/normalize input before matching.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1989c537-e414-4390-a418-9c4ffe5f17b3
📒 Files selected for processing (30)
- .core/reference/runtime.go
- claude/hermes_runner_mcp/README.md
- claude/hermes_runner_mcp/server.py
- composer.json
- go.mod
- php/Actions/Brain/RememberKnowledge.php
- php/Actions/Credits/AwardCredits.php
- php/Actions/Forge/ManagePullRequest.php
- php/Actions/Forge/ScanForWork.php
- php/Agentic/Data/CreditTransaction.php
- php/Agentic/Livewire/CreditLedger.php
- php/Agentic/Livewire/FleetOverview.php
- php/Console/Commands/AgenticSyncPluginsCcCommand.php
- php/Console/Commands/BrainPruneCommand.php
- php/Controllers/Api/Credits/CreditsController.php
- php/Jobs/CaptureDispatchResultJob.php
- php/Jobs/DeleteFromIndex.php
- php/Mcp/Console/McpAgentServerCommand.php
- php/Mcp/Exceptions/CircuitOpenException.php
- php/Mcp/Resources/DatabaseSchema.php
- php/Mcp/Services/CircuitBreaker.php
- php/Mcp/Services/DataRedactor.php
- php/Mcp/Services/McpHealthService.php
- php/Mcp/Services/McpMetricsService.php
- php/Mcp/Services/OpenApiGenerator.php
- php/Mcp/Services/QueryAuditService.php
- php/Mcp/Services/ToolRateLimiter.php
- php/Mcp/Tools/Agent/Brain/BrainList.php
- php/Migrations/0001_01_01_000008_create_brain_memories_table.php
- php/Migrations/0001_01_01_000009_drop_brain_memories_workspace_fk.php
✅ Files skipped from review due to trivial changes (1)
- composer.json
🚧 Files skipped from review as they are similar to previous changes (10)
- .core/reference/runtime.go
- php/Jobs/DeleteFromIndex.php
- php/Mcp/Services/ToolRateLimiter.php
- php/Mcp/Services/DataRedactor.php
- php/Console/Commands/AgenticSyncPluginsCcCommand.php
- php/Console/Commands/BrainPruneCommand.php
- php/Agentic/Data/CreditTransaction.php
- php/Mcp/Services/McpHealthService.php
- php/Actions/Forge/ManagePullRequest.php
- go.mod
Address residual SonarCloud quality-gate failures on PR #6 directly rather than parking — per Snider's standing rule that CodeRabbit/Sonar rules are the bar, not advisory.

10 over-120-char lines fixed in Sonar-hit files. 4 duplication clusters merged into shared helpers:

- hermes/plugins/openbrain_common.py: shared OpenBrain transport + reachability via OpenBrainTransportMixin (replaces inlined http shape across openbrain_memory.py + openbrain_context.py)
- php/Agentic/Livewire/HubComponent.php: shared Livewire hub plumbing extracted from BrainExplorer/CreditLedger/FleetOverview
- php/tests/Feature/Livewire/LivewireTestCase.php: shared Livewire test harness + wiring assertions (replaces ~70-line repeats per test file)
- php/Mcp/Services/OpenApiGenerator.php: extracted authenticated path + response builders for OpenAPI generation

1 reliability bug fixed alongside: openbrain_context.py float == 0.0 replaced with math.isclose(...) for safe floating-point comparison.

Net: +301/-880 lines. Verification: php -l + python3 -m py_compile clean on all touched. go vet blocked by pre-existing dappco.re module drift (out of scope). Pre-existing SQLite migration test-env issue noted but separate.

Closes Mantis #1063

Co-authored-by: Codex <noreply@openai.com>
Brings exported-surface docstring coverage on the codex-pushed PR #6 delta (commit 32a0356) toward 100%:

- pkg/agentic/{flow.go,repo_sync.go} + pkg/lib/flow/flow.go: 8 Go docstrings (FlowRunStepOutput, RepoSyncOutput, RepoSyncCommandOutput, Flow, Step, Parse, ParseFile, LoadEmbedded). Go side now at 100%.
- hermes/plugins/openbrain_{common,context,memory}.py: 3 class-level Python docstrings on the OpenBrain transport mixin + memory provider + context engine. Each documents the constructor contract + a usage example.
- claude/hermes_runner_mcp/server.py: 18 Python docstrings on classes (DispatchRequest, DispatchResult, StatusResult, FetchResult, ToolSpec, HermesGatewayClient, HermesToolHandler, MinimalMCPServer) and on module-level functions (configure_logging, install_signal_handlers, negotiate_protocol_version, _parse_run_identifier, format_validation_error, tool_success_result, tool_error_result, build_fastmcp_server, parse_args, main). Python side now at ~95% (one false positive on the audit caused by a multi-line def).
- php/Mcp/{Exceptions/CircuitOpenException, Resources/DatabaseSchema, Services/CircuitBreaker, Services/ToolRateLimiter}.php: 4 PHP files get class + main-method phpDocBlocks following the AX-2 example shape.

Pure docs. No behaviour change. Python compiles, PHP lints, Go gofmts clean.

Outstanding: ~10 more PHP files in php/Mcp/Services/ (DataRedactor, McpHealthService, McpMetricsService, OpenApiGenerator, QueryAuditService) and php/Mcp/Console/McpAgentServerCommand.php + php/Mcp/Tools/Agent/Brain/ + php/Agentic/Livewire/* still need phpDocBlocks. Routed for separate codex pass — coverage target on those is 80% (CodeRabbit threshold) per Snider's v0.9.0 batch policy.

Co-authored-by: Hephaestus <hephaestus@cladius>
Routine dev→main PR opened by Hephaestus PR-cadence task.
This PR exists to:

- ahead_by: 143 commit(s) (per `gh api` compare)

If CodeRabbit clears with no blocking comments, this PR is eligible for `gh pr merge --merge` (a real merge commit, no force-push). Conflicts and review feedback should be addressed on the dev branch before merge.

Co-authored-by: Hephaestus <hephaestus@cladius>
Summary by CodeRabbit
New Features
Improvements
Refactor
Documentation