feat(proactive): runFollowUpBatch + decidePostingPolicy + block-counter interface (#82)
Three primitives extracted from sage so future AA proactive consumers
inherit the same safety properties without reimplementing them.
1. `runFollowUpBatch<C, R>` — bounded-parallel batch runner over candidates.
Layered on `boundedParallel` from @agent-assistant/coordination ^0.4.25.
Adds a per-run cap with overflow tracking (caller gets `dropped` for
provenance — sage uses this to seed `deferredSessionIds` so dropped
signal-inbox candidates aren't silently cleared) and aggregated
structured stats (processed / fulfilled / rejected / timedOut /
aborted / dropped) for /health-style surfaces. Deliberately thin:
the per-candidate pipeline (evidence collection, LLM verification,
policy gate, post, persistence) stays in caller code via the `runOne`
callback — keeping each surface's pipeline free to compose without
dragging the whole follow-up loop into AA.
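The cap-then-run shape described above can be sketched as follows. This is illustrative only: `runFollowUpBatchSketch`, the `maxPerRun` option, and the stats field names are stand-ins inferred from this description, not the published API, and the real runner delegates the parallel part to `boundedParallel` rather than the sequential loop used here for brevity.

```typescript
// Hypothetical sketch of the batch-runner contract described above.
type BatchStats = {
  processed: number;
  fulfilled: number;
  rejected: number;
  dropped: number;
};

async function runFollowUpBatchSketch<C, R>(
  candidates: C[],
  opts: { maxPerRun?: number; runOne: (c: C) => Promise<R> },
): Promise<{ results: PromiseSettledResult<R>[]; dropped: C[]; stats: BatchStats }> {
  // FIFO cap: keep the head of the queue and report the overflow tail so the
  // caller can defer it (e.g. seed deferredSessionIds) instead of losing it.
  const cap = opts.maxPerRun ?? candidates.length;
  const processed = candidates.slice(0, cap);
  const dropped = candidates.slice(cap);
  // The real runner layers on boundedParallel (concurrency, per-item timeout,
  // abort); a sequential loop keeps this sketch short.
  const results: PromiseSettledResult<R>[] = [];
  for (const c of processed) {
    try {
      results.push({ status: "fulfilled", value: await opts.runOne(c) });
    } catch (reason) {
      results.push({ status: "rejected", reason });
    }
  }
  const fulfilled = results.filter((r) => r.status === "fulfilled").length;
  return {
    results,
    dropped,
    stats: {
      processed: processed.length,
      fulfilled,
      rejected: results.length - fulfilled,
      dropped: dropped.length,
    },
  };
}
```

The point of the split is visible even in the sketch: the runner owns cap/overflow accounting, while everything domain-specific lives behind `runOne`.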
2. `decidePostingPolicy` + `readFailOpenFromEnv` — fail-open posting
verification policy. Direct extraction from sage's
`verification-policy.ts` (originally PR #186). The kill-switch env
var is renamed from `SAGE_PROACTIVE_FAIL_OPEN_VERIFICATION` to
`AA_PROACTIVE_FAIL_OPEN_VERIFICATION`; the legacy SAGE-prefixed name
is still read as a deprecation alias for one minor cycle so the rename
can land without flipping prod behavior on the dependency bump.
Conservative read: setting EITHER env var to "false" wins. Generic
over evidence type (`<E>` parameter) so callers aren't locked into
sage's `ClaimEvidence` shape.
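A minimal sketch of the conservative kill-switch read, assuming only the env var names and the "either var set to \"false\" wins" rule stated above; the function name and body here are illustrative, not the extracted implementation:

```typescript
// Illustrative sketch, not the real readFailOpenFromEnv.
function readFailOpenSketch(
  env: Readonly<Record<string, string | undefined>>,
): boolean {
  // Conservative read: if EITHER the new AA var or the legacy SAGE alias is
  // the literal string "false", fail-closed wins. Anything else (unset,
  // "true", typos) leaves the default fail-open behavior intact.
  const aa = env["AA_PROACTIVE_FAIL_OPEN_VERIFICATION"];
  const legacy = env["SAGE_PROACTIVE_FAIL_OPEN_VERIFICATION"];
  if (aa === "false" || legacy === "false") return false;
  return true;
}
```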
3. `ProactiveBlockCounter` / `ProactiveBlockCounterReader` interfaces —
minimal contract for the best-effort operational counters that power
/api/proactive/health. Adapter implementations stay in the consuming
app (sage's KV adapter, future Redis or D1 adapters elsewhere). Errors
thrown by adapters are documented as caller-swallowed so a flaky
counter never blocks a proactive decision.
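A sketch of what that contract and a toy adapter could look like. The interface names come from the PR text, but the method signatures and the in-memory adapter are guesses standing in for sage's KV adapter:

```typescript
// Hypothetical shapes; the published interfaces may differ.
interface ProactiveBlockCounterSketch {
  increment(event: string): Promise<void>;
}
interface ProactiveBlockCounterReaderSketch {
  read(): Promise<Record<string, number>>;
}

// Trivial in-memory adapter, standing in for a KV/Redis/D1 adapter.
class InMemoryBlockCounter
  implements ProactiveBlockCounterSketch, ProactiveBlockCounterReaderSketch
{
  private counts = new Map<string, number>();
  async increment(event: string): Promise<void> {
    this.counts.set(event, (this.counts.get(event) ?? 0) + 1);
  }
  async read(): Promise<Record<string, number>> {
    return Object.fromEntries(this.counts);
  }
}

// Per the documented contract, adapter errors are swallowed at the call
// site so a flaky counter never blocks a proactive decision.
async function safeIncrement(
  counter: ProactiveBlockCounterSketch,
  event: string,
): Promise<void> {
  try {
    await counter.increment(event);
  } catch {
    // best-effort only
  }
}
```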
Tests:
- 8 cases for runFollowUpBatch: empty input, no-cap full batch, FIFO
cap with dropped tail, cap-of-zero, cap-larger-than-input,
rejected/timedOut counted separately, external abort threading,
input-order preservation under parallel completion.
- 15 cases for decidePostingPolicy: every outcome × failOpen branch,
idempotent prefix application, AA env var, legacy SAGE env var,
conservative-read precedence, non-"false" values don't trigger
fail-closed.
Existing 118 proactive tests still pass; 23 new = 141/141. tsc clean
across the build chain (surfaces → coordination → proactive).
Sage adoption is the next PR in the sequence. EvidenceCollector
intentionally stays in sage — its `scoreClosure` rules are domain-
specific (pr-merge → close, affirmative-reply + reaction → close, etc.)
and don't generalize across AA consumers.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉

Walkthrough: Adds a proactive posting policy module, a FIFO-capped parallel batch runner, operational block-counter interfaces, tests for both features, a coordination dependency, and re-exports the new APIs from the package entry point.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant PolicyModule as Posting Policy Module
    participant BatchRunner as Batch Runner
    participant Worker as Per-Item Worker
    participant Counter as Block Counter
    Caller->>PolicyModule: provide VerifyDraftOutcome + env.failOpen
    PolicyModule->>PolicyModule: decidePostingPolicy / applyUnverifiedPrefix
    PolicyModule-->>Caller: PolicyDecision (post / post_unverified / block)
    Caller->>BatchRunner: runFollowUpBatch(candidates, options)
    BatchRunner->>BatchRunner: slice FIFO prefix -> processed, dropped
    BatchRunner->>BatchRunner: boundedParallel(processed, concurrency, perItemTimeout, signal)
    par Parallel Execution
        BatchRunner->>Worker: run candidate[i] (with per-item timeout + signal)
        Worker->>Counter: increment(event) (fire-and-forget)
        Worker-->>BatchRunner: result (fulfilled | rejected | timedOut | aborted)
    end
    BatchRunner->>BatchRunner: aggregate stats, preserve input order
    BatchRunner-->>Caller: RunFollowUpBatchResult(results, processed, dropped, stats)
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c75a79cd8c
```typescript
 * legacy env var to intentionally fail-closed during an incident must not
 * silently start failing-open after the migration.
 */
export function readFailOpenFromEnv(env: NodeJS.ProcessEnv = process.env): boolean {
```
Avoid Node-only default in readFailOpenFromEnv
Using process.env as the default and NodeJS.ProcessEnv in the public signature makes this helper unsafe for Cloudflare Worker consumers: in Worker runtime (without Node compatibility), calling readFailOpenFromEnv() throws because process is undefined, and Worker-only TypeScript projects can also fail on the emitted NodeJS type in declarations. This package is shared across non-Node environments, so the function should accept a plain env map (or require an explicit env argument) instead of hard-binding to Node globals.
Actionable comments posted: 1
Inline comments:
In `@packages/proactive/src/decide-posting-policy.ts`:
- Line 53: The exported function readFailOpenFromEnv currently types its
parameter as NodeJS.ProcessEnv and defaults to process.env, which leaks Node
typings and crashes in non-Node runtimes; change the parameter type to a generic
env map (e.g., Record<string, string | undefined> or
Partial<Record<string,string>>) and make the default safe by using a runtime
guard (check typeof process !== "undefined" && process.env ? process.env : {})
so the function no longer references the process global or forces Node types on
consumers; update any internal accesses in readFailOpenFromEnv to work with the
new env map type.
📒 Files selected for processing (7)
- packages/proactive/package.json
- packages/proactive/src/decide-posting-policy.test.ts
- packages/proactive/src/decide-posting-policy.ts
- packages/proactive/src/index.ts
- packages/proactive/src/proactive-block-counter.ts
- packages/proactive/src/run-follow-up-batch.test.ts
- packages/proactive/src/run-follow-up-batch.ts
…review)
Both reviewers flagged the same P1: `readFailOpenFromEnv(env: NodeJS.ProcessEnv = process.env)`
made the helper unsafe for Cloudflare Worker consumers without
`nodejs_compat`. Two failure modes:
1. Runtime: calling `readFailOpenFromEnv()` throws ReferenceError
because `process` is undefined.
2. Build: `NodeJS.ProcessEnv` in the emitted .d.ts means Worker-only
TypeScript projects fail to consume the package types.
Fix:
- Public signature now takes `EnvSource = Readonly<Record<string,
string | undefined>>` — a structural shape that matches `process.env`
at runtime so existing Node callers pass it without changes, but
doesn't drag the `NodeJS` global type into the package's surface.
- No-arg default falls back to `globalThis.process?.env ?? {}` via a
private `defaultEnvSource()` helper. On hosts where `process` is
undefined (pure Workers, browsers) the empty map cleanly falls
through to the policy default (fail-open) instead of throwing.
- Export `EnvSource` from index.ts so callers can use the type
explicitly when wiring their own env source.
New test case verifies the no-arg path doesn't throw when
`globalThis.process` is deleted (simulating a Worker without
nodejs_compat). 142/142 proactive tests pass; build clean; dist .d.ts
no longer references `NodeJS` in public surface.
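Under the assumptions stated in this fix description (a structural `EnvSource` type and a `globalThis.process?.env ?? {}` fallback), the change could look roughly like this; the `defaultEnvSource` internals and the sketch's body are illustrative, not the committed code:

```typescript
// Structural env shape: matches process.env at runtime without pulling the
// NodeJS global type into the package's public surface.
type EnvSource = Readonly<Record<string, string | undefined>>;

function defaultEnvSource(): EnvSource {
  // On hosts without a `process` global (pure Workers, browsers) this returns
  // {}, so the no-arg call falls through to the fail-open policy default
  // instead of throwing a ReferenceError.
  const proc = (globalThis as { process?: { env?: EnvSource } }).process;
  return proc?.env ?? {};
}

function readFailOpenFromEnvSketch(env: EnvSource = defaultEnvSource()): boolean {
  // Conservative read: either var set to the literal "false" forces
  // fail-closed; everything else keeps the fail-open default.
  return !(
    env["AA_PROACTIVE_FAIL_OPEN_VERIFICATION"] === "false" ||
    env["SAGE_PROACTIVE_FAIL_OPEN_VERIFICATION"] === "false"
  );
}
```

Existing Node callers can still pass `process.env` unchanged, since it satisfies the structural `EnvSource` shape.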
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`npm ci` failed on PR #83 because adding `supermemory@^4.21.1` to packages/memory/package.json without refreshing package-lock.json puts the lockfile out of sync. Unlike PR #81 (no new deps) and PR #82 (internal workspace dep), this PR is the first in the series to add a real npm-registry dependency, so the lock must update.

Diff is in two parts, both unavoidable:
1. New `node_modules/supermemory@4.21.1` entry — what this PR needs.
2. Workspace package versions sync from 0.4.23 → 0.4.25 — the recent release commits (2740310, 941a840) bumped the workspace package.json files but never refreshed the lockfile. `npm ci` refuses any out-of-sync state, so the sync has to land here even though it's incidental to this feature.

Verified locally with the same install path CI runs:
- rm -rf node_modules
- npm install (regenerates lock)
- cd packages/memory && npm test (84/84 pass)
- npm run build (clean)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Three primitives extracted from sage so future AA proactive consumers inherit the same safety properties without reimplementing them. Spec: sage `docs/specs/proactive-batch-runner.md` §5.1.

1. `runFollowUpBatch<C, R>` — bounded-parallel batch runner over candidates. Layered on `boundedParallel` from `@agent-assistant/coordination ^0.4.25` (PR #81, "feat(coordination): boundedParallel batch runner with timeout + abort"). Adds per-run cap + overflow tracking + structured stats.
2. `decidePostingPolicy` + `readFailOpenFromEnv` — fail-open posting verification policy extracted from sage. Env var renames `SAGE_PROACTIVE_FAIL_OPEN_VERIFICATION` → `AA_PROACTIVE_FAIL_OPEN_VERIFICATION` with deprecation alias.
3. `ProactiveBlockCounter` / `ProactiveBlockCounterReader` interfaces — minimal contract for `/api/proactive/health` operational counters. Adapter implementations stay in the consuming app.

Surface
Design notes
- `runFollowUpBatch` is intentionally thin. The per-candidate pipeline (evidence → LLM verify → policy → post → persist) stays in caller code via `runOne`. Each AA surface owns its pipeline; AA owns the orchestration safety properties (concurrency, timeout, cap, abort).
- `VerifyDraftOutcome<E>` means consumers aren't locked into sage's `ClaimEvidence`. Sage will pass `<ClaimEvidence>`; another surface could pass its own.
- Setting either the `AA_*` or legacy `SAGE_*` env var to `"false"` wins. An operator running the legacy var to intentionally fail-closed during an incident must NOT silently start failing-open after the migration.
- `EvidenceCollector` deliberately stays in sage. Its `scoreClosure` has hardcoded sage-domain rules (pr-merge → close, affirmative-reply + reaction → close) that don't generalize across AA consumers. Each surface picks its own scoring policy.

Test plan
- `cd packages/proactive && npm test` — 141/141 pass (118 existing + 23 new)
- `runFollowUpBatch`: empty, no-cap, FIFO cap, cap-of-zero, cap-larger-than-input, rejected/timedOut classified, external abort threading, input-order preservation
- `decidePostingPolicy`: every outcome × failOpen branch, idempotent prefix, AA env var, legacy SAGE env var, conservative-read precedence, non-"false" values don't trigger fail-closed
- `npm run build` clean across the build chain (surfaces → coordination → proactive)
- `dist/index.d.ts`

Out of scope (follow-up)
- `chore(release)` commit per the existing convention
- `checkFollowUps` collapses ~200 LOC of sequential loop into ~30 LOC of glue calling `runFollowUpBatch`; staged at sage `workflows/proactive-batch-runner/04-sage-adopt.ts`. Sage drops its local `verification-policy.ts`, `runCandidatePool`, `runCandidateWithTimeout`, and the `ProactiveBlockCounter` interface (re-imports from AA)
- `createSupermemoryClient` move from sage to `@agent-assistant/memory`. Independent of this PR

Sequence status
- `boundedParallel` published as `@agent-assistant/coordination@0.4.25`
- `createSupermemoryClient` extraction

🤖 Generated with Claude Code