
fix(permission): stop logging full rulesets during evaluation#17293

Open
notzenco wants to merge 2 commits into anomalyco:dev from notzenco:fix/17218-permission-log-bloat

Conversation


@notzenco notzenco commented Mar 13, 2026

Issue for this PR

Closes #17218

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

PermissionNext.evaluate() was logging the entire merged ruleset on every call. On installs with lots of external_directory rules, that makes each permission log line very large and causes the log directory to grow much faster than it should.

This keeps the permission evaluation log, but only records the matched rule and the merged rule count. That still shows why a permission check resolved the way it did without dumping the full ruleset into every log line.
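Under the stated assumptions (the `Rule` shape and the helper name below are illustrative, not the actual opencode API), the change amounts to logging a small derived payload instead of serializing the ruleset itself:

```typescript
// Hypothetical sketch of the logging change. `Rule` and
// `evaluateLogFields` are illustrative names, not the real opencode code.
interface Rule {
  pattern: string
  action: "allow" | "deny" | "ask"
}

// Before: the log payload included the entire merged ruleset object.
// After: only the matched rule and the merged rule count are recorded.
function evaluateLogFields(ruleset: Rule[], matched: Rule | undefined): Record<string, string> {
  return {
    rule: matched ? `${matched.pattern}=${matched.action}` : "none",
    rules: String(ruleset.length),
  }
}

const ruleset: Rule[] = [
  { pattern: "external_directory:/tmp/**", action: "allow" },
  { pattern: "external_directory:/etc/**", action: "deny" },
]
const fields = evaluateLogFields(ruleset, ruleset[1])
// fields.rule → "external_directory:/etc/**=deny", fields.rules → "2"
```

The log line stays constant-size no matter how many `external_directory` rules an install has, which is what keeps the log directory from ballooning.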

I also added a small config dependency guard after the Windows e2e failure on this branch. When a local .opencode directory is already inside a repo with @opencode-ai/plugin available from a parent install, we now skip creating a second local install just for that directory.

How did you verify your code works?

  • added a regression test that checks permission logs include rule= and rules= but no ruleset=
  • added a config test that verifies Config.needsInstall() skips a local .opencode dir when the plugin is already available from a parent install
  • ran bun test test/permission-next.test.ts test/permission-task.test.ts test/config/config.test.ts
  • ran bun typecheck
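The shape of the regression check can be sketched as follows (the exact log-line format shown is an assumption based on this PR's description, not the real opencode log output):

```typescript
// Hypothetical sketch of the regression assertion: the log line must
// carry the matched rule and the rule count, but never the full ruleset.
function isTrimmedLogLine(line: string): boolean {
  return (
    line.includes("rule=") &&     // matched rule is present
    /\brules=\d+/.test(line) &&   // merged rule count is present
    !line.includes("ruleset=")    // full ruleset is no longer serialized
  )
}

const sample =
  "INFO service=permission pattern=/tmp/** rule=external_directory:/tmp/**=allow rules=12"
console.log(isTrimmedLogLine(sample)) // true
```

Note that the substring checks don't collide: `"rules="` does not contain `"rule="` (an `s` intervenes before the `=`), so each predicate tests exactly one field.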

Screenshots / recordings

N/A

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Copilot AI review requested due to automatic review settings March 13, 2026 04:31
@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Mar 13, 2026
@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Mar 13, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

Contributor

Copilot AI left a comment


Pull request overview

Updates PermissionNext.evaluate() logging to avoid serializing the full merged permission ruleset (which can be large/noisy) while still providing useful debug context about what matched.

Changes:

  • Log the matched rule and total rules count instead of logging the entire ruleset object in PermissionNext.evaluate().
  • Add a regression test asserting the log line includes rule= and rules=<n> and does not include ruleset=.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File Description
packages/opencode/src/permission/next.ts Refines evaluate() logging payload to reduce log size and avoid full ruleset serialization.
packages/opencode/test/permission-next.test.ts Adds coverage to ensure evaluate() logs the matched rule and rule count (and not the full ruleset).


Comment on lines +21 to +27
await sleep(50)

const line = (await Bun.file(Log.file()).text())
.trim()
.split("\n")
.findLast((x) => x.includes("service=permission") && x.includes(`pattern=${pattern}`))

@titet11

titet11 commented Mar 13, 2026

Stop creating mediocre solutions, you're one of the many who end up damaging the program.



Development

Successfully merging this pull request may close these issues.

Permission service logs full ruleset on every tool call, causing 50GB+ log bloat

3 participants