
feat: add task_24_access_log_anomaly - physical access control log anomaly detection #90

Merged
olearycrew merged 1 commit into pinchbench:main from 905timur:add-task-24-access-log-anomaly
Apr 6, 2026

Conversation


905timur commented Apr 1, 2026

New Task: Access Control Log Anomaly Detection

Domain-specific analysis task drawn from real physical security operations.
The agent is given a CSV access control event log covering two physically
separate facilities and must identify three classes of security anomaly:
impossible travel (same badge at two buildings within 15 minutes), after-hours
access to restricted server rooms, and repeated denial bursts exceeding a
threshold within a rolling 10-minute window.
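For readers outside the physical security domain, the first and third rules can be sketched as follows. This is a hypothetical illustration only: the column names (`badge`, `timestamp`, `building`, `result`) and the `DENIED` value are assumptions, not the actual fixture schema from the task spec.

```python
# Sketch of the impossible-travel and denial-burst rules over a list of
# event dicts. Schema is assumed, not taken from the task fixture.
from collections import defaultdict
from datetime import datetime, timedelta

def impossible_travel(events, window=timedelta(minutes=15)):
    """Flag badges seen at two different buildings within `window`."""
    flagged = set()
    last_seen = {}  # badge -> (time, building) of most recent event
    for e in sorted(events, key=lambda r: r["timestamp"]):
        t = datetime.fromisoformat(e["timestamp"])
        prev = last_seen.get(e["badge"])
        if prev and prev[1] != e["building"] and t - prev[0] <= window:
            flagged.add(e["badge"])
        last_seen[e["badge"]] = (t, e["building"])
    return flagged

def denial_bursts(events, threshold=4, window=timedelta(minutes=10)):
    """Flag badges with >= `threshold` denials in any rolling window."""
    denials = defaultdict(list)
    for e in events:
        if e["result"] == "DENIED":
            denials[e["badge"]].append(datetime.fromisoformat(e["timestamp"]))
    flagged = set()
    for badge, times in denials.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # shrink the window from the left until it spans <= `window`
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.add(badge)
                break
    return flagged
```

Note the `>= threshold` comparison with a default of 4: a badge with exactly 3 denials in the window, like the near-miss described below, stays unflagged.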

The fixture includes deliberate near-misses to test rule precision. Badge 6601
has exactly 3 denials (below the threshold of 4) and badge 4105 makes a
legitimate after-hours visit to a non-restricted door. Models that flag these
are penalised.

Grading is fully automated with no LLM judge dependency.

Baseline Results (5 runs each)

| Model | Score |
| --- | --- |
| anthropic/claude-opus-4-6 | 100% |
| anthropic/claude-sonnet-4-6 | 80% |
| openrouter/minimax/minimax-m2.5 | 80% |
| openrouter/minimax/minimax-m2.7 | 36% |

The score spread confirms the task discriminates well across capability tiers
without being trivially easy or impossible. The 36% floor on M2.7 reflects
failure on the threshold edge case and rolling window logic rather than
complete task misunderstanding.

Files

  • tasks/task_24_access_log_anomaly.md: task spec with inline fixture


ScuttleBot left a comment


ScuttleBot review 🦀

Strong new task. Access control log analysis is a domain that actually matters, and the fixture design shows real thought.

What's good:

  • Clear discriminating power: 100% → 36% spread across capability tiers
  • Near-miss traps (badge 6601 at 3 denials, badge 4105 legitimate after-hours) catch overly aggressive models
  • Fully automated grading — no LLM judge dependency
  • Baseline results from 5 runs each gives confidence in the scores

Questions:

  • Is the 15-minute impossible travel threshold documented in the task spec? (Security analysts might argue about what's "impossible" depending on building layout)
  • Rolling 10-minute window for denial bursts — is this a sliding window or tumbling? The spec should be explicit.
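The sliding-vs-tumbling distinction is worth making concrete, since the two can disagree on the same data. A toy illustration (the timestamps are invented for this example, not taken from the fixture):

```python
# Why sliding vs tumbling matters for a 10-minute window and a
# threshold of 4 denials. Times (in minutes) are made up.
denials = [8, 9, 11, 12]  # four denials spanning minutes 8-12

# Tumbling: fixed buckets [0, 10) and [10, 20) each see only 2 denials.
tumbling = {}
for t in denials:
    tumbling[t // 10] = tumbling.get(t // 10, 0) + 1
tumbling_hit = any(count >= 4 for count in tumbling.values())  # False

# Sliding: the span starting at minute 8 covers all four denials.
sliding_hit = any(
    sum(1 for u in denials if t <= u <= t + 10) >= 4 for t in denials
)  # True
```

A burst that straddles a bucket boundary is invisible to a tumbling window, so the spec's choice directly changes which badges get flagged.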

This is the kind of task that moves the benchmark beyond generic "can it code" into "can it reason about domain-specific rules." Merge it.

@olearycrew (Member)

@905timur thanks for this!

@olearycrew olearycrew merged commit 0d1b373 into pinchbench:main Apr 6, 2026
1 check failed
@905timur (Author)

@olearycrew glad it got in! Question: I'm currently building PhySecBench, a domain-specific benchmark for physical/integrated security AI tasks (access control, CCTV, alarm systems, etc.). Would PinchBench be interested in more tasks from this domain? Happy to keep contributing if there's appetite for it.
