feat: add validate command to ferrflow benchmarks #83

Merged
BryanFRD merged 1 commit into main from feat/add-validate-benchmark
Apr 10, 2026
Conversation

@BryanFRD (Contributor)

Summary

  • Add validate to the list of benchmarked ferrflow commands in tools.json
  • The remaining items in the issue (criterion micro-benchmarks for tag/git operations, larger fixture sizes, commit history variants) require changes in the ferrflow and Fixtures repositories, respectively

Partially addresses #47

Copilot AI review requested due to automatic review settings April 10, 2026 18:09
@BryanFRD BryanFRD merged commit a69bdc0 into main Apr 10, 2026
8 checks passed
@BryanFRD BryanFRD deleted the feat/add-validate-benchmark branch April 10, 2026 18:10

Copilot AI left a comment


Pull request overview

Adds FerrFlow’s validate subcommand to the set of commands exercised by the full (hyperfine) benchmark runner, aligning the benchmarks list with the CLI surface area discussed in Issue #47.

Changes:

  • Add validate to the ferrflow.commands list in tools.json so it is included in benchmark runs.

```diff
     }
   },
-  "commands": ["check", "release --dry-run", "version", "tag"]
+  "commands": ["check", "release --dry-run", "version", "tag", "validate"]
```
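For context, the resulting ferrflow entry in tools.json presumably looks roughly like this (the surrounding object structure and any sibling fields are assumptions based on the diff excerpt, not the actual file):

```json
{
  "ferrflow": {
    "commands": ["check", "release --dry-run", "version", "tag", "validate"]
  }
}
```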

Copilot AI Apr 10, 2026


Including validate in the hyperfine command list will cause the runner to execute it many times per fixture/method (warmup+runs). Since validate is described as calling GitHub/GitLab APIs, the benchmark will be dominated by network latency and can become flaky (rate limits/transient API failures), which would make hyperfine exit non-zero and abort scripts/run.sh due to set -e. Consider benchmarking an offline/mocked validation mode (or a fixture/config that avoids remote API calls), or keeping validate out of the hyperfine suite until it can run deterministically.
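The failure mode described above can be sketched as follows. This is not the actual scripts/run.sh; the loop, the `build_bench_cmd` helper, and the `--warmup`/`--runs` values are illustrative assumptions. The point is that each command in the list is handed to hyperfine and executed many times, so a single non-deterministic `validate` failure would propagate a non-zero exit status and, under `set -e`, abort the entire run:

```shell
#!/bin/sh
# Hypothetical sketch of how the benchmark runner drives hyperfine.
# A flaky network-bound `ferrflow validate` would make hyperfine exit
# non-zero, which aborts the whole script because of `set -e`.
set -e

build_bench_cmd() {
  # $1: a ferrflow subcommand taken from the tools.json commands list.
  # Warmup/run counts here are assumptions, not the real configuration.
  echo "hyperfine --warmup 3 --runs 10 'ferrflow $1'"
}

for cmd in "check" "release --dry-run" "version" "tag" "validate"; do
  build_bench_cmd "$cmd"   # the real runner would execute this command line
done
```

An offline or mocked validation mode would keep each of these invocations deterministic, which is what hyperfine needs to produce stable timings in the first place.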

