feat: add validate command to ferrflow benchmarks #83
Conversation
Pull request overview
Adds FerrFlow’s validate subcommand to the set of commands exercised by the full (hyperfine) benchmark runner, aligning the benchmarks list with the CLI surface area discussed in Issue #47.
Changes:
- Add `validate` to the `ferrflow.commands` list in `tools.json` so it is included in benchmark runs.
  }
},
- "commands": ["check", "release --dry-run", "version", "tag"]
+ "commands": ["check", "release --dry-run", "version", "tag", "validate"]
Including validate in the hyperfine command list will cause the runner to execute it many times per fixture/method (warmup + runs). Since validate is described as calling GitHub/GitLab APIs, the benchmark will be dominated by network latency and can become flaky (rate limits, transient API failures). Any such failure makes hyperfine exit non-zero, which aborts scripts/run.sh because of set -e. Consider benchmarking an offline/mocked validation mode (or a fixture/config that avoids remote API calls), or keeping validate out of the hyperfine suite until it can run deterministically.
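The abort behavior described above can be sketched as follows. This is a minimal stand-in, not the actual runner: `echo`/`false` replace the real hyperfine invocations, and the inner script simulates scripts/run.sh running under `set -e`.

```shell
#!/usr/bin/env sh
# Simulate a `set -e` benchmark runner where one flaky command fails.
# `false` stands in for hyperfine exiting non-zero on a failed `validate` run.
output=$(sh -c '
  set -e
  echo "benchmark: check"
  false                  # flaky "validate" benchmark exits non-zero here
  echo "benchmark: tag"  # never reached; the whole run aborts
') || status=$?

echo "captured: $output"
echo "runner exit status: ${status:-0}"
```

Because `set -e` terminates the inner script at the first non-zero exit, only the first benchmark's output is captured and the runner reports a failing status, which is why a single transient API error would abort the entire suite.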
Summary
- Adds `validate` to the list of benchmarked ferrflow commands in `tools.json`

Partially addresses #47