
Running on CI machines gives inherently unstable results #98


Description

@eregon

The benchmarks seem to be run in GitHub Actions on GitHub hosted runners.

Those runners are hosted in the cloud and are probably shared machines running multiple workloads at the same time.
So the results will likely be very noisy.

Is there a plan to address that?

Until then, I think it would be useful to run each measurement a few times, or to reuse previous runs to compute the standard deviation or some other estimator of the variance, as this is a big caveat.
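
For illustration, a minimal sketch of what aggregating repeated runs could look like; this assumes per-benchmark timings are available as a list of seconds, and the function and variable names here are hypothetical, not part of this project:

```python
import statistics

def summarize_runs(timings):
    """Summarize repeated benchmark timings (seconds) from noisy CI runs.

    Returns the mean, sample standard deviation, and coefficient of
    variation, so a change can be judged against run-to-run noise.
    """
    mean = statistics.mean(timings)
    stdev = statistics.stdev(timings) if len(timings) > 1 else 0.0
    cv = stdev / mean if mean else 0.0
    return {"mean": mean, "stdev": stdev, "cv": cv}

# Example: three runs of the same benchmark on a shared CI runner.
runs = [1.92, 2.31, 2.05]
summary = summarize_runs(runs)
print(f"mean={summary['mean']:.3f}s ±{summary['stdev']:.3f}s "
      f"(cv={summary['cv']:.1%})")
```

Reporting the coefficient of variation alongside the mean would make it obvious when a measured difference between two commits is smaller than the noise of the runner itself.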
