
Support for dropping outliers #138

Open
rubenlg opened this issue Dec 11, 2019 · 0 comments
Labels: Needs Discussion

Comments

rubenlg commented Dec 11, 2019

One would think that the more samples we take, the more stable the result. However, taking more samples also raises the chance of picking up interference from the system (a daemon doing expensive work, cache flushes, etc.).

This feature request is about having a statistically rigorous way to drop outliers before computing the confidence interval, so that one or two wild measurements don't produce an "unsure" result, and adding more samples reliably yields a more stable result.

This should be optional, not hard-coded, because outliers are not always independent of the page being tested (e.g. a page may have a 1% chance of hitting an expensive GC).
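As a sketch of one statistically grounded filter (an illustration, not a proposal the project has adopted): Tukey's IQR fence drops any sample outside [Q1 − k·IQR, Q3 + k·IQR] before the confidence interval is computed. The function name, the `k=1.5` default, and the sample timings below are all hypothetical.

```python
import statistics

def drop_outliers(samples, k=1.5):
    """Drop samples outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR].

    k=1.5 is the conventional default; making k configurable keeps the
    filter optional, as the issue requests.
    """
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [s for s in samples if lo <= s <= hi]

# One wild measurement among otherwise stable timings (ms):
samples = [100, 102, 98, 101, 99, 103, 250]
filtered = drop_outliers(samples)
# The 250 ms spike is dropped; the confidence interval would then be
# computed over the remaining samples only.
```

Because the fence is derived from quartiles rather than the mean, a single extreme sample cannot widen its own acceptance band, so adding more samples tightens rather than destabilizes the interval.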
