
Calculation of adaptive thresholds wrong? #125

Open
corro opened this issue Jun 3, 2019 · 1 comment


corro commented Jun 3, 2019

In #14, adaptive thresholds were introduced, which modify the actual warning and critical thresholds according to a "magic factor". This is very useful, but I think the calculation in the code is wrong.

There is a formula in the description:

y = 100 - (100-P)*(N^(1-m))/(x^(1-m))

According to this formula, a threshold of 75% for a 400 GB disk (normalization factor 50 and magic number 0.8) should be raised to ~83.5%:

y = 100 - (100-75)*(50^(1-0.8))/(400^(1-0.8)) = ~83.50

But actually it is raised to ~95.9%, which is way too high.
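For reference, here is a minimal sketch of the formula above with P = 75, N = 50 and m = 0.8 (the function and argument names are mine, not the plugin's); it reproduces both numbers depending on the unit of x:

```ruby
# Sketch of y = 100 - (100-P)*(N^(1-m))/(x^(1-m)); names are illustrative.
def adjusted(x, p: 75, n: 50, m: 0.8)
  100 - (100 - p) * n**(1 - m) / x**(1 - m)
end

puts adjusted(400)      # x in GiB           => ~83.50 (expected)
puts adjusted(409_600)  # same disk in MiB   => ~95.88 (what the plugin reports)
```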

My suspicion is that there is a confusion of units in the following line:
https://github.com/sensu-plugins/sensu-plugins-disk-checks/blob/master/bin/check-disk-usage.rb#L144
The disk size in bytes passed to adj_percent is converted to mebibytes (divided by 1024**2) instead of gibibytes. I think it should be divided by 1024**3 instead.
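If that reading is right, the corrected method might look something like this. This is a hypothetical reconstruction based on the scaling described in #14, not the plugin's actual code; all names other than adj_percent are illustrative:

```ruby
# Convert bytes to GiB so the size and the normalization factor use the
# same unit (the suspected bug divides by 1024**2, i.e. MiB).
def adj_percent(size_bytes, percent, magic: 0.8, normal: 50)
  hsize = (size_bytes / 1024.0**3) / normal
  scale = hsize**magic / hsize   # == (normal / size_gib)**(1 - magic)
  100 - (100 - percent) * scale
end

puts adj_percent(400 * 1024**3, 75)  # => ~83.50, matching the formula
```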

I stumbled upon this because we were alerted to a disk-space shortage later than we expected. Can anyone confirm this issue?


mlachal commented May 7, 2021

Hi, we have the same issue and used -n 20480 as a workaround.
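Assuming the MiB conversion described above, this workaround presumably lines up because the normalization factor is then expressed in the same unit as the converted size, so 20480 MiB acts like a 20 GiB normalization factor:

```ruby
size_mib = 400 * 1024     # a 400 GiB disk, expressed in MiB
puts size_mib / 20_480.0  # => 20.0
puts 400 / 20.0           # => 20.0, same ratio as a 20 GiB factor applied in GiB
```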
