
[8.x] OsStats must be lenient with bad data from older nodes #73610

Merged

Conversation

williamrandolph
Contributor

We've had a series of bug fixes for cases where an OsProbe gives negative values, most often just -1, to the OsStats class. We added assertions to catch cases where we were initializing OsStats with bad values. Unfortunately, these fixes turned out not to be backwards compatible. In this commit, we simply coerce bad values to 0 when the data comes from nodes that don't have the relevant bug fixes.

Relevant PRs:

Fixes #73459
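
For context, a minimal sketch of the coercion described above, roughly as it might look in an OsStats stream constructor. The StreamInput calls match Elasticsearch's serialization API, but COERCION_VERSION and the exact structure are illustrative assumptions, not the merged diff:

// Sketch only: deserializing swap stats sent by another node.
// COERCION_VERSION is a hypothetical placeholder for the first version
// that contains the negative-value bug fixes.
long total = in.readLong();
long free = in.readLong();
if (in.getVersion().onOrAfter(COERCION_VERSION)) {
    // Peers at or above the fixed version never send negative values,
    // so a failure here indicates a real bug on the sender.
    assert total >= 0 : "expected total swap to be positive, got: " + total;
    assert free >= 0 : "expected free swap to be positive, got: " + free;
} else {
    // Older peers may legitimately send -1; coerce to 0 so readers of
    // OsStats never observe a negative value.
    total = Math.max(0, total);
    free = Math.max(0, free);
}
this.total = total;
this.free = free;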

@williamrandolph added the >test, :Core/Infra/Core, v8.0.0, v7.14.0, and v6.8.17 labels on Jun 1, 2021
@elasticmachine added the Team:Core/Infra label on Jun 1, 2021
@elasticmachine
Collaborator

Pinging @elastic/es-core-infra (Team:Core/Infra)

@williamrandolph williamrandolph changed the title OsStats must be lenient with bad data from older nodes [8.x] OsStats must be lenient with bad data from older nodes Jun 1, 2021
rjernst (Member) left a comment


The approach seems good to me, but can you please link the relevant PRs?

assert this.free >= 0 : "expected free swap to be positive, got: " + this.free;
} else {
// If we have a node in the cluster without the bug fix for
// negative memory values, we need to coerce negative values to 0 here.

It would be good to have a link here to the relevant PR that added coercion, so a reader can verify the 7.8 boundary.

assert free >= 0 : "expected free memory to be positive, got: " + free;
} else {
// If we have a node in the cluster without the bug fix for
// negative memory values, we need to coerce negative values to 0 here.

It would be good to have a link here to the relevant PR that added coercion, so a reader can verify the 7.2 boundary.
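
To make the reviewer's boundary questions concrete, here is a hedged sketch of how the two cutoffs might be expressed. The mapping of the swap fix to 7.8 and the memory fix to 7.2 comes from the review comments above; the constant names and placement are illustrative assumptions, not the actual diff:

// Hypothetical constants; the 7.8/7.2 boundaries are taken from the
// review comments above, not from the merged code.
private static final Version SWAP_FIX_VERSION = Version.V_7_8_0;
private static final Version MEMORY_FIX_VERSION = Version.V_7_2_0;

// In the stream constructor (sketch): gate each assertion on the version
// of the sending node, and coerce bad values otherwise.
if (in.getVersion().onOrAfter(MEMORY_FIX_VERSION)) {
    assert free >= 0 : "expected free memory to be positive, got: " + free;
} else {
    free = Math.max(0, free); // older node without the memory fix
}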

@williamrandolph
Contributor Author

@elasticmachine run elasticsearch-ci/bwc

(the error was a timeout on an unrelated test)

rjernst (Member) left a comment


LGTM

@williamrandolph williamrandolph merged commit 80ea64c into elastic:master Jun 1, 2021
@williamrandolph williamrandolph deleted the fix/master/os-stats-bwc branch May 23, 2022 17:23
Labels
:Core/Infra/Core (Core issues without another label), Team:Core/Infra (Meta label for core/infra team), >test (Issues or PRs that are addressing/adding tests), v6.8.17, v7.14.0, v8.0.0-alpha1
Development

Successfully merging this pull request may close these issues.

[CI] mixed-cluster:v6.0.0#mixedClusterTestRunner failure
4 participants