
ceph: update cluster storage utilization alert threshold and timing #4286

Merged 1 commit into rook:master on Nov 12, 2019

Conversation

@anmolsachan (Contributor) commented Nov 8, 2019

Signed-off-by: Anmol Sachan <anmol13694@gmail.com>

Description of your changes:
This PR updates the cluster storage utilization alert thresholds and timing, an enhancement based on experiments and observations.
It takes a proactive approach, alerting the user to take action as the cluster approaches maximum capacity: since Ceph stops I/O at 95% utilization, the user should be warned well before that point.
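For context, a minimal sketch of what such Prometheus alert rules might look like; the alert names, metric names, thresholds, and `for` durations below are illustrative assumptions, not necessarily the exact values this PR changes:

```yaml
# Illustrative sketch only -- alert names, metrics, thresholds, and timings
# are assumptions, not the exact rules modified by this PR.
groups:
  - name: cluster-utilization-alert.rules
    rules:
      - alert: CephClusterNearFull
        expr: |
          ceph_cluster_total_used_bytes / ceph_cluster_total_bytes > 0.75
        for: 5m
        labels:
          severity: warning
        annotations:
          message: >
            Cluster utilization has crossed 75%. Free up space or expand the
            cluster before Ceph stops I/O at 95% utilization.
      - alert: CephClusterCriticallyFull
        expr: |
          ceph_cluster_total_used_bytes / ceph_cluster_total_bytes > 0.85
        for: 5m
        labels:
          severity: critical
        annotations:
          message: >
            Cluster utilization has crossed 85% and Ceph will stop I/O at 95%.
            Free up space or expand the cluster immediately.
```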
Which issue is resolved by this Pull Request:

Checklist:

  • Reviewed the developer guide on Submitting a Pull Request
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.
  • Pending release notes updated with breaking and/or notable changes, if necessary.
  • Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
  • Code generation (make codegen) has been run to update object specifications, if necessary.
  • Comments have been added or updated based on the standards set in CONTRIBUTING.md
  • Add the flag for skipping the CI if this PR does not require a build. See here for more details.

[skip ci]

@ghost commented Nov 8, 2019

There were the following issues with this Pull Request

  • Commit: ec2675d
    • ✖ message may not be empty
    • ✖ type may not be empty

You may need to change the commit messages to comply with the repository contributing guidelines.


🤖 This comment was generated by commitlint[bot]. Please report issues here.

Happy coding!

@leseb (Member) left a comment


Look at the bot for the commit

@ghost commented Nov 8, 2019

There were the following issues with this Pull Request

  • Commit: f1cc90e
    • ✖ message may not be empty
    • ✖ type may not be empty

You may need to change the commit messages to comply with the repository contributing guidelines.


🤖 This comment was generated by commitlint[bot]. Please report issues here.

Happy coding!

@travisn (Member) commented Nov 8, 2019

@anmolsachan The bot requires a prefix on the commit message, such as:
"ceph: update cluster utilization alert threshold and timing"

This PR mainly takes a proactive approach to
alert the user to take action when the cluster
utilization is getting close to the maximum capacity.
Since ceph stops I/O at 95% utilization, the user should
be warned before it.

Signed-off-by: Anmol Sachan <anmol13694@gmail.com>
@anmolsachan (Contributor, Author) commented Nov 10, 2019

@anmolsachan The bot requires a prefix on the commit message, such as:
"ceph: update cluster utilization alert threshold and timing"

Thanks, @leseb @travisn. It was my first PR on this repo; I did not know :)

@anmolsachan changed the title from "Update cluster storage utilization alert threshold and timing" to "ceph: update cluster storage utilization alert threshold and timing" on Nov 11, 2019
@leseb added the ceph label (main ceph tag) on Nov 12, 2019
@leseb merged commit 5d16ffc into rook:master on Nov 12, 2019
travisn added a commit that referenced this pull request Nov 12, 2019
ceph: update cluster storage utilization alert threshold and timing (bp #4286)
Labels: ceph (main ceph tag)
3 participants