
Raise log level on DiskThresholdDecider #8368

Closed
wants to merge 1 commit

Conversation

nik9000 (Member) commented Nov 6, 2014

Not having enough disk space to allocate the shard is worth warning about.

Closes #8367

@clintongormley

@dakrone please take a look

@clintongormley

Hi @nik9000

The problem with this is that the allocation deciders can be called hundreds of times per second, which would flood your logs with warnings. See the discussion here: #3637 (comment)

I think a better solution would be to warn once every 30 seconds when the low watermark is breached, as we do for the high watermark already: https://github.com/elasticsearch/elasticsearch/pull/8270/files#diff-1b8dca987fbcfb8d8e452d7e29c4d058R139

@dakrone says he'll work on that.

thanks
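As a hedged illustration of the throttling approach described above (warn at most once every 30 seconds when the low watermark is breached, rather than on every decider call), here is a minimal standalone sketch. The class and method names are illustrative only, not the actual DiskThresholdDecider code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThrottledWatermarkLogger {
    // Warn at most once per 30-second window, even if the check runs hundreds of times per second.
    private static final long INTERVAL_MILLIS = TimeUnit.SECONDS.toMillis(30);
    private final AtomicLong lastWarnMillis = new AtomicLong(0);

    /** Returns true at most once per interval; callers skip the WARN otherwise. */
    boolean shouldWarn() {
        long now = System.currentTimeMillis();
        long last = lastWarnMillis.get();
        // compareAndSet ensures only one concurrent caller wins the slot for this window.
        return now - last >= INTERVAL_MILLIS && lastWarnMillis.compareAndSet(last, now);
    }

    void onLowWatermarkBreached(String nodeId, double usedPercent) {
        if (shouldWarn()) {
            System.err.printf("[WARN] low disk watermark exceeded on node [%s] (%.1f%% used)%n",
                    nodeId, usedPercent);
        }
    }
}
```

This keeps the log at WARN level without flooding it: the hot decider path pays only an atomic read in the common case, and the actual message is emitted at most twice per minute per logger instance.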

nik9000 (Member, Author) commented Nov 6, 2014

Sounds good.
On Nov 6, 2014 11:31 AM, "Clinton Gormley" notifications@github.com wrote:
Closed #8368.

dakrone added a commit to dakrone/elasticsearch that referenced this pull request Nov 7, 2014
Fixes an issue where only absolute bytes were taken into account when
kicking off an automatic reroute due to disk usage. Also randomized the
tests to use either an absolute value or a percentage so this is tested.

Also adds logging for each node over the high and low watermark every
time a new cluster info usage is gathered (defaults to every 30
seconds).

Related to elastic#8368
Fixes elastic#8367
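A hedged sketch of the watermark behaviour this commit message describes: the low watermark can be configured either as a percentage of disk used or as an absolute amount of free bytes, and either form should trigger the check. The names and values below are illustrative, not the actual Elasticsearch implementation:

```java
public class LowWatermark {
    private final double maxUsedFraction;   // e.g. 0.85 for "85% used", or -1 if not configured
    private final long minFreeBytes;        // e.g. 500 * 1024 * 1024 for "500mb free", or -1 if not configured

    LowWatermark(double maxUsedFraction, long minFreeBytes) {
        this.maxUsedFraction = maxUsedFraction;
        this.minFreeBytes = minFreeBytes;
    }

    /** True when the node breaches the watermark under either form of the setting. */
    boolean exceeded(long totalBytes, long freeBytes) {
        double usedFraction = 1.0 - (double) freeBytes / totalBytes;
        boolean overPercentage = maxUsedFraction >= 0 && usedFraction >= maxUsedFraction;
        boolean underFreeBytes = minFreeBytes >= 0 && freeBytes <= minFreeBytes;
        return overPercentage || underFreeBytes;
    }

    public static void main(String[] args) {
        LowWatermark percentForm = new LowWatermark(0.85, -1);
        LowWatermark bytesForm = new LowWatermark(-1, 500L * 1024 * 1024);
        long total = 10L * 1024 * 1024 * 1024;   // 10gb disk
        long free = 400L * 1024 * 1024;          // 400mb free (~96% used)
        System.out.println(percentForm.exceeded(total, free)); // true
        System.out.println(bytesForm.exceeded(total, free));   // true
    }
}
```

The point of the fix, as described above, is that the percentage form must be honoured alongside the absolute-bytes form when deciding whether to kick off an automatic reroute and when logging nodes that are over a watermark.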
dakrone added a commit that referenced this pull request Nov 7, 2014
dakrone added a commit that referenced this pull request Nov 7, 2014
dakrone added a commit that referenced this pull request Nov 13, 2014
mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015
Successfully merging this pull request may close these issues:

Disk free space threshold - at least a Warning message in the log file