
Shall we bring the template and the configuration options up to date? #74

Closed
v6 opened this issue Dec 3, 2015 · 7 comments

Comments


v6 commented Dec 3, 2015

I propose bringing the configuration options up to date with https://github.com/antirez/redis/blob/3.0/redis.conf.

Basically, the template is not up to date with the latest configuration options: it lacks support for aof-load-truncated yes, aof-rewrite-incremental-fsync yes, and some additional cluster settings.

I noticed a lack of anything about aof-load-truncated in the repo.

In most cases, I think people will set this to 'yes'.

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts, emitting a log to inform the user of the event.
# Otherwise, if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# Redis tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# Cluster Slave Validity Factor
# A slave of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a slave to have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# Point 2 can be tuned by the user. Specifically, a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too-old data to fail
# over a master, while too small a value may prevent the cluster from
# being able to elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
cluster-slave-validity-factor 0

# cluster-require-full-coverage

# By default Redis Cluster nodes stop accepting queries if they detect that
# at least one hash slot is uncovered (no available node is serving it).
# This way, if the cluster is partially down (for example, a range of hash
# slots is no longer covered), the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
#2015.10.19  n8  We don't want a partially working cluster.  This will make
#   diagnostics confusing, i.e. "it works for my test user"
cluster-require-full-coverage yes

May I make a pull request to add this feature, or is it unnecessary for reasons of which I'm not aware?
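To illustrate, here is a minimal sketch of how the module's template might expose these settings, rendered with Ruby's stdlib ERB. The parameter names (@aof_load_truncated and friends) are assumptions for the sake of the example, not the module's actual API:

```ruby
require 'erb'

# Hypothetical parameter names -- the actual puppet-redis module may differ.
@aof_load_truncated            = true
@aof_rewrite_incremental_fsync = true
@cluster_slave_validity_factor = 0
@cluster_require_full_coverage = true

template = ERB.new(<<~'TPL', trim_mode: '-')
aof-load-truncated <%= @aof_load_truncated ? 'yes' : 'no' %>
aof-rewrite-incremental-fsync <%= @aof_rewrite_incremental_fsync ? 'yes' : 'no' %>
cluster-slave-validity-factor <%= @cluster_slave_validity_factor %>
cluster-require-full-coverage <%= @cluster_require_full_coverage ? 'yes' : 'no' %>
TPL

puts template.result(binding)
```

With the values above, this emits the four directives exactly as they would appear in redis.conf.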

@v6 v6 changed the title // , Support aof-load-truncated yes // , Support aof-load-truncated yes and aof-rewrite-incremental-fsync yes? Dec 3, 2015
@v6 v6 changed the title // , Support aof-load-truncated yes and aof-rewrite-incremental-fsync yes? // , Support aof-load-truncated yes and aof-rewrite-incremental-fsync yes, and some additional cluster settings? Dec 3, 2015
@v6 v6 changed the title // , Support aof-load-truncated yes and aof-rewrite-incremental-fsync yes, and some additional cluster settings? // , Bring configuration options up to date? Dec 3, 2015
@v6 v6 changed the title // , Bring configuration options up to date? // , Shall we bring the template and the configuration options up to date? Dec 3, 2015

tarcinil commented Dec 8, 2015

I agree that this might be a requirement. I just implemented a standalone mode and I am getting the following error. This was a fresh install using r10k.

Starting redis-server: 
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 542
>>> 'cluster-enabled no'
Bad directive or wrong number of arguments
failed

I changed the template to emit the directive as a comment when clustering is disabled:

# Redis Cluster Settings
<% if @cluster_enabled -%>
cluster-enabled yes
cluster-config-file <%= @cluster_config_file %>
cluster-node-timeout <%= @cluster_node_timeout %>
<% else -%>
# cluster-enabled no
<% end -%>
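Rendering that conditional with Ruby's stdlib ERB shows the behavior in standalone mode (a sketch; the variable values here are illustrative stand-ins):

```ruby
require 'erb'

# Sketch of the conditional above; values are illustrative only.
@cluster_enabled      = false
@cluster_config_file  = 'nodes.conf'
@cluster_node_timeout = 5000

template = ERB.new(<<~'TPL', trim_mode: '-')
# Redis Cluster Settings
<% if @cluster_enabled -%>
cluster-enabled yes
cluster-config-file <%= @cluster_config_file %>
cluster-node-timeout <%= @cluster_node_timeout %>
<% else -%>
# cluster-enabled no
<% end -%>
TPL

puts template.result(binding)
```

With @cluster_enabled false, only the comment line is emitted, so Redis never sees the unsupported cluster-enabled directive.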


v6 commented Dec 10, 2015

👍

@petems petems changed the title // , Shall we bring the template and the configuration options up to date? Shall we bring the template and the configuration options up to date? Dec 3, 2016

v6 commented Apr 12, 2017

// , Not to be too chatty, but this issue has come up again for us.

Specifically, we need to set a cluster-slave-validity-factor of 2 for one of our applications.

http://download.redis.io/redis-stable/redis.conf

@v6
Copy link
Author

v6 commented Apr 12, 2017

// , In the meantime, how do you recommend that we work around this? Should we clone the module and modify it, use it as-is alongside a separate module that uses the Redis API, or just set it automatically with redis-cli -h $(facter ipaddress) CONFIG SET cluster-slave-validity-factor 2?


petems commented Apr 12, 2017

@v6 I would do two things:

  1. Fork the module to add the config settings you require
  2. Open a pull request to add those features upstream to this module 👍


v6 commented Apr 25, 2017

// , Well, let's get started: #163


ekohl commented May 5, 2020

Due to its age, I'm going to close this issue. If there's something missing, please open a new issue.
