One feature we forgot to implement was a way to avoid choosing, as master, a standby or an old master that is behind a predefined maximum lag (this can happen only with asynchronous replication).
This is usually avoided by checking that the keeper is at the required cluster spec version, but under some circumstances with asynchronous replication it may still happen. So a defined maximum lag is required.
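The election check described above can be sketched as a simple filter over candidate databases. This is a minimal illustration, not stolon's actual implementation: the type, field names, and the `eligibleMasters` helper are hypothetical, and the 1MiB default mirrors the `MaxStandbyLag` value mentioned in the commits below.

```go
package main

import "fmt"

// db is a simplified stand-in for a keeper's database state
// (hypothetical names; the real types live in stolon's cluster package).
type db struct {
	UID     string
	XLogPos uint64 // last received/replayed WAL location, in bytes
}

// maxStandbyLag mirrors the proposed cluster spec default of 1MiB.
const maxStandbyLag = 1 << 20

// eligibleMasters keeps only candidates whose lag behind masterXLogPos
// is within maxLag. With asynchronous replication a standby (or an old
// master) can be arbitrarily far behind, so those are skipped.
func eligibleMasters(candidates []db, masterXLogPos, maxLag uint64) []db {
	var eligible []db
	for _, d := range candidates {
		if masterXLogPos >= d.XLogPos && masterXLogPos-d.XLogPos > maxLag {
			continue // too far behind: not a safe new master
		}
		eligible = append(eligible, d)
	}
	return eligible
}

func main() {
	candidates := []db{
		{UID: "db0", XLogPos: 10 << 20}, // fully caught up
		{UID: "db1", XLogPos: 2 << 20},  // 8MiB behind: excluded
	}
	for _, d := range eligibleMasters(candidates, 10<<20, maxStandbyLag) {
		fmt.Println(d.UID) // prints: db0
	}
}
```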
sgotti changed the title from "Don't choose standbys behind a defined lag as masters" to "Don't choose keeper with db behind a defined lag as masters" on Apr 24, 2017
sgotti changed the title from "Don't choose keeper with db behind a defined lag as masters" to "Don't choose keepers with db behind a defined lag as masters" on Apr 24, 2017
sgotti added a commit to sgotti/stolon that referenced this issue on Apr 24, 2017:
Add a new cluster spec option `MaxStandbyLag`, which defaults to 1MiB, and honor its value when choosing eligible standbys/old master. Fixes sorintlab#251
sgotti added a commit to sgotti/stolon that referenced this issue on Apr 25, 2017:
Add a new cluster spec option `MaxStandbyLag`, which defaults to 1MiB, and honor its value when choosing eligible standbys/old master. Fixes sorintlab#251