cleanup some leader election client doc #77991

Merged
merged 1 commit into from
May 21, 2019
37 changes: 22 additions & 15 deletions staging/src/k8s.io/client-go/tools/leaderelection/leaderelection.go
@@ -19,12 +19,13 @@ limitations under the License.
// election state. This implementation does not guarantee that only one
// client is acting as a leader (a.k.a. fencing).
//
// A client observes timestamps captured locally to infer the state of the
// leader election. Thus the implementation is tolerant to arbitrary clock
// skew, but is not tolerant to arbitrary clock skew rate. Timestamp(renew time)
// is not meaningful if it was collected on another machine. The implementation
// of this client only acts on locally collected timestamps and cannot rely on
// the accuracy of timestamp in the record for correctness.
// A client only acts on timestamps captured locally to infer the state of the
// leader election. The client does not consider timestamps in the leader
// election record to be accurate because these timestamps may not have been
// produced by a local clock. The implementation does not depend on their
// accuracy and only uses their change to indicate that another client has
// renewed the leader lease. Thus the implementation is tolerant to arbitrary
// clock skew, but is not tolerant to arbitrary clock skew rate.
//
// However the level of tolerance to skew rate can be configured by setting
// RenewDeadline and LeaseDuration appropriately. The tolerance expressed as a
@@ -107,21 +108,27 @@ type LeaderElectionConfig struct {

// LeaseDuration is the duration that non-leader candidates will
// wait to force acquire leadership. This is measured against time of
// last observed ack. A client needs to wait a full LeaseDuration without
// observing a change to the record before it can attempt to take over even
// when a client with a different identity against the record's starts and
// the renew time in the record is older than LeaseDuration. A.k.a., when
// all clients are shutdown and after at least a LeaseDuration, clients
// started with different identities against the record's must wait a full
// LeaseDuration before acquiring a lock. Thus LeaseDuration should be as
// short as possible to avoid a possible long waiting. LeaseDuration is 15
// seconds in core Kubernetes components.
// last observed ack.
//
// A client needs to wait a full LeaseDuration without observing a change to
// the record before it can attempt to take over. When all clients are
// shutdown and a new set of clients are started with different names against
// the same leader record, they must wait the full LeaseDuration before
// attempting to acquire the lease. Thus LeaseDuration should be as short as
// possible (within your tolerance for clock skew rate) to avoid possibly
// long waits in this scenario.
//
// Core clients default this value to 15 seconds.
LeaseDuration time.Duration
// RenewDeadline is the duration that the acting master will retry
// refreshing leadership before giving up.
//
// Core clients default this value to 10 seconds.
RenewDeadline time.Duration
// RetryPeriod is the duration the LeaderElector clients should wait
// between tries of actions.
//
// Core clients default this value to 2 seconds.
RetryPeriod time.Duration

// Callbacks are callbacks that are triggered during certain lifecycle