pkg/kubelet/cloudresource: fallback to old addresses if sync loop fails #71727

Merged
merged 3 commits into kubernetes:master from mikedanese:fixcrm on Feb 1, 2019

Conversation

@mikedanese
Member

mikedanese commented Dec 4, 2018

This does a bit more than the title suggests.

  • remove unnecessary sleeping in syncmanager
  • change syncmanager to fallback to old addresses in the case that a single sync loop fails
  • a bit of cleanup

/kind bug
/sig node

Release note: NONE
@mikedanese

Member Author

mikedanese commented Dec 5, 2018

/retest

func TestNodeAddressesRequest(t *testing.T) {
	syncPeriod := 300 * time.Millisecond
	maxRetry := 5

func TestNodeAddressesDelay(t *testing.T) {

@mikedanese

mikedanese Dec 5, 2018

Author Member

This test is flaky on master. I'm wondering if we should remove the concurrency altogether.

@awly

awly Jan 25, 2019

Contributor

sync.Cond is tricky to reason about so I strongly recommend having a good concurrent test.

@mikedanese

mikedanese Jan 29, 2019

Author Member

Ok, left as is and added a comment about sync.Cond.

@mikedanese mikedanese force-pushed the mikedanese:fixcrm branch 3 times, most recently from 0e15007 to b7acf1e Jan 24, 2019

@mikedanese

Member Author

mikedanese commented Jan 24, 2019

/retest

1 similar comment from @mikedanese on Jan 25, 2019 (/retest)

Resolved (outdated): pkg/kubelet/cloudresource/cloud_request_manager.go
Resolved (outdated): pkg/kubelet/cloudresource/cloud_request_manager_test.go

@mikedanese mikedanese force-pushed the mikedanese:fixcrm branch from b7acf1e to bf99565 Jan 29, 2019

@awly

Contributor

awly commented Jan 29, 2019

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Jan 29, 2019

@mikedanese

Member Author

mikedanese commented Jan 29, 2019

/retest

@yujuhong

Member

yujuhong commented Jan 31, 2019

/approve

@k8s-ci-robot

Contributor

k8s-ci-robot commented Jan 31, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mikedanese, yujuhong

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fejta-bot


fejta-bot commented Feb 1, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

5 similar comments from @fejta-bot on Feb 1, 2019 (/retest)

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 1, 2019

@mikedanese: The following test failed, say /retest to rerun them all:

Test name: pull-kubernetes-e2e-kops-aws
Commit: bf99565
Rerun command: /test pull-kubernetes-e2e-kops-aws

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@mikedanese

Member Author

mikedanese commented Feb 1, 2019

I thought we removed kops from release blockers.

@krzyzacy

Member

krzyzacy commented Feb 1, 2019

/skip

@k8s-ci-robot k8s-ci-robot merged commit ae2b176 into kubernetes:master Feb 1, 2019

18 checks passed

cla/linuxfoundation: mikedanese authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-kops-aws: Context retired without replacement.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-godeps: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Job succeeded.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.
		return
	}

	m.nodeAddressesErr = fmt.Errorf("failed to get node address from cloud provider: %v", err)

@tedyu

tedyu Feb 1, 2019

Contributor

Should m.nodeAddressesMonitor.Broadcast() be called in this case ?

If so, shouldn't m.nodeAddresses be set before returning ?

@awly

awly Feb 2, 2019

Contributor

If this line is reached, sync failed and no prior addresses were available.
This will cause all pending NodeAddresses calls to unblock and return this error.

@awly

Contributor

awly commented Feb 2, 2019

@mikedanese I just realized that this could've been a simple sync.RWMutex

@mikedanese mikedanese deleted the mikedanese:fixcrm branch Feb 2, 2019

@mikedanese

Member Author

mikedanese commented Feb 2, 2019

@awly monitors aren't simple? :) feel free to send a PR.
