
[4.7] Bug 2073350: Lookup reject ACLs found at startup when removing reject ACLs for services [alternative] #1211

Closed
wants to merge 1 commit

Conversation


@ricky-rav ricky-rav commented Jul 26, 2022

[alternative solution to PR #1208, avoiding extra ovn-nbctl calls]

The serviceLBMap cache stores, for each load balancer, a service VIP along with its endpoints and reject ACL (if any). A reject ACL is added to this cache either (1) when it is created in the current execution of ovnkube-master or (2) when it was created in a previous execution of ovnkube-master and the service still needs a reject ACL (because it has no endpoints).

Now, if the reject ACL was created in the previous execution of ovnkube-master, and after restart ovnkube-master first adds the backend pods for this service and only afterwards (re)creates the service, the existing reject ACL goes unnoticed. Endpoints are added correctly, but traffic to this service is dropped because of the reject ACL.

To fix this corner case, keep a list of valid reject ACLs found at startup and look up this list too when trying to delete a reject ACL for a service with endpoints.
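A minimal sketch of this lookup in Go, assuming simplified types: the real controller keeps richer per-VIP state in serviceLBMap, and `rejectACLUUID` plus the map keys here are hypothetical stand-ins, while `OVNRejectACLsAtStartup` is the variable introduced by the patch.

```go
package main

import "fmt"

// OVNRejectACLsAtStartup records reject ACLs found during the initial
// syncServices pass, keyed by ACL name, with their OVN UUIDs as values.
var OVNRejectACLsAtStartup = map[string]string{}

// rejectACLUUID looks up the reject ACL for a service: first in the
// per-run cache (standing in for serviceLBMap), then in the list of
// ACLs found at startup. It returns the UUID and whether one was found.
func rejectACLUUID(cache map[string]string, name string) (string, bool) {
	if uuid, ok := cache[name]; ok {
		return uuid, true
	}
	uuid, ok := OVNRejectACLsAtStartup[name]
	return uuid, ok
}

func main() {
	// An ACL created in a previous run is only in the startup list.
	OVNRejectACLsAtStartup["svc-vip-reject-acl"] = "uuid-1234"
	cache := map[string]string{} // the per-run cache knows nothing about it

	if uuid, ok := rejectACLUUID(cache, "svc-vip-reject-acl"); ok {
		fmt.Println("stale reject ACL to delete:", uuid)
	}
}
```

The point of the fallback is that the delete path no longer depends on the ACL having been recorded in the per-run cache, which is exactly what fails in the restart ordering described above.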

Signed-off-by: Riccardo Ravaioli <rravaiol@redhat.com>

closes #2073350


openshift-ci bot commented Jul 26, 2022

@ricky-rav: No Bugzilla bug is referenced in the title of this pull request.
To reference a bug, add 'Bug XXX:' to the title of this pull request and request another bug refresh with /bugzilla refresh.

In response to this:

Lookup reject ACLs found at startup when removing reject ACLs for services [alternative]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


openshift-ci bot commented Jul 26, 2022

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ricky-rav
Once this PR has been reviewed and has the lgtm label, please assign danwinship for approval by writing /assign @danwinship in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ricky-rav ricky-rav changed the title Lookup reject ACLs found at startup when removing reject ACLs for services [alternative] [4.7] Lookup reject ACLs found at startup when removing reject ACLs for services [alternative] Jul 26, 2022

openshift-ci bot commented Jul 26, 2022

@ricky-rav: No Bugzilla bug is referenced in the title of this pull request.
To reference a bug, add 'Bug XXX:' to the title of this pull request and request another bug refresh with /bugzilla refresh.

In response to this:

[4.7] Lookup reject ACLs found at startup when removing reject ACLs for services [alternative]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ricky-rav
Contributor Author

/retest-required

2 similar comments
@ricky-rav
Contributor Author

/retest-required

@ricky-rav
Contributor Author

/retest-required


openshift-ci bot commented Jul 27, 2022

@ricky-rav: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

ci/prow/okd-e2e-gcp-ovn (commit 959f2e0, required: false): /test okd-e2e-gcp-ovn
ci/prow/e2e-ovn-hybrid-step-registry (commit 959f2e0, required: true): /test e2e-ovn-hybrid-step-registry

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@@ -204,13 +208,17 @@ func (ovn *Controller) syncServices(services []interface{}) {
foundSwitches)
ovn.removeACLFromNodeSwitches(foundSwitches, uuid)
}
} else {
OVNRejectACLsAtStartup[name] = uuid
A reviewer commented:

I don't understand how the sync code above will not catch a service that has endpoints with a stale ACL? What is different about your corner case?

@ricky-rav replied:

So the sync code is correct: it removes stale ACLs at startup. What happened in the customer's cluster is that the endpoints were added after the sync code ran and before the service was created, so the code, after running syncServices, never kept track of the existing reject ACL. In particular:

  1. the reject ACL was created in the previous execution of ovnkube-master;
  2. after restart, the sync code correctly keeps this reject ACL, since the service still has no endpoints;
  3. the backend pods for this service are added;
  4. the service is (re)created along with its endpoints:
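The four steps above can be replayed as a small simulation. All names here are hypothetical stand-ins for illustration (the real code paths live in syncServices and the service handlers); only the behavior, consulting the startup list in addition to the per-run cache, comes from the patch.

```go
package main

import "fmt"

// startupRejectACLs stands in for OVNRejectACLsAtStartup; perRunCache
// stands in for the reject-ACL entries kept in serviceLBMap.
var (
	startupRejectACLs = map[string]string{}
	perRunCache       = map[string]string{}
)

// onServiceCreated models the service (re)creation handler: when the
// service has endpoints, any reject ACL must go, so both maps are
// consulted. It returns the UUIDs of the ACLs scheduled for deletion.
func onServiceCreated(name string, hasEndpoints bool) (deleted []string) {
	if !hasEndpoints {
		return nil
	}
	for _, m := range []map[string]string{perRunCache, startupRejectACLs} {
		if uuid, ok := m[name]; ok {
			deleted = append(deleted, uuid)
			delete(m, name)
		}
	}
	return deleted
}

func main() {
	// Steps 1+2: a reject ACL from the previous run survives sync because
	// the service still has no endpoints; it is recorded at startup only.
	startupRejectACLs["svc-reject"] = "uuid-old"
	// Step 3: backend pods are added; nothing enters the per-run cache.
	// Step 4: the service is (re)created with endpoints. Without the
	// startup list the ACL would go unnoticed; with it, it is deleted.
	fmt.Println(onServiceCreated("svc-reject", true))
}
```

Consulting only perRunCache here would return nothing, which is precisely the corner case: the ACL was never tracked after the restart.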

@ricky-rav ricky-rav changed the title [4.7] Lookup reject ACLs found at startup when removing reject ACLs for services [alternative] [4.7] Bug 2073350: Lookup reject ACLs found at startup when removing reject ACLs for services [alternative] Jul 29, 2022
@openshift-ci openshift-ci bot added bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. labels Jul 29, 2022

openshift-ci bot commented Jul 29, 2022

@ricky-rav: This pull request references Bugzilla bug 2073350, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

6 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.7.z) matches configured target release for branch (4.7.z)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)
  • dependent bug Bugzilla bug 2109625 is in the state CLOSED (CURRENTRELEASE), which is one of the valid states (VERIFIED, RELEASE_PENDING, CLOSED (ERRATA), CLOSED (CURRENTRELEASE))
  • dependent Bugzilla bug 2109625 targets the "4.8.z" release, which is one of the valid target releases: 4.8.0, 4.8.z
  • bug has dependents

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

[4.7] Bug 2073350: Lookup reject ACLs found at startup when removing reject ACLs for services [alternative]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


openshift-ci bot commented Jul 29, 2022

@ricky-rav: This pull request references Bugzilla bug 2073350, which is valid.

6 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.7.z) matches configured target release for branch (4.7.z)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)
  • dependent bug Bugzilla bug 2109625 is in the state CLOSED (CURRENTRELEASE), which is one of the valid states (VERIFIED, RELEASE_PENDING, CLOSED (ERRATA), CLOSED (CURRENTRELEASE))
  • dependent Bugzilla bug 2109625 targets the "4.8.z" release, which is one of the valid target releases: 4.8.0, 4.8.z
  • bug has dependents

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

[4.7] Bug 2073350: Lookup reject ACLs found at startup when removing reject ACLs for services [alternative]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ricky-rav
Contributor Author

/bugzilla refresh


openshift-ci bot commented Jul 29, 2022

@ricky-rav: This pull request references Bugzilla bug 2073350, which is valid.

6 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.7.z) matches configured target release for branch (4.7.z)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)
  • dependent bug Bugzilla bug 2109625 is in the state CLOSED (CURRENTRELEASE), which is one of the valid states (VERIFIED, RELEASE_PENDING, CLOSED (ERRATA), CLOSED (CURRENTRELEASE))
  • dependent Bugzilla bug 2109625 targets the "4.8.z" release, which is one of the valid target releases: 4.8.0, 4.8.z
  • bug has dependents

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ricky-rav
Contributor Author

/retest-failed

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 27, 2022
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 27, 2022
@ricky-rav ricky-rav closed this Nov 27, 2022

openshift-ci bot commented Nov 27, 2022

@ricky-rav: This pull request references Bugzilla bug 2073350. The bug has been updated to no longer refer to the pull request using the external bug tracker.

In response to this:

[4.7] Bug 2073350: Lookup reject ACLs found at startup when removing reject ACLs for services [alternative]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.