
k8s: Fix unnecessary update with MetalLB #23210

Merged
merged 1 commit into from Jan 27, 2023

Conversation

Member

@ysksuzuki ysksuzuki commented Jan 20, 2023

ConvertToK8sV1LoadBalancerIngress sets an empty ports array even if the corresponding ports field is nil. Because of this, MetalLB updates svc.Status.LoadBalancer.Ingress even though it has already allocated an external IP.

MetalLB detects a diff as follows and updates the LB service unnecessarily.

v1.ServiceStatus{
   LoadBalancer: v1.LoadBalancerStatus{
      Ingress: []v1.LoadBalancerIngress{
         {
            IP:       "10.72.32.4",
            Hostname: "",
-           Ports:    []v1.PortStatus{},
+           Ports:    nil,
         },
      },
   },
   Conditions: nil,
}
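Why this shows up as a diff at all: Go's reflect.DeepEqual treats a nil slice and an empty non-nil slice as different, so an ingress that materializes Ports: []v1.PortStatus{} never compares equal to the stored Ports: nil. A minimal, self-contained sketch of that behavior (illustrative only, not MetalLB's actual reconcile code):

package main

import (
	"fmt"
	"reflect"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Status as stored by the API server: Ports was never set, so it is nil.
	current := corev1.LoadBalancerIngress{IP: "10.72.32.4", Ports: nil}
	// Status produced by the conversion before this fix: empty but non-nil.
	desired := corev1.LoadBalancerIngress{IP: "10.72.32.4", Ports: []corev1.PortStatus{}}

	// nil and an empty slice are not DeepEqual, so a "change" is detected
	// on every reconcile and the service status keeps getting rewritten.
	fmt.Println(reflect.DeepEqual(current, desired)) // false
}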

Fixes: #23107

Signed-off-by: Yusuke Suzuki <yusuke-suzuki@cybozu.co.jp>
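The fix, in spirit (a minimal sketch with illustrative names, not the actual Cilium source): the conversion should preserve nil rather than materialize an empty slice, so the converted status stays DeepEqual to what the API server already stores.

package convert

import (
	corev1 "k8s.io/api/core/v1"
)

// slimPortStatus stands in here for the slim Kubernetes type Cilium
// converts from; the real type lives in Cilium's slim k8s package.
type slimPortStatus struct {
	Port     int32
	Protocol corev1.Protocol
}

// convertPorts preserves nil: a nil input yields a nil output, so the
// converted LoadBalancerIngress matches the stored status and no
// spurious update is issued.
func convertPorts(in []slimPortStatus) []corev1.PortStatus {
	if in == nil {
		return nil // before the fix, an empty non-nil slice was returned here
	}
	out := make([]corev1.PortStatus, 0, len(in))
	for _, p := range in {
		out = append(out, corev1.PortStatus{Port: p.Port, Protocol: p.Protocol})
	}
	return out
}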

Please ensure your pull request adheres to the following guidelines:

  • For first time contributors, read Submitting a pull request
  • All code is covered by unit and/or runtime tests where feasible.
  • All commits contain a well written commit description including a title,
    description and a Fixes: #XXX line if the commit addresses a particular
    GitHub issue.
  • If your commit description contains a Fixes: <commit-id> tag, then
    please add the commit author[s] as reviewer[s] to this issue.
  • All commits are signed off. See the section Developer’s Certificate of Origin
  • Provide a title or release-note blurb suitable for the release notes.
  • Thanks for contributing!
Removed unnecessary updates to service status by MetalLB

@ysksuzuki ysksuzuki requested a review from a team as a code owner January 20, 2023 12:34
@ysksuzuki ysksuzuki requested a review from squeed January 20, 2023 12:34
@maintainer-s-little-helper maintainer-s-little-helper bot added the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Jan 20, 2023
@github-actions github-actions bot added the kind/community-contribution This was a contribution made by a community member. label Jan 20, 2023
@ysksuzuki ysksuzuki marked this pull request as draft January 24, 2023 14:47
@ysksuzuki ysksuzuki marked this pull request as ready for review January 25, 2023 01:31
@ysksuzuki ysksuzuki requested a review from a team as a code owner January 25, 2023 01:31
Member

@dylandreimerink dylandreimerink left a comment

Looks good to me

Member

dylandreimerink commented Jan 25, 2023

/test

Job 'Cilium-PR-K8s-1.24-kernel-5.4' failed:


Test Name

K8sDatapathConfig Encapsulation Check connectivity with transparent encryption, VXLAN, and endpoint routes

Failure Output

FAIL: Kubernetes DNS did not become ready in time

If it is a flake and a GitHub issue doesn't already exist to track it, comment /mlh new-flake Cilium-PR-K8s-1.24-kernel-5.4 so I can create one.

@dylandreimerink dylandreimerink added release-note/bug This PR fixes an issue in a previous release of Cilium. and removed dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. labels Jan 25, 2023
@ysksuzuki
Member Author

Some tests are failing; I think I need to rebase my branch. The PRs below may be relevant:

#22754
#23254

@pchaigno
Member

Some tests are failing; I think I need to rebase my branch. The PRs below may be relevant:

Yeah, we've had quite a few CI issues at the end of last week/beginning of this week. Probably best to rebase for the CI to pass.

ConvertToK8sV1LoadBalancerIngress sets an empty ports array even if
the corresponding ports field is nil. Because of this, MetalLB updates
svc.Status.LoadBalancer.Ingress even though it has already allocated an
external IP.

MetalLB detects a diff as follows and updates the LB service unnecessarily.

v1.ServiceStatus{
   LoadBalancer: v1.LoadBalancerStatus{
      Ingress: []v1.LoadBalancerIngress{
         {
            IP:       "10.72.32.4",
            Hostname: "",
-           Ports:    []v1.PortStatus{},
+           Ports:    nil,
         },
      },
   },
   Conditions: nil,
}

Fixes: cilium#23107

Signed-off-by: Yusuke Suzuki <yusuke-suzuki@cybozu.co.jp>
Member

sayboras commented Jan 26, 2023

/test

Job 'Cilium-PR-K8s-1.24-kernel-5.4' failed:


Test Name

K8sDatapathConfig Host firewall With VXLAN and endpoint routes

Failure Output

FAIL: Found 1 k8s-app=cilium logs matching list of errors that must be investigated:

If it is a flake and a GitHub issue doesn't already exist to track it, comment /mlh new-flake Cilium-PR-K8s-1.24-kernel-5.4 so I can create one.

@ysksuzuki
Member Author

Job 'Cilium-PR-K8s-1.24-kernel-5.4' failed:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4/585/testReport/junit/Suite-k8s-1/24/K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes/

Found 1 k8s-app=cilium logs matching list of errors that must be investigated:
level=error

The endpoint regeneration was canceled while cilium-agent was restoring the endpoint for kube-system/coredns-8c79ffd8b-jbl2s, because the pod was being deleted. A replacement, coredns-8c79ffd8b-bkw2z, was scheduled on another node (k8s2). I don't see any problem with this.

2023-01-26T03:46:43.495115089Z level=error msg="endpoint regeneration failed" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint

Full Log

2023-01-26T03:46:20.259238271Z level=debug msg="Endpoint restoring" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 endpointState=restoring identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:21.576918442Z level=debug msg="Starting new controller" name=resolve-labels-kube-system/coredns-8c79ffd8b-jbl2s subsys=controller uuid=3fdafe8d-f465-49f1-a8fc-f48c106c2313
2023-01-26T03:46:21.577219070Z level=debug msg="Allocated specific IP" ip="fd02::42" owner="kube-system/coredns-8c79ffd8b-jbl2s [restored]" subsys=ipam
2023-01-26T03:46:21.577279592Z level=debug msg="Allocated specific IP" ip=10.0.0.164 owner="kube-system/coredns-8c79ffd8b-jbl2s[restored]" subsys=ipam
2023-01-26T03:46:21.577297894Z level=debug msg="Restoring endpoint" endpointID=481 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=daemon
2023-01-26T03:46:21.577358577Z level=debug msg="Restoring endpoint from previous cilium instance" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 endpointState=restoring identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:32.196467658Z level=info msg="New endpoint" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.197208991Z level=debug msg="Getting CEP during an initialization" containerID=dfe3c1c7ed controller="sync-to-k8s-ciliumendpoint (481)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpointsynchronizer
2023-01-26T03:46:32.198065942Z level=debug msg="Synchronizing endpoint labels with KVStore" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 endpointState=restoring identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:32.198457345Z level=debug msg="Dequeued endpoint from build queue" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.198507475Z level=debug msg="Regenerating endpoint" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s reason="syncing state to host" startTime="2023-01-26 03:46:32.198451006 +0000 UTC m=+12.363096837" subsys=endpoint
2023-01-26T03:46:32.198599216Z level=debug msg="Regenerating endpoint: syncing state to host" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 endpointState=regenerating identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:32.198658367Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 directory=481_next endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.198955157Z level=debug msg="Starting policy recalculation..." containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.199007674Z level=debug msg="Forced policy recalculation" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.199079067Z level=debug msg="Completed endpoint policy recalculation" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 forcedRegeneration=true identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyCalculation="&{{{{{0 0} 0 0 0 0}}} {0 0 <nil>} 26980 0}" subsys=endpoint waitingForIdentityCache="&{{{{{0 0} 0 0 0 0}}} {0 0 <nil>} 0 0}" waitingForPolicyRepository="&{{{{{0 0} 0 0 0 0}}} {0 0 <nil>} 561 0}"
2023-01-26T03:46:32.200076441Z level=debug msg="BPF header file hashed (was: \"\")" bpfHeaderfileHash=743310237c4c68e3af0270a595e13aeb830afc25e99d9713708989c59bc576dd containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.200135687Z level=debug msg="writing header file" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 file-path=481_next/ep_config.h identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:32.200715083Z level=debug msg="Preparing to compile BPF" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s regeneration-level=rewrite+load subsys=endpoint
2023-01-26T03:46:32.200976469Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=dfe3c1c7ed controller="sync-to-k8s-ciliumendpoint (481)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpointsynchronizer
2023-01-26T03:46:42.202894582Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=dfe3c1c7ed controller="sync-to-k8s-ciliumendpoint (481)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpointsynchronizer
2023-01-26T03:46:43.481101841Z level=debug msg="exiting retrying regeneration goroutine due to endpoint being deleted" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494038718Z level=info msg="Rewrote endpoint BPF program" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494123518Z level=debug msg="Reverting endpoint changes after BPF regeneration failed" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494176911Z level=debug msg="Reverting proxy redirect removals" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494219786Z level=debug msg="Finished reverting proxy redirect removals" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494490303Z level=debug msg="Reverting proxy redirect additions" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494542208Z level=debug msg="Finished reverting proxy redirect additions" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494584272Z level=debug msg="Finished reverting endpoint changes after BPF regeneration failed" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494649581Z level=info msg="generating BPF for endpoint failed, keeping stale directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" file-path=481_next_fail identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494693840Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481_next_fail endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494762730Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481_next endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.494829019Z level=debug msg="Completed endpoint regeneration with no pending regeneration requests" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 endpointState=ready identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:43.494916100Z level=warning msg="Regeneration of endpoint failed" bpfCompilation=11.106910675s bpfLoadProg=183.835504ms bpfWaitForELF=11.107087428s bpfWriteELF="889.405µs" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s mapSync="58.136µs" policyCalculation="150.944µs" prepareBuild="948.553µs" proxyConfiguration="22.165µs" proxyPolicyCalculation="653.815µs" proxyWaitForAck="4.198µs" reason="syncing state to host" subsys=endpoint total=11.296401703s waitingForCTClean=255ns waitingForLock=837ns
2023-01-26T03:46:43.494968230Z level=debug msg="Error regenerating endpoint: Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" code=Failure containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 endpointState=ready identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=200
2023-01-26T03:46:43.495115089Z level=error msg="endpoint regeneration failed" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 error="Error while configuring proxy redirects: context cancelled before waiting for proxy updates: context canceled" identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.495146239Z level=debug msg="Deleting endpoint" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 endpointState=disconnecting identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:43.495377785Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.495564164Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481_next_fail endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.495748416Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481_next endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.495752800Z level=debug msg="removing directory" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 directory=481_stale endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.495776055Z level=debug msg="Removed controller" name=resolve-labels-kube-system/coredns-8c79ffd8b-jbl2s subsys=controller uuid=3fdafe8d-f465-49f1-a8fc-f48c106c2313
2023-01-26T03:46:43.497119484Z level=debug msg="Endpoint removed" code=OK containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 endpointState=disconnected identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s policyRevision=0 subsys=endpoint type=0
2023-01-26T03:46:43.497121542Z level=info msg="Removed endpoint" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.497123730Z level=debug msg="Waiting for proxy updates to complete..." containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.497147240Z level=debug msg="Wait time for proxy updates: 6.747µs" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.497150201Z level=debug msg="Released IP" ip=10.0.0.164 owner="kube-system/coredns-8c79ffd8b-jbl2s[restored]" subsys=ipam
2023-01-26T03:46:43.497151877Z level=debug msg="Released IP" ip="fd02::42" owner="kube-system/coredns-8c79ffd8b-jbl2s [restored]" subsys=ipam
2023-01-26T03:46:43.497569529Z level=debug msg="when trying to refresh endpoint labels" containerID=dfe3c1c7ed datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 error="lock failed: endpoint is in the process of being removed" identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpoint
2023-01-26T03:46:43.497575778Z level=debug msg="Controller func execution time: 21.920343594s" name=resolve-labels-kube-system/coredns-8c79ffd8b-jbl2s subsys=controller uuid=3fdafe8d-f465-49f1-a8fc-f48c106c2313
2023-01-26T03:46:43.497584580Z level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=resolve-labels-kube-system/coredns-8c79ffd8b-jbl2s subsys=controller uuid=3fdafe8d-f465-49f1-a8fc-f48c106c2313
2023-01-26T03:46:43.497586836Z level=debug msg="Shutting down controller" name=resolve-labels-kube-system/coredns-8c79ffd8b-jbl2s subsys=controller uuid=3fdafe8d-f465-49f1-a8fc-f48c106c2313
2023-01-26T03:46:43.500456266Z level=debug msg="deleting CEP with UID" ciliumEndpointUID=a8c72bb3-5622-4a54-a5c7-d21b2a34eb97 containerID=dfe3c1c7ed controller="sync-to-k8s-ciliumendpoint (481)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=481 identity=26108 ipv4=10.0.0.164 ipv6=10.0.0.164 k8sPodName=kube-system/coredns-8c79ffd8b-jbl2s subsys=endpointsynchronizer
2023-01-26T03:46:43.512326858Z level=debug msg="CEP deleted, calling endpointDeleted" CEPName=kube-system/coredns-8c79ffd8b-jbl2s CESName=ces-m8l6cc5gc-ymlxv subsys=k8s-watcher

@ysksuzuki
Member Author

I don't think this change can affect the Host firewall With VXLAN and endpoint routes test, since ConvertToK8sV1LoadBalancerIngress is called by the BGP manager.

@dylandreimerink
Member

This has nothing to do with the changes made here. I believe this would be flake #22578.

Reviews are in, all tests are green otherwise, marking ready-to-merge.

@dylandreimerink dylandreimerink added the ready-to-merge This PR has passed all tests and received consensus from code owners to merge. label Jan 26, 2023
@aanm aanm merged commit 8298b0f into cilium:master Jan 27, 2023
Labels
kind/community-contribution This was a contribution made by a community member. ready-to-merge This PR has passed all tests and received consensus from code owners to merge. release-note/bug This PR fixes an issue in a previous release of Cilium.
Development

Successfully merging this pull request may close these issues.

Operator updates Loadbalancer services unnecessarily with MetalLB