
Reduce "stale identity observed" warnings #27894

Merged
merged 1 commit into cilium:main on Oct 24, 2023

Conversation

@leblowl (Contributor) commented Sep 2, 2023

Please ensure your pull request adheres to the following guidelines:

  • For first time contributors, read Submitting a pull request
  • All code is covered by unit and/or runtime tests where feasible.
  • All commits contain a well written commit description including a title,
    description and a Fixes: #XXX line if the commit addresses a particular
    GitHub issue.
  • If your commit description contains a Fixes: <commit-id> tag, then
    please add the commit author[s] as reviewer[s] to this issue.
  • All commits are signed off. See the section Developer’s Certificate of Origin
  • Provide a title or release-note blurb suitable for the release notes.
  • Are you a user of Cilium? Please add yourself to the Users doc
  • Thanks for contributing!

Overview

Hi, I've been looking into the "stale identity observed" issues (#14427 and #15283) and documented some of the situations where I've seen these log messages in my setup. Ultimately, I think the log messages arise from a mismatch in perspective between the datapath and Hubble. In the cases I studied, the datapath looks like it's using an appropriate identity. However, Hubble has a limited amount of information and isn't really aware of what the datapath is doing in these situations: it just receives an IP and security identity from the datapath and warns if the security identity doesn't match the IP cache security identity. But in some cases the datapath doesn't use the IP cache security identity, or it sends trace notify events before resolving a security identity via the IP cache, or it hardcodes an identity when tunneling a packet, or, when a proxy is involved, it doesn't know the original IP for a given security identity.

Currently, this warning is not very helpful. There are so many warnings that it's easy to ignore them.

One solution is to add conditions on the Hubble side so that it can tell when a mismatch is expected, such that the remaining warnings are actually meaningful. That's what I've explored in this PR, as sketched below.
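To make that concrete, here is a minimal Go sketch of the comparison that currently produces the warning, and the spot where case-specific exceptions would go. The function name, signature, and logging fields are illustrative assumptions, not the actual code in pkg/hubble/parser; the fields simply mirror the log lines quoted in the cases below.

package common

import "github.com/sirupsen/logrus"

// warnIfStale is an illustrative sketch (not the real Hubble parser API):
// it compares the security identity carried in a datapath trace event with
// the identity Hubble resolves from its userspace copy of the IP cache, and
// logs when they disagree. The per-case exceptions explored in this PR would
// be checked just before the log call.
func warnIfStale(log logrus.FieldLogger, ip string, datapathID, userspaceID uint32) {
    if datapathID == 0 || userspaceID == 0 || datapathID == userspaceID {
        return
    }
    // TODO: skip the expected mismatches described in Cases 1-5 below.
    log.WithFields(logrus.Fields{
        "ipAddr":      ip,
        "identity":    datapathID,
        "oldIdentity": userspaceID,
    }).Debug("stale identity observed")
}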

Case 1

When encapsulating a packet for sending via the overlay network, if the source seclabel = HOST_ID, then seclabel is reassigned to LOCAL_NODE_ID before sending a trace notify:

send_trace_notify(ctx, TRACE_TO_OVERLAY, seclabel, dstid, 0, *ifindex,

In this case, any packets with identity HOST_ID are going to result in warnings from Hubble.

A simple solution is to ignore stale TRACE_TO_OVERLAY events when datapath security ID = remote_node && userspace security ID = host.

Here is an example:

2023-08-30T19:59:00.281808717Z stderr F level=debug msg="&{{4 4 4 3848637028 168 128 1 remote-node health 0 5 0 4} ::} 10.244.1.40 10.244.0.61 TO_OVERLAY" subsys=hubble
2023-08-30T19:59:00.281811808Z stderr F level=debug msg="stale identity observed" identity=6 ipAddr=10.244.1.40 oldIdentity=1 subsys=hubble

In this case 10.244.1.40 is the IP address of cilium_host.

Case 2

Sometimes packets from endpoint link-local addresses are intercepted by cil_from_container. Because link-local addresses are not stored in the IP cache, Hubble assigns them ID 2 (WORLD_ID) (edit: now 9 or 10 for WORLD_IPV4/IPV6).

Here is an example:

2023-08-30T20:15:14.102227401Z stderr F level=debug msg="&{{4 5 1189 0 70 70 1 health unknown 0 5 0 0} ::} fe80::c809:60ff:fecd:ab08 ff02::2 FROM_ENDPOINT" subsys=hubble
2023-08-30T20:15:14.102232825Z stderr F level=debug msg="stale identity observed" identity=4 ipAddr="fe80::c809:60ff:fecd:ab08" oldIdentity=2 subsys=hubble

fe80::c809:60ff:fecd:ab08 is the link-local address of the cilium-health-responder namespace. I've also observed this with endpoint link-local addresses.

A simple solution is to ignore stale TRACE_FROM_ENDPOINT events when datapath source security ID = {4 | !reserved} && userspace security ID is a world ID. Alternatively, in this case, I wonder if we can just add link-local addresses to the IP cache or prevent endpoints from sending these packets.
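For the alternative of special-casing link-local traffic, a check along these lines (standard library only; the function name is mine) would be enough to recognize the situation before any identity comparison, since such addresses are never in the IP cache:

package common

import "net/netip"

// isLinkLocal reports whether an event address is link-local (unicast or
// multicast, e.g. fe80::... or ff02::2 from the example above). Because such
// addresses are never stored in the IP cache, a staleness comparison against
// them is meaningless and could simply be skipped.
func isLinkLocal(addr string) bool {
    ip, err := netip.ParseAddr(addr)
    if err != nil {
        return false
    }
    return ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast()
}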

Case 3

When a pod sends a packet to the Kubernetes API, its IP is masqueraded; when the response is received and the masquerade is reversed, cil_from_host determines that the source ID is 2 (edit: now 2, 9 or 10) because there is no packet mark.

Here is an example:

2023-08-31T23:21:50.489380414Z stderr F level=debug msg="&{{4 5 742 2245691099 201 128 1 60454 unknown 0 5 0 0} ::} 10.244.1.242 10.96.0.1 FROM_ENDPOINT" subsys=hubble
2023-08-31T23:21:50.489451772Z stderr F level=debug msg="&{{4 3 742 2245691099 201 128 1 60454 kube-apiserver 0 1 0 0} ::} 10.244.1.242 172.20.0.3 TO_STACK" subsys=hubble
2023-08-31T23:21:50.490396351Z stderr F level=debug msg="&{{4 10 4 1883766498 155 128 1 unknown unknown 0 5 0 6} ::} 172.20.0.3 172.20.0.2 FROM_NETWORK" subsys=hubble
2023-08-31T23:21:50.490422485Z stderr F level=debug msg="&{{4 7 4 1883766498 155 128 1 world unknown 0 5 0 6} ::} 172.20.0.3 10.244.1.242 FROM_HOST" subsys=hubble
2023-08-31T23:21:50.490448558Z stderr F level=debug msg="stale identity observed" identity=2 ipAddr=172.20.0.3 oldIdentity=7 subsys=hubble
2023-08-31T23:21:50.490659919Z stderr F level=debug msg="&{{4 0 742 1883766498 155 128 1 kube-apiserver 60454 742 2 0 9} ac14:3::} 172.20.0.3 10.244.1.242 TO_ENDPOINT" subsys=hubble

In this case, we can see 10.244.1.242 (a CoreDNS instance) is reaching out to 10.96.0.1 (the ClusterIP for the Kubernetes API). 10.96.0.1 gets resolved to 172.20.0.3, and 10.244.1.242 gets masqueraded to 172.20.0.2 when the packet is sent out via eth0. When the response is received, 172.20.0.2 is reversed to 10.244.1.242 and the packet is routed to cilium_host. It's at this point that the datapath considers the ID to be 2, because there isn't any mark on the packet.

A simple solution is to ignore stale TRACE_FROM_HOST events when datapath source security ID = 2 (edit: now 2, 9 or 10) and userspace security ID = 7. In general, if packets are picked up by cil_from_host and they do not have a packet mark, the TRACE_FROM_HOST event will contain ID = 2 (edit: now 2, 9 or 10), as that's the default in inherit_identity_from_host, and the trace happens before any IP cache resolution. Should inherit_identity_from_host return unknown by default?

Case 4

When proxied packets (via the Cilium DNS proxy) are sent from the host, their source IP is that of the host, yet their security identity is retained from the original source pod.

Here is an example:

2023-09-01T01:51:37.455236212Z stderr F level=debug msg="&{{4 5 1825 2275186770 110 110 1 47432 unknown 0 5 0 0} ::} 10.244.1.11 10.96.0.10 FROM_ENDPOINT" subsys=hubble
2023-09-01T01:51:37.455330139Z stderr F level=debug msg="&{{4 1 1825 2275186770 110 110 1 47432 unknown 36245 0 0 0} ::} 10.244.1.11 10.244.0.127 TO_PROXY" subsys=hubble
2023-09-01T01:53:04.360088799Z stderr F level=debug msg="&{{4 7 4 3946258847 110 110 1 47432 unknown 0 5 0 0} ::} 10.244.1.40 10.244.0.127 FROM_HOST" subsys=hubble
2023-09-01T01:53:04.360106623Z stderr F level=debug msg="stale identity observed" identity=47432 ipAddr=10.244.1.40 oldIdentity=1 subsys=hubble
2023-09-01T01:53:04.360129059Z stderr F level=debug msg="&{{4 4 4 3946258847 110 110 1 47432 60454 0 5 0 4} ::} 10.244.1.40 10.244.0.127 TO_OVERLAY" subsys=hubble
2023-09-01T01:53:04.360136765Z stderr F level=debug msg="stale identity observed" identity=47432 ipAddr=10.244.1.40 oldIdentity=1 subsys=hubble

In this case, 10.244.1.11 sends a DNS request to 10.96.0.10, which gets resolved to 10.244.0.127. The Cilium DNS proxy receives the packet and sends it back out via cilium_host with address 10.244.1.40. Since the packet retains its original source security ID, we receive a warning from both the TRACE_FROM_HOST and TRACE_TO_OVERLAY events.

A simple solution is to ignore stale TRACE_FROM_HOST/TRACE_TO_OVERLAY events when datapath source security ID is a local endpoint ID and userspace security ID = 1.

Case 5

When proxied packets (via the Cilium DNS proxy) are received by the destination host, their source IP is that of the proxy, yet their security identity is retained from the original source pod. This is a similar case to #4, but on the receiving side.

In one case, the packet is sent to a DNS instance on the same host:

2023-09-01T02:32:11.417310878Z stderr F level=debug msg="&{{4 0 742 2061866022 110 110 1 47432 60454 742 0 0 9} af4:128::} 10.244.1.40 10.244.1.242 TO_ENDPOINT" subsys=hubble
2023-09-01T02:32:11.417338717Z stderr F level=debug msg="stale identity observed" identity=47432 ipAddr=10.244.1.40 oldIdentity=1 subsys=hubble

And alternatively, to a separate host:

2023-09-01T02:32:12.267932053Z stderr F level=debug msg="&{{4 0 312 3008716175 110 110 1 47432 60454 312 0 0 11} af4:128::} 10.244.1.40 10.244.0.127 TO_ENDPOINT" subsys=hubble
2023-09-01T02:32:12.267936768Z stderr F level=debug msg="stale identity observed" identity=47432 ipAddr=10.244.1.40 oldIdentity=6 subsys=hubble

This results in a stale identity warning from TO_ENDPOINT when delivering to DNS servers both on the same host and on remote hosts.

A simple solution is to ignore stale TRACE_TO_ENDPOINT events where the datapath source security ID isn't reserved (must be an endpoint) and the userspace security ID is either 1 (host) or 6 (remote-node).
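Taken together, the case-specific exceptions from Cases 1-5 could be expressed roughly as the predicate below. This is a hedged sketch, not the literal diff in pkg/hubble/parser/common/endpoint.go: the observation-point type and identity constants are illustrative stand-ins whose numeric values mirror the reserved identities quoted in the logs (1 = host, 2/9/10 = world, 4 = health, 6 = remote-node, 7 = kube-apiserver, cluster-allocated identities starting at 256), and the "local endpoint" check from Case 4 is approximated by "not reserved".

package common

// Illustrative stand-ins for the trace observation points and reserved
// identities seen in the examples above; not Cilium's actual constants.
type obsPoint int

const (
    toOverlay obsPoint = iota
    fromEndpoint
    fromHost
    toEndpoint
)

const (
    idHost          uint32 = 1
    idWorld         uint32 = 2
    idHealth        uint32 = 4
    idRemoteNode    uint32 = 6
    idKubeAPIServer uint32 = 7
    idWorldIPv4     uint32 = 9
    idWorldIPv6     uint32 = 10
    minClusterID    uint32 = 256 // identities below this are reserved
)

func isWorld(id uint32) bool {
    return id == idWorld || id == idWorldIPv4 || id == idWorldIPv6
}

func isReserved(id uint32) bool { return id < minClusterID }

// suppressStaleWarning returns true when a datapath/userspace identity
// mismatch is one of the expected situations from Cases 1-5, so the
// "stale identity observed" warning should be skipped.
func suppressStaleWarning(p obsPoint, datapathID, userspaceID uint32) bool {
    switch p {
    case toOverlay:
        // Case 1: host traffic is relabelled remote-node before encapsulation.
        // Case 4: DNS-proxied packets leave via cilium_host with the pod's identity
        // (the real check would consult the list of local endpoints).
        return (datapathID == idRemoteNode && userspaceID == idHost) ||
            (!isReserved(datapathID) && userspaceID == idHost)
    case fromEndpoint:
        // Case 2: link-local sources are absent from the IP cache, so
        // userspace falls back to a world identity.
        return (datapathID == idHealth || !isReserved(datapathID)) && isWorld(userspaceID)
    case fromHost:
        // Case 3: un-marked reply packets default to world in
        // inherit_identity_from_host, while the IP cache says kube-apiserver.
        // Case 4: proxied packets keep the pod identity but carry the host IP.
        return (isWorld(datapathID) && userspaceID == idKubeAPIServer) ||
            (!isReserved(datapathID) && userspaceID == idHost)
    case toEndpoint:
        // Case 5: proxied packets are delivered with the pod identity, while the
        // proxy's IP resolves to host (same node) or remote-node (other node).
        return !isReserved(datapathID) && (userspaceID == idHost || userspaceID == idRemoteNode)
    }
    return false
}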

Case 6

When reinstalling Cilium, while the agent is just starting, packets for the health responder arrive via the tunnel; for some reason, instead of going directly to the health responder endpoint, they are passed to the stack.

2023-09-01T18:07:02.393926998Z stderr F level=debug msg="{{4 10 4 2383703504 124 124 1 unknown unknown 0 5 0 6} ::} 172.20.0.3 172.20.0.2 57885 8472 FROM_NETWORK" subsys=hubble
2023-09-01T18:07:02.393939748Z stderr F level=debug msg="{{4 9 0 2383703504 74 74 1 unknown unknown 0 5 0 4} ::} 10.244.0.186 10.244.1.104 35416 4240 FROM_OVERLAY" subsys=hubble
2023-09-01T18:07:02.393963115Z stderr F level=debug msg="{{4 3 4 2383703504 74 74 1 unknown unknown 0 5 0 3} ::} 10.244.0.186 10.244.1.104 35416 4240 TO_STACK" subsys=hubble
2023-09-01T18:07:02.393980895Z stderr F level=debug msg="{{4 7 4 2383703504 74 74 1 world unknown 0 5 0 3} ::} 10.244.0.186 10.244.1.104 35416 4240 FROM_HOST" subsys=hubble
2023-09-01T18:07:02.393993862Z stderr F level=debug msg="stale identity observed" identity=2 ipAddr=10.244.0.186 oldIdentity=6 subsys=hubble

In this case 10.244.1.104 is the health endpoint.

Perhaps because the endpoint map is not updated quickly enough, these packets get routed to the host and appear to enter a routing loop.

2023-09-01T18:07:02.394106942Z stderr F level=debug msg="{{4 3 4 2383703504 74 74 1 unknown unknown 0 5 0 3} ::} 10.244.0.186 10.244.1.104 35416 4240 TO_STACK" subsys=hubble
2023-09-01T18:07:02.394118785Z stderr F level=debug msg="{{4 7 4 2383703504 74 74 1 world unknown 0 5 0 2} ::} 10.244.0.186 10.244.1.104 35416 4240 FROM_HOST" subsys=hubble
2023-09-01T18:07:02.394130921Z stderr F level=debug msg="stale identity observed" identity=2 ipAddr=10.244.0.186 oldIdentity=6 subsys=hubble
2023-09-01T18:07:02.394144146Z stderr F level=debug msg="{{4 3 4 2383703504 74 74 1 unknown unknown 0 5 0 3} ::} 10.244.0.186 10.244.1.104 35416 4240 TO_STACK" subsys=hubble
2023-09-01T18:07:02.394160199Z stderr F level=debug msg="{{4 7 4 2383703504 74 74 1 world unknown 0 5 0 2} ::} 10.244.0.186 10.244.1.104 35416 4240 FROM_HOST" subsys=hubble
2023-09-01T18:07:02.394167603Z stderr F level=debug msg="stale identity observed" identity=2 ipAddr=10.244.0.186 oldIdentity=6 subsys=hubble
2023-09-01T18:07:02.394179001Z stderr F level=debug msg="{{4 3 4 2383703504 74 74 1 unknown unknown 0 5 0 3} ::} 10.244.0.186 10.244.1.104 35416 4240 TO_STACK" subsys=hubble
2023-09-01T18:07:02.394188292Z stderr F level=debug msg="{{4 7 4 2383703504 74 74 1 world unknown 0 5 0 2} ::} 10.244.0.186 10.244.1.104 35416 4240 FROM_HOST" subsys=hubble
2023-09-01T18:07:02.394190751Z stderr F level=debug msg="stale identity observed" identity=2 ipAddr=10.244.0.186 oldIdentity=6 subsys=hubble

The reason we get a warning is that at the beginning of cil_from_host, the datapath only checks the packet mark for a security ID and then immediately sends a trace notify. These packets are coming from another host, so they don't have a packet mark. After Cilium gets up and going, these warnings go away. In this instance I think we have an actual issue, due to a race condition at startup, though I'm not entirely sure how it could be solved or whether it needs to be.

Case 7

Link-local packets appear to be dropped, and drop-notify events also cause stale identity warnings (because link-local addresses are missing from the ipcache). I haven't addressed this case yet.

Alternatives

This PR does limit the warnings, but there is also a possibility that it suppresses legitimate warnings. I'm not sure I know enough about the various datapath cases to tell for sure. Ignoring certain messages based on assumptions about how the datapath works seems fragile: just during the time I was preparing this PR, we added new identities for WORLD_IPV4 and WORLD_IPV6, which broke the logic. Perhaps a better solution would be to use different events besides trace/drop notify for these warnings. That way we could emit events from specific points in the datapath and warn on all of these events if they contain stale/incorrect information. Another solution might be to change some of the trace notify calls in the datapath so they emit information that's more in sync with what userspace expects, but that seems like it would obscure the actual workings of the datapath.

@leblowl leblowl requested a review from a team as a code owner September 2, 2023 00:59
@leblowl leblowl requested a review from chancez September 2, 2023 00:59
@maintainer-s-little-helper maintainer-s-little-helper bot added the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Sep 2, 2023
@github-actions github-actions bot added the kind/community-contribution This was a contribution made by a community member. label Sep 2, 2023
@chancez chancez requested a review from gandro September 5, 2023 16:30
@chancez (Contributor) commented Sep 5, 2023

This is great, thanks for the PR!

I'm tagging some folks who are more familiar with datapath along with Hubble to help review since this touches on both aspects.

@gandro (Member) left a comment


This is an amazing piece of work, thanks so much for the investigation and detailed description.

Overall, I think the cases you found all sound reasonable. But I do agree that there is some fragility introduced here: the code is very likely to desync with the datapath over time.

Maybe another option to explore would be whether we can find some simple heuristic that works "well enough" to cover most cases. Looking at all the different cases you mentioned, all of them either involve a reserved identity or a world identity.

In the end, assuming there will never be perfect parity due to the information asymmetry between datapath and userspace, I think we need to decide if we want to live with false positives or false negatives.

At the moment, we have lots of false positives (i.e. "stale" log lines for entries that are not actually stale), but no false negatives, because we just log everything.

The current solution in this PR removes most of the false positives, while also probably introducing some rare false negatives. It's clearly an improvement, but it has the downside that it's a complex heuristic that can become obsolete if the datapath changes.

An extreme on the other side could be to only log when both identities are neither reserved nor world, since it seems that this would cover most of the cases here too. In such an implementation, we would have as few false positives as your current solution, at the cost of some false negatives (i.e. we might actually miss some stale IPCache entries if we do that).

I don't really know where we should land on the false positive vs. false negative tradeoff. I'm tempted to accept a decent amount of false negatives if the false positive rate is low. In the end, any logging with lots of false positives will not provide much value, because it just becomes too noisy.

I'll also bring this up with some other @cilium/sig-hubble members.
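For comparison, the coarser "only when neither is reserved nor world" heuristic described above would collapse to a single check. A minimal sketch with illustrative names; 256 assumes the boundary where Cilium's cluster-allocated identities start:

package common

// warnOnlyOnClusterIdentities: only report a stale identity when both the
// datapath and userspace identities are cluster-allocated (>= 256), i.e.
// neither reserved nor world. Fewer false positives, at the cost of missing
// genuinely stale entries that involve reserved or world identities.
func warnOnlyOnClusterIdentities(datapathID, userspaceID uint32) bool {
    const minClusterID = 256
    return datapathID >= minClusterID && userspaceID >= minClusterID &&
        datapathID != userspaceID
}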

Two review threads on pkg/hubble/parser/common/endpoint.go (outdated, resolved).
@leblowl (Contributor, Author) commented Sep 10, 2023

Thanks for the review! I agree, I think a simpler and more general heuristic would be easier to maintain and hopefully work well enough. It got me thinking: what are the cases that we are trying to notice with this log warning? I think there is perhaps one case where the BPF map is not updated quickly enough and userspace has a different view of the ip/identity mapping than the datapath has. Is that the only case? If it is, another option might be putting a special trace/debug event after each call to lookup_ip{4,6}_remote_endpoint and only warning about stale identities from those events. I'm not sure if this warning is important enough to justify additional trace/debug events.

@gandro (Member) commented Sep 11, 2023

It got me thinking: what are the cases that we are trying to notice with this log warning?

The main motivation was to collect some data to understand why datapath and Hubble would diverge. Unfortunately, no one ever followed up on that until you did, so thanks a lot for that!

I think there is perhaps one case where the BPF map is not updated quickly enough and userspace has a different view of the ip/identity mapping than the datapath has

Yeah, I think besides the logical discrepancies that you documented, this map sync lag is the only major reason why we would observe "stale identities".

If it is, another option might be putting a special trace/debug event after each call to lookup_ip{4,6}_remote_endpoint and only warning about stale identities from those events.

I'm not sure what you mean by this? Could you elaborate a bit?

On a more high-level note: we've discussed this PR with some members of the Hubble team, and we think the approach in this PR strikes a good balance between false positives and negatives. It provides a lot of value to have those cases documented in the code, so having them as separate cases is probably more meaningful than a single heuristic. So we're fine to merge this as is, assuming the issue with the endpoint manager is fixed.

@lmb lmb added sig/hubble Impacts hubble server or relay release-note/minor This PR changes functionality that users may find relevant to operating Cilium. labels Sep 25, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot removed the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Sep 25, 2023
@lmb (Contributor) commented Sep 28, 2023

I'm converting this to Draft since there is outstanding feedback to be addressed. Please click on "Ready for review" once you've made the necessary changes.

@lmb lmb marked this pull request as draft September 28, 2023 09:59
@leblowl (Contributor, Author) commented Oct 9, 2023

Thanks!

@leblowl (Contributor, Author) commented Oct 9, 2023

@gandro

Sorry for the delay.

I'm not sure what you mean by this? Could you elaborate a bit?

The idea is similar to this section of code:

cilium/bpf/bpf_host.c, lines 486 to 504 at 9aa5068:

info = lookup_ip4_remote_endpoint(ip4->saddr, 0);
if (info != NULL) {
    *sec_identity = info->sec_identity;
    if (*sec_identity) {
        /* When SNAT is enabled on traffic ingressing
         * into Cilium, all traffic from the world will
         * have a source IP of the host. It will only
         * actually be from the host if "srcid_from_proxy"
         * (passed into this function) reports the src as
         * the host. So we can ignore the ipcache if it
         * reports the source as HOST_ID.
         */
        if (*sec_identity != HOST_ID)
            srcid_from_ipcache = *sec_identity;
    }
}
cilium_dbg(ctx, info ? DBG_IP_ID_MAP_SUCCEED4 : DBG_IP_ID_MAP_FAILED4,
           ip4->saddr, srcid_from_ipcache);

Whenever we call lookup_ip4_remote_endpoint or lookup_ip6_remote_endpoint in the BPF code, we can emit debug messages like DBG_IP_ID_MAP_SUCCEED4 with the IP cache security ID. These debug messages could come directly after lookup_ip{4,6}_remote_endpoint calls and so they should always tell us if the IP cache BPF map is stale and thus wouldn't require special conditions in the Hubble code. However, these debug messages wouldn't track other parts of the datapath where we get the security ID from the packet mark for example.

I can fix up the local endpoint check and address the test failures.

@gandro (Member) commented Oct 10, 2023

I see, thanks for the clarification! While I think we could add a trace message there, I actually think the existing debug message already serves that purpose. What we could consider is making it a "debug capture" message, so we also have packet context if needed.

I think it's becoming clearer to me now that the "stale identity observed" log messages could be understood to serve two use cases:

  1. Serve as an indicator that there was an actual mismatch between the datapath's and userspace's view of the IPCache. I think this use case is already pretty well covered by the DBG_IP_ID_MAP_SUCCEED4 debug message and the cilium ip list (showing userspace state) and cilium bpf ipcache list (showing datapath state) commands.
  2. Serve as an indicator that the way Hubble determines the security identity of an IP does not align with what the datapath does. This is particularly relevant when annotating trace events where the datapath does not perform an IPCache lookup, such as the FROM_LXC trace point. In that case, the metadata annotated by Hubble might be misleading.

As outlined in the amazing PR description, I think you've identified quite a few gaps where Hubble annotates potentially misleading data. Therefore one could argue that the log message has now mostly outlived its purpose. Instead, it might be more useful to understand how we can fix the identified mismatches such that Hubble correctly annotates flows going forward.

Having said that, I'm more than happy to accept the overall PR (assuming the endpoint lookup and tests are fixed). Even though I mentioned that I believe the log message has kind of become obsolete now that we know where the discrepancies are coming from, there might still be more cases out there in the wild that we haven't discovered yet.

For the Hubble project in general, I think the plan forward here should be:

  1. Fix the outstanding technical issues (i.e. endpoint check) with the PR, but otherwise merge it more or less as is
  2. Document the discovered mismatches as "known bugs", e.g. in a separate issue
  3. Investigate to what degree these known mismatches are fixable. If they are, potentially fix them. If they are not easily fixable, we have at least documented them

In Hubble, ignore certain cases where the datapath security ID does
not match the userspace security ID.

Signed-off-by: Lucas Leblow <lucasleblow@mailbox.org>
@leblowl leblowl force-pushed the pr/fix-stale-identity-warnings branch from d401cc3 to c7b653a Compare October 14, 2023 19:21
@leblowl leblowl marked this pull request as ready for review October 14, 2023 20:56
@leblowl (Contributor, Author) commented Oct 14, 2023

Thanks! That makes sense. I think I've fixed up the PR.

@gandro (Member) left a comment


Awesome work, this looks good now! Thanks a lot

@gandro (Member) commented Oct 16, 2023

/test

@joestringer joestringer removed the request for review from chancez October 20, 2023 22:07
@maintainer-s-little-helper maintainer-s-little-helper bot added the ready-to-merge This PR has passed all tests and received consensus from code owners to merge. label Oct 20, 2023
@joestringer (Member) commented

(Awaiting two unresolved conversations to be resolved before merging)

@gandro (Member) commented Oct 23, 2023

Mine has been resolved. cc @chancez if you want to take a look

@chancez (Contributor) commented Oct 23, 2023

To be honest, most of it is incomprehensible to me, so I'll just remove myself as a reviewer 😅

@chancez (Contributor) left a comment


I guess I cannot remove myself, so ✅

@joestringer (Member) commented Oct 23, 2023

@chancez your review was requested specifically on behalf of hubble, so I think the minimum is just the mechanical aspects of changes to those specific files. It looks like @gandro already engaged from a broader perspective so we're probably good on the general feedback side. Thanks for the review!

EDIT: Ah, I see what happened now: Sebastian reviewed on behalf of the same codeowner, and at that point you can't drop your review request. Never mind my post above :)

@dylandreimerink dylandreimerink merged commit 28975e3 into cilium:main Oct 24, 2023
61 of 62 checks passed
@gandro (Member) commented Dec 13, 2023

Let's backport this in order to improve the situation in CI (#15283).

@gandro gandro added needs-backport/1.12 needs-backport/1.14 This PR / issue needs backporting to the v1.14 branch and removed needs-backport/1.12 labels Dec 13, 2023
@giorio94 giorio94 mentioned this pull request Dec 13, 2023
10 tasks
@giorio94 giorio94 added backport-pending/1.14 The backport for Cilium 1.14.x for this PR is in progress. and removed needs-backport/1.14 This PR / issue needs backporting to the v1.14 branch labels Dec 13, 2023
@github-actions github-actions bot added backport-done/1.14 The backport for Cilium 1.14.x for this PR is done. and removed backport-pending/1.14 The backport for Cilium 1.14.x for this PR is in progress. labels Dec 14, 2023
@YutaroHayakawa YutaroHayakawa mentioned this pull request Dec 20, 2023
5 tasks
@YutaroHayakawa YutaroHayakawa added backport-pending/1.13 The backport for Cilium 1.13.x for this PR is in progress. and removed needs-backport/1.13 labels Dec 20, 2023
@github-actions github-actions bot added backport-done/1.13 The backport for Cilium 1.13.x for this PR is done. and removed backport-pending/1.13 The backport for Cilium 1.13.x for this PR is in progress. labels Jan 11, 2024