Respond with ICMP reply for traffic to services without backends #28157
Conversation
The --service-no-backend-response=reject feature requires the use of `bpf_skb_adjust_room` with the `BPF_ADJ_ROOM_MAC` mode to make room for the outer IP + ICMP header. However, this mode is only available from v5.2 onward, so this commit adds a probe to check for its availability and falls back to --service-no-backend-response=drop on kernels that do not support it. Signed-off-by: Dylan Reimerink <dylan.reimerink@isovalent.com>
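For illustration, a minimal sketch of the kind of call this probe guards, written against the raw helper rather than Cilium's ctx wrappers; the helper function name and the error handling here are placeholders, not the PR's code:

```c
#include <linux/bpf.h>
#include <linux/ip.h>
#include <linux/icmp.h>
#include <bpf/bpf_helpers.h>

/* Grow room below the L2 header so the outer IPv4 + ICMP header of the
 * reply can be written in front of the original packet. BPF_ADJ_ROOM_MAC
 * is only available on v5.2+ kernels, hence the runtime probe and the
 * fallback to --service-no-backend-response=drop.
 */
static __always_inline int make_icmp_room(struct __sk_buff *skb)
{
	return bpf_skb_adjust_room(skb,
				   sizeof(struct iphdr) + sizeof(struct icmphdr),
				   BPF_ADJ_ROOM_MAC, 0);
}
```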
The current ingress conformance test adds an ingress and then does a curl to confirm it works. In the past, Cilium would have dropped the request packet until the datapath was set up. This silent dropping causes curl to retry for 15 seconds before giving up. With the ICMP reply, however, curl gets an immediate response and gives up immediately, which causes the test to fail. So this commit adds manual retry logic and delays to the test script. Signed-off-by: Dylan Reimerink <dylan.reimerink@isovalent.com>
This commit adds token bucket rate limiting to the datapath, implemented purely in BPF. A new map is added to keep track of the buckets. A bucket can be keyed on anything, though since ICMPv6 is currently the only user it is keyed on ifindex. The value holds the current amount of tokens in the bucket and the last time we added tokens to it. For every event we check whether there is at least 1 token left in the bucket; if so, we decrement the token count and continue, if not we execute the rate limiting action. Typically a timer would add new tokens to the bucket; in our case we keep track of the last time we added tokens and calculate how many tokens we should have added since then before we do the token check. This implements a burstable rate limiting mechanism. The burst size and token refill are configurable. For ICMPv6 it is currently set to 100 replies per second with a burst size of 1000. Signed-off-by: Dylan Reimerink <dylan.reimerink@isovalent.com>
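As a plain-C illustration of the mechanism described above (not the PR's BPF code; the types, names, and the exact split of the 100/s + 1000-burst parameters into interval and tokens-per-topup are stand-ins):

```c
#include <stdbool.h>
#include <stdint.h>

struct bucket {
	int64_t  tokens;      /* tokens currently available               */
	uint64_t last_topup;  /* last time tokens were added, nanoseconds */
};

/* Lazily refill the bucket based on elapsed time, then try to take one
 * token. Returns false when the caller should apply the rate-limit action.
 */
static bool bucket_take(struct bucket *b, uint64_t now_ns,
			uint64_t interval_ns, int64_t tokens_per_topup,
			int64_t burst)
{
	uint64_t since = now_ns - b->last_topup;

	if (since > interval_ns) {
		/* Add tokens for every full interval we missed, capped at
		 * the burst size.
		 */
		b->tokens += (int64_t)(since / interval_ns) * tokens_per_topup;
		if (b->tokens > burst)
			b->tokens = burst;
		b->last_topup = now_ns;
	}
	if (b->tokens < 1)
		return false;
	b->tokens--;
	return true;
}
```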
lovely
A funny thing is that it is technically possible that, in the same Service, one port has backends and another does not, but this is an artifact of the capability of using named ports for Services. As Tim says in kubernetes/kubernetes#24875 (comment), that is a pretty esoteric configuration, so I think that this is correct.
Ah, the description of the PR is out of date; we changed this after review to match exactly what kube-proxy does, to avoid potential implementation differences of the ICMP code for clients. I will correct the description so it doesn't cause future confusion.
I'm too late for the party, but I've got some comments on the TBF implementation 😅
since_last_topup = ktime_get_ns() - value->last_topup;
if (since_last_topup > settings->topup_interval_ns) {
	/* Add tokens of every missed interval */
	value->tokens += (since_last_topup / settings->topup_interval_ns) *
Rounding here could skip intervals. For example, if this function is called at 0 s, 1.5 s and 3 s, it will add 1000 tokens at 0 s, another 1000 tokens at 1.5 s, and yet another 1000 tokens at 3 s, 3000 tokens in total. If, however, this function was called at 0 s, 1 s, 2 s and 3 s, it would add 1000 tokens at each call, 4000 tokens in total over the same time period.
Right. So a better way to keep track would be:
long cur_time = ktime_get_ns();
[...]
long intervals = since_last_topup / settings->topup_interval_ns;
long remainder = since_last_topup % settings->topup_interval_ns;
value->last_topup = cur_time - remainder;
So at 1.5s, we set last_topup to 1s instead of 1.5s. So at the 3s mark we would add 2000 instead of 1000.
Is that correct?
Looks correct to me.
if (!value) {
	new_value.last_topup = ktime_get_ns();
	new_value.tokens = settings->tokens_per_topup - 1;
	ret = map_update_elem(&RATELIMIT_MAP, key, &new_value, BPF_ANY);
This lookup-and-update is racy if called from two CPUs. Do we care?
I considered this. It would mean that the rate limit isn't 100% accurate, letting more traffic through than the limit. To fix that we would need to use atomics, which are slow; I thought performance is more important in this situation. Perhaps we should add this to the comments in case others wonder.
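One cheap way to narrow at least the initialization race is sketched below; BPF_NOEXIST is a standard map-update flag, but whether this fits the surrounding code path is an assumption, and the per-packet token decrement would still need atomics to be fully exact:

```c
/* Only one CPU wins the insert; a loser re-reads the entry created by the
 * winner instead of overwriting it (requires <linux/errno.h> for EEXIST).
 */
if (!value) {
	new_value.last_topup = ktime_get_ns();
	new_value.tokens = settings->tokens_per_topup - 1;
	ret = map_update_elem(&RATELIMIT_MAP, key, &new_value, BPF_NOEXIST);
	if (ret == -EEXIST)
		value = map_lookup_elem(&RATELIMIT_MAP, key);
}
```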
	/* Add tokens of every missed interval */
	value->tokens += (since_last_topup / settings->topup_interval_ns) *
			 settings->tokens_per_topup;
	value->last_topup = ktime_get_ns();
We should reuse the ktime_get_ns() value fetched above, otherwise it's another source of inexactness, although only a tiny one.
I will make a follow-up PR for that, thanks for the feedback!
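For reference, a rough sketch of what such a follow-up could look like, combining both points above and reusing the names from the quoted code (not the merged change):

```c
__u64 now = ktime_get_ns();
__u64 since_last_topup = now - value->last_topup;

if (since_last_topup > settings->topup_interval_ns) {
	/* Fetch the clock once, add tokens for every full interval, and only
	 * advance last_topup by whole intervals so partial intervals are not
	 * lost to rounding.
	 */
	value->tokens += (since_last_topup / settings->topup_interval_ns) *
			 settings->tokens_per_topup;
	value->last_topup = now -
		(since_last_topup % settings->topup_interval_ns);
}
```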
Two comments below - better late than never :).
			   ctx_get_ifindex(ctx));
	return ctx_redirect(ctx, ctx_get_ifindex(ctx), 0);
}
I believe this needs an edt_set_aggregate(ctx, 0), to prevent false-positives in to-netdev's Bandwidth-Manager code.
That is a good point. I have not been able to check the interaction with the bandwidth manager, but I suspect you are right. Will have to do a follow-up.
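A sketch of the suggested tweak, assuming edt_set_aggregate() from Cilium's EDT helpers is usable at this point in the hairpin path (not part of this PR):

```c
	/* Clear the EDT aggregate before the hairpin redirect so the
	 * Bandwidth Manager in to-netdev does not treat the ICMP reply
	 * as rate-limited pod egress traffic.
	 */
	edt_set_aggregate(ctx, 0);

	cilium_dbg_capture(ctx, DBG_CAPTURE_DELIVERY, ctx_get_ifindex(ctx));
	return ctx_redirect(ctx, ctx_get_ifindex(ctx), 0);
```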
	cilium_dbg_capture(ctx, DBG_CAPTURE_DELIVERY,
			   ctx_get_ifindex(ctx));
	return ctx_redirect(ctx, ctx_get_ifindex(ctx), 0);
}
When this ICMP packet hits to-netdev, did you check how it interacts with the SNAT engine? I would expect that it gets dropped whenever the addressed service IP equals IPV4_MASQUERADE.
did you check how it interacts with the SNAT engine?
No, I did not.
I would expect that it gets dropped, whenever the addressed service IP equals IPV4_MASQUERADE.
Right, which would be in the case of a node port service?
which would be in the case of a node port service?
Correct. That's a scenario we want to support, right?
Yes, I think so. How does host traffic normally deal with this? Additionally, I see we have a marker to skip SNAT, ctx_snat_done_set(ctx); would calling it before doing the redirect help? It has been a while since I looked in depth at the SNAT path.
Yes, I think so. How does host traffic normally deal with this?
It gets dropped :). We only support a limited set of ICMP types. I hope we can extend this as needed.
Additionally, I see we have a marker to skip SNAT, ctx_snat_done_set(ctx); would calling it before doing the redirect help?
Yep, I had the same thought. I believe that would fit as a work-around (and would even allow you to skip the HostFW in to-netdev... that's another aspect we didn't consider yet in this PR). Long-term it would be best to teach the SNAT engine about this ICMP type.
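A sketch of that work-around; whether ctx_snat_done_set() alone is sufficient here is exactly the open question above:

```c
	/* Mark SNAT as already handled so to-netdev's NAT engine (and, per
	 * the discussion, possibly the host firewall) leaves the hairpinned
	 * ICMP reply alone.
	 */
	ctx_snat_done_set(ctx);

	cilium_dbg_capture(ctx, DBG_CAPTURE_DELIVERY, ctx_get_ifindex(ctx));
	return ctx_redirect(ctx, ctx_get_ifindex(ctx), 0);
```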
#ifdef SERVICE_NO_BACKEND_RESPONSE
	if (ret == DROP_NO_SERVICE) {
		ep_tail_call(ctx, CILIUM_CALL_IPV4_NO_SERVICE);
		return DROP_MISSED_TAIL_CALL;
	}
#endif
With ep-routes enabled, the ICMP packet should now pass through to-container on the way back into the pod. Would we thus require an ingress network policy change to allow this traffic? This feels very similar to the topic of avoiding policy for service loopback replies...
A CT entry should be created for the outgoing connection to the service, so when doing policy checking the ICMP reply should be flagged as return traffic and thus not be subject to any ingress policy.
Unfortunately lb4_local() currently doesn't create the RELATED entry (note the NULL for map_related). So I don't think there's any CT entry in place that would allow such ICMP traffic to pass through network policy enforcement.
Right, but isn't the CT entry created here before we get to the LB stage?
I see that depends on ENABLE_PER_PACKET_LB being enabled or not.
Right, but isn't the CT entry created here before we get to the LB stage?
Nope, that part is only reached after selecting the backend (this CT entry tracks the client -> backend connection).
@@ -597,7 +599,7 @@ enum {
#define DROP_INVALID_EXTHDR -156
#define DROP_FRAG_NOSUPPORT -157
#define DROP_NO_SERVICE -158
#define DROP_UNUSED8 -159 /* unused */
@dylandreimerink This drop reason wasn't added in flow.proto and drop.go. Under normal circumstances, we shouldn't reuse any of these, since renaming a proto field/type causes a backwards-incompatible change. We're lucky that in this case, drop reason 159 is actually missing from the proto as well as from drop.go. 😅
SERVICE_BACKEND_NOT_FOUND = 158;
NO_TUNNEL_OR_ENCAPSULATION_ENDPOINT = 160;
In any case, I'm marking all unused ones as deprecated in #29482.
cc @rolinh
So far we have been dropping packets meant for services which do not have endpoints/backends. This causes clients to needlessly wait for replies and to retry sending traffic. This PR adds the ability to send back an ICMP or ICMPv6 reply with Destination unreachable (type 3) + Port unreachable (code 3) whenever this happens.
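Concretely, the reply carries the standard ICMP codes (shown here with the Linux uapi constants; this snippet is purely illustrative, not the PR's reply-building code):

```c
#include <linux/icmp.h>

/* Destination Unreachable (type 3) / Port Unreachable (code 3), the same
 * reply a host without a listening socket would generate itself.
 */
struct icmphdr icmp_reply = {
	.type = ICMP_DEST_UNREACH,  /* 3 */
	.code = ICMP_PORT_UNREACH,  /* 3 */
};
```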
This behavior is controllable via a new --service-no-backend-response flag, which defaults to reject so we match expected behavior by default. It can also be set to drop to preserve the existing behavior in case that was desired. This new behavior works for both North/South traffic entering a node and East/West traffic responding to a request from a pod within the cluster.
Fixes: #10002