v1.14 Backports 2023-10-30 #28870
Commits on Nov 3, 2023
labels/cidr: Cache GetCIDRLabels computation
[ upstream commit e0f6c47 ]

Cache the computation of intermediate CIDR labels to speed up GetCIDRLabels and reduce memory usage by deduplicating CIDR strings. Even though most of the cost is now in building up the resulting "labels.Labels", it is not memoized yet as it is mutable and mutated by e.g. MergeLabels.

Before:

goos: linux
goarch: amd64
pkg: github.com/cilium/cilium/pkg/labels/cidr
cpu: AMD Ryzen 9 5950X 16-Core Processor
BenchmarkGetCIDRLabels/0.0.0.0/0 6005072 199.4 ns/op 640 B/op 3 allocs/op
BenchmarkGetCIDRLabels/10.16.0.0/16 402415 2876 ns/op 3748 B/op 38 allocs/op
BenchmarkGetCIDRLabels/192.0.2.3/32 216280 5457 ns/op 8032 B/op 70 allocs/op
BenchmarkGetCIDRLabels/192.0.2.3/24 285751 4113 ns/op 5056 B/op 54 allocs/op
BenchmarkGetCIDRLabels/192.0.2.0/24 286141 4116 ns/op 5055 B/op 54 allocs/op
BenchmarkGetCIDRLabels/::/0 6016551 199.6 ns/op 640 B/op 3 allocs/op
BenchmarkGetCIDRLabels/fdff::ff/128 37502 31938 ns/op 30786 B/op 450 allocs/op
BenchmarkGetCIDRLabels/f00d:42::ff/128 35725 33607 ns/op 33658 B/op 450 allocs/op
BenchmarkGetCIDRLabels/f00d:42::ff/96 50270 23798 ns/op 20231 B/op 297 allocs/op

After:

goos: linux
goarch: amd64
pkg: github.com/cilium/cilium/pkg/labels/cidr
cpu: AMD Ryzen 9 5950X 16-Core Processor
BenchmarkGetCIDRLabels/0.0.0.0/0 7320565 164.0 ns/op 624 B/op 2 allocs/op
BenchmarkGetCIDRLabels/10.16.0.0/16 1000000 1083 ns/op 2396 B/op 2 allocs/op
BenchmarkGetCIDRLabels/192.0.2.3/32 593683 1948 ns/op 5008 B/op 2 allocs/op
BenchmarkGetCIDRLabels/192.0.2.3/24 337100 3498 ns/op 7728 B/op 3 allocs/op
BenchmarkGetCIDRLabels/192.0.2.0/24 793645 1427 ns/op 2767 B/op 2 allocs/op
BenchmarkGetCIDRLabels/::/0 7213646 166.1 ns/op 624 B/op 2 allocs/op
BenchmarkGetCIDRLabels/fdff::ff/128 168543 7064 ns/op 18515 B/op 3 allocs/op
BenchmarkGetCIDRLabels/f00d:42::ff/128 165129 7184 ns/op 18516 B/op 3 allocs/op
BenchmarkGetCIDRLabels/f00d:42::ff/96 91777 13056 ns/op 29283 B/op 6 allocs/op

Signed-off-by: Jussi Maki <jussi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: ddbcbaa
labels/cidr: Use a lru cache to store CIDR labels
[ upstream commit 3debf6d ]

To avoid excessive heap usage, limit the number of cached labels using a LRU map. The maximum cache size is empirically set to 16384 and a lock is used to serialize concurrent accesses to the cache. Note that, being a LRU cache, the Get operations modify the cache internal status, so a classic mutex has been used instead of a rwmutex.

Benchmark results to compare LRU-based memoization against the non-memoized version:

name old time/op new time/op delta
GetCIDRLabels/0.0.0.0/0-8 204ns ± 4% 351ns ± 2% +71.85% (p=0.000 n=8+10)
GetCIDRLabels/10.16.0.0/16-8 4.20µs ±10% 1.73µs ± 3% -58.84% (p=0.000 n=10+9)
GetCIDRLabels/192.0.2.3/32-8 8.02µs ± 4% 3.27µs ± 2% -59.25% (p=0.000 n=9+9)
GetCIDRLabels/192.0.2.3/24-8 6.65µs ±11% 3.26µs ± 2% -51.02% (p=0.000 n=10+8)
GetCIDRLabels/192.0.2.0/24-8 6.52µs ±10% 2.86µs ± 1% -56.10% (p=0.000 n=10+9)
GetCIDRLabels/::/0-8 330ns ± 2% 354ns ± 5% +7.28% (p=0.000 n=9+10)
GetCIDRLabels/fdff::ff/128-8 52.9µs ± 6% 12.4µs ± 6% -76.48% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/128-8 55.3µs ± 5% 12.6µs ± 5% -77.27% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/96-8 41.8µs ± 7% 12.4µs ± 2% -70.20% (p=0.000 n=10+9)

name old alloc/op new alloc/op delta
GetCIDRLabels/0.0.0.0/0-8 656B ± 0% 624B ± 0% -4.88% (p=0.000 n=10+10)
GetCIDRLabels/10.16.0.0/16-8 3.17kB ± 0% 2.40kB ± 0% -24.46% (p=0.000 n=10+10)
GetCIDRLabels/192.0.2.3/32-8 6.88kB ± 0% 5.01kB ± 0% -27.21% (p=0.000 n=8+8)
GetCIDRLabels/192.0.2.3/24-8 6.32kB ± 0% 5.01kB ± 0% -20.73% (p=0.000 n=9+9)
GetCIDRLabels/192.0.2.0/24-8 6.32kB ± 0% 4.93kB ± 0% -22.03% (p=0.000 n=10+10)
GetCIDRLabels/::/0-8 656B ± 0% 624B ± 0% -4.88% (p=0.000 n=10+10)
GetCIDRLabels/fdff::ff/128-8 25.9kB ± 0% 18.5kB ± 0% -28.58% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/128-8 28.8kB ± 0% 18.5kB ± 0% -35.70% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/96-8 25.1kB ± 0% 18.5kB ± 0% -26.20% (p=0.000 n=10+8)

name old allocs/op new allocs/op delta
GetCIDRLabels/0.0.0.0/0-8 3.00 ± 0% 2.00 ± 0% -33.33% (p=0.000 n=10+10)
GetCIDRLabels/10.16.0.0/16-8 37.0 ± 0% 2.0 ± 0% -94.59% (p=0.000 n=10+10)
GetCIDRLabels/192.0.2.3/32-8 69.0 ± 0% 2.0 ± 0% -97.10% (p=0.000 n=10+10)
GetCIDRLabels/192.0.2.3/24-8 53.0 ± 0% 2.0 ± 0% -96.23% (p=0.000 n=10+10)
GetCIDRLabels/192.0.2.0/24-8 53.0 ± 0% 2.0 ± 0% -96.23% (p=0.000 n=10+10)
GetCIDRLabels/::/0-8 3.00 ± 0% 2.00 ± 0% -33.33% (p=0.000 n=10+10)
GetCIDRLabels/fdff::ff/128-8 449 ± 0% 3 ± 0% -99.33% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/128-8 449 ± 0% 3 ± 0% -99.33% (p=0.000 n=10+10)
GetCIDRLabels/f00d:42::ff/96-8 295 ± 0% 3 ± 0% -98.98% (p=0.000 n=10+10)

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
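A minimal sketch of the LRU + mutex approach described above. The real change uses Cilium's LRU package with a 16384-entry cap; this self-contained toy uses the standard library's container/list instead, and all names are illustrative. The key point it shows is why a plain Mutex is used: even Get mutates the recency order, so an RWMutex would not allow concurrent readers anyway.

```go
// Toy LRU cache guarded by a sync.Mutex, in the spirit of the commit above.
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key, val string
}

// lruCache serializes all access with a plain Mutex: even Get mutates the
// recency order, so a rwmutex would not help.
type lruCache struct {
	mu    sync.Mutex
	max   int
	order *list.List // front = most recently used
	items map[string]*list.Element
}

func newLRUCache(max int) *lruCache {
	return &lruCache{max: max, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // Get mutates internal state
	return el.Value.(entry).val, true
}

func (c *lruCache) Add(key, val string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value = entry{key, val}
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(entry{key, val})
	if c.order.Len() > c.max { // evict the least recently used entry
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(entry).key)
	}
}

func main() {
	c := newLRUCache(2)
	c.Add("10.0.0.0/8", "labels-a")
	c.Add("192.0.2.0/24", "labels-b")
	c.Get("10.0.0.0/8")            // touch, so it survives the next eviction
	c.Add("fd00::/64", "labels-c") // evicts 192.0.2.0/24
	_, ok := c.Get("192.0.2.0/24")
	fmt.Println(ok) // false: evicted as least recently used
}
```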
Commit: 87a9972
cidr/labels: Add benchmark for cache heap usage
[ upstream commit ebfaa30 ]

Add a benchmark to estimate the heap usage of a full CIDR labels LRU cache. Results show that heap usage is less than ~5 MiB:

$ go test ./pkg/labels/cidr/... -run=^$ -bench="BenchmarkCIDRLabelsCacheHeapUsage" -benchtime=1x
--- BENCH: BenchmarkCIDRLabelsCacheHeapUsageIPv4-8
    cidr_test.go:396: Memoization map heap usage: 4146.02 KiB
--- BENCH: BenchmarkCIDRLabelsCacheHeapUsageIPv6-8
    cidr_test.go:438: Memoization map heap usage: 4571.74 KiB

The benchmark must be called with `-benchtime=1x` to get meaningful values from `runtime.ReadMemStats`. For that reason, it is skipped by default.

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 08696cb
labels/cidr: Add benchmark for concurrent execution of GetCIDRLabels
[ upstream commit 85aef68 ]

Adding a LRU cache to memoize GetCIDRLabels means sharing state between different goroutines concurrently executing GetCIDRLabels. To measure the scalability of the LRU cache + mutex approach against the non-cached previous version, a benchmark with increasing number of goroutines is added. Running the benchmark against the current version and the one without labels memoization shows that the change gives a performance improvement even when up to 48 goroutines compete for the exclusive access to the LRU cache:

name old time/op new time/op delta
GetCIDRLabelsConcurrent/1-8 493µs ±31% 259µs ± 4% -47.54% (p=0.000 n=20+9)
GetCIDRLabelsConcurrent/2-8 889µs ±10% 474µs ± 5% -46.69% (p=0.000 n=19+10)
GetCIDRLabelsConcurrent/4-8 1.74ms ± 3% 0.89ms ± 4% -49.04% (p=0.000 n=20+10)
GetCIDRLabelsConcurrent/16-8 7.14ms ± 6% 3.77ms ± 8% -47.20% (p=0.000 n=18+10)
GetCIDRLabelsConcurrent/32-8 14.3ms ± 6% 7.3ms ± 2% -48.69% (p=0.000 n=20+9)
GetCIDRLabelsConcurrent/48-8 21.9ms ± 5% 11.1ms ± 3% -49.24% (p=0.000 n=19+10)

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
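A sketch of how such a concurrency benchmark can be structured with b.RunParallel, similar in spirit to the one described above (the cache and lookup names are illustrative, not Cilium's). testing.Benchmark lets us drive it from a plain program without `go test`.

```go
// Benchmark a mutex-guarded cache under increasing parallelism.
package main

import (
	"fmt"
	"sync"
	"testing"
)

var (
	mu    sync.Mutex
	cache = map[string]string{}
)

// cachedLookup is a stand-in for a memoized GetCIDRLabels: all goroutines
// serialize on one mutex, as in the LRU + mutex approach.
func cachedLookup(key string) string {
	mu.Lock()
	defer mu.Unlock()
	if v, ok := cache[key]; ok {
		return v
	}
	v := "labels-for-" + key // stand-in for the expensive computation
	cache[key] = v
	return v
}

func main() {
	for _, parallelism := range []int{1, 2, 4} {
		res := testing.Benchmark(func(b *testing.B) {
			b.SetParallelism(parallelism) // goroutines = parallelism * GOMAXPROCS
			b.RunParallel(func(pb *testing.PB) {
				for pb.Next() {
					cachedLookup("192.0.2.0/24")
				}
			})
		})
		fmt.Printf("parallelism %d: %s\n", parallelism, res)
	}
}
```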
Commit: 772e78c
envoy: extract getEndpointsForLBBackends with unittest
[ upstream commit 5be0299 ]

This commit extracts the logic that creates Envoy endpoints (ClusterLoadAssignments) for loadbalancing backends into its own function. In addition, some unit tests were added.

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 7aa62ff
envoy: fix lb backend endpoint calculation
[ upstream commit c4079a7 ]

Currently, the mapping of loadbalancing backends to Envoy endpoints contains a bug: the LbEndpoints slice is kept and appended to across the whole backendMap, so later endpoints contain the LbEndpoints of all previous backends. This commit fixes this by moving the variable `lbEndpoints` into the right scope (per port).

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
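The scoping bug can be reproduced in miniature. In this sketch (illustrative names and string endpoints, not the actual Envoy types), the buggy version declares the slice outside the per-port loop, so every later port also carries the endpoints accumulated for earlier ports; the fix is simply moving the declaration inside the loop.

```go
// Minimal reproduction of a slice declared in the wrong scope leaking
// entries across loop iterations, and the per-iteration fix.
package main

import "fmt"

func endpointsPerPortBuggy(backends map[string][]string) map[string][]string {
	out := map[string][]string{}
	var lbEndpoints []string // BUG: shared across all ports
	for port, addrs := range backends {
		lbEndpoints = append(lbEndpoints, addrs...)
		out[port] = lbEndpoints
	}
	return out
}

func endpointsPerPortFixed(backends map[string][]string) map[string][]string {
	out := map[string][]string{}
	for port, addrs := range backends {
		var lbEndpoints []string // fixed: one slice per port
		lbEndpoints = append(lbEndpoints, addrs...)
		out[port] = lbEndpoints
	}
	return out
}

func main() {
	backends := map[string][]string{"80": {"10.0.0.1"}, "443": {"10.0.0.2"}}
	buggy := endpointsPerPortBuggy(backends)
	fixed := endpointsPerPortFixed(backends)
	// Whichever port is visited second inherits the first port's endpoint.
	fmt.Println(len(buggy["80"]) + len(buggy["443"])) // 3: one endpoint leaked
	fmt.Println(len(fixed["80"]) + len(fixed["443"])) // 2: one endpoint per port
}
```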
Commit: 52962ec
operator: Fix logic used to sync Cilium's IngressClass on startup
[ upstream commit 2cfc825 ]

This commit introduces changes to the ingress class manager piece of the ingress controller, in order to address bugs impacting the proper syncing of Cilium's IngressClass during startup. The following changes are made:

* Replace use of an Informer with Resource[T] for IngressClass. This helps simplify the logic used to perform the initial sync.
* Move the responsibility of tracking whether Cilium should act as the default IngressClass into the ingress class manager, rather than having the ingress controller track this itself when processing IngressClass events. After the ingress class manager is constructed, the ingress controller can determine whether Cilium is the default IngressClass for a cluster through the ingress class manager, and no longer has to wait to process an event for Cilium's IngressClass to learn if Cilium should be the default.

Before this commit, the ingress controller would process all Ingress resources before processing IngressClass resources. This is because the Ingress resource informer would be started before the ingress class manager, so all events related to Ingress resources would appear in the ingress controller's event queue before events relating to IngressClass resources. This presented a problem, because the ingress controller would always believe that it was not the default IngressClass for a cluster on startup while processing each Ingress resource for the first time. This could lead to the following situation:

1. The ingress controller processes all Ingress resources.
2. The ingress controller processes IngressClass resources, and learns that it should act as the default IngressClass for the cluster.
3. A resync of Ingress resources is triggered.

This double-sync overhead can be a problem for large-scale clusters.

Signed-off-by: Ryan Drew <ryan.drew@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 3eae191
gha: test geneve tunneling in addition to vxlan
[ upstream commit 77e09f5 ]

Switch one of the matrix entries currently configuring vxlan tunneling to geneve, so that we appropriately cover both protocols in combination with clustermesh.

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 802666b
pkg/endpoint: run the metadata resolver after registering the endpoint
[ upstream commit 07d3a21 ]

We need to fetch the pod labels again because we have only just added the endpoint to the endpoint manager. If we received any pod events, more specifically any events that modified the pod labels, between the time the pod was created and the time it was added to the endpoint manager, those events were not processed, since the pod event handler could not find the endpoint for that pod in the endpoint manager. Thus, we fetch the labels again and update the endpoint with them.

Signed-off-by: André Martins <andre@cilium.io>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: b474c96
bugtool: Collect XFRM error counters twice
[ upstream commit c1803ba ]

This commit changes the bugtool report to collect the XFRM error counters (i.e., /proc/net/xfrm_stat) twice instead of only once: at the beginning and at the end of the bugtool collection. In that way, there will be around 5-6 seconds between the two collections and we may see if any counter is currently increasing.

$ diff cilium-bugtool-cilium-7d54p-20231025-115151/cmd/cat*--proc-net-xfrm_stat.md
5c5
< XfrmInStateProtoError 4
---
> XfrmInStateProtoError 6

In this example, we can easily see that XfrmInStateProtoError is increasing. That suggests a key rotation issue is currently ongoing (cf. IPsec troubleshooting docs).

I tried other approaches to collect over a longer timespan, which might allow us to see slower increases. They all end up being more complex or messier (we'd need to collect at the beginning and end of the sysdump instead). In the end, considering this is already a fallback plan for when customers don't collect Prometheus metrics, I think the current, simpler approach is good enough.

Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
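The underlying idea of diffing two counter snapshots can be sketched like this. The function names are hypothetical and the snapshots are inlined strings (in the real tool they would be two reads of /proc/net/xfrm_stat a few seconds apart), so the example is self-contained.

```go
// Parse two "Name Value" counter snapshots and report which counters grew.
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseXfrmStat turns "Name<whitespace>Value" lines into a counter map.
func parseXfrmStat(s string) map[string]uint64 {
	counters := map[string]uint64{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		if v, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			counters[fields[0]] = v
		}
	}
	return counters
}

// growingCounters returns the counters that increased between snapshots.
func growingCounters(first, second map[string]uint64) map[string]uint64 {
	deltas := map[string]uint64{}
	for name, after := range second {
		if before := first[name]; after > before {
			deltas[name] = after - before
		}
	}
	return deltas
}

func main() {
	first := parseXfrmStat("XfrmInError 0\nXfrmInStateProtoError 4\n")
	second := parseXfrmStat("XfrmInError 0\nXfrmInStateProtoError 6\n")
	fmt.Println(growingCounters(first, second)) // map[XfrmInStateProtoError:2]
}
```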
Commit: 1c255fc
helm: Add missing type to poststart iptables regex
[ upstream commit b836cb1 ]

We recently introduced deletion of the AWS-CONNMARK-CHAIN iptables rules, but didn't add them to the if statement guarding the actual deletion.

Signed-off-by: Maciej Kwiek <maciej@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 12f5544
helm: Always delete AWS iptables rules
[ upstream commit 6ab728d ]

This change causes the Cilium DaemonSet postStart hook to always delete AWS iptables rules unless `cni.chainingMode` is set to `aws-cni`. This will result in the postStart hook being a no-op in all non-AWS deployments. Unfortunately, there is no way for the helm chart to know whether it is running on AWS without ENI mode. This approach makes sure that we delete the AWS-specific iptables rules that cause issues, while not requiring us to introduce new configuration flags for users.

Signed-off-by: Maciej Kwiek <maciej@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 0a63d87
labels/cidr: Fix labels memoization in GetCIDRLabels
[ upstream commit 55517ea ]

The previous version of the implementation was actually computing the labels starting from broader prefixes to narrower ones (first "/0", then "/1" and so on). As soon as we had a cache hit, the recursion stopped without calculating the remaining labels for the CIDRs up to "ones". This produced an incorrect (shorter) set of labels for a CIDR.

Also, netip.PrefixFrom(...) does not mask the internally stored address, thus lowering the cache hit ratio even if two different CIDRs, used as keys in the LRU cache, are equal in terms of masked address (e.g. "1.1.1.1/16" and "1.1.0.0/16"). So, netip.Addr.Prefix(...) is used instead.

After the fix, performance is roughly equal (but with an increased chance of a cache hit). However, the maximum heap usage in the worst case (LRU cache filled up with IPv6 labels) is increased 10x.

BenchmarkCIDRLabelsCacheHeapUsageIPv4
    cidr_test.go:628: Memoization map heap usage: 5483.24 KiB
BenchmarkCIDRLabelsCacheHeapUsageIPv6
    cidr_test.go:670: Memoization map heap usage: 54721.70 KiB

name old time/op new time/op delta
GetCIDRLabels/0.0.0.0/0-8 256ns ±39% 218ns ±46% ~ (p=0.393 n=10+10)
GetCIDRLabels/10.16.0.0/16-8 1.35µs ± 3% 1.39µs ± 5% +2.66% (p=0.012 n=9+10)
GetCIDRLabels/192.0.2.3/32-8 2.52µs ± 2% 2.58µs ± 2% +2.58% (p=0.001 n=10+9)
GetCIDRLabels/192.0.2.3/24-8 2.57µs ± 1% 2.24µs ± 3% -12.69% (p=0.000 n=8+10)
GetCIDRLabels/192.0.2.0/24-8 2.27µs ± 4% 2.26µs ± 3% ~ (p=0.690 n=9+8)
GetCIDRLabels/::/0-8 277ns ± 2% 278ns ± 3% ~ (p=0.796 n=9+9)
GetCIDRLabels/fdff::ff/128-8 9.42µs ± 1% 9.34µs ± 6% ~ (p=0.484 n=9+10)
GetCIDRLabels/f00d:42::ff/128-8 9.58µs ± 2% 9.62µs ± 7% ~ (p=0.905 n=10+9)
GetCIDRLabels/f00d:42::ff/96-8 9.63µs ± 1% 8.45µs ± 3% -12.27% (p=0.000 n=8+9)
GetCIDRLabelsConcurrent/1-8 205µs ± 3% 207µs ± 3% ~ (p=0.356 n=9+10)
GetCIDRLabelsConcurrent/2-8 385µs ± 5% 386µs ± 7% ~ (p=0.631 n=10+10)
GetCIDRLabelsConcurrent/4-8 784µs ± 5% 780µs ± 1% ~ (p=0.156 n=10+9)
GetCIDRLabelsConcurrent/16-8 3.24ms ± 1% 3.25ms ± 2% ~ (p=0.529 n=10+10)
GetCIDRLabelsConcurrent/32-8 6.40ms ± 1% 6.39ms ± 3% ~ (p=0.497 n=9+10)
GetCIDRLabelsConcurrent/48-8 9.69ms ± 1% 10.09ms ± 6% +4.09% (p=0.008 n=8+9)

name old alloc/op new alloc/op delta
GetCIDRLabels/0.0.0.0/0-8 624B ± 0% 624B ± 0% ~ (all equal)
GetCIDRLabels/10.16.0.0/16-8 2.40kB ± 0% 2.40kB ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/32-8 5.01kB ± 0% 5.01kB ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/24-8 5.01kB ± 0% 4.93kB ± 0% -1.64% (p=0.002 n=8+10)
GetCIDRLabels/192.0.2.0/24-8 4.93kB ± 0% 4.93kB ± 0% ~ (all equal)
GetCIDRLabels/::/0-8 624B ± 0% 624B ± 0% ~ (all equal)
GetCIDRLabels/fdff::ff/128-8 18.5kB ± 0% 18.5kB ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/128-8 18.5kB ± 0% 18.5kB ± 0% ~ (p=0.108 n=9+10)
GetCIDRLabels/f00d:42::ff/96-8 18.5kB ± 0% 18.5kB ± 0% -0.06% (p=0.000 n=10+10)
GetCIDRLabelsConcurrent/1-8 321kB ± 0% 321kB ± 0% ~ (p=0.127 n=10+8)
GetCIDRLabelsConcurrent/2-8 641kB ± 0% 641kB ± 0% ~ (p=0.928 n=10+10)
GetCIDRLabelsConcurrent/4-8 1.28MB ± 0% 1.28MB ± 0% ~ (p=0.853 n=10+10)
GetCIDRLabelsConcurrent/16-8 5.13MB ± 0% 5.13MB ± 0% ~ (p=0.739 n=10+10)
GetCIDRLabelsConcurrent/32-8 10.3MB ± 0% 10.3MB ± 0% ~ (p=0.218 n=10+10)
GetCIDRLabelsConcurrent/48-8 15.4MB ± 0% 15.4MB ± 0% ~ (p=0.218 n=10+10)

name old allocs/op new allocs/op delta
GetCIDRLabels/0.0.0.0/0-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/10.16.0.0/16-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/32-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/24-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.0/24-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/::/0-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/fdff::ff/128-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/128-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/96-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/1-8 138 ± 0% 138 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/2-8 277 ± 0% 277 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/4-8 555 ± 0% 555 ± 0% ~ (p=0.248 n=10+9)
GetCIDRLabelsConcurrent/16-8 2.22k ± 0% 2.22k ± 0% ~ (p=0.353 n=7+10)
GetCIDRLabelsConcurrent/32-8 4.44k ± 0% 4.44k ± 0% ~ (p=0.723 n=10+10)
GetCIDRLabelsConcurrent/48-8 6.66k ± 0% 6.66k ± 0% ~ (p=0.090 n=10+9)

Fixes: e0f6c47 ("labels/cidr: Cache GetCIDRLabels computation")

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: c74aad6
labels/cidr: Improve CIDR labels testing
[ upstream commit 1b9b3fc ]

After the introduction of an LRU cache in GetCIDRLabels, the tests should verify the labels computation both when the cache is cold and when it is hot. Thus, the tests are refactored to check the returned labels twice.

Also, an additional test is added to verify that the labels stay consistent when we call GetCIDRLabels with the following sequences of prefixes:

1) "xxx/32", "xxx/31", ..., "xxx/0", "xxx/1", ..., "xxx/32"
2) "xxx/0", "xxx/1", ..., "xxx/32", "xxx/31", ..., "xxx/0"

Finally, InCluster tests are removed since cluster identity does not exist anymore.

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 04cec79
labels: Move away from checker for CIDR labels testing
[ upstream commit 9f2034e ]

Migrate remaining tests relying on checker to the testing package from the Go standard library.

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 814f490
labels: Refactor CIDRLabelsCacheHeapUsage into tests
[ upstream commit 71b7ad5 ]

TestCIDRLabelsCacheHeapUsageIP{v4,v6} are meant to estimate the maximum heap usage when filling up the CIDR labels LRU cache with labels derived only from IPv4 and labels derived only from IPv6. Since they give meaningful results only when running them with benchtime=1x, they are refactored to be plain tests with a t.Log() to output the heap usage statistics.

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: fd99b32
labels: Halve CIDR labels LRU cache size
[ upstream commit 6f1253e ]

After fixing the GetCIDRLabels implementation to generate all the labels required for a CIDR, the heap usage of the LRU cache increased 10x in the worst case (all IPv6 labels). To reduce heap usage, the cache size is halved, resulting in ~25 MiB in the IPv6-only case with roughly the same performance.

=== RUN TestCIDRLabelsCacheHeapUsageIPv4
    cidr_test.go:527: Memoization map heap usage: 1714.41 KiB
--- PASS: TestCIDRLabelsCacheHeapUsageIPv4 (0.67s)
=== RUN TestCIDRLabelsCacheHeapUsageIPv6
    cidr_test.go:571: Memoization map heap usage: 26527.13 KiB
--- PASS: TestCIDRLabelsCacheHeapUsageIPv6 (0.71s)

name old time/op new time/op delta
GetCIDRLabels/0.0.0.0/0-8 198ns ±40% 238ns ±34% ~ (p=0.325 n=10+10)
GetCIDRLabels/10.16.0.0/16-8 1.32µs ± 8% 1.33µs ± 8% ~ (p=0.812 n=10+10)
GetCIDRLabels/192.0.2.3/32-8 2.41µs ± 3% 2.39µs ± 5% ~ (p=0.278 n=10+9)
GetCIDRLabels/192.0.2.3/24-8 2.05µs ± 2% 2.05µs ± 1% ~ (p=0.948 n=9+9)
GetCIDRLabels/192.0.2.0/24-8 2.05µs ± 2% 2.04µs ± 1% ~ (p=0.797 n=9+8)
GetCIDRLabels/::/0-8 277ns ±31% 257ns ± 1% ~ (p=0.349 n=10+8)
GetCIDRLabels/fdff::ff/128-8 9.02µs ± 6% 8.80µs ± 3% ~ (p=0.077 n=9+9)
GetCIDRLabels/f00d:42::ff/128-8 9.40µs ± 6% 9.01µs ± 5% -4.12% (p=0.035 n=10+10)
GetCIDRLabels/f00d:42::ff/96-8 7.78µs ± 4% 7.58µs ± 1% -2.59% (p=0.011 n=8+9)
GetCIDRLabelsConcurrent/1-8 189µs ± 8% 173µs ± 3% -8.85% (p=0.000 n=10+9)
GetCIDRLabelsConcurrent/2-8 350µs ± 2% 346µs ± 1% -1.05% (p=0.001 n=8+8)
GetCIDRLabelsConcurrent/4-8 703µs ± 1% 692µs ± 1% -1.59% (p=0.000 n=9+9)
GetCIDRLabelsConcurrent/16-8 2.97ms ± 7% 2.91ms ± 1% ~ (p=0.122 n=10+8)
GetCIDRLabelsConcurrent/32-8 5.81ms ± 1% 5.77ms ± 1% -0.57% (p=0.011 n=8+9)
GetCIDRLabelsConcurrent/48-8 8.87ms ± 6% 8.71ms ± 1% ~ (p=0.139 n=9+8)

name old alloc/op new alloc/op delta
GetCIDRLabels/0.0.0.0/0-8 624B ± 0% 624B ± 0% ~ (all equal)
GetCIDRLabels/10.16.0.0/16-8 2.40kB ± 0% 2.40kB ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/32-8 5.01kB ± 0% 5.01kB ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/24-8 4.93kB ± 0% 4.93kB ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.0/24-8 4.93kB ± 0% 4.93kB ± 0% ~ (all equal)
GetCIDRLabels/::/0-8 624B ± 0% 624B ± 0% ~ (all equal)
GetCIDRLabels/fdff::ff/128-8 18.5kB ± 0% 18.5kB ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/128-8 18.5kB ± 0% 18.5kB ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/96-8 18.5kB ± 0% 18.5kB ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/1-8 321kB ± 0% 321kB ± 0% ~ (p=0.645 n=10+10)
GetCIDRLabelsConcurrent/2-8 641kB ± 0% 641kB ± 0% ~ (p=0.796 n=10+10)
GetCIDRLabelsConcurrent/4-8 1.28MB ± 0% 1.28MB ± 0% ~ (p=0.353 n=10+10)
GetCIDRLabelsConcurrent/16-8 5.13MB ± 0% 5.13MB ± 0% ~ (p=0.083 n=10+8)
GetCIDRLabelsConcurrent/32-8 10.3MB ± 0% 10.3MB ± 0% ~ (p=0.481 n=10+10)
GetCIDRLabelsConcurrent/48-8 15.4MB ± 0% 15.4MB ± 0% ~ (p=0.796 n=10+10)

name old allocs/op new allocs/op delta
GetCIDRLabels/0.0.0.0/0-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/10.16.0.0/16-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/32-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.3/24-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/192.0.2.0/24-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/::/0-8 2.00 ± 0% 2.00 ± 0% ~ (all equal)
GetCIDRLabels/fdff::ff/128-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/128-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabels/f00d:42::ff/96-8 3.00 ± 0% 3.00 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/1-8 138 ± 0% 138 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/2-8 277 ± 0% 277 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/4-8 555 ± 0% 555 ± 0% ~ (all equal)
GetCIDRLabelsConcurrent/16-8 2.22k ± 0% 2.22k ± 0% ~ (p=0.176 n=10+7)
GetCIDRLabelsConcurrent/32-8 4.44k ± 0% 4.44k ± 0% ~ (p=0.867 n=10+10)
GetCIDRLabelsConcurrent/48-8 6.66k ± 0% 6.66k ± 0% ~ (p=0.682 n=8+10)

Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 266b572
ctmap: clean up hard-coded values
[ upstream commit 7f3888f ]

De-obfuscate some of the flags to improve readability.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 70d91ee
ctmap: add GC test-case for SNATed TCP
[ upstream commit c19e447 ]

Test that when the CT entry for 192.168.61.11:38193 -> 192.168.61.12:80 is removed, the related IN and OUT NAT entries are purged along with it.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 7724178
ctmap: add test for Legacy DSR
[ upstream commit 5b22423 ]

DSR uses an OUT NAT entry for RevDNAT of backend replies. Prior to the changes in cilium#22978, this NAT entry was protected by the CT_INGRESS entry which bpf_lxc creates for the backend connection. Test that GC of the NAT entry works when the CT entry is removed.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 4863f10
ctmap: improve description for PurgeOrphanNATEntries()
[ upstream commit 6b9e351 ]

Point out that PurgeOrphanNATEntries() is only a fallback, to purge NAT entries that are unexpectedly no longer backed by any CT entry. During normal operations NAT entries should get purged as part of the GC for their specific CT entry.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: a8eab2b
ctmap: set dsr flag for relevant CT entries in TestOrphanNatGC()

[ upstream commit 74b3f56 ]

CT entries that get created for a DSR connection by the datapath will have the `dsr` flag set. Reflect this in the CT entries that we use for tests. The flag currently doesn't make a difference for the GC logic, but let's still be a bit more accurate.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: ad2c710
ctmap: move some NAT GC logic into ctmap
[ upstream commit e743901 ]

We want to add more advanced handling into the GC logic, which requires information from the actual CT entry. Let's consolidate all of the decision making in the purgeCtEntry*() functions, so that the NAT code doesn't need to understand all the details of how CT and NAT interact.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: b3d0437
ctmap: limit DSR purge to CT entries with .dsr flag
[ upstream commit c1a2d1f ]

Clarify which CT entries potentially require purging of a DSR-related NAT entry. This reduces the risk of accidentally purging unrelated NAT entries, and allows the GC logic to do less work.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 712f9f1
ctmap: add NAT purge for nodeport-backed DSR NAT entries
[ upstream commit 21072cd ]

With cilium#22978 we changed how DSR NAT entries are managed. Instead of associating the NAT entry's lifetime with bpf_lxc's CT_INGRESS entry, the nodeport code on the backend now creates its own CT_EGRESS entry. When such a CT_EGRESS entry is GC'ed, we should therefore also purge the related DSR NAT entry. Also add a test for this case.

Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
Signed-off-by: Fabio Falzoi <fabio.falzoi@isovalent.com>
Commit: 2a4145c