When running with the option `--set upgradeCompatibility=1.7`, the diff
between the generated ConfigMaps is:
```diff
@@ -60,8 +60,7 @@
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
-
- # ct-global-max-entries-* specifies the maximum number of connections
+ # bpf-ct-global-*-max specifies the maximum number of connections
# supported across all endpoints, split by protocol: tcp or other. One pair
# of maps uses these values for IPv4 connections, and another pair of maps
# use these values for IPv6 connections.
@@ -71,10 +70,9 @@
# policy drops or a change in loadbalancing decisions for a connection.
#
# For users upgrading from Cilium 1.2 or earlier, to minimize disruption
- # during the upgrade process, comment out these options.
+ # during the upgrade process, set bpf-ct-global-tcp-max to 1000000.
bpf-ct-global-tcp-max: "524288"
bpf-ct-global-any-max: "262144"
-
# bpf-policy-map-max specified the maximum number of entries in endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
@@ -140,9 +138,6 @@
install-iptables-rules: "true"
auto-direct-node-routes: "false"
kube-proxy-replacement: "probe"
- enable-host-reachable-services: "false"
- enable-external-ips: "false"
- enable-node-port: "false"
node-port-bind-protection: "true"
enable-auto-protect-node-port-range: "true"
enable-endpoint-health-checking: "true"
```
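
For reference, this compatibility mode would be enabled at upgrade time roughly as follows; the release name, namespace, and chart reference below are placeholders, not part of this change:

```shell
# Sketch only: release name, namespace, and chart reference are placeholders.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --set upgradeCompatibility=1.7
```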
When running without `upgradeCompatibility`, the diff is:
```diff
@@ -43,6 +43,7 @@
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
+ enable-bpf-clock-probe: "true"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
@@ -60,24 +61,12 @@
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
-
- # ct-global-max-entries-* specifies the maximum number of connections
- # supported across all endpoints, split by protocol: tcp or other. One pair
- # of maps uses these values for IPv4 connections, and another pair of maps
- # use these values for IPv6 connections.
- #
- # If these values are modified, then during the next Cilium startup the
- # tracking of ongoing connections may be disrupted. This may lead to brief
- # policy drops or a change in loadbalancing decisions for a connection.
- #
- # For users upgrading from Cilium 1.2 or earlier, to minimize disruption
- # during the upgrade process, comment out these options.
- bpf-ct-global-tcp-max: "524288"
- bpf-ct-global-any-max: "262144"
-
# bpf-policy-map-max specified the maximum number of entries in endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
+ # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
+ # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
+ bpf-map-dynamic-size-ratio: "0.0025"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
@@ -136,18 +125,24 @@
wait-bpf-mount: "false"
masquerade: "true"
+ enable-bpf-masquerade: "true"
enable-xt-socket-fallback: "true"
install-iptables-rules: "true"
auto-direct-node-routes: "false"
kube-proxy-replacement: "probe"
- enable-host-reachable-services: "false"
- enable-external-ips: "false"
- enable-node-port: "false"
node-port-bind-protection: "true"
enable-auto-protect-node-port-range: "true"
+ enable-session-affinity: "true"
+ k8s-require-ipv4-pod-cidr: "true"
+ k8s-require-ipv6-pod-cidr: "false"
enable-endpoint-health-checking: "true"
enable-well-known-identities: "false"
enable-remote-node-identity: "true"
+ operator-api-serve-addr: "127.0.0.1:9234"
+ ipam: "cluster-pool"
+ cluster-pool-ipv4-cidr: "10.0.0.0/8"
+ cluster-pool-ipv4-mask-size: "24"
+ disable-cnp-status-updates: "true"
```
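
A diff like the ones above can be reproduced by rendering the chart with and without the flag and comparing the output; the chart path and release name below are assumptions for illustration:

```shell
# Sketch: render the templates with and without the flag and compare.
# Chart path and release name are placeholders.
helm template cilium ./install/kubernetes/cilium > with-defaults.yaml
helm template cilium ./install/kubernetes/cilium \
  --set upgradeCompatibility=1.7 > with-compat.yaml
diff -u with-compat.yaml with-defaults.yaml
```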
Signed-off-by: Thomas Graf <thomas@cilium.io>