doc: Ensure ConfigMap remains compatible across 1.7 -> 1.8 upgrade #12097

Merged (1 commit) on Jun 18, 2020
4 changes: 2 additions & 2 deletions Documentation/concepts/scalability/report.rst
@@ -23,7 +23,7 @@ Setup
helm template cilium \\
--namespace kube-system \\
--set global.endpointHealthChecking.enabled=false \\
--set global.healthChecking.enabled=false \\
--set config.healthChecking=false \\
--set config.ipam=kubernetes \\
--set global.k8sServiceHost=<KUBE-APISERVER-LB-IP-ADDRESS> \\
--set global.k8sServicePort=<KUBE-APISERVER-LB-PORT-NUMBER> \\
@@ -33,7 +33,7 @@ Setup


* ``--set global.endpointHealthChecking.enabled=false`` and
``--set global.healthChecking.enabled=false`` disable endpoint health
``--set config.healthChecking=false`` disable endpoint health
checking entirely. However, it is recommended that these features be enabled
initially on a smaller cluster (3-10 nodes), where they can be used to detect
potential packet loss due to firewall rules or hypervisor settings.
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/kubeproxy-free.rst
@@ -751,7 +751,7 @@ replacement. For users who run on older kernels which do not support the network
namespace cookies, a fallback in-cluster mode is implemented, which is based on
a fixed cookie value as a trade-off. This causes all applications on the host to
select the same service endpoint for a given service with session affinity configured.
To disable the feature, set ``global.sessionAffinity.enabled=false``.
To disable the feature, set ``config.sessionAffinity=false``.
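The fallback behaviour described above can be sketched in Python. This is an illustrative model with made-up names (`select_endpoint`, the hash choice), not Cilium's actual eBPF datapath logic:

```python
import hashlib

def select_endpoint(endpoints, cookie, service):
    # Hash the (cookie, service) affinity key onto one of the backends.
    key = f"{cookie}:{service}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return endpoints[digest % len(endpoints)]

endpoints = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# With per-netns cookies, different client applications may land on
# different backends.
per_netns = {select_endpoint(endpoints, cookie, "svc-a") for cookie in (101, 202, 303)}

# With the fixed fallback cookie, every application on the host resolves
# the same backend for a given service.
FIXED_COOKIE = 1
fallback = {select_endpoint(endpoints, FIXED_COOKIE, "svc-a") for _ in range(5)}
assert len(fallback) == 1
```

Because the fallback key no longer varies per client, the trade-off is exactly what the text describes: affinity still works, but all host-local clients share one backend choice.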

Limitations
###########
30 changes: 18 additions & 12 deletions Documentation/install/upgrade.rst
@@ -144,13 +144,15 @@ version of Cilium are `here <https://github.com/cilium/cilium#stable-releases>`_
Upgrading to the latest micro release ensures the most seamless experience if a
rollback is required following the minor release upgrade.

Step 2: Option A: Generate YAML using Helm (Recommended)
--------------------------------------------------------
Step 2: Option A: Regenerate deployment files with upgrade compatibility (Recommended)
--------------------------------------------------------------------------------------

Since Cilium version 1.6, `Helm` is used to generate the YAML file for
deployment. This allows to regenerate the entire YAML from scratch using the
same option sets as used for the initial deployment while ensuring that all
Kubernetes resources are updated accordingly to version you are upgrading to:
`Helm` can be used to generate the YAML files for deployment. This allows you
to regenerate all files from scratch for the new release. By specifying the
option ``--set config.upgradeCompatibility=1.7``, the generated files are
guaranteed not to contain any options with side effects as you upgrade from
version 1.7. You still need to ensure that you are specifying the same options
as were used for the initial deployment:
Comment on lines +154 to +155 (Member):

> Not sure if doable right now, but it's probably worth us revisiting either ``--reuse-values`` and/or export + ``-f old-options-file`` at some point to clearly define this.
.. include:: ../gettingstarted/k8s-install-download-release.rst

@@ -162,6 +164,8 @@ Kubernetes resources are updated accordingly to version you are upgrading to:
.. parsed-literal::

helm template |CHART_RELEASE| \\
--set config.upgradeCompatibility=1.7 \\
   --set agent.keepDeprecatedProbes=true \\
Comment (Member):

> Is this a leftover? ``--agent.keepDeprecatedProbes=true`` will not make sense for users upgrading from 1.8 to 1.9. Actually, shouldn't these instructions be version agnostic, as we have dedicated sections for when users are upgrading to a specific version?

Reply (Member Author):

> It's a hard question; it will still be needed for most users, as they come from 1.7 -> 1.8 -> 1.9.
>
> The alternative is to default to on (or to HTTP probes) but also provide an option: default to off for <1.8 upgradeCompatibility, while allowing users to overwrite it.
--namespace kube-system \\
> cilium.yaml
kubectl apply -f cilium.yaml
@@ -172,7 +176,10 @@ Kubernetes resources are updated accordingly to version you are upgrading to:

.. parsed-literal::

helm upgrade cilium |CHART_RELEASE| --namespace=kube-system
helm upgrade cilium |CHART_RELEASE| \\
--namespace=kube-system \\
--set config.upgradeCompatibility=1.7 \\
   --set agent.keepDeprecatedProbes=true

.. note::

@@ -181,7 +188,7 @@ Kubernetes resources are updated accordingly to version you are upgrading to:
``install/kubernetes/cilium/values.yaml`` and use it to regenerate the YAML
for the latest version. Running any of the previous commands will overwrite
the existing cluster's `ConfigMap` which might not be ideal if you want to
keep your existing `ConfigMap`.
keep your existing `ConfigMap` (see next option).

Step 2: Option B: Preserve ConfigMap
------------------------------------
@@ -413,9 +420,8 @@ IMPORTANT: Changes required before upgrading to 1.8.0
For large clusters running CRD mode, this visibility is costly as it requires
all nodes to participate. In order to ensure scalability, ``CiliumNetworkPolicy``
status visibility has been disabled for all new deployments. If you want to
enable it, set the ConfigMap option ``disable-cnp-status-updates`` to false by
using Helm ``--set global.cnpStatusUpdates.enabled=true`` or by editing the
``ConfigMap`` directly.
enable it, set the ConfigMap option ``disable-cnp-status-updates`` to false or
set the Helm variable ``--set config.enableCnpStatusUpdates=true``.
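The double negative here is easy to trip over: the Helm option is phrased positively (``enableCnpStatusUpdates``) while the agent's ConfigMap option is phrased negatively (``disable-cnp-status-updates``). A small Python sketch of the assumed mapping, mirroring the template logic in this PR:

```python
def cnp_status_entry(values, upgrade_compatibility="1.8"):
    """Return the disable-cnp-status-updates ConfigMap entry, if any."""
    if "enableCnpStatusUpdates" in values:
        # An explicit user choice wins; note the inversion of the flag name.
        return {"disable-cnp-status-updates": str(not values["enableCnpStatusUpdates"]).lower()}
    if upgrade_compatibility >= "1.8":
        # New (>= 1.8) deployments default to disabled status updates.
        return {"disable-cnp-status-updates": "true"}
    return {}  # 1.7 compatibility: keep status updates enabled
```

The simple string comparison on the version stands in for Helm's ``semverCompare``; this sketch would misorder versions like "1.10", which the real template handles correctly.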

* Prior to 1.8 release, Cilium's eBPF-based kube-proxy replacement was not able
to handle Kubernetes HostPort feature and therefore CNI chaining with the
@@ -639,7 +645,7 @@ Removed options

Removed helm options
~~~~~~~~~~~~~~~~~~~~
* ``operator.synchronizeK8sNodes``: was removed and replaced with ``global.synchronizeK8sNodes``
* ``operator.synchronizeK8sNodes``: was removed and replaced with ``config.synchronizeK8sNodes``

Removed resource fields
~~~~~~~~~~~~~~~~~~~~~~~
123 changes: 89 additions & 34 deletions install/kubernetes/cilium/charts/config/templates/configmap.yaml
@@ -1,3 +1,35 @@
{{- /* Default values with backwards compatibility */ -}}
{{- $defaultEnableCnpStatusUpdates := "true" -}}
{{- $defaultBpfMapDynamicSizeRatio := 0.0 -}}
{{- $defaultBpfMasquerade := "false" -}}
{{- $defaultBpfClockProbe := "false" -}}
{{- $defaultIPAM := "hostscope" -}}
{{- $defaultSessionAffinity := "false" -}}
{{- $defaultOperatorApiServeAddr := "localhost:9234" -}}
{{- $defaultBpfCtTcpMax := 524288 -}}
{{- $defaultBpfCtAnyMax := 262144 -}}

{{- /* Default values when 1.8 was initially deployed */ -}}
{{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
{{- $defaultEnableCnpStatusUpdates = "false" -}}
{{- $defaultBpfMapDynamicSizeRatio = 0.0025 -}}
{{- $defaultBpfMasquerade = "true" -}}
{{- $defaultBpfClockProbe = "true" -}}
{{- $defaultIPAM = "cluster-pool" -}}
{{- $defaultSessionAffinity = "true" -}}
{{- if .Values.global.ipv4.enabled }}
{{- $defaultOperatorApiServeAddr = "127.0.0.1:9234" -}}
{{- else -}}
{{- $defaultOperatorApiServeAddr = "[::1]:9234" -}}
{{- end }}
{{- $defaultBpfCtTcpMax = 0 -}}
{{- $defaultBpfCtAnyMax = 0 -}}
{{- end -}}

{{- $ipam := (coalesce .Values.ipam $defaultIPAM) -}}
{{- $bpfCtTcpMax := (coalesce .Values.global.bpf.ctTcpMax $defaultBpfCtTcpMax) -}}
{{- $bpfCtAnyMax := (coalesce .Values.global.bpf.ctAnyMax $defaultBpfCtAnyMax) -}}

apiVersion: v1
kind: ConfigMap
metadata:
@@ -55,12 +87,14 @@ data:
cilium-endpoint-gc-interval: "{{ .Values.global.endpointGCInterval }}"
{{- end }}

{{- if .Values.identityChangeGracePeriod }}
# identity-change-grace-period is the grace period that needs to pass
# before an endpoint that has changed its identity will start using
# that new identity. During the grace period, the new identity has
# already been allocated and other nodes in the cluster have a chance
# to whitelist the new upcoming identity of the endpoint.
identity-change-grace-period: {{ .Values.global.identityChangeGracePeriod | quote }}
identity-change-grace-period: {{ default "5s" .Values.identityChangeGracePeriod | quote }}
{{- end }}

# If you want to run cilium in debug mode change this value to true
debug: {{ .Values.global.debug.enabled | quote }}
@@ -69,7 +103,7 @@ data:
debug-verbose: "{{ .Values.global.debug.verbose }}"
{{- end }}

{{- if .Values.global.agent.healthPort }}
{{- if ne (int .Values.global.agent.healthPort) 9876 }}
# Set the TCP port for the agent health status API. This is not the port used
# for cilium-health.
agent-health-port: "{{ .Values.global.agent.healthPort }}"
@@ -132,7 +166,11 @@ data:
custom-cni-conf: "{{ .Values.global.cni.customConf }}"
{{- end }}

enable-bpf-clock-probe: {{ .Values.global.bpf.clockProbe | quote }}
{{- if hasKey .Values "bpfClockProbe" }}
enable-bpf-clock-probe: {{ .Values.bpfClockProbe | quote }}
{{- else if eq $defaultBpfClockProbe "true" }}
enable-bpf-clock-probe: {{ $defaultBpfClockProbe | quote }}
{{- end }}

# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
@@ -151,10 +189,7 @@
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: {{ .Values.global.bpf.monitorFlags }}

# Configure blacklisting of local routes not owned by Cilium.
blacklist-conflicting-routes: "{{ .Values.global.blacklistConflictingRoutes }}"

{{- if or .Values.global.bpf.ctTcpMax .Values.global.bpf.ctAnyMax }}
{{- if or $bpfCtTcpMax $bpfCtAnyMax }}
# bpf-ct-global-*-max specifies the maximum number of connections
# supported across all endpoints, split by protocol: tcp or other. One pair
# of maps uses these values for IPv4 connections, and another pair of maps
@@ -166,11 +201,11 @@ data:
#
# For users upgrading from Cilium 1.2 or earlier, to minimize disruption
# during the upgrade process, set bpf-ct-global-tcp-max to 1000000.
{{- if .Values.global.bpf.ctTcpMax }}
bpf-ct-global-tcp-max: "{{ .Values.global.bpf.ctTcpMax }}"
{{- if $bpfCtTcpMax }}
bpf-ct-global-tcp-max: {{ $bpfCtTcpMax | quote }}
{{- end }}
{{- if .Values.global.bpf.ctAnyMax }}
bpf-ct-global-any-max: "{{ .Values.global.bpf.ctAnyMax }}"
{{- if $bpfCtAnyMax }}
bpf-ct-global-any-max: {{ $bpfCtAnyMax | quote }}
{{- end }}
{{- end }}

@@ -192,9 +227,13 @@ data:
bpf-policy-map-max: "{{ .Values.global.bpf.policyMapMax }}"
{{- end }}

{{- if hasKey .Values "bpfMapDynamicSizeRatio" }}
bpf-map-dynamic-size-ratio: {{ .Values.bpfMapDynamicSizeRatio | quote }}
{{- else if ne $defaultBpfMapDynamicSizeRatio 0.0 }}
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
bpf-map-dynamic-size-ratio: "{{ .Values.global.bpf.mapDynamicSizeRatio }}"
bpf-map-dynamic-size-ratio: {{ $defaultBpfMapDynamicSizeRatio | quote }}
{{- end }}

# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
@@ -319,7 +358,11 @@ data:
{{- end }}

masquerade: {{ .Values.global.masquerade | quote }}
enable-bpf-masquerade: {{ .Values.global.bpfMasquerade | quote }}
{{- if hasKey .Values "bpfMasquerade" }}
enable-bpf-masquerade: {{ .Values.bpfMasquerade | quote }}
{{- else if eq $defaultBpfMasquerade "true" }}
enable-bpf-masquerade: {{ $defaultBpfMasquerade | quote }}
{{- end }}
{{- if .Values.global.egressMasqueradeInterfaces }}
egress-masquerade-interfaces: {{ .Values.global.egressMasqueradeInterfaces }}
{{- end }}
@@ -406,8 +449,10 @@ data:
node-port-bind-protection: {{ .Values.global.nodePort.bindProtection | quote }}
enable-auto-protect-node-port-range: {{ .Values.global.nodePort.autoProtectPortRange | quote }}
{{- end }}
{{- if .Values.global.sessionAffinity }}
enable-session-affinity: {{ .Values.global.sessionAffinity.enabled | quote }}
{{- if hasKey .Values "sessionAffinity" }}
enable-session-affinity: {{ .Values.sessionAffinity | quote }}
{{- else if eq $defaultSessionAffinity "true" }}
enable-session-affinity: {{ $defaultSessionAffinity | quote }}
{{- end }}

{{- if and .Values.global.pprof .Values.global.pprof.enabled }}
@@ -425,11 +470,11 @@ data:
{{- if and .Values.global.k8s .Values.global.k8s.requireIPv4PodCIDR }}
k8s-require-ipv4-pod-cidr: {{ .Values.global.k8s.requireIPv4PodCIDR | quote }}
{{- else }}
{{- if eq .Values.ipam "cluster-pool" }}
{{- if eq $ipam "cluster-pool" }}
k8s-require-ipv4-pod-cidr: {{ .Values.global.ipv4.enabled | quote}}
{{- end }}
{{- end }}
{{- if eq .Values.ipam "cluster-pool" }}
{{- if eq $ipam "cluster-pool" }}
k8s-require-ipv6-pod-cidr: {{ .Values.global.ipv6.enabled | quote}}
{{- end }}
{{- if and .Values.global.endpointRoutes .Values.global.endpointRoutes.enabled }}
@@ -450,10 +495,8 @@ data:
# Disable health checking, when chaining mode is not set to portmap or none
enable-endpoint-health-checking: "false"
{{- end }}
{{- if .Values.global.healthChecking.enabled }}
enable-health-checking: "true"
{{- else}}
enable-health-checking: "false"
{{- if hasKey .Values "healthChecking" }}
enable-health-checking: {{ .Values.healthChecking | quote }}
{{- end }}
{{- if or .Values.global.wellKnownIdentities.enabled .Values.global.etcd.managed }}
enable-well-known-identities: "true"
@@ -462,18 +505,20 @@
{{- end }}
enable-remote-node-identity: {{ .Values.global.remoteNodeIdentity | quote }}

synchronize-k8s-nodes: {{ .Values.global.synchronizeK8sNodes | quote }}
policy-audit-mode: {{ .Values.global.policyAuditMode | quote }}
{{- if hasKey .Values "synchronizeK8sNodes" }}
synchronize-k8s-nodes: {{ .Values.synchronizeK8sNodes | quote }}
{{- end }}
{{- if .Values.policyAuditMode }}
policy-audit-mode: {{ .Values.policyAuditMode | quote }}
{{- end }}

{{- if .Values.global.ipv4.enabled }}
operator-api-serve-addr: '127.0.0.1:9234'
{{- else }}
operator-api-serve-addr: '[::1]:9234'
{{- if ne $defaultOperatorApiServeAddr "localhost:9234" }}
operator-api-serve-addr: {{ $defaultOperatorApiServeAddr | quote }}
{{- end }}

{{- if .Values.global.hubble.enabled }}
# Enable Hubble gRPC service.
enable-hubble: {{ .Values.global.hubble.enabled | quote }}
{{- if .Values.global.hubble.enabled }}
# UNIX domain socket for Hubble server to listen to.
hubble-socket-path: {{ .Values.global.hubble.socketPath | quote }}
{{- if .Values.global.hubble.eventQueueSize }}
@@ -499,11 +544,14 @@ data:
# An additional address for Hubble server to listen to (e.g. ":4244").
hubble-listen-address: {{ .Values.global.hubble.listenAddress | quote }}
{{- end }}

{{- if .Values.disableIptablesFeederRules }}
# A space separated list of iptables chains to disable when installing feeder rules.
disable-iptables-feeder-rules: {{ .Values.global.disableIptablesFeederRules | join " " | quote }}
ipam: {{ default "cluster-pool" .Values.ipam | quote }}
{{- if eq .Values.ipam "cluster-pool" }}
disable-iptables-feeder-rules: {{ .Values.disableIptablesFeederRules | join " " | quote }}
{{- end }}
{{- if ne $ipam "hostscope" }}
ipam: {{ $ipam | quote }}
{{- end }}
{{- if eq $ipam "cluster-pool" }}
{{- if .Values.global.ipv4.enabled }}
cluster-pool-ipv4-cidr: {{ .Values.global.ipam.operator.clusterPoolIPv4PodCIDR | quote }}
cluster-pool-ipv4-mask-size: {{ .Values.global.ipam.operator.clusterPoolIPv4MaskSize | quote }}
Expand All @@ -514,6 +562,13 @@ data:
{{- end }}
{{- end }}

{{- if .Values.global.cnpStatusUpdates }}
disable-cnp-status-updates: {{ not .Values.global.cnpStatusUpdates.enabled | quote }}
{{- if hasKey .Values "enableCnpStatusUpdates" }}
disable-cnp-status-updates: {{ (not .Values.enableCnpStatusUpdates) | quote }}
{{- else if (eq $defaultEnableCnpStatusUpdates "false") }}
disable-cnp-status-updates: "true"
{{- end }}

{{- if hasKey .Values "blacklistConflictingRoutes" }}
# Configure blacklisting of local routes not owned by Cilium.
blacklist-conflicting-routes: {{ .Values.blacklistConflictingRoutes | quote }}
{{- end }}
42 changes: 41 additions & 1 deletion install/kubernetes/cilium/charts/config/values.yaml
@@ -6,4 +6,44 @@
# - eni
# - azure
# - crd
ipam: "cluster-pool"
#ipam: "cluster-pool"

# identityChangeGracePeriod is the grace period that needs to pass
# before an endpoint that has changed its identity will start using
# that new identity. During the grace period, the new identity has
# already been allocated and other nodes in the cluster have a chance
# to whitelist the new upcoming identity of the endpoint.
#identityChangeGracePeriod: "5s"

#enableCnpStatusUpdates: false

# bpfMapDynamicSizeRatio is the ratio (0.0-1.0) of total system memory to use
# for dynamic sizing of CT, NAT, neighbor and SockRevNAT BPF maps. If set to
# 0.0, dynamic sizing of BPF maps is disabled. The default value of 0.0025
# (0.25%) leads to approximately the default CT size kube-proxy sets on a
# node with 16 GiB of total system memory.
#bpfMapDynamicSizeRatio: 0.0025
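As a rough illustration of what the ratio means (the per-entry byte cost below is an assumed round number for the sketch; real BPF map entry sizes vary by map type):

```python
GIB = 1024 ** 3

def dynamic_map_entries(ratio, total_memory_bytes, bytes_per_entry=64):
    # Budget `ratio` of system memory for maps, then divide by an
    # assumed per-entry size to get an entry count.
    return int(total_memory_bytes * ratio / bytes_per_entry)

# 0.25% of 16 GiB is ~41 MiB of map memory to split across the maps.
entries = dynamic_map_entries(0.0025, 16 * GIB)
```

A ratio of 0.0 short-circuits to zero entries, matching the "dynamic sizing disabled" behaviour described above.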

# bpfMasquerade enables masquerading with BPF instead of iptables
#bpfMasquerade: true

# bpfClockProbe enables probing for, and the potential use of, a more
# efficient clock source for the BPF datapath
#bpfClockProbe: true

# blacklistConflictingRoutes instructs the cilium-agent whether to blacklist
# IP allocations conflicting with local non-cilium routes.
#blacklistConflictingRoutes: true

# sessionAffinity enable support for service sessionAffinity
#sessionAffinity: true

#healthChecking: true

#synchronizeK8sNodes: true

# policyAuditMode enables non-drop mode for installed policies. In audit mode,
# packets affected by policies are not dropped. Policy-related decisions can
# be checked via the policy verdict messages.
#policyAuditMode: false

2 changes: 0 additions & 2 deletions install/kubernetes/cilium/charts/operator/values.yaml
@@ -1,6 +1,4 @@
image: operator
# Deprecated: please use synchronizeK8sNodes in install/kubernetes/cilium/values.yaml
synchronizeK8sNodes: true

# Service account annotations
serviceAccount: