istio-cni-node CrashLoopBackOff istio 1.10.4 #34904
I wonder if it is because I was using istioctl version 1.11.0; I have downgraded my CLI to 1.10.4. When I was trying 1.11.0, the install line said it was installing the default profile. Now that I have changed my CLI to 1.10.4, the install line appears normal: This will install the Istio 1.10.4 profile with ["Istio core" "Istiod" "CNI" "Ingress gateways" "Egress gateways"] components into the cluster. Proceed? (y/N) y
Hmm, there does appear to be a problem installing the Istio 1.10 YAML with an istioctl of 1.11.
How was the downgrade performed? Did you update the image directly?
@Monkeyanator I think we had similar problems with the mutating webhook - did we ever solve it or just work around it?
@bianpengyuan I switched my CLI to istioctl 1.10.4 and then the istioctl install of the 1.10.4-based YAML worked fine.
Worked around that by manually setting a field that would get incorrectly overwritten during SSA (server-side apply). That seems to be a different problem from this one, since I don't think the istioctl version mattered there.
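Since the thread above suggests the failure came from a minor-version skew between the istioctl client and the control plane, a pre-install guard can catch it early. A minimal sketch, with the version strings hardcoded for illustration (in practice they would be parsed from `istioctl version -o json`):

```shell
#!/bin/sh
# Hypothetical pre-install check: warn when the istioctl client and the
# control plane differ in major.minor (e.g. a 1.11 client against a 1.10
# control plane, as in this issue). Version strings are hardcoded here.
client="1.11.0"
control_plane="1.10.4"

# Keep only major.minor for comparison.
client_mm=$(echo "$client" | cut -d. -f1-2)
cp_mm=$(echo "$control_plane" | cut -d. -f1-2)

if [ "$client_mm" != "$cp_mm" ]; then
  echo "WARNING: istioctl $client does not match control plane $control_plane"
fi
```

Running `istioctl install` only when the two agree would have avoided the partial server-side apply described below.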
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-08-30. If you feel this issue or pull request deserves attention, please reopen it. Please see this wiki page for more information. Thank you for your contributions. Created by the issue and PR lifecycle manager.
Bug description
istio-cni-node in CrashLoopBackOff state
Expected behavior
istio cni nodes to start
Steps to reproduce the bug
Not sure if it's due to upgrading to 1.11.0 and then downgrading to 1.10.4.
The first error I got was this during the install:
2021-08-27T11:48:02.448091Z error installer failed to update resource with server-side apply for obj DaemonSet/kube-system/istio-cni-node: DaemonSet.apps "istio-cni-node" is invalid: spec.template.spec.containers[1].image: Required value
✘ CNI encountered an error: failed to update resource with server-side apply for obj DaemonSet/kube-system/istio-cni-node: DaemonSet.apps "istio-cni-node" is invalid: spec.template.spec.containers[1].image: Required value
I then deleted the istio-cni-node DaemonSet from the kube-system namespace and tried the install again. I didn't get the same error, but now the istio-cni-node pods won't start.
Here is the log from one of the istio-cni-node pods:
kubectl logs istio-cni-node-4h7kb -n kube-system
2021-08-27T12:00:49.959194Z info install cni with configuration:
CNINetDir: /etc/cni/multus/net.d
MountedCNINetDir: /host/etc/cni/net.d
CNIConfName: istio-cni.conf
ChainedCNIPlugin: false
CNINetworkConfigFile:
CNINetworkConfig: {
"cniVersion": "0.3.1",
"name": "istio-cni",
"type": "istio-cni",
"log_level": "info",
"log_uds_address": "LOG_UDS_ADDRESS",
"kubernetes": {
"kubeconfig": "KUBECONFIG_FILEPATH",
"cni_bin_dir": "/var/lib/cni/bin",
"exclude_namespaces": [ "istio-system", "kube-system" ]
}
}
LogLevel: warn
KubeconfigFilename: ZZZ-istio-cni-kubeconfig
KubeconfigMode: 0600
KubeCAFile:
SkipTLSVerify: false
K8sServiceProtocol:
K8sServiceHost: 172.21.0.1
K8sServicePort: 443
K8sNodeName: istio-cni-node-4h7kb
UpdateCNIBinaries: true
SkipCNIBinaries: [[]]
2021-08-27T12:00:49.959343Z info Directory /host/opt/cni/bin is not writable, skipping.
2021-08-27T12:00:49.959371Z info Directory /host/secondary-bin-dir is not writable, skipping.
2021-08-27T12:00:49.959845Z info write kubeconfig file /host/etc/cni/net.d/ZZZ-istio-cni-kubeconfig with:
Kubeconfig file for Istio CNI plugin.
apiVersion: v1
kind: Config
clusters:
cluster:
server: https://[172.21.0.1]:443
certificate-authority-data: <CA cert from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt>
users:
user:
token: ""
contexts:
context:
cluster: local
user: istio-cni
current-context: istio-cni-context
2021-08-27T12:00:49.959999Z info Cleaning up.
2021-08-27T12:00:49.960034Z info Removing existing binaries
Error: open /host/etc/cni/net.d/ZZZ-istio-cni-kubeconfig.tmp.902500738: permission denied
Usage:
install-cni [flags]
Flags:
--chained-cni-plugin Whether to install CNI plugin as a chained or standalone (default true)
--cni-conf-name string Name of the CNI configuration file
--cni-net-dir string Directory on the host where CNI networks are installed (default "/etc/cni/net.d")
--cni-network-config string CNI config template as a string
--cni-network-config-file string CNI config template as a file
-h, --help help for install-cni
--kube-ca-file string CA file for kubeconfig. Defaults to the pod one
--kubecfg-file-name string Name of the kubeconfig file (default "ZZZ-istio-cni-kubeconfig")
--kubeconfig-mode int File mode of the kubeconfig file (default 384)
--log-level string Fallback value for log level in CNI config file, if not specified in helm template (default "warn")
--mounted-cni-net-dir string Directory on the container where CNI networks are installed (default "/host/etc/cni/net.d")
--skip-cni-binaries stringArray Binaries that should not be installed
--skip-tls-verify Whether to use insecure TLS in kubeconfig file
--update-cni-binaries Update binaries (default true)
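The "not writable" lines and the final "permission denied" in the log above both come down to whether the container can write into the mounted host directories. That probe can be reproduced standalone; a sketch, using a temp directory as a stand-in for the real /host/etc/cni/net.d mount:

```shell
#!/bin/sh
# Standalone sketch of the installer's writability probe. A temp directory
# stands in for the mounted /host/etc/cni/net.d; on the failing node the
# same check would report "not writable" for the real mount.
check_dir() {
  if [ -w "$1" ]; then
    echo "$1 writable"
  else
    echo "$1 not writable"
  fi
}

dir=$(mktemp -d)
result=$(check_dir "$dir")
echo "$result"
```

On OpenShift, a not-writable host mount may point at the SecurityContextConstraints applied to the istio-cni service account rather than at plain file permissions, so checking the SCC is a reasonable next step.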
Version (include the output of istioctl version --remote and kubectl version --short and helm version --short if you used Helm)
istioctl version --remote
client version: 1.11.0
control plane version: 1.10.4
data plane version: 1.10.3 (248 proxies), 1.10.4 (10 proxies)
kubectl version --short
Client Version: v1.18.0
Server Version: v1.19.0+d856161
helm version --short
v3.5.2+g167aac7
How was Istio installed?
istioctl
Environment where the bug was observed (cloud vendor, OS, etc)
IBM Cloud
openshift 4.6
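The directories in the pod log above (CNINetDir /etc/cni/multus/net.d, cni_bin_dir /var/lib/cni/bin, ChainedCNIPlugin: false) match what Istio's openshift profile configures. A hedged IstioOperator sketch of those CNI settings, with field names as of Istio 1.10 (verify against your version before use):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: openshift
  components:
    cni:
      enabled: true
      namespace: kube-system
  values:
    cni:
      cniBinDir: /var/lib/cni/bin        # matches cni_bin_dir in the log
      cniConfDir: /etc/cni/multus/net.d  # matches CNINetDir in the log
      chained: false                     # matches ChainedCNIPlugin: false
```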
Additionally, please consider running
istioctl bug-report
and attach the generated cluster-state tarball to this issue. Refer to the cluster-state archive for more details.