ambient does not work on minikube #46163
Link to bug-report.tar.gz
I tried starting minikube with:
with kindnet CNI:
istioctl bug-report: bug-report.tar.gz
The initial warnings are from node cleanup; as the logs mention, they're not relevant and are only logged at WARN level:
This is an actual error, however:
The CNI agent can't seem to find a valid route on the node for the ztunnel pod IP that Kubernetes gives it, so node initialization fails before it gets to creating the ipset. This is effectively a catastrophic failure (though it doesn't put the CNI agent into an unready state; it probably should, but that's a bit tricky given the CNI agent does double duty for sidecar and ambient). Check
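The failure ordering just described (route lookup first, ipset creation only if that succeeds) can be sketched as follows. Everything here, the route table, the device name, and the function names, is illustrative rather than Istio's real implementation:

```python
from typing import Optional
import ipaddress

# Illustrative sketch (not Istio's actual code) of the ordering described
# above: the agent first needs a host route for the ztunnel pod IP, and only
# then goes on to create the ipset. The route table below is hypothetical.
ROUTES = {
    "10.244.0.0/24": "veth-ztunnel",  # destination CIDR -> outgoing device
}

def lookup_route(pod_ip: str) -> Optional[str]:
    """Longest-prefix match of pod_ip against the (fake) host route table."""
    ip = ipaddress.ip_address(pod_ip)
    best = None
    for cidr, dev in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, dev)
    return best[1] if best else None

def init_node(ztunnel_ip: str) -> str:
    dev = lookup_route(ztunnel_ip)
    if dev is None:
        # The failure mode in this report: no route for the ztunnel IP,
        # so we never reach ipset creation.
        raise RuntimeError(f"no valid route for ztunnel IP {ztunnel_ip}")
    return f"route via {dev}; creating ipset"

print(init_node("10.244.0.9"))  # the ztunnel pod IP from this report
```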
The ztunnel pod and DS both look OK (the resource YAML for both is further below):
Ztunnel pod logs:
Here's the tail of the istio-cni-node logs:
ztunnel pod YAML:
$ kubectl get -n istio-system pods -l app=ztunnel -oyaml
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
annotations:
ambient.istio.io/redirection: disabled
cni.projectcalico.org/allowedSourcePrefixes: '["0.0.0.0/0"]'
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/inject: "false"
creationTimestamp: "2023-07-25T23:49:30Z"
generateName: ztunnel-
labels:
app: ztunnel
controller-revision-hash: 5577d475d5
pod-template-generation: "1"
sidecar.istio.io/inject: "false"
name: ztunnel-thlvf
namespace: istio-system
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: ztunnel
uid: a86eb416-288f-4252-8480-0dd95494156f
resourceVersion: "846"
uid: e6f0414c-d539-482e-8cdc-30634253a916
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- minikube
containers:
- args:
- proxy
- ztunnel
env:
- name: CLUSTER_ID
value: Kubernetes
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
image: gcr.io/istio-testing/ztunnel:1.19-alpha.c641d08aa437381c3678805e17c0479f247e714a
imagePullPolicy: IfNotPresent
name: istio-proxy
ports:
- containerPort: 15020
name: ztunnel-stats
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 500m
memory: 2Gi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1337
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-mz2hf
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
nodeSelector:
kubernetes.io/os: linux
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ztunnel
serviceAccountName: ztunnel
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
volumes:
- name: istio-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
defaultMode: 420
name: istio-ca-root-cert
name: istiod-ca-cert
- name: kube-api-access-mz2hf
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2023-07-25T23:49:30Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2023-07-25T23:49:50Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2023-07-25T23:49:50Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2023-07-25T23:49:30Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://8b0b42a525cb36b3dac407c454e229571548dc74331726f14e0d54b95269baeb
image: gcr.io/istio-testing/ztunnel:1.19-alpha.c641d08aa437381c3678805e17c0479f247e714a
imageID: docker-pullable://gcr.io/istio-testing/ztunnel@sha256:f27b080094d7ab3dbe388b4bae2f482e51841e2d0061c68370a52e8982fadf60
lastState: {}
name: istio-proxy
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2023-07-25T23:49:50Z"
hostIP: 192.168.39.145
phase: Running
podIP: 10.244.0.9
podIPs:
- ip: 10.244.0.9
qosClass: Burstable
startTime: "2023-07-25T23:49:30Z"
kind: List
metadata:
resourceVersion: ""
ztunnel DS YAML:
$ kubectl get -n istio-system ds/ztunnel -oyaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2023-07-25T23:49:30Z"
generation: 1
labels:
install.operator.istio.io/owning-resource: installed-state
install.operator.istio.io/owning-resource-namespace: istio-system
istio.io/rev: default
operator.istio.io/component: Ztunnel
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.19-alpha.c641d08aa437381c3678805e17c0479f247e714a
name: ztunnel
namespace: istio-system
resourceVersion: "847"
uid: a86eb416-288f-4252-8480-0dd95494156f
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: ztunnel
template:
metadata:
annotations:
ambient.istio.io/redirection: disabled
cni.projectcalico.org/allowedSourcePrefixes: '["0.0.0.0/0"]'
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/inject: "false"
creationTimestamp: null
labels:
app: ztunnel
sidecar.istio.io/inject: "false"
spec:
containers:
- args:
- proxy
- ztunnel
env:
- name: CLUSTER_ID
value: Kubernetes
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
image: gcr.io/istio-testing/ztunnel:1.19-alpha.c641d08aa437381c3678805e17c0479f247e714a
imagePullPolicy: IfNotPresent
name: istio-proxy
ports:
- containerPort: 15020
name: ztunnel-stats
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 500m
memory: 2Gi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1337
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ztunnel
serviceAccountName: ztunnel
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: istio-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
defaultMode: 420
name: istio-ca-root-cert
name: istiod-ca-cert
updateStrategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
status:
currentNumberScheduled: 1
desiredNumberScheduled: 1
numberAvailable: 1
numberMisscheduled: 0
numberReady: 1
observedGeneration: 1
updatedNumberScheduled: 1
And here's one of the problems: this pod in the bookinfo namespace isn't able to start:
Alright, so we have two different problems with two different CNIs:
Whatever CNI you were using by default in
☝️ This is the cause of the problem with that CNI plugin. Regardless of the subsequent errors for other pods, the issue is that the Istio CNI DS cannot find a route from itself to the ztunnel IP through the K8s network, as configured. That code is here, and we're using the standard Go netlink library to do a route lookup. There should be a route, configured by the K8s cluster CNI (not Istio), from the CNI pod to the ztunnel pod, and there is not (from the perspective of the Istio CNI DS, anyway, which is what matters). We need to understand why that is for that particular CNI plugin. A good way to test this is to jump into the CNI pod and do an
That's effectively what the
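The same kind of in-pod route check can also be reproduced with stdlib Python: connecting a UDP socket makes the kernel perform a route lookup without sending any packet. This is a diagnostic sketch, not Istio code; the pod IP in the comment is the ztunnel IP from this report:

```python
import socket

def route_check(dest_ip: str) -> str:
    """Ask the kernel which local source address it would pick to reach
    dest_ip. connect() on a UDP socket triggers a route lookup without
    transmitting anything; an unroutable destination raises OSError,
    which mirrors the netlink route-lookup failure described above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, 9))  # port 9 (discard); no packet is sent
        return s.getsockname()[0]
    finally:
        s.close()

# Run inside the Istio CNI pod against the ztunnel pod IP from this report:
# route_check("10.244.0.9") returns the node-side source IP if a route
# exists, or raises OSError if it does not.
print(route_check("127.0.0.1"))
```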
For your kindnet log, we are getting a slew of different errors, all of them related to a lack of permissions by the CNI DS:
At the very least, the host node's
What
I'm using
I start it via:
By default, that script will use the
Note the default CNI used by that script is
Started minikube withOUT CNI.
CNI pod IP:
Ztunnel pod IP:
From CNI DS:
From CNI pod:
istio-cni-node pod errors:
This looks bad:
There are other warnings in here, like:
Correct, this is the error mentioned previously when the default CNI is used. There's no route to the
☝️ Did you try this? Open a shell into the
The kvm2 driver doesn't seem to properly support running
My recommendation would be to use the
Yes, I did that and reported the results in my previous comment. I provided the commands and output for that; I'll repeat them here. The first two commands get the IPs, the second two commands run that
CNI pod IP:
Ztunnel pod IP:
From CNI DS (this is the
From CNI pod (this is the
Ah my bad, I missed that. Apologies. Okay so:
tl;dr: we probably need to update the docs for ambient to mention the specific config required for
For now, use the
With that config (or kind) I am unable to reproduce your errors. Minikube seems to default to
The
istio/istio#46163 Signed-off-by: Benjamin Leggett <benjamin.leggett@solo.io>
OK, thanks. I'll try that and see how it goes. For now, I have it all working with KinD, so this isn't any kind of blocker for me personally. But I figured it's good to try to nail down what's required to get things working on minikube, too.
With CNI of
The only error message I see in the DS istio-cni-node:
Warnings in the logs:
However, I can't seem to get it to work. Trying to get the sleep pod to make a request to bookinfo:
I see the same thing with my traffic generator:
So something is still missing. The one difference I can see is my version of minikube:
Let me try this with the latest (1.31) and see if I get any better results.
Meh... still doesn't work with minikube 1.31.
That exec call worked up until I labeled the namespace with the ambient-enabled label. I was seeing this when not ambient-enabled:
How do I know if the
Here's my output of minikube startup (I see nothing that mentions kindnet; the only thing it says about CNI is "Configuring CNI (Container Networking Interface) ...")
Hey @jmazzitelli, sorry for the delay on this. Now that
Tested with
Also tested with
@bleggett I think I'm missing something. I tried to set that cniNetnsDir and it failed:
This is the 1.20 dev build I have (just downloaded it this morning):
I also tried the 1.21 dev build and it also failed the same way.
@jmazzitelli ugh, yeah, my bad: if you use plain Helm it will work, but I forgot extra steps are required to expose Helm flags via
Fixing that with #47499, then the above should work.
Fix for #46163: make `netns` host node path configurable (#47444); exposed to istioctl (#47499); cherry-picked for the minikube issue (#47524). Signed-off-by: Benjamin Leggett <benjamin.leggett@solo.io>
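The fix referenced above works by making the host netns path configurable, since different minikube drivers mount network-namespace files in different places. Here is a hypothetical probe (not an Istio utility) for choosing a value to pass as `values.cni.cniNetnsDir`; the candidate paths are assumptions drawn from this thread:

```python
import os
from typing import Optional

# Hypothetical probe: pick the network-namespace directory that actually
# exists on this node, to use as the value for values.cni.cniNetnsDir.
# Candidates are assumptions from this thread: minikube's docker driver
# keeps netns files under /var/run/docker/netns instead of the usual
# /var/run/netns.
CANDIDATES = ["/var/run/netns", "/var/run/docker/netns"]

def pick_netns_dir(candidates=CANDIDATES) -> Optional[str]:
    for path in candidates:
        if os.path.isdir(path):
            return path  # first candidate that exists wins
    return None

print(pick_netns_dir())
```

On a minikube docker-driver node this should pick /var/run/docker/netns, matching the `--set values.cni.cniNetnsDir` flag used elsewhere in this thread.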
This should have been taken care of when this issue was resolved. Could you try it? Instructions are posted on our Slack:
cc @josunect ^^
Hi @jmazzitelli @johscheuer, have you had a chance to try it? Thanks!
I have not. Is this available in a dev build that we can grab from https://storage.googleapis.com/istio-build? That's how I test things.
Not yet, because the (very large) PRs for #48212 have been under review for the past two weeks. Testing from various environments will increase our confidence about merging the PRs, hence the ask. :)
@linsun I set the CNI via `istioctl install --set values.cni.cniNetnsDir="/var/run/docker/netns" --set profile=ambient --set "components.ingressGateways[0].enabled=true" --set "components.ingressGateways[0].name=istio-ingressgateway" --skip-confirmation` in my minikube env, but the istio-cni-node YAML did not change:
Hi, this feature is not merged yet; you'd have to use a temporary build from Yuval: #46163 (comment)
This should be resolved; please try the latest master or release-1.21 build, or wait for the official 1.21 release. See #48212.
Sorry, the code was just merged late last Friday (evening, ET); istio-1.21.0-beta.0 may not have the change. cc @istio/release-managers and @bleggett to chime in.
https://github.com/istio/istio/wiki/Dev%20Builds (the latest dev build should have the change) @josunect
The change should be in -beta.1, since it was merged after beta.0 was released. I expect that the new build will be available in a day or two.
Thanks for the update |
Microservice pods failing on Docker Desktop with the beta.1 release.
Installation of Istio is successful.
Tested in minikube and it is working as expected.
It's failing in Docker Desktop: ztunnel is not starting, with the same error as above. Will try to look into it and debug.
Thanks @harsh4870, please keep us posted. Do Istio sidecars work in your Docker Desktop env? cc @bleggett FYI
Yes, the sidecar setup is working like a charm with Docker Desktop; in the ambient profile, ztunnel is failing. I tried fully resetting the K8s cluster on Docker Desktop, but got the same error. Sidecar was tested with 1.22, while for ambient I'm using 1.21-beta.1.
@harsh4870 What error? Can you please open a separate issue for Docker Desktop to avoid confusing this issue, since this issue was originally raised for minikube? |
I think there is some kind of issue in
I was going to do some testing for this issue in minikube, creating a
I was able to reproduce it by installing Istio with the ambient profile and bookinfo; this is how the pods look:
Thank you @bleggett, I'll try on a newer build. |
Is this the right place to submit this?
Bug Description
Follow this: https://istio.io/latest/docs/ops/ambient/getting-started/
I did not install Gateway APIs. I followed the "Istio APIs" instructions to install:
Cluster is minikube. I'm using an Istio 1.19-dev build (see the "Version" field for details).
Things look installed properly:
But there are errors in the CNI daemonset; see cni-errors.log, which is from:
kubectl logs -n istio-system daemonset/istio-cni-node > cni-errors.log
First error in the logs is:
with a bunch of
and then
Version
Minikube (relevant for this issue; this error doesn't happen with KinD):
Operating System/Hardware: