This repository has been archived by the owner on Aug 17, 2023. It is now read-only.

Unable to get kfctl apply to fully deploy kfctl_k8s_istio.v1.2.0.yaml #472

Open
theNewFlesh opened this issue Jan 6, 2021 · 7 comments

@theNewFlesh

Running Kubeflow via minikube start --docker

kfctl_k8s_istio.v1.2.0.yaml is just the file curled from the tutorial.

Running kfctl apply -V -f kfctl_k8s_istio.v1.2.0.yaml gives me errors.
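For reference, the full sequence from the tutorial looks roughly like this (the CONFIG_URI value is an assumption based on the v1.2 getting-started docs, not copied from this issue):

# Download the KfDef and apply it (CONFIG_URI assumed from the v1.2 docs)
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml"
curl -O "${CONFIG_URI}"
kfctl apply -V -f kfctl_k8s_istio.v1.2.0.yaml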

In the admission-webhook-bootstrap-stateful-set-0 pod logs I find this:

Error from server (NotFound): mutatingwebhookconfigurations.admissionregistration.k8s.io "admission-webhook-mutating-webhook-configuration" not found
patching ca bundle for webhook configuration...
Error from server (NotFound): mutatingwebhookconfigurations.admissionregistration.k8s.io "admission-webhook-mutating-webhook-configuration" not found

Also, the metacontroller-0 and istio-telemetry pods both go into a CrashLoopBackOff death spiral without producing any logs.

Help please.


macOS 11.0.1
minikube v1.16.0
kfctl v1.2.0-0-gbc038f9
docker desktop 3.0.3

@theNewFlesh (Author)

Also found this in metacontroller-0 events:

Error: failed to start container "metacontroller": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:422: setting cgroup config for procHooks process caused: failed to write "400000" to "/sys/fs/cgroup/cpu/kubepods/burstable/pod22bf4ea2-a41c-43e1-a6d4-f807ebc00756/metacontroller/cpu.cfs_quota_us": write /sys/fs/cgroup/cpu/kubepods/burstable/pod22bf4ea2-a41c-43e1-a6d4-f807ebc00756/metacontroller/cpu.cfs_quota_us: invalid argument: unknown
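That quota (400000 µs against the default 100000 µs period) corresponds to a 4-CPU limit. A quick way to compare the metacontroller container's CPU limit against what the minikube node actually has would be something like the following (the kubeflow namespace and the default node name minikube are assumptions):

# Show the metacontroller container's resource requests/limits (assumed to live in the kubeflow namespace)
kubectl -n kubeflow get statefulset metacontroller -o jsonpath='{.spec.template.spec.containers[0].resources}'
# Show how many CPUs the (default-named) minikube node reports
kubectl get node minikube -o jsonpath='{.status.capacity.cpu}'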

@moficodes (Contributor)

The minikube setup instructions are as follows:

https://www.kubeflow.org/docs/started/workstation/minikube-linux/#start-minikube

Could you try to start minikube this way?

@theNewFlesh (Author)

Yeah I tried that first, same issue.

@theNewFlesh (Author)

I've rebuilt everything to see if I can reproduce the issue.
Now I get this on istio-ingressgateway:
Readiness probe failed: HTTP probe failed with statuscode: 503

Logs:
2021-01-07T23:03:42.107903Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
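Since Envoy says it never received config from Pilot, a quick sanity check would be something like this (the istio-system namespace, the istio=pilot label, and the discovery container name are assumptions based on the stock Istio manifests bundled with Kubeflow 1.2):

# Is Pilot actually running?
kubectl -n istio-system get pods
# If it is, check its logs (label and container name assumed)
kubectl -n istio-system logs -l istio=pilot -c discovery --tail=50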

@theNewFlesh (Author)

A little help please.

@peteboothroyd

I was also just having this issue with pretty much the same setup, and found kubeflow/kubeflow#5447 (comment), which seemed to solve the problem. Specifically, I ended up modifying the minikube start command to:

minikube start \
--cpus 6 \
--memory 12288 \
--disk-size=120g \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
--extra-config=apiserver.service-account-api-audiences=api \
--kubernetes-version v1.16.15

...and now all of the pods have started successfully.

The Kubernetes version was set based on the compatibility matrix, although I'm not sure it's strictly necessary. Hopefully that fixes your issue too.

@gecube commented Apr 17, 2021

I was hit by the same issue on different versions of Kubernetes (1.18 to 1.20).
The solution was simple: install cert-manager separately from the official manifest:

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.0/cert-manager.yaml

and then install the Kubeflow components one by one.
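For reference, a minimal sketch of that ordering under the same assumptions (apply the v1.3.0 cert-manager manifest, wait for it to become ready, then re-run kfctl apply rather than doing a true component-by-component install):

# Install cert-manager first and wait for its deployments to become Available
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.0/cert-manager.yaml
kubectl -n cert-manager wait --for=condition=Available deployment --all --timeout=300s
# Then apply the Kubeflow config again
kfctl apply -V -f kfctl_k8s_istio.v1.2.0.yaml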

I also checked that the flags --service-account-api-audiences, --service-account-signing-key-file, and --service-account-issuer were added to the API server, but that is a typical setup in many Kubernetes distributions, so it doesn't look like the solution.
