Bug Report
What did you do?
Installed Operator Lifecycle Manager on k3s.
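For reference, OLM 0.13.0 is typically installed by applying the release manifests with kubectl. The exact asset URLs below follow the usual OLM quickstart and are an assumption, since the install method used was not captured in this report:

kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.13.0/olm.yaml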
What did you expect to see?
Successful installation.
What did you see instead? Under which circumstances?
The installation does not complete; not all of the OLM pods start:
NAME                                READY   STATUS             RESTARTS   AGE
catalog-operator-7b788c597d-jkb52   1/1     Running            0          2d
olm-operator-946bd977f-cjh45        0/1     CrashLoopBackOff   415        2d
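The listing above is from the olm namespace; something along these lines should reproduce it and show why the pod keeps restarting (namespace name assumed to be the default olm namespace created by the release manifests):

kubectl -n olm get pods
kubectl -n olm describe pod olm-operator-946bd977f-cjh45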
Environment
- operator-lifecycle-manager version:
0.13.0
- Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3-k3s.2", GitCommit:"e7e6a3c4e9a7d80b87793612730d10a863a25980", GitTreeState:"clean", BuildDate:"2019-11-18T18:31:23Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3-k3s.2", GitCommit:"e7e6a3c4e9a7d80b87793612730d10a863a25980", GitTreeState:"clean", BuildDate:"2019-11-18T18:31:23Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind:
k3s version v1.0.0 (18bd921c)
Possible Solution
Additional context
Pod logs from the crashing pod:
time="2019-11-22T17:06:04Z" level=info msg="log level info"
time="2019-11-22T17:06:04Z" level=info msg="TLS keys not set, using non-https for metrics"
time="2019-11-22T17:06:04Z" level=info msg="Using in-cluster kube client config"
time="2019-11-22T17:06:04Z" level=info msg="Using in-cluster kube client config"
time="2019-11-22T17:06:06Z" level=fatal msg="couldn't clean previous release" error="Delete https://10.43.0.1:443/apis/operators.coreos.com/v1alpha1/namespaces/olm/catalogsources/olm-operators: dial tcp 10.43.0.1:443: connect: no route to host"