
Installing on Docker Desktop instead of minikube / problems with full install #1107

Closed
andreialecu opened this issue Feb 23, 2019 · 5 comments

Comments

andreialecu commented Feb 23, 2019

Docker Desktop now has Kubernetes built in. There are various CPU usage issues with minikube, so I'm trying to use the built-in Docker Kubernetes to install fission.

I'm running this on Windows fwiw.

I was able to install the "minimal" package successfully via helm, but I'm not able to see function logs with it - I assume because InfluxDB is missing.

Installing the full package results in the fission cli returning errors such as:

$ fission spec apply --wait
Fatal error: Error forwarding to controller port: error upgrading connection: the server does not allow this method on the requested resource

kubectl get po --all-namespaces shows this:

[screenshot: output of kubectl get po --all-namespaces, showing the controller and prometheus pods not ready]

I notice that the controller shows ready 0/1.

With the minimal setup the controller starts properly and fission connects to it.

On minikube, the full version works as well. It's just on Docker Desktop's Kubernetes that I see these problems.

I also tried both the helm and the kubectl apply install methods (roughly as sketched below).
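
For completeness, the two install paths I tried look roughly like this (the chart/manifest locations are placeholders - use whatever the release page for your fission version points at):

$ helm install --namespace fission --name fission-all <fission-all chart or URL>

$ kubectl apply -f <fission-all manifest .yaml from the release>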

Alternatively, is there a way to get logging to work on the minimal install?

@andreialecu (Author) commented:

Upon further investigation I see that the difference in configuration for the controller between fission-all and fission-minimal is this in config.yaml:

fission-all:

canary:
  enabled: true
  prometheusSvc: "http://fission-1-0-0-prometheus-server.default"

The minimal version doesn't use Prometheus. Based on my screenshot above, I also notice that prometheus-server and prometheus-alertmanager aren't ready, so this must be why the controller keeps crashing.
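
For anyone hitting something similar, this is roughly how I looked into it (the pod names below are placeholders for whatever kubectl get pods shows you):

$ kubectl get pods --all-namespaces
$ kubectl describe pod <controller-pod>
$ kubectl logs <controller-pod> --previous
$ kubectl describe pod <prometheus-server-pod>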

Possibly related to: #618

@andreialecu (Author) commented:

I was able to solve the problem. It was caused by the PersistentVolumeClaims being stuck in Pending because there were no PersistentVolumes available.
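
You can confirm this with something like the following (the PVC name is a placeholder); the Events section of the describe output should explain why the claim isn't binding:

$ kubectl get pvc --all-namespaces
$ kubectl describe pvc <pending-pvc-name>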

I got it to work by manually creating 3 PVs like this:

pv-volume.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 10Gi
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/pvc1

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/pvc2

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 10Gi
  storageClassName: hostpath
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/pvc3

Then run kubectl create -f pv-volume.yaml. All the pods should then successfully start.
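
To double check that the claims actually bound and the pods came up, something like this should do:

$ kubectl get pv
$ kubectl get pvc --all-namespaces
$ kubectl get po --all-namespaces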

I'm a beginner with Kubernetes so the YAML above may not be optimal, but at least it works now.

@andreialecu (Author) commented:

To anyone finding this, there's a better solution for getting PVCs fulfilled here:
docker/for-win#1758

@vishal-biyani (Member) commented:

@andreialecu Thanks for reporting the issue and the solution. I have created a documentation update issue so that we can document this better for new users. BTW, are you trying this on a Windows machine? I was able to provision a volume automatically on Mac, at least.

I will close this issue for now, but please reach out to us on Slack or here if needed.

@andreialecu (Author) commented:

Yes, this is on a Windows machine, and it's apparently an upstream Docker issue. There's a workaround in my previous comment.
